About Accenture
Accenture is a leading global professional services company that helps the world's leading businesses, governments and other organizations build their digital core, optimize their operations, accelerate revenue growth, and enhance citizen services, creating tangible value at speed and scale. We are a talent- and innovation-led company with approximately 775,000 people serving clients in more than 120 countries. Technology is at the core of change today, and we are one of the world's leaders in helping drive that change, with strong ecosystem relationships. We combine our strength in technology and leadership in cloud, data and AI with unmatched industry experience, functional expertise, and global delivery capability. We are uniquely able to deliver tangible outcomes because of our broad range of services, solutions and assets across Strategy & Consulting, Technology, Operations, Industry X and Song. These capabilities, together with our culture of shared success and commitment to creating 360° value, enable us to help our clients reinvent and build trusted, lasting relationships. We measure our success by the 360° value we create for our clients, each other, our shareholders, partners and communities. Visit us at www.accenture.com.
As a Data Engineer, you will:
Design, develop, and maintain scalable and efficient data pipelines to support business insights and analytics.
Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver robust solutions.
Build and optimize data architectures, ensuring high performance, reliability, and scalability.
Implement and manage ETL/ELT processes to integrate data from various sources into centralized systems.
Ensure data security, quality, and governance in compliance with best practices and regulatory requirements.
Troubleshoot and resolve data-related issues and provide ongoing support to analytics and business teams.
To thrive in this role, you should bring:
Proficiency in Programming Languages: Python, Java, or Scala for data manipulation and pipeline development.
Data Pipeline Expertise: Hands-on experience with ETL/ELT tools and frameworks such as Apache Airflow, Talend, or Informatica.
Cloud Platforms: Knowledge of cloud services such as AWS (Glue, Redshift, S3), Azure (Data Factory, Synapse), or Google Cloud (BigQuery, Dataflow).
Data Warehousing: Experience with modern data warehousing and lakehouse platforms such as Snowflake or Databricks.
Database Management: Strong SQL skills and familiarity with both relational (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Cassandra).
Big Data Technologies: Familiarity with Apache Spark, Hadoop, or Kafka is a plus.
Version Control and CI/CD: Experience with Git and CI/CD pipelines for automated deployment and testing.
Problem Solving: Strong analytical and problem-solving skills, with the ability to troubleshoot complex data issues effectively.
Preferred Qualifications:
Bachelor’s degree in Computer Science, Engineering, or a related field.
Relevant experience in data engineering or a similar role.
Knowledge of data visualization tools (e.g., Tableau, Power BI) is advantageous.
Familiarity with machine learning frameworks and tools is a plus.