Develop reusable workflows that encompass data ingestion, quality, transformation, and optimization.
Build scalable extract, transform, and load (ETL) pipelines using tools such as PySpark (see the sketch after this list).
Deploy data solutions into production environments while ensuring efficiency and reliability.
Migrate data from legacy data warehouses, applying cloud architecture principles.
Automate data flows to improve data consumption and accessibility.
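
As a rough illustration of the ETL pipeline work described above, the minimal PySpark sketch below extracts raw records, applies a simple quality filter, transforms them, and loads the result. The file paths and column names (orders.csv, order_id, amount, order_date) are hypothetical placeholders, not details from this posting.

# Illustrative PySpark ETL sketch; paths and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: load raw CSV data (placeholder path).
raw = spark.read.option("header", True).csv("/data/raw/orders.csv")

# Quality: drop rows missing the key column and remove duplicates.
clean = raw.dropna(subset=["order_id"]).dropDuplicates(["order_id"])

# Transform: cast the amount column and aggregate totals per day.
daily_totals = (
    clean.withColumn("amount", F.col("amount").cast("double"))
         .groupBy("order_date")
         .agg(F.sum("amount").alias("total_amount"))
)

# Load: write partitioned Parquet output (placeholder path).
daily_totals.write.mode("overwrite").partitionBy("order_date").parquet("/data/curated/daily_totals")

spark.stop()
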
HERE'S WHAT YOU WILL NEED:
Advanced proficiency in PySpark.
Expert proficiency in Data Engineering.
A minimum of 4 years of experience in related skills.
Bachelor's degree in a relevant field of study.
BONUS POINTS IF YOU HAVE:
Expert proficiency in Cloudera Data Platform.
Advanced proficiency in Data Migrations.
Advanced proficiency in Data Pipelines.