Develop reusable workflows that encompass data ingestion, quality, transformation, and optimization.
Build scalable data pipelines using a range of tools for extraction, transformation, and loading (ETL).
Deploy data solutions into production environments with a focus on efficiency and reliability.
Migrate data from legacy data warehouses while adhering to cloud architecture principles.
Automate data flow processes to enhance accessibility and usability for stakeholders.
HERE'S WHAT YOU WILL NEED:
Expert proficiency in Apache Spark.
Advanced proficiency in Python.
A minimum of 1 year of experience with the relevant skills.
Bachelor's degree in a relevant field of study.
BONUS POINTS IF YOU HAVE:
Expert proficiency in Database Architecture.
Expert proficiency in ETL Pipelines.
Expert proficiency in Kubeflow.
Expert proficiency in Machine Learning.