THE WORK: Ignite your passion for innovation! You will work independently, grow into a subject matter expert, and actively contribute to discussions that shape solutions to work-related challenges. Your Apache Spark expertise will shine as you take on projects that drive impactful results. We are eager to see the difference your contributions make in our dynamic environment!
Develop reusable workflows that encompass data ingestion, quality, transformation, and optimization (an illustrative sketch follows this list).
Build scalable data pipelines utilizing various tools for extraction, transformation, and loading.
Deploy data solutions into production environments with a focus on efficiency and reliability.
Migrate data from legacy data warehouses while adhering to cloud architecture principles.
Automate data flow processes to enhance accessibility and usability for stakeholders.
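To make the responsibilities above concrete, here is a minimal, illustrative PySpark sketch of a reusable ingest, quality-check, and transform step. The file paths, column names, and quality rule are hypothetical placeholders, not a prescribed implementation.

# Minimal sketch of a reusable ingest -> quality-check -> transform workflow.
# Paths, column names, and the quality rule below are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("reusable-etl-sketch").getOrCreate()

def ingest(path):
    # Read raw CSV data with a header row and inferred schema.
    return spark.read.option("header", True).option("inferSchema", True).csv(path)

def quality_check(df, required_cols):
    # Drop rows missing any required column; a stand-in for richer validation.
    return df.dropna(subset=required_cols)

def transform(df):
    # Example transformation: aggregate order amounts per customer.
    return df.groupBy("customer_id").agg(F.sum("amount").alias("total_amount"))

raw = ingest("/tmp/raw/orders.csv")                      # hypothetical input path
clean = quality_check(raw, ["customer_id", "amount"])
result = transform(clean)
result.write.mode("overwrite").parquet("/tmp/curated/orders_by_customer")  # hypothetical output path

spark.stop()

In practice, each step would be parameterized, tested, and scheduled as part of an automated production pipeline, in line with the responsibilities listed above.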
HERE'S WHAT YOU WILL NEED:
Expert proficiency in Apache Spark.
Advanced proficiency in Python.
A minimum of 1 year of experience with the relevant skills.
Bachelor's Degree in a relevant field of study.
BONUS POINTS IF YOU HAVE:
Expert proficiency in Database Architecture.
Expert proficiency in ETL Pipelines.
Expert proficiency in Kubeflow.
Expert proficiency in Machine Learning.