Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible.
Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world.
Location: Colombia
Responsibilities
- Design, develop, and maintain ETL pipelines using Azure Data Factory to transform and load data.
- Process and analyze large datasets using Apache Spark and Python in a distributed environment.
- Develop and deploy machine learning models in production, particularly using Azure Databricks.
- Collaborate with cross-functional teams to integrate data solutions and ensure actionable insights are accessible.
- Optimize data workflows for scalability, performance, and efficiency.

Requirements
- Advanced English skills.
- Experience creating and maintaining ETL pipelines in Azure Data Factory.
- Advanced knowledge of Apache Spark, Python, and PySpark.
- Experience creating and deploying machine learning models in production, especially on platforms like Azure Databricks.
- Familiarity with data visualization tools such as Power BI or Tableau is a plus.
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
- Certifications in Azure and Databricks are desirable.

What You'll Love About Working Here
We recognize the importance of flexible work arrangements.
Whether it's remote work or flexible hours, you will have an environment that supports a healthy work-life balance.

At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to help you explore a world of opportunities and equip yourself with valuable certifications in the latest technologies.

When you join Capgemini, you don't just start a new job.
You become part of something bigger.