What you will do

- Communicate with different teams (Analytics, Product, Engineering) to understand their data needs and business goals;
- Make the necessary changes to the client's data lake and pipelines to meet those needs and goals;
- Build and maintain complex pipelines that connect different data sources;
- Identify and optimize performance bottlenecks;
- Design data structures for analytics;
- Write clear technical documentation.

Must haves

- 5+ years of software development experience as a Data Engineer;
- Strong skills and proven experience working with data lakes and warehouses;
- Practical experience building and maintaining ETL/ELT data pipelines;
- Great communication skills, including stakeholder management and requirements gathering;
- Strong experience with BigQuery, Airflow (AWS MWAA), AWS Athena, and AWS Glue;
- Proficiency in Python and SQL;
- Practical experience building data collection, validation, and normalization processes;
- Advanced English;
- A strong sense of ownership and a willingness to take on every challenge with the same level of energy, regardless of its complexity or end goal;
- Experience working in an environment that relies on remote communication and collaboration tools (e.g. GitHub, Slack, JIRA).

Nice to haves

- Experience with dbt and Redshift is a big plus;
- Infrastructure experience as a DevOps Engineer;
- Experience working with Power BI;
- A passion for writing clean, modern, maintainable, and highly performant code;
- A proactive, self-starter attitude toward troubleshooting and solving problems;
- Experience working in an Agile environment;
- Strong communication skills and excellent interpersonal effectiveness, both in one-on-one interactions and when presenting to a room;
- Self-awareness and a desire to continually improve.