Intermediate Data Engineer (Python, SQL) to build and support data pipelines with Azure Data Factory
Our client is seeking an Intermediate Data Engineer (Python, SQL) to build and support data pipelines with Azure Data Factory.
Calgary-based client. Local candidates preferred, but the role can be 100% remote within Canada.
- 5+ years of Data Engineering experience, or experience as an Object-Oriented Developer on data-related projects.
- Azure Data Factory experience, including dimensional data modelling (facts and dimensions, Kimball methodology)
- Experience building data pipelines (Azure Synapse) and Python scripting
- Experience with SQL
- Previous experience with streaming data platforms
Nice-to-haves:
- Experience with Databricks
- Experience with Microsoft technologies (Power BI, etc.)
- Experience with big data technologies, including Spark, Hadoop, and Hive
Our client helps organizations transition to the cloud (Azure). The ideal candidate will use Microsoft tools to develop and maintain data pipelines that follow a modern data warehouse architecture (dimensions and facts). The focus will be on deploying code via CI/CD pipelines, including ETL and data modeling.