
Intermediate Data Engineer to build and support Azure Data Factory data pipelines with our integration client

Job Type: Contract
Positions to fill: 1
Start Date: Dec 12, 2022
Job End Date: Jun 30, 2023
Pay Rate: Hourly: Negotiable
Job ID: 125573
Location: Calgary, Edmonton, Halifax, Montreal, Ottawa, Regina, Toronto, Vancouver, Victoria, Winnipeg
Our client is seeking an Intermediate Data Engineer to build and support data pipelines with Azure Data Factory.

The Story behind the need:

Our client is helping to transition an enterprise organization into the cloud (Azure). The ideal candidate will work with Microsoft tools to develop and maintain data pipelines that follow a modern data warehouse architecture pattern (Dimensions, Facts). The focus will be on deploying code using CI/CD pipelines, including ETL and data modeling.

Typical Day in this Role:
  • Working closely with stakeholders to understand and solve their data needs
  • Designing and developing data pipelines
  • Building and maintaining high-quality code
  • Working with existing ETL processes and implementing new ones
  • Working with data modeling techniques, including dimensions and facts
  • Working with tools including Azure Data Factory, Databricks, SQL, Data Lake Store, Power BI and Azure DevOps.
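The dimension/fact (star schema) pattern mentioned above can be sketched in a few lines. This is a minimal, hypothetical illustration in plain Python rather than ADF or Databricks; all table and column names are assumptions, not from the posting.

```python
# Raw source rows, as an ETL pipeline might receive them (illustrative data).
raw_sales = [
    {"date": "2022-12-12", "product": "Widget", "amount": 19.99},
    {"date": "2022-12-12", "product": "Gadget", "amount": 5.50},
    {"date": "2022-12-13", "product": "Widget", "amount": 19.99},
]

# Build a product dimension, assigning each distinct product a surrogate key.
dim_product = {}
for row in raw_sales:
    if row["product"] not in dim_product:
        dim_product[row["product"]] = len(dim_product) + 1

# Build the fact table: measures plus foreign keys into the dimensions.
fact_sales = [
    {
        "date_key": row["date"],
        "product_key": dim_product[row["product"]],
        "amount": row["amount"],
    }
    for row in raw_sales
]
```

In a real pipeline the same split — descriptive attributes into slowly changing dimension tables, measures into fact tables keyed by surrogate keys — would be implemented in ADF dataflows, Databricks notebooks, or SQL rather than in-memory Python.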

Calgary-based client. The role can be 100% remote within Canada.

Must-haves for the role:
  • 4+ years of Data Engineering experience, or experience as an Object-Oriented Developer with data-related project experience.
  • Experience with Azure Data Factory (Facts, Dims)
  • Experience with Synapse Pipelines and Python Scripting
  • Experience with SQL 
  • Previous experience with building/supporting Data Pipelines or with Streaming Data programs

Nice-to-haves for the role:
  • Experience with Databricks
  • Experience with Microsoft Technologies (Power BI, etc.)
  • Experience with Big Data Technologies including Spark, Hadoop, Hive