
Two Data Engineers needed to help build data pipelines in EDW Migration project

Job Type: Contract
Positions to fill: 1
Start Date: Jun 06, 2022
Job End Date: Mar 31, 2023
Pay Rate: Hourly: Negotiable
Job ID: 119235
Location: Vancouver
Our client is looking for two Data Engineers to help build data pipelines in the EDW Migration project and to test Basic Optional Claim Cost and Loss Reserving snapshots for the Corporate Actuary.

Key Job Accountabilities and Responsibilities: 
  • Analyze the data and develop Scala/Spark programs and related components in the work areas of Information Management ("IM") relating to claims 
  • Use data pipelines to extract data from sources in various formats (e.g., flat files, XML, relational tables, Oracle logs), and use tools such as StreamSets, Scala and Spark programs to transform the data and, after validation, store it in the big data platform (Data Lake)
  • Develop StreamSets, Spark and Scala programs for data ingestion 
  • Transform data according to the mapping document 
  • Analyze data per requirements and develop data models/mappings 
  • Validate data 
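The validation step in the responsibilities above could look something like the sketch below: rows are checked against simple rules before being landed in the Data Lake, with rejects routed aside for review. The ClaimRecord schema and the specific rules are illustrative assumptions, not the client's actual design.

```scala
// Minimal sketch of a row-validation step applied before landing
// claims data in the Data Lake. The ClaimRecord fields and the
// validation rules are hypothetical examples.
case class ClaimRecord(claimId: String, amount: Double, status: String)

object ClaimValidation {
  private val validStatuses = Set("OPEN", "CLOSED", "RESERVED")

  // Returns Right(record) when the row passes all checks,
  // Left(reason) otherwise, so bad rows can be sent to a reject area.
  def validate(r: ClaimRecord): Either[String, ClaimRecord] =
    if (r.claimId.isEmpty) Left("missing claimId")
    else if (r.amount < 0) Left("negative amount")
    else if (!validStatuses.contains(r.status)) Left(s"unknown status: ${r.status}")
    else Right(r)

  // Partition a batch into accepted rows and rejection reasons.
  def partition(rows: Seq[ClaimRecord]): (Seq[ClaimRecord], Seq[String]) = {
    val results = rows.map(validate)
    (results.collect { case Right(r) => r },
     results.collect { case Left(e)  => e })
  }
}
```

In a Spark job the same per-row rules would typically run inside a DataFrame filter or a typed Dataset map, keeping the accept/reject split so rejected rows remain auditable rather than silently dropped.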
Position Specific Skills or Qualifications required: 
  • Extensive experience working with complex data
  • Good programming skills in Scala 
  • Applied knowledge of Big Data platforms, ideally with exposure to the Hadoop ecosystem (HDFS, Pig, Hive, Spark, Big SQL, NoSQL, YARN) 
  • Experience in designing efficient and robust data pipelines 
  • Solid skills in SQL 
  • Knowledge of data validations and DataOps 
  • Knowledge of enterprise systems such as Guidewire ClaimCenter would be an asset