Job Description
A bank in Downtown Toronto is seeking a Data Engineer for a 6-month contract (renewable). Enterprise Data Services is pursuing its long-term data goals, working toward a future where data lives on cloud-native infrastructure. They are looking to hire a Data Engineer to help make this vision a reality by providing data engineering and development support and implementing data integration tooling. Reporting to the Senior Manager, Development, and working with a diverse team of data engineers, analysts, and developers, the role will be at the heart of the Plato organization, shaping what the next generation of data tooling looks like.
• Reason for request: hiring to assist with the implementation of Aurora to create the OSDs that support our business partners in the Data Office
• Candidate Value Proposition: working with cloud technology
Typical day in role:
• Design and implement technical components in cloud-native Big Data (and Massive Data) infrastructure
• Build data pipelines to enable cloud-based data analysis and reporting for end users
• Research and test the latest data technologies, make recommendations, and implement
• Provide technical expertise in designing and implementing data models
• Provide proper documentation and generate the data features required for modelling, reporting, and analysis
• Work with management and architects to break down, scope and estimate tasks
• Participate in planning and retrospective sessions, attend stand-ups, etc.
• Ongoing personal development of both technical and non-technical competencies
Job Requirements
Must have skills:
1. 3-5 years of experience as a Data Engineer
2. 2+ years’ experience in development using Java or Node.js
3. 2+ years’ experience with scripting languages (e.g., JavaScript, Python, Scala, shell scripting)
4. 2+ years’ experience designing, developing and implementing in an agile environment
5. 2+ years’ experience with cloud-based Big Data ecosystem tools (e.g., Google BigQuery, Amazon Redshift, Spark, Apache Airflow)
6. Experience mapping, transforming, and visualizing large data sets across various data formats
7. Data engineering experience with both structured and unstructured data
Nice to have:
• 5+ years’ experience and/or demonstrated proficiency in designing and developing cloud-native Big Data toolsets
• Deep knowledge of API-based ecosystem interaction
• Demonstrated ability to adapt to a fast-changing environment and work within an agile development process
• Working knowledge and experience in development using CI/CD principles and toolsets
• Experience in RTC, Jenkins, Confluence, JIRA
Degrees or certifications:
• Bachelor's degree in a technical field such as computer science, computer engineering or related field required