
Data Developer

🇨🇦 RBC

16 York St, Toronto · 0 applicants
Posted Apr 30, 2026 · Apply by Sun, May 31, 2026
Full Time · Mid-level

Job Description

What is the opportunity?

At RBC, our data engineering team enhances visibility into assets across the Public Cloud and Application Security landscape. Our mission is to provide clear insight into digital infrastructure, enabling effective identification and management of security risks. As a Data Developer, you will be a vital member of our team, driving the development of a cloud-based data platform that powers analytics and operational reporting. We use industry-leading tools such as Databricks, Python, and SQL to turn data into strategic assets. Your experience will enable you to design reliable ingestion pipelines and make data accessible while maintaining robust security measures. Collaboration is key to our success, fostering an innovative environment where team members leverage their technical skills to drive continuous advancements in cloud security and data utilization across the organization.

What will you do?

- Develop and maintain a Databricks-based data platform on Azure Databricks, using Python, PySpark, and Spark SQL to support analytics and operational reporting
- Design robust data ingestion and transformation pipelines using Python and PySpark to efficiently process large datasets
- Build and manage Change Data Capture (CDC) pipelines in Python for real-time data synchronization and incremental data loads
- Develop and optimize ELT/ETL workflows using Databricks Workflows or Apache Airflow, with Python-based orchestration and automation
- Design and manage Delta Lake solutions for data versioning, efficient data storage, and schema evolution
- Write production-grade Python code for data processing, pipeline automation, and custom data transformations
- Ensure datasets are clean, reliable, and ready for consumption by implementing data quality checks and validation processes in Python and SQL
- Implement data governance and compliance standards using Unity Catalog for access management and data lineage tracking
- Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver actionable insights
- Monitor, troubleshoot, and optimize Spark jobs for performance, addressing pipeline bottlenecks and ensuring cost efficiency
- Implement CI/CD practices for automated deployment and testing of data pipelines using Python-based frameworks
- Develop reusable Python libraries and frameworks to accelerate data platform development
- Develop and maintain comprehensive documentation for data pipelines, transformations, and data models
- Contribute to data platform enhancements that drive excellence across multiple business units

What do you need to succeed?

Must-have:

- Bachelor's degree in Computer Science, Data Engineering, Information Systems, or a related field
- Minimum 3 years of experience in data development, preferably in cloud-based environments
- Expert-level proficiency in Python, including advanced features and object-oriented programming
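For candidates unfamiliar with the CDC responsibilities mentioned above: the core of a Change Data Capture pipeline is merge/upsert semantics, applying a feed of insert, update, and delete events to a keyed target. The sketch below is illustrative only, not RBC's code; in a Databricks environment this would typically be a Delta Lake MERGE driven by PySpark, but the same logic is shown here in plain Python (names like `apply_cdc_batch` and the event shape are hypothetical):

```python
# Minimal sketch of CDC-style incremental merge logic (plain-Python
# stand-in for a Delta Lake MERGE). The target table is modeled as a
# dict keyed by primary key; changes is an ordered batch of events.

def apply_cdc_batch(target, changes):
    """Apply a batch of change events to the target table in order."""
    for event in changes:
        op, pk = event["op"], event["pk"]
        if op in ("insert", "update"):
            target[pk] = event["row"]   # upsert: latest version wins
        elif op == "delete":
            target.pop(pk, None)        # tolerate deletes of absent keys
        else:
            raise ValueError(f"unknown op: {op}")
    return target

table = {1: {"name": "alice"}, 2: {"name": "bob"}}
batch = [
    {"op": "update", "pk": 1, "row": {"name": "alicia"}},
    {"op": "delete", "pk": 2},
    {"op": "insert", "pk": 3, "row": {"name": "carol"}},
]
apply_cdc_batch(table, batch)
```

In production, the ordering guarantee would come from the change feed itself (e.g. a commit timestamp or log sequence number), and idempotent replays would be handled by the merge condition rather than by in-memory state.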

