

Sr. Databricks Engineer @ Toronto, ON, Canada (Must Have: PySpark, Python, SQL, ETL, ML, Cloud Platforms). 12-Month Contract - Direct Client...
Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr. Databricks Engineer with a 12-month contract in Toronto, ON, Canada. Key skills include PySpark, Python, SQL, ETL, and ML. Candidates must have 5+ years of data engineering experience and familiarity with cloud platforms.
Country: Canada
Currency: $ USD
Day rate: -
Date discovered: July 3, 2025
Project duration: More than 6 months
Location type: Remote
Contract type: Unknown
Security clearance: Unknown
Location detailed: Toronto, ON
Skills detailed: #Datasets #SQL (Structured Query Language) #Data Science #Cloud #Data Pipeline #Deployment #AWS (Amazon Web Services) #Databricks #GCP (Google Cloud Platform) #Python #Data Engineering #Spark (Apache Spark) #Azure #ETL (Extract, Transform, Load) #dbt (data build tool) #Data Quality #MLflow #Airflow #Monitoring #PySpark #Scala #ML (Machine Learning)
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Accion Labs, is seeking the following. Apply via Dice today!
Who can apply? Candidates located in Canada only. This is a 100% remote opportunity.
Requirements:
• 5+ years of experience in a data engineering or ML engineering role
• Strong proficiency in Python, PySpark, and SQL
• Hands-on experience with ETL development and data pipeline orchestration (see the sketch after the nice-to-have list below)
• Familiarity with basic machine learning concepts and model lifecycle
• Solid understanding of data warehousing and distributed systems
Nice to have:
• Experience with cloud platforms (e.g., AWS, Azure, Google Cloud Platform)
• Exposure to tools like Airflow, dbt, or MLflow
• A strong sense of data ownership and intellectual curiosity
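For context, here is a minimal sketch of the PySpark + SQL ETL pattern the requirements above describe, showing the same transform in both the DataFrame and SQL idioms. All paths, table names, and columns (the s3:// locations, events, event_ts, event_type) are hypothetical placeholders, not details from this posting.

```python
# A minimal ETL sketch, assuming a hypothetical raw event dataset.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw events (path is illustrative)
raw = spark.read.parquet("s3://example-bucket/raw/events/")

# Transform (DataFrame idiom): basic cleaning and daily aggregation
daily = (
    raw.filter(F.col("event_type").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date", "event_type")
       .agg(F.count("*").alias("event_count"))
)

# Transform (SQL idiom): the same logic, since the role expects both
raw.createOrReplaceTempView("events")
daily_sql = spark.sql("""
    SELECT to_date(event_ts) AS event_date,
           event_type,
           COUNT(*) AS event_count
    FROM events
    WHERE event_type IS NOT NULL
    GROUP BY 1, 2
""")

# Load: write a partitioned table for downstream analytics
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_event_counts/"
)
```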
Key Responsibilities:
• Develop and maintain scalable data pipelines using Python, PySpark, and SQL
• Design, implement, and optimize ETL workflows to support analytics and ML models
• Collaborate with data scientists and analysts to ensure high-quality, accessible data
• Support the deployment of ML models into production environments
• Implement data quality checks, monitoring, and validation (a sketch follows this list)
• Stay curious: explore new datasets, uncover patterns, and contribute to data-driven innovation
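And a minimal sketch of the kind of data quality checks and validation the responsibilities mention: row-count, null-rate, and key-uniqueness assertions on a curated table. The dataset, columns, and thresholds are hypothetical, and a real pipeline would route failures to monitoring or fail the orchestrator task rather than just raising.

```python
# A data-quality validation sketch, assuming the hypothetical table
# produced by the ETL sketch above.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-sketch").getOrCreate()
df = spark.read.parquet("s3://example-bucket/curated/daily_event_counts/")

checks = {
    # Table must not be empty
    "non_empty": df.count() > 0,
    # Partition column must never be null
    "no_null_dates": df.filter(F.col("event_date").isNull()).count() == 0,
    # (event_date, event_type) should uniquely identify each row
    "unique_keys": df.count()
        == df.select("event_date", "event_type").distinct().count(),
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    # In production this would alert on-call / fail the Airflow task
    raise ValueError(f"Data quality checks failed: {failed}")
print("All data quality checks passed.")
```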