

TruStart IT LLC
Senior Databricks Developer / Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Databricks Developer/Data Engineer with 8+ years of experience, focusing on migrating legacy ETL workloads to Databricks. Contract length is over 6 months, with a pay rate of $113,570.55 - $136,773.13 per year, on-site.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
621
-
🗓️ - Date
March 18, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Plano, TX 75075
-
🧠 - Skills detailed
#Azure #Azure Databricks #ADF (Azure Data Factory) #Scala #SSIS (SQL Server Integration Services) #Programming #Oracle #Cloud #Azure Data Factory #Informatica #Airflow #PySpark #Teradata #ADLS (Azure Data Lake Storage) #Spark SQL #Python #Talend #ETL (Extract, Transform, Load) #Databricks #Kafka (Apache Kafka) #Delta Lake #SQL (Structured Query Language) #Migration #Spark (Apache Spark) #Data Warehouse #Data Engineering #Data Processing #Hadoop #Data Pipeline
Role description
Experience Level: Senior (8+ years in Data Engineering)
Must-Have Skills
Strong hands-on experience with Databricks (PySpark / Spark SQL)
Proven experience migrating legacy ETL/data warehouse workloads into Databricks
Experience implementing Change Data Capture (CDC) pipelines
Experience building data engineering pipelines using Spark
Experience creating and managing Databricks jobs, workflows, and scheduling
Experience with pipeline orchestration tools (Databricks Workflows, Airflow, Azure Data Factory, etc.)
Strong SQL and Python programming skills
Experience working with large-scale data sets and distributed data processing
Migration Experience (Very Important)
The candidate should have real project experience migrating existing data platforms or ETL pipelines into Databricks, such as migrations from:
Informatica / SSIS / Talend
Teradata / Oracle / Netezza
Hadoop or legacy data warehouse environments
They should understand how to convert legacy ETL logic into Spark-based transformations and Delta Lake pipelines.
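The core of most of these conversions is replacing legacy merge/upsert steps with CDC-style logic. A minimal sketch of that upsert semantics in plain Python (a real Databricks pipeline would express this as a Delta Lake MERGE via PySpark; the table and column names here are hypothetical):

```python
# Sketch of the upsert ("MERGE") semantics that legacy ETL steps are
# typically rewritten into when migrating to Databricks. Plain Python
# keeps the example self-contained; in production this would be a
# Delta Lake MERGE INTO. All table/column names are hypothetical.

def apply_cdc_batch(target: dict, changes: list) -> dict:
    """Apply a batch of CDC records to a keyed target table.

    target:  {primary_key: row_dict}
    changes: list of {"op": "I"|"U"|"D", "id": ..., ...columns}
    """
    for change in changes:
        op, key = change["op"], change["id"]
        if op == "D":
            target.pop(key, None)            # delete matched rows
        else:
            row = {k: v for k, v in change.items() if k != "op"}
            target[key] = row                # insert or update (upsert)
    return target


customers = {1: {"id": 1, "name": "Acme"}}
batch = [
    {"op": "U", "id": 1, "name": "Acme Corp"},
    {"op": "I", "id": 2, "name": "Globex"},
    {"op": "D", "id": 3},
]
apply_cdc_batch(customers, batch)
# customers now holds the updated "Acme Corp" row plus the new "Globex" row
```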
Preferred Skills
Experience working with Delta Lake / Lakehouse architecture
Familiarity with CDC tools and streaming technologies (Kafka, Debezium, SQL CDC, etc.)
Experience with Azure Databricks ecosystem (ADLS, Azure Data Factory) or similar cloud environments
Experience implementing incremental ingestion frameworks and data validation
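Incremental ingestion frameworks of the kind listed above generally track a high-watermark so each run pulls only new rows. A minimal sketch of that pattern (column and function names are hypothetical; a real pipeline would persist the watermark, for example in a Delta table, and read via Spark):

```python
# Minimal high-watermark pattern behind incremental ingestion: each run
# reads only rows whose modified timestamp is greater than the last one
# processed, then advances the watermark. All names are hypothetical.

def incremental_ingest(source_rows: list, watermark: int):
    """Return rows newer than the watermark, and the advanced watermark."""
    new_rows = [r for r in source_rows if r["modified_ts"] > watermark]
    new_watermark = max((r["modified_ts"] for r in new_rows), default=watermark)
    return new_rows, new_watermark


rows = [
    {"id": 1, "modified_ts": 100},
    {"id": 2, "modified_ts": 205},
    {"id": 3, "modified_ts": 310},
]
first_batch, wm = incremental_ingest(rows, watermark=0)    # picks up all 3 rows
second_batch, wm = incremental_ingest(rows, watermark=wm)  # nothing new to pull
```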
Candidate Profile
We are specifically looking for candidates who have hands-on experience delivering Databricks migration projects, not just maintaining existing pipelines.
Ideal candidates should be comfortable:
Designing scalable data pipelines
Migrating legacy ETL workflows to Databricks
Implementing CDC-based ingestion patterns
Building and orchestrating production-grade Databricks jobs
Applicants should be prepared to clearly describe their own role in past Databricks migration projects.
Job Types: Full-time, Contract
Pay: $113,570.55 - $136,773.13 per year
Application Question(s):
Do you have Databricks certification?
Work Location: In person






