

Motion Recruitment
Principal Data Engineer - Databricks & Azure Performance Optimization
Featured Role | Apply direct with Data Freelance Hub
This role is for a Principal Data Engineer specializing in Databricks and Azure performance optimization. It offers a 6+ month contract at $103/hr in Houston, TX (hybrid). Required skills include advanced Python, SQL, and cloud-native streaming data experience.
Country
United States
Currency
$ USD
Day rate
824
Date
February 28, 2026
Duration
More than 6 months
Location
Hybrid
Contract
W2 Contractor
Security
Unknown
Location detailed
Houston, TX
Skills detailed
#Programming #Azure #Automation #Data Engineering #SonarQube #Spark (Apache Spark) #"ETL (Extract, Transform, Load)" #PySpark #Cloud #AWS (Amazon Web Services) #Redshift #Airflow #SQL (Structured Query Language) #Pytest #Azure cloud #Data Pipeline #SAP #Python #Databricks
Role description
• Must be located/authorized to work in the US without visa sponsorship or transfer now or in the future. No C2C inquiries, please.
LOCAL CANDIDATES ONLY – HYBRID ROLES
Senior/Principal Data Engineer – Databricks & Azure Performance Optimization
Hybrid Roles in Houston, TX – 2 days Onsite / 3 days Remote – LOCAL CANDIDATES ONLY
Pay Rate: $103/hr on W2
Duration: 6+ months with possibility of longer-term extensions
If interested, please email your resume to grace.johnson@motionrecruitment.com
Role:
We are looking for a hands-on technical expert, a "Databricks performance surgeon": a Senior/Principal-level Data Engineer from high-volume, streaming data environments.
Must understand platform-level optimization, not just development.
Key Responsibilities:
Analyze, optimize, and troubleshoot Databricks and Azure cloud data platforms for peak performance and cost effectiveness
Review and enhance existing data pipelines, including streaming data flows, high-volume data sets, and real-time transformations
Build proofs of concept (POCs) to test alternative technical approaches and drive innovative solutions
Investigate and diagnose Spark performance bottlenecks and Databricks DLT cost spikes
Write and validate robust Python/PySpark and complex SQL code
Collaborate closely with engineering teams, providing expert technical guidance on pipeline architecture and optimization
Lead technical change initiatives by improving solution design and implementation (excluding timeline/change ceremony management)
Maintain best practices in pipeline development, automation, and testing
Improve inefficient crude-vs-product data segregation logic
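The Spark-bottleneck work above typically starts by spotting partition skew in a stage's task metrics. A minimal, library-free sketch of that check (the row counts and the 4x rule-of-thumb threshold are illustrative assumptions, not from this posting):

```python
from statistics import median

def skew_ratio(rows_per_partition):
    """Ratio of the largest partition to the median partition size.

    A ratio well above ~4x is a common rule-of-thumb sign that a few hot
    partitions dominate a Spark stage's runtime (candidates for salting,
    repartitioning, or enabling adaptive query execution).
    """
    mid = median(rows_per_partition)
    return max(rows_per_partition) / mid if mid else float("inf")

# Hypothetical per-partition row counts, as read from the Spark UI.
counts = [10_000, 11_000, 9_500, 250_000, 10_200]

ratio = skew_ratio(counts)
print(f"skew ratio: {ratio:.1f}x")
print("skewed!" if ratio > 4 else "ok")
```

In a real engagement the counts would come from Spark's task metrics rather than a hard-coded list; the sketch only shows the shape of the diagnosis.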
Required Skills & Experience:
Deep, hands-on platform expertise with Databricks (beyond just notebook use) and Azure (preferred over AWS)
Mastery of Spark internals, including performance tuning and optimization techniques
Advanced Core Python programming skills (PySpark and Python fundamentals)
Experience in SQL, including complex query writing and tuning for performance
Strong experience with cloud-native streaming data pipelines and ELT processes (not ETL)
Track record of driving efficiency, cost savings, or significant performance improvements in production cloud data environments
Capability to explain Databricks cost drivers and optimization strategies
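Explaining Databricks cost drivers, as the last requirement asks, often reduces to simple DBU arithmetic: DBUs consumed times the contracted DBU rate. A hedged sketch (the $0.36/DBU rate, DBU emission rate, and cluster sizes are made-up placeholders; real rates vary by SKU, instance type, and cloud):

```python
def job_cost_usd(dbu_per_node_hour, nodes, hours, usd_per_dbu):
    """Rough cost of a cluster run: DBUs consumed x contracted DBU rate.

    All inputs are placeholders; actual DBU emission depends on the
    instance type and the Databricks SKU (Jobs, All-Purpose, DLT, ...).
    """
    dbus = dbu_per_node_hour * nodes * hours
    return dbus * usd_per_dbu

# Hypothetical comparison: an always-on 8-node cluster vs. a right-sized
# 4-node job cluster running the same daily pipeline for 3 hours.
always_on = job_cost_usd(2.0, 8, 24, 0.36)
right_sized = job_cost_usd(2.0, 4, 3, 0.36)

print(f"always-on: ${always_on:.2f}/day, right-sized: ${right_sized:.2f}/day")
```

The point of a comparison like this is the ratio, not the absolute dollars: idle always-on compute is usually the first cost driver a candidate would be expected to name.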
Preferred Skills:
Familiarity with AWS, SAP, Airflow, Glue, Kinesis, Redshift, SonarQube, or PyTest
Previous experience in Oil & Gas or Trading industries is helpful, but not required






