

Databricks Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Databricks Engineer on a long-term contract in Plano, Texas (Hybrid). It requires 7+ years in data engineering, proficiency in Scala and Python, and hands-on experience with Databricks and Spark. Strong data science integration skills are essential.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: June 18, 2025
Project duration: Unknown
Location type: Hybrid
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Plano, TX
Skills detailed: #Data Quality #Scala #Cloud #Azure #GCP (Google Cloud Platform) #ML (Machine Learning) #Delta Lake #Spark (Apache Spark) #Model Deployment #Data Engineering #AWS (Amazon Web Services) #Data Science #ETL (Extract, Transform, Load) #Data Pipeline #Data Processing #Security #Apache Spark #Batch #Python #Data Framework #Databricks #Programming #Deployment
Role description
W2 requirement
Who We Are
ConnectedX is a digital transformation and product engineering services firm that enables enterprises to achieve operational excellence and technological advancement. Headquartered in Dallas, Texas, we serve Fortune 1000 companies across industries, offering deep technical expertise, a consultative approach, and a track record of successful large-scale implementations.
Position: Data Engineer (Databricks + Scala + Python + Data Science)
Location: Plano, Texas (Hybrid Model)
Duration: Long-Term Contract
Must-Have Skills
• Databricks (hands-on experience building scalable pipelines)
• Proficient in Scala and Python for data engineering tasks
• Strong understanding of Data Science concepts and their integration into pipelines
• Experience with large-scale data processing and distributed systems
• Familiarity with Delta Lake, Spark, and performance optimization (a brief illustrative sketch follows this list)
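For context, here is a minimal sketch of the kind of batch pipeline this stack implies, written in Scala against the Spark and Delta Lake APIs. It assumes a Databricks (or Spark 3.x with Delta Lake) environment; the paths, columns, and table names are illustrative assumptions, not part of the posting.

```scala
// Minimal batch ETL sketch: raw CSV -> cleaned Delta table.
// All names below (paths, columns, database/table) are illustrative assumptions.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrdersBatchEtl {
  def main(args: Array[String]): Unit = {
    // Databricks notebooks/jobs provide a SparkSession; the builder is for standalone runs.
    val spark = SparkSession.builder().appName("orders-batch-etl").getOrCreate()

    // Extract: raw files landed by an upstream process.
    val raw = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("/mnt/raw/orders/")

    // Transform: basic cleansing plus a derived column.
    val cleaned = raw
      .filter(col("order_id").isNotNull)
      .withColumn("order_date", to_date(col("order_ts")))
      .withColumn("total", col("quantity") * col("unit_price"))

    // Load: write to a Delta table, partitioned for downstream query performance.
    cleaned.write
      .format("delta")
      .mode("overwrite")
      .partitionBy("order_date")
      .saveAsTable("analytics.orders_cleaned")
  }
}
```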
Key Responsibilities
• Design, build, and maintain high-performance data pipelines using Databricks and Apache Spark (a streaming variant is sketched after this list)
• Develop robust ETL/ELT solutions in Scala and Python
• Collaborate with Data Scientists to productionize models and enable ML workflows
• Optimize data workflows for speed, reliability, and scalability
• Integrate structured and unstructured data from various sources
• Ensure data quality, security, and governance across platforms
• Work cross-functionally with product, analytics, and engineering teams to meet data needs
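As a companion to the batch sketch above, the following is a minimal Structured Streaming variant in Scala, again with hypothetical paths, schema, and table names. It assumes Spark 3.1+ with Delta Lake available; on Databricks, Auto Loader would be the more idiomatic ingestion option.

```scala
// Minimal streaming ingestion sketch: JSON events -> append-only Delta table.
// Paths, schema, and table names are hypothetical; assumes Spark 3.1+ with Delta Lake.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object EventsStreamIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("events-stream-ingest").getOrCreate()

    // Incrementally pick up new JSON files; an explicit schema avoids inference on streams.
    val events = spark.readStream
      .schema("event_id STRING, user_id STRING, event_ts TIMESTAMP, payload STRING")
      .json("/mnt/raw/events/")

    // Light enrichment before landing the data for downstream ML feature pipelines.
    val enriched = events.withColumn("event_date", to_date(col("event_ts")))

    // Checkpointing plus the Delta sink gives exactly-once, append-only delivery.
    val query = enriched.writeStream
      .format("delta")
      .option("checkpointLocation", "/mnt/checkpoints/events/")
      .outputMode("append")
      .toTable("analytics.events_bronze")

    query.awaitTermination()
  }
}
```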
Required Qualifications
• 7+ years of experience in data engineering roles
• Strong programming skills in Scala and Python
• Hands-on experience with Databricks, Spark, and distributed data frameworks
• Knowledge of data science workflows, machine learning model deployment, or MLOps
• Solid understanding of data warehousing, batch/streaming pipelines, and cloud platforms (AWS/Azure/GCP)
• Strong communication skills and ability to work in a collaborative environment