

Databricks Developer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Databricks Developer on a 12+ month contract, working hybrid in Dallas, TX. Key skills include proficiency in ETL, Python, PySpark, and Scala, along with experience in MongoDB and Oracle databases.
Country: United States
Currency: $ USD
Day rate: Not listed
Date discovered: July 8, 2025
Project duration: More than 6 months
Location type: Hybrid
Contract type: Unknown
Security clearance: Unknown
Location detailed: Dallas, TX
Skills detailed: #Databases #Data Pipeline #RDBMS (Relational Database Management System) #Data Integrity #Spark (Apache Spark) #Data Architecture #NoSQL #Azure Databricks #Python #Data Processing #Azure #Data Analysis #Data Quality #Scala #PySpark #Databricks #Oracle #Monitoring #Data Science #ETL (Extract, Transform, Load)
Role description
Job Title: Databricks Developer (with Python)
Location: 208 S Akard St, Dallas, TX 75202 (Hybrid)
Duration: 12+ Months with possible extension/conversion
Must-Have Skillsets:
• Proficiency in ETL from RDBMSs (e.g., Oracle) into Databricks and NoSQL databases (a sketch of this pattern appears after this list)
• Experience working with MongoDB
• Working knowledge of Python, PySpark, and Scala
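For illustration, here is a minimal PySpark sketch of the Oracle-to-Databricks-to-MongoDB pattern these requirements describe. It is a hypothetical example, not this employer's actual pipeline: the JDBC URL, credentials, and table/collection names are placeholders, and it assumes the Oracle JDBC driver and the MongoDB Spark Connector (10.x) are attached to the cluster.

```python
# Hypothetical sketch: extract from Oracle over JDBC, transform in PySpark,
# load to a Databricks Delta table, and optionally mirror to MongoDB.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("oracle_to_databricks_etl").getOrCreate()

# Extract: read a source table from Oracle over JDBC (placeholder host/table).
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//oracle-host:1521/ORCLPDB1")
    .option("dbtable", "SALES.ORDERS")
    .option("user", "etl_user")
    .option("password", "...")  # in practice, pull from a Databricks secret scope
    .option("driver", "oracle.jdbc.OracleDriver")
    .load()
)

# Transform: basic deduplication, typing, and filtering.
cleaned = (
    orders
    .dropDuplicates(["ORDER_ID"])
    .withColumn("ORDER_TS", F.to_timestamp("ORDER_TS"))
    .filter(F.col("ORDER_TOTAL") > 0)
)

# Load: persist as a Delta table (schema name is a placeholder).
cleaned.write.format("delta").mode("overwrite").saveAsTable("bronze.orders")

# Optional: mirror to MongoDB via the MongoDB Spark Connector (10.x option names).
(cleaned.write.format("mongodb")
    .option("connection.uri", "mongodb://mongo-host:27017")
    .option("database", "sales")
    .option("collection", "orders")
    .mode("append")
    .save())
```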
Job Responsibilities:
• Design and implement scalable data processing solutions using Azure Databricks.
• Collaborate with data scientists and data analysts to understand data needs.
• Optimize data pipelines and workflows for performance and scalability.
• Ensure data quality and integrity throughout data transformation and load processes (a minimal check of this kind is sketched after this list).
• Develop and maintain data architecture and best practices.
• Troubleshoot and resolve data-related issues in a timely manner.
• Set up performance monitoring and alerting for pipeline and data integrity.
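As a rough illustration of the data-quality and monitoring responsibilities above, the sketch below fails a pipeline run when basic integrity rules break, which job-level alerting can then surface. The table name and rules are hypothetical, not this employer's specification.

```python
# Hypothetical data-quality gate for a loaded table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_dq_gate").getOrCreate()
df = spark.table("bronze.orders")  # placeholder table from the ETL sketch above

total_rows = df.count()
null_keys = df.filter(F.col("ORDER_ID").isNull()).count()
dup_keys = total_rows - df.dropDuplicates(["ORDER_ID"]).count()

# Raising here fails the job run; a production version might also log these
# counts as metrics for trend monitoring instead of only hard-failing.
if total_rows == 0:
    raise ValueError("bronze.orders is empty after load")
if null_keys or dup_keys:
    raise ValueError(
        f"integrity violation: {null_keys} null and {dup_keys} duplicate ORDER_IDs"
    )
```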