

Tek Leaders Inc
Databricks Developer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Databricks Developer on a long-term contract in Cincinnati, OH. It requires 5+ years managing and administering Databricks, expertise in Python and Spark, and skills in monitoring and optimization. Onsite work, 5 days a week.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
December 24, 2025
Duration
Unknown
-
Location
On-site
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
Cincinnati, OH
-
Skills detailed
#Dynatrace #Automation #Python #Databricks #Logging #Scala #Data Processing #Spark (Apache Spark) #Monitoring
Role description
Databricks Developer
Cincinnati, OH
Onsite, 5 days a week
Long-term contract role
Role Description
A Databricks Developer to design, develop, and maintain Databricks pipelines.
Required Skills
• 5+ years of managing and administering the Databricks environment, including clusters, workspaces, and notebooks, to ensure optimal performance, reliability, and scalability.
• Write clean, modular Python code for data processing, orchestration, automation, and integration with internal and external systems.
• Implement and tune Spark jobs for performance, reliability, and cost-efficiency, including partitioning, caching, and cluster configuration (see the PySpark sketch after this list).
• Administrative skills to set up infrastructure in a cost-effective way.
• Configure and optimize Databricks clusters and resources based on workload requirements and best practices (see the cluster-creation sketch below).
• Monitor system performance, resource utilization, and availability using Databricks monitoring and logging tools, preferably Dynatrace.
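
To make the Spark tuning bullet concrete, here is a minimal PySpark sketch of the kind of partitioning and caching work the role describes. The table, column, and job names (sales_raw, region, event_date, sales_daily) are hypothetical placeholders, not part of this posting.

```python
# Minimal PySpark sketch of partition/cache tuning; all names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

# Reduce shuffle partitions for a modest-sized job; the default of 200
# is often too high for small clusters.
spark.conf.set("spark.sql.shuffle.partitions", "64")

df = spark.read.table("sales_raw")  # hypothetical source table

# Repartition on the aggregation key so the shuffle is balanced.
df = df.repartition(64, "region")

# Cache only when the DataFrame feeds more than one downstream action.
df.cache()

daily = (
    df.groupBy("region", "event_date")
      .agg(F.sum("amount").alias("total_amount"))
)
daily.write.mode("overwrite").saveAsTable("sales_daily")  # hypothetical target

df.unpersist()  # release cached blocks once the job is done
```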
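For the cluster configuration and cost-efficiency bullets, a sketch of a cost-conscious cluster spec follows, posted to the Databricks Clusters REST API (POST /api/2.0/clusters/create). The workspace URL, token, cluster name, node type, and runtime version are illustrative assumptions, not values from this posting.

```python
# Illustrative cost-conscious cluster spec for the Databricks Clusters API.
# Every value below is a placeholder/assumption.
import requests

WORKSPACE_URL = "https://<workspace>.cloud.databricks.com"  # placeholder
TOKEN = "<personal-access-token>"  # placeholder

cluster_spec = {
    "cluster_name": "etl-autoscaling",                  # hypothetical name
    "spark_version": "15.4.x-scala2.12",                # example LTS runtime
    "node_type_id": "i3.xlarge",                        # example AWS node type
    "autoscale": {"min_workers": 1, "max_workers": 4},  # scale with load
    "autotermination_minutes": 30,                      # shut down idle clusters
    "spark_conf": {"spark.sql.shuffle.partitions": "64"},
}

resp = requests.post(
    f"{WORKSPACE_URL}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=cluster_spec,
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["cluster_id"])
```

Autoscaling plus a short auto-termination window is the usual first lever for keeping Databricks compute costs down.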





