Cenicor Technologies Inc.

Databricks Developer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Databricks Developer on a 12+ month remote contract, focused on ETL/ELT pipelines, Apache Spark, and cloud platforms (Azure, AWS, GCP). Key requirements include PySpark, SQL, and hands-on data engineering experience.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
February 3, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Scrum #Storage #Data Ingestion #Version Control #Apache Spark #ADLS (Azure Data Lake Storage) #Agile #Data Architecture #ETL (Extract, Transform, Load) #Spark (Apache Spark) #Spark SQL #GCP (Google Cloud Platform) #Data Pipeline #Data Processing #Data Transformations #Scala #Azure #Databricks #Data Engineering #GIT #Datasets #Data Governance #SQL (Structured Query Language) #Delta Lake #Data Quality #Logging #Monitoring #PySpark #S3 (Amazon Simple Storage Service) #AWS (Amazon Web Services) #Databases #Cloud #Kafka (Apache Kafka)
Role description
Job Title: Databricks Developer
Location: Remote
Duration: 12+ Months
Interview Mode: Video

Job Summary

We are seeking an experienced Databricks Developer to design, develop, and optimize scalable data pipelines and analytics solutions using Databricks and Apache Spark. The ideal candidate will work closely with data engineers, architects, and business teams to deliver high-quality data solutions in a cloud environment.

Roles & Responsibilities

• Develop and maintain ETL/ELT pipelines using Databricks and Apache Spark (see the first sketch at the end of this posting).
• Build and optimize Databricks notebooks using PySpark, Spark SQL, or Scala.
• Implement data transformations, validations, and aggregations for large datasets.
• Integrate Databricks with cloud storage services (ADLS, S3, GCS).
• Optimize Spark jobs for performance, scalability, and cost efficiency.
• Implement Delta Lake for reliable and scalable data processing.
• Support data ingestion from multiple sources (databases, APIs, files, streams).
• Collaborate with data architects and business stakeholders to understand data requirements.
• Troubleshoot and resolve production data issues.
• Participate in Agile/Scrum development activities.

Required Skills & Qualifications

• Strong hands-on experience as a Databricks Developer.
• Expertise in Apache Spark and PySpark, Spark SQL, or Scala.
• Strong grounding in ETL/ELT and core data engineering concepts.
• Experience working on cloud platforms (Azure, AWS, or GCP).
• Strong SQL skills and experience with large datasets.
• Familiarity with version control (Git) and CI/CD pipelines.
• Good analytical and problem-solving skills.

Nice to Have

• Experience with Delta Lake and Lakehouse architecture.
• Knowledge of Unity Catalog and data governance.
• Experience with streaming technologies such as Kafka, Event Hubs, or Kinesis (see the second sketch at the end of this posting).
• Exposure to data quality, monitoring, and logging frameworks.
• Databricks or cloud certifications.

Kind Regards,
Simran Kaur
Sr. US IT Recruiter | Cenicor Technologies Inc.
Phone: (510) 956-8882
Email: Simran@cenicortech.com
Web: www.cenicortech.com
5941 81st Pl N, Pinellas Park, FL 33781
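
For candidates gauging the expected level, the first sketch below illustrates the batch pattern named in the responsibilities: ingest raw files from cloud storage, validate and transform with PySpark, and land the result in a Delta table. It is a minimal illustration only; the storage path, table names, and columns (raw order records with order_id, order_ts, amount) are hypothetical and not part of the role description.

```python
# Minimal PySpark + Delta Lake batch pipeline sketch.
# All paths, table names, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Ingest raw JSON files from cloud storage (placeholder ADLS path;
# an S3 or GCS URI would work the same way).
raw = spark.read.json("abfss://raw@example.dfs.core.windows.net/orders/")

# Validate and transform: drop malformed rows, normalize types.
clean = (
    raw
    .filter(F.col("order_id").isNotNull())               # basic validation
    .withColumn("order_ts", F.to_timestamp("order_ts"))  # type normalization
    .withColumn("amount", F.col("amount").cast("double"))
)

# Aggregate to a daily grain.
daily_totals = (
    clean
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("total_amount"),
         F.count("*").alias("order_count"))
)

# Land the result as a Delta table (the default table format on
# Databricks), partitioned by date for efficient downstream reads.
(daily_totals.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("analytics.daily_order_totals"))
```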
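
The second sketch shows the streaming pattern from the nice-to-have list: reading events from Kafka with Spark Structured Streaming and writing them to a Delta table. Again, the broker address, topic, event schema, and checkpoint path are placeholder assumptions, not details supplied by the role.

```python
# Minimal Structured Streaming sketch: Kafka -> Delta.
# Broker, topic, schema, and checkpoint path are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("events_stream").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "orders_events")              # placeholder topic
    .load()
    # Kafka delivers raw bytes; parse the JSON value against the schema.
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Stream into a Delta table; the checkpoint makes the query restartable
# with exactly-once semantics. On Databricks the query keeps running
# once started.
query = (events.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/orders_events")
    .toTable("raw.orders_events"))
```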