

Cystems Logic
Databricks Engineer- Remote
Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr. Databricks Engineer with 10+ years in IT, 4+ years in Databricks and Apache Spark, and expertise in ETL/ELT pipelines. It is a 12-month remote contract with a competitive pay rate.
Country
United States
Currency
$ USD
Day rate
Unknown
Date
May 15, 2026
Duration
More than 6 months
Location
Remote
Contract
1099 Contractor
Security
Unknown
Location detailed
Houston, TX
Skills detailed
#Monitoring #Databricks #SQL (Structured Query Language) #Delta Lake #Deployment #Data Engineering #Spark SQL #Data Processing #Data Quality #Compliance #Microsoft Power BI #Visualization #Jenkins #Security #Azure Data Factory #Scrum #SQL Queries #AWS (Amazon Web Services) #AWS Glue #Terraform #Cloud #Azure #ETL (Extract, Transform, Load) #PySpark #Big Data #DevOps #Apache Spark #Agile #Scala #Kafka (Apache Kafka) #Spark (Apache Spark) #Data Pipeline #Data Warehouse #GIT #Data Modeling #Python #BI (Business Intelligence) #Data Architecture #Tableau #GCP (Google Cloud Platform) #Logging #Storage #ML (Machine Learning) #ADF (Azure Data Factory) #Data Lake
Role description
Job Description
Hi,
Requires 10+ years of experience in the IT industry.
Job Title: Sr. Databricks Engineer
Location: Remote
Duration: 12 Months Contract
We have the below long-term job opening.
If you are interested, please send your updated resume along with the following details.
Your current location:
Visa status:
Availability:
Expected rate (all-inclusive, C2C/1099):
Job Summary:
We are seeking a highly skilled Sr. Databricks Engineer to design, develop, and optimize scalable big data and analytics solutions. The ideal candidate will have extensive experience with Databricks, Spark, cloud-based data platforms, and modern ETL/ELT frameworks. This role requires strong expertise in building high-performing data pipelines, supporting enterprise analytics, and collaborating with cross-functional teams in a remote environment.
Must Have Technical/Functional Skills
10+ years of overall experience in data engineering or related fields
4+ years of hands-on experience with Databricks and Apache Spark
Strong proficiency in PySpark, SQL, and performance tuning
Experience with ETL/ELT pipeline development and orchestration
Expertise in Delta Lake, data modeling, and optimization
Strong experience with cloud platforms such as Azure, AWS, or GCP
Familiarity with Python or Scala for data engineering tasks
Experience with CI/CD pipelines and DevOps practices
Strong analytical, problem-solving, and communication skills
Roles & Responsibilities
Design, develop, and maintain scalable ETL/ELT pipelines using Databricks, PySpark, and Spark SQL
Build and optimize Delta Lake architectures and data workflows
Develop reusable frameworks for ingestion, transformation, and validation
Collaborate with data architects, analysts, and business stakeholders to deliver data solutions
Optimize Spark jobs and SQL queries for performance and scalability
Implement data quality monitoring, logging, and alerting mechanisms
Develop and maintain CI/CD pipelines for Databricks notebooks, jobs, and workflows
Work with cloud-based storage and compute services
Support production deployments, troubleshooting, and incident resolution
Ensure security, governance, and compliance standards are followed
Required Skills & Experience
Strong hands-on experience with Databricks, Apache Spark, and PySpark
Experience with Azure Data Factory, AWS Glue, or similar orchestration tools
Expertise in SQL query optimization and large-scale data processing
Experience with data lakes, data warehouses, and modern analytics platforms
Knowledge of Git, Jenkins, Terraform, or similar DevOps tools
Familiarity with Agile and Scrum methodologies
Ability to work independently in a remote setup
Nice to Have
Databricks Certification
Experience with Power BI, Tableau, or other visualization tools
Knowledge of streaming technologies such as Kafka or Spark Streaming
Exposure to machine learning workflows and MLOps
Experience in the insurance, banking, healthcare, or retail domains
"U.S. Citizens and those authorized to work in the U.S. are encouraged to apply. We are unable to sponsor at this time."
Thanks & Regards,
Girish Kumar
Additional Information
All your information will be kept confidential according to EEO guidelines.






