

Cloud Data Engineer- Databricks
Featured Role | Apply direct with Data Freelance Hub
This role is for a Cloud Data Engineer specializing in Databricks; the contract length and pay rate are unspecified. Key skills include 4+ years in data engineering, proficiency in Spark and Python, and experience with AWS, Azure, or GCP. A degree in a related field and relevant cloud certifications are preferred.
Country
United States
Currency
$ USD
Day rate
Unknown
Date discovered
September 10, 2025
Project duration
Unknown
Location type
Unknown
Contract type
Unknown
Security clearance
Unknown
Location detailed
McLean, VA
Skills detailed
#Airflow #Apache Airflow #Data Processing #Batch #AWS (Amazon Web Services) #Data Pipeline #Python #Automation #DevOps #Data Modeling #Cloud #Azure #Documentation #Computer Science #dbt (data build tool) #GCP (Google Cloud Platform) #Data Orchestration #DataOps #Databricks #AI (Artificial Intelligence) #Scala #Spark (Apache Spark) #Data Engineering
Role description
Purpose:
We are seeking a highly skilled Cloud Data Engineer with deep expertise in Databricks and modern cloud platforms such as AWS, Azure, or GCP. This role is ideal for professionals who are passionate about building next-generation data platforms, optimizing complex data workflows, and enabling advanced analytics and AI in cloud-native environments. You'll have the opportunity to work with Fortune 500 organizations in data and analytics, helping them unlock the full potential of their data through innovative, scalable solutions.
Key Result Areas and Activities:
• Design and implement robust, scalable data engineering solutions.
• Build and optimize data pipelines using Databricks, including serverless capabilities, Unity Catalog, and Mosaic AI.
• Collaborate with analytics and AI teams to enable real-time and batch data workflows.
• Support and improve cloud-native data platforms (AWS, Azure, GCP).
• Ensure adherence to best practices in data modeling, warehousing, and governance.
• Contribute to automation of data workflows using CI/CD, DevOps, or DataOps practices.
• Implement and maintain workflow orchestration and transformation tools such as Apache Airflow and dbt.
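As a rough illustration of the batch-pipeline work described above: the core of such a job is reading raw records and aggregating them into analytics-ready tables. The sketch below uses plain Python with hypothetical data; a real Databricks job would express the same step with PySpark DataFrames reading from cloud storage.

```python
from collections import defaultdict

# Hypothetical raw event records; in a real pipeline these would be
# rows loaded from cloud object storage into a Spark DataFrame.
events = [
    {"date": "2025-09-01", "user": "a", "amount": 10.0},
    {"date": "2025-09-01", "user": "b", "amount": 5.0},
    {"date": "2025-09-02", "user": "a", "amount": 7.5},
]

def daily_totals(records):
    """Sum amounts per date -- the shape of a typical batch aggregation step."""
    totals = defaultdict(float)
    for row in records:
        totals[row["date"]] += row["amount"]
    return dict(totals)

print(daily_totals(events))  # {'2025-09-01': 15.0, '2025-09-02': 7.5}
```

In Spark the same step would be a `groupBy("date").sum("amount")`, with the engine handling distribution across the cluster.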
Roles & Responsibilities
Essential Skills
• 4+ years of experience in data engineering with a focus on scalable solutions.
• Strong hands-on experience with Databricks in a cloud environment.
• Proficiency in Spark and Python for data processing.
• Solid understanding of data modeling, data warehousing, and architecture principles.
• Experience working with at least one major cloud provider (AWS, Azure, or GCP).
• Familiarity with CI/CD pipelines and data workflow automation.
Desirable Skills
• Direct experience with Unity Catalog and Mosaic AI within Databricks.
• Working knowledge of DevOps/DataOps principles in a data engineering context.
• Exposure to Apache Airflow, dbt, and modern data orchestration frameworks.
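For context on the orchestration frameworks listed above: tools like Apache Airflow model a pipeline as a DAG of tasks and run each task only after its upstream dependencies complete. The stand-in below sketches that idea in plain Python with a hypothetical extract/transform/load pipeline; a real deployment would declare an Airflow DAG rather than hand-rolling the scheduler.

```python
def run_pipeline(tasks, deps):
    """Run tasks in dependency order (the core idea behind an Airflow DAG).

    tasks: mapping of task name -> callable
    deps:  mapping of task name -> list of upstream task names
    Assumes the dependency graph is acyclic (no cycle detection here).
    """
    done, order = set(), []

    def run(name):
        if name in done:
            return
        for upstream in deps.get(name, []):  # finish upstreams first
            run(upstream)
        tasks[name]()
        done.add(name)
        order.append(name)

    for name in tasks:
        run(name)
    return order

# Hypothetical three-step pipeline: extract -> transform -> load.
log = []
order = run_pipeline(
    {
        "extract": lambda: log.append("E"),
        "transform": lambda: log.append("T"),
        "load": lambda: log.append("L"),
    },
    {"transform": ["extract"], "load": ["transform"]},
)
print(order)  # ['extract', 'transform', 'load']
```

Airflow adds scheduling, retries, and monitoring on top of this ordering logic, which is why it (rather than ad-hoc scripts) is the expected tool here.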
Qualifications
• Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
• Relevant certifications in cloud platforms (AWS/Azure/GCP) or Databricks are a plus.
Qualities:
• Able to consult, write, and present persuasively
• Able to work in a self-organized and cross-functional team
• Able to iterate based on new information, peer reviews, and feedback
• Able to work seamlessly with clients across multiple geographies
• Research-focused mindset
• Excellent analytical, presentation, reporting, documentation, and interpersonal skills
"Infocepts is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law."