

Think Consulting
Databricks Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Databricks Data Engineer, fully remote with quarterly travel to Gaithersburg, offering $55-$60/hr. Requires U.S. citizenship or Green Card, a BS in Computer Science, 3+ years of relevant experience, and Databricks certification.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
480
-
🗓️ - Date
February 4, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Python #Data Quality #AWS (Amazon Web Services) #AI (Artificial Intelligence) #Databricks #Agile #Azure cloud #Data Pipeline #Data Management #Data Engineering #Consulting #Data Catalog #Data Governance #Spark (Apache Spark) #Data Lake #Microsoft Azure #ETL (Extract, Transform, Load) #Security #.Net #Automation #Azure #Data Security #PySpark #R #Metadata #Cloud #Computer Science
Role description
Think Consulting is seeking a Databricks Data Engineer to develop and support new and existing data pipelines and data analytics environments in an Azure cloud-based data lake. As a data engineer, you will translate business requirements into data engineering solutions that support an enterprise-scale, Microsoft Azure-based data analytics platform. You will support the continued maintenance of ETL operations and the development of new pipelines, ensuring data quality and sound data management. The ideal candidate brings deep expertise in Databricks and a solid foundation in advanced AI technologies, and applies critical thinking to create innovative functionality and resolve technical issues through cross-functional team collaboration.
• Candidate must be a U.S. Citizen or Green Card Holder
• Fully remote position; quarterly travel to Gaithersburg is required
• Compensation range is $55 - $60/hr.
• Must be able to obtain a Public Trust clearance
Qualifications
• BS degree in Computer Science or a related field with 3+ years of experience, or a Master's degree with 2+ years of experience.
Required Skills
• 3+ years of experience designing and developing ingestion flows (structured, streaming, and unstructured data) using cloud platform services, with attention to data quality (a minimal sketch of such a flow follows this list).
• Databricks Data Engineer certification and 2+ years of experience maintaining the Databricks platform and developing in Spark.
• Ability to work directly with clients and act as front-line support for incoming client requests, clearly documenting and expressing solutions in the form of architecture and interface diagrams.
• 3+ years of experience with Python, Spark/PySpark, and R is essential; .NET-based development is a plus.
• Knowledge of and experience with data governance, including metadata management, an enterprise data catalog, design standards, data quality governance, and data security.
• Experience with Agile methodology, CI/CD automation, and cloud-based development (Azure, AWS).
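To give a flavor of the day-to-day work these requirements describe, below is a minimal PySpark sketch of an ingestion flow with a simple data quality gate, as it might run on a Databricks cluster against an Azure data lake. The storage path, column, and table names are hypothetical placeholders, not details taken from this posting.

# Illustrative sketch only: minimal PySpark ingestion with a basic data quality gate.
# All paths, storage accounts, columns, and table names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # on Databricks this returns the cluster's session

# Hypothetical raw landing zone in an ADLS Gen2 container.
raw_path = "abfss://raw@examplelake.dfs.core.windows.net/orders/"

orders = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv(raw_path)
)

# Basic data quality check: reject the batch if a required key is missing.
bad_rows = orders.filter(F.col("order_id").isNull()).count()
if bad_rows > 0:
    raise ValueError(f"Data quality check failed: {bad_rows} rows missing order_id")

# Light transformation, then load into a Delta table for downstream analytics.
cleaned = orders.withColumn("load_ts", F.current_timestamp())
cleaned.write.format("delta").mode("append").saveAsTable("analytics.orders_bronze")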
Preferred Skills
• Not required, but a plus: Azure cloud certifications and knowledge of FinOps principles and cost management.
Equal Opportunity Employer, including disability and protected veteran status.






