

Senior Data Engineer - Databricks/AI/ML/Data Governance
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer specializing in Databricks, AI/ML, and Data Governance, located in Philadelphia, PA. The 6+ month W2 position requires 8–10 years of experience, including 5–6 years in Data Engineering and expertise in Azure Data Lake and Python.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
September 25, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
On-site
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Philadelphia, PA
🧠 - Skills detailed
#Security #MLflow #Deployment #Compliance #Data Engineering #Data Governance #Data Pipeline #Databricks #ThoughtSpot #ETL (Extract, Transform, Load) #FastAPI #Azure #Data Processing #Data Lake #Scala #AI (Artificial Intelligence) #Data Quality #ML (Machine Learning) #Spark (Apache Spark) #Datasets #PySpark #Python
Role description
W2 position only; open to US citizens and Green Card holders.
Title: Senior Data Engineer - Databricks/AI/ML/Data Governance
Location: Philadelphia, PA (3 days onsite required)
Duration: 6+ months
About the Role
We are seeking a highly skilled Senior Data Engineer with 8–10 years of overall experience, including 5–6 years of hands-on experience with Data Engineering tools. The ideal candidate will have a strong background in ML/AI and play a critical role in structuring and optimizing datasets in Databricks to support key initiatives such as Genie and ThoughtSpot. This role requires expertise in modern data platforms, scalable pipelines, and advanced analytics solutions.
Key Responsibilities
• Design, build, and optimize data pipelines and workflows in Databricks to enable advanced analytics and ML/AI initiatives.
• Structure and manage datasets in Azure Data Lake to support business reporting and analytical use cases.
• Implement and manage Unity Catalog for data governance, cataloging, and access control.
• Leverage MLflow to manage machine learning experiments, model training, and deployment.
• Develop scalable solutions using Python and PySpark for data processing and transformation.
• Collaborate with cross-functional teams to integrate data solutions with Genie, ThoughtSpot, and other enterprise applications.
• Ensure data quality, security, and compliance across all pipelines and datasets.
• (Good to have) Develop APIs using FastAPI for enabling data services and integrations.
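To illustrate the data-quality gate described in the responsibilities above, here is a minimal sketch in plain Python (standing in for the PySpark transformations the role calls for). All field names and validation rules are hypothetical, not taken from the posting; in a real Databricks pipeline this logic would typically run as a DataFrame-level check before publishing a governed dataset.

```python
# Illustrative sketch only: a record-level data-quality gate of the kind a
# Databricks pipeline might enforce before a dataset is published. Plain
# Python stands in for PySpark; field names below are hypothetical.

REQUIRED_FIELDS = {"order_id", "amount", "region"}


def validate_record(record: dict) -> list[str]:
    """Return a list of data-quality violations for a single record."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    if "amount" in record and not isinstance(record["amount"], (int, float)):
        errors.append("amount is not numeric")
    return errors


def partition_by_quality(records):
    """Split records into (clean, rejected) lists, mirroring a quarantine step."""
    clean, rejected = [], []
    for rec in records:
        errs = validate_record(rec)
        if errs:
            rejected.append((rec, errs))
        else:
            clean.append(rec)
    return clean, rejected
```

In practice the same pattern scales up as a PySpark filter plus a quarantine table, with the rejection reasons logged for compliance review.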
Required Skills & Experience
• 8–10 years of total experience, with 5–6 years in Data Engineering.
• Proven experience in delivering ML/AI-driven projects.
• Hands-on expertise with Azure Data Lake and Databricks.
• Strong knowledge of Python and PySpark for large-scale data processing.
• Experience implementing Unity Catalog for data governance.
• Proficiency with MLflow for managing ML lifecycle.