

Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 5+ years of experience, focusing on Python ETL and Azure Databricks. It’s a 6-month hybrid contract (with potential extension), requiring skills in Azure services, SQL, and CI/CD. Insurance industry experience is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
480
🗓️ - Date discovered
September 11, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Hybrid
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Richmond, VA
🧠 - Skills detailed
#Data Modeling #Kafka (Apache Kafka) #Python #GIT #ADF (Azure Data Factory) #Cloud #Monitoring #Data Ingestion #Storage #Data Lake #Azure cloud #Version Control #Airflow #Azure DevOps #Data Engineering #Apache Spark #Spark (Apache Spark) #Data Governance #Agile #ETL (Extract, Transform, Load) #Data Science #Azure #PySpark #Azure Databricks #Data Quality #Scala #Databricks #SQL (Structured Query Language) #Azure Data Factory #Datasets #Delta Lake #Synapse #DevOps
Role description
Insight Global is seeking a highly skilled Data Engineer to support one of our insurance clients in a dynamic hybrid environment. This is a 6-month contract opportunity with the potential for extension. The team has a gap in Python and Databricks experience and is looking for someone who can fill it. The ideal candidate will bring 5+ years of data engineering experience, with a strong focus on Python-based ETL development and Azure Databricks.
Responsibilities
• Design, build, and maintain scalable ETL pipelines using Python and Azure Data Factory (see the sketch after this list)
• Develop and optimize Databricks notebooks for data ingestion, transformation, and analytics
• Collaborate with data scientists, analysts, and business stakeholders to deliver clean, reliable datasets
• Implement data quality checks, monitoring, and governance practices
• Work with cloud-native tools to manage data lakes, warehouses, and real-time data streams
• Participate in Agile ceremonies and contribute to sprint planning and retrospectives
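To give a flavor of the day-to-day work, here is a minimal sketch of the kind of pipeline the first two bullets describe: a PySpark job on Azure Databricks that reads raw data landed by Azure Data Factory, cleans it, applies a simple quality gate, and writes a Delta table. The storage paths, dataset, and column names are hypothetical illustrations, not the client's actual codebase.

```python
# Minimal ETL sketch for Azure Databricks (PySpark + Delta Lake).
# All paths and the schema below are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks

RAW_PATH = "abfss://raw@examplelake.dfs.core.windows.net/policies/"          # hypothetical
CURATED_PATH = "abfss://curated@examplelake.dfs.core.windows.net/policies/"  # hypothetical

# Extract: read raw JSON landed by Azure Data Factory.
raw = spark.read.json(RAW_PATH)

# Transform: normalize types and drop obviously bad rows.
clean = (
    raw
    .withColumn("premium", F.col("premium").cast("decimal(12,2)"))
    .withColumn("effective_date", F.to_date("effective_date"))
    .filter(F.col("policy_id").isNotNull())
)

# Simple data-quality gate: fail the job if too many rows were dropped.
raw_count, clean_count = raw.count(), clean.count()
if raw_count > 0 and clean_count / raw_count < 0.95:
    raise ValueError(f"Data quality check failed: kept {clean_count}/{raw_count} rows")

# Load: write a Delta table for downstream analytics.
clean.write.format("delta").mode("overwrite").save(CURATED_PATH)
```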
Required Skills
• 5+ years of hands-on experience in data engineering
• Strong proficiency in Python, especially for ETL workflows
• Deep experience with Azure Databricks (including PySpark, Delta Lake, and notebook orchestration); a Delta Lake upsert sketch follows this list
• Solid understanding of Azure cloud services (Data Factory, Blob Storage, Synapse, etc.)
• Experience with SQL and data modeling
• Familiarity with CI/CD pipelines and version control (e.g., Git, Azure DevOps)
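For the Databricks and Delta Lake requirement, a common pattern worth knowing is the incremental upsert via Delta's MERGE API. Below is a minimal sketch assuming a keyed Delta table already exists; the table paths and the `policy_id` join key are hypothetical.

```python
# Incremental upsert into a Delta table (Databricks).
# Table paths and the join key are hypothetical examples.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

target = DeltaTable.forPath(
    spark, "abfss://curated@examplelake.dfs.core.windows.net/policies/"
)
updates = spark.read.format("delta").load(
    "abfss://staging@examplelake.dfs.core.windows.net/policy_updates/"
)

# Update matching policies, insert new ones.
(
    target.alias("t")
    .merge(updates.alias("u"), "t.policy_id = u.policy_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```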
Preferred Qualifications
• Experience in the insurance or financial services industry
• Knowledge of data governance tools like Unity Catalog or Microsoft Purview
• Exposure to Apache Spark, Kafka, or Airflow (see the streaming sketch below)
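For the streaming exposure mentioned above, here is a minimal Spark Structured Streaming sketch that reads from a Kafka topic and appends to a Delta table. The broker address, topic name, and storage paths are hypothetical placeholders.

```python
# Structured Streaming sketch: Kafka -> Delta (Apache Spark / Databricks).
# Broker address, topic, and paths are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker.example.com:9092")
    .option("subscribe", "policy-events")
    .load()
)

# Kafka delivers key/value as binary; cast to strings for downstream use.
events = stream.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "abfss://chk@examplelake.dfs.core.windows.net/policy-events/")
    .start("abfss://curated@examplelake.dfs.core.windows.net/policy_events/")
)
```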