

Insight Global
Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is a Data Engineer for a 6-month contract, offering a pay rate based on experience. Key skills include 5+ years in data engineering, proficiency in Python, and experience with Azure Data Factory and Databricks. Remote work is possible for East Coast residents.
Country
United States
Currency
$ USD
Day rate
560
Date
October 22, 2025
Duration
Unknown
Location
Remote
Contract
Unknown
Security
Unknown
Location detailed
United States
Skills detailed
#ADF (Azure Data Factory) #Azure Data Factory #Code Reviews #Data Engineering #Python #Data Architecture #Cloud #Data Pipeline #Azure #Databricks #Synapse #Azure SQL #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Scala
Role description
Insight Global is seeking a Data Engineer to support one of our insurance clients in the Richmond, VA area and help scale data operations. This role is ideal for someone who thrives in a collaborative environment and is passionate about building robust data pipelines using modern cloud technologies. This will be a 6-month contract with the possibility of extension.
This position has the flexibility to be remote for the right candidate, but the candidate must reside on the East Coast.
You will play a key role in developing ETL processes using Python, integrating with Azure Data Factory, and leveraging Databricks for scalable data solutions. The team currently lacks deep expertise in Python and Databricks, so your contributions will be instrumental in filling this gap and elevating the team's technical capabilities.
Key Responsibilities:
Design, develop, and maintain ETL pipelines using Python.
Integrate data workflows into Azure Data Factory.
Build and optimize scalable data solutions in Databricks.
Collaborate with team members to understand data requirements and deliver actionable insights.
Provide mentorship and technical guidance on Python and Databricks best practices.
Troubleshoot and resolve data-related issues across the pipeline.
Participate in sprint planning, code reviews, and team meetings.
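For a rough sense of the Python ETL work the responsibilities above describe, here is a minimal extract-transform-load sketch. The sample data, column names, and use of an in-memory SQLite database are invented for illustration only; the client's actual sources and targets (Azure Data Factory, Databricks) are not shown here.

```python
import csv
import io
import sqlite3

# Hypothetical sample input; the client's real data sources are not specified.
RAW_CSV = """policy_id,premium,state
P-001,1200.50,VA
P-002,,NC
P-003,875.00,va
"""

def extract(text):
    """Extract: parse raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: drop rows missing a premium and normalize state codes."""
    cleaned = []
    for row in rows:
        if not row["premium"]:
            continue  # skip incomplete records
        cleaned.append({
            "policy_id": row["policy_id"],
            "premium": float(row["premium"]),
            "state": row["state"].upper(),
        })
    return cleaned

def load(rows, conn):
    """Load: write cleaned rows into a SQL table."""
    conn.execute(
        "CREATE TABLE policies (policy_id TEXT, premium REAL, state TEXT)"
    )
    conn.executemany(
        "INSERT INTO policies VALUES (:policy_id, :premium, :state)", rows
    )

conn = sqlite3.connect(":memory:")
rows = transform(extract(RAW_CSV))
load(rows, conn)
count = conn.execute("SELECT COUNT(*) FROM policies").fetchone()[0]
```

In a production setting, each stage would typically be a separate activity orchestrated by Azure Data Factory, with the transform step running as a Databricks job.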
Must Haves:
5+ years of experience in data engineering roles.
Strong proficiency in Python, especially for ETL development.
Hands-on experience with Azure Data Factory and Databricks.
Solid understanding of cloud-based data architecture and workflows.
Plus:
Experience in the insurance or financial services industry.
Familiarity with Excess and Surplus lines data workflows.
Exposure to other Azure services such as Azure Synapse or Azure SQL.
Exact compensation may vary based on several factors, including skills, experience, and education.
Benefit packages for this role will start on the 1st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401k retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.