

Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer in New Jersey (hybrid, three days onsite); the contract length and pay rate are not listed. Key skills include Azure Data Factory, Databricks, and ETL processes, with preferred experience in the insurance domain.
Country: United States
Currency: $ USD
Day rate: Unknown
Date discovered: July 10, 2025
Project duration: Unknown
Location type: Hybrid
Contract type: Unknown
Security clearance: Unknown
Location detailed: New Jersey, United States
Skills detailed
#Azure Data Factory #Data Engineering #Data Mining #ETL (Extract, Transform, Load) #Deployment #Azure Databricks #Normalization #Scala #Agile #Spark (Apache Spark) #Documentation #Azure Analysis Services #BI (Business Intelligence) #Data Lake #MongoDB #Data Warehouse #Automation #Azure #Azure SQL #ADLS (Azure Data Lake Storage) #Microsoft Azure #Data Pipeline #PySpark #SQL (Structured Query Language) #Databricks #Data Modeling #dbt (data build tool) #SSAS (SQL Server Analysis Services) #ADF (Azure Data Factory)
Role description
Position: Data Engineer
Location: New Jersey, USA (Hybrid, Onsite 3 Days/Week)
Overview
We are seeking a skilled and motivated Data Engineer to join our client's team in NJ, working on-site three days a week. This role focuses on designing, building, and optimizing scalable data pipelines and business intelligence solutions within the Azure ecosystem.
Key Responsibilities:
• Design, develop, and maintain scalable data pipelines using Azure Data Factory and Azure Databricks.
• Build, transform, and optimize data across multiple layers: staging, bronze, silver, and gold zones (see the sketch after this list).
• Develop Databricks notebooks to process data from raw ingestion through to curated output layers.
• Implement robust ETL processes across both homogeneous and heterogeneous systems using Azure BI tools.
• Develop and deploy tabular models with Azure Analysis Services and automate deployments with Azure Automation Runbooks.
• Create and manage SSAS cubes (tabular and multidimensional), including KPIs, measures, aggregations, partitions, and data mining models.
• Collaborate with stakeholders to build data solutions that support actuarial analytics, pricing models, and broker/client-facing reporting tools.
• Maintain data warehouse environments and support the entire lifecycle from architecture through deployment and ongoing enhancements.
• Participate in Agile development teams, contributing to sprint planning, reviews, and collaborative problem-solving.
• Ensure high-quality documentation and adherence to SDLC practices.
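The staging/bronze/silver/gold layering above is the medallion architecture commonly used on Azure Databricks: bronze holds raw ingested records, silver holds cleaned and conformed data, and gold holds curated aggregates for BI. As a minimal sketch only (the Delta paths, table names, and columns such as policy_id and premium are hypothetical placeholders, not taken from this posting), a PySpark notebook cell promoting bronze data through silver to a gold aggregate might look like:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: raw ingested records, stored as-is (hypothetical path).
bronze = spark.read.format("delta").load("/mnt/datalake/bronze/policies")

# Silver: deduplicate, enforce types, and drop obviously bad rows.
silver = (
    bronze.dropDuplicates(["policy_id"])
          .withColumn("premium", F.col("premium").cast("double"))
          .filter(F.col("policy_id").isNotNull())
)
silver.write.format("delta").mode("overwrite").save("/mnt/datalake/silver/policies")

# Gold: curated aggregate layer consumed by BI / reporting tools.
gold = silver.groupBy("line_of_business").agg(
    F.sum("premium").alias("total_premium"),
    F.count("policy_id").alias("policy_count"),
)
gold.write.format("delta").mode("overwrite").save("/mnt/datalake/gold/premium_by_lob")
```

In practice each layer would typically be a separate notebook or job orchestrated by an Azure Data Factory pipeline, with incremental (merge) writes rather than full overwrites.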
Key Qualifications:
Technical Skills:
• Strong hands-on experience with the Microsoft Azure BI stack: Azure Data Factory (ADF), Azure Databricks, Azure Analysis Services (SSAS), Azure Data Lake Analytics, Azure Data Lake Store (ADLS)
• Azure Integration Runtime, Event Hubs, Stream Analytics, dbt
• Proficiency in database and processing technologies including Azure SQL, MongoDB, and PySpark
• Deep understanding of data modeling, normalization/denormalization techniques, and data warehouse architecture.
• Expertise in developing and maintaining ETL processes and managing multi-layered data environments.
Preferred Domain Experience:
• Prior experience in the insurance or actuarial domain is highly desirable.
• Familiarity with reinsurance broking data such as placements, treaty structures, renewal workflows, and client hierarchies.
• Understanding of actuarial rating inputs/outputs, including experience and exposure data, program layers, and data tags.
• Experience creating pipelines that support actuarial systems, pricing analytics, and reporting dashboards.