

Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer in Princeton, NJ (Hybrid – 3 Days Onsite) on a contract basis for over 6 months, offering a competitive pay rate. Key skills include Azure Data Factory, Databricks, and ETL processes, with preferred experience in the actuarial or insurance domains.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
600
-
🗓️ - Date discovered
July 4, 2025
🕒 - Project duration
More than 6 months
-
🏝️ - Location type
Hybrid
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Princeton, NJ
-
🧠 - Skills detailed
#Scrum #Azure Analysis Services #MongoDB #Databases #Spark (Apache Spark) #ETL (Extract, Transform, Load) #SQL Server #Azure Event Hubs #Programming #dbt (data build tool) #Data Lake #Azure cloud #SQL (Structured Query Language) #Databricks #SSAS (SQL Server Analysis Services) #ADF (Azure Data Factory) #Azure Databricks #Data Engineering #SSRS (SQL Server Reporting Services) #Azure #Agile #BI (Business Intelligence) #SSIS (SQL Server Integration Services) #Azure Stream Analytics #Cloud #Azure Data Factory #Data Architecture #Data Science #Microsoft Azure #Data Pipeline #ADLS (Azure Data Lake Storage) #PySpark #Data Quality #Base #Storage #Scala #Azure SQL
Role description
Job Title: Data Engineer
Location: Princeton, NJ (Hybrid – 3 Days Onsite)
Employment Type: Contract / Full-Time (NOT C2C)
Position Summary
We are looking for a highly skilled Data Engineer to design and develop scalable data solutions using Microsoft Azure technologies. The ideal candidate will have hands-on experience in building modern data pipelines, implementing BI solutions, and working with structured and unstructured data. This position requires a strong background in data engineering, Azure cloud services, and a collaborative mindset for working in cross-functional Agile teams.
Key Responsibilities
• Develop and maintain end-to-end data pipelines using Azure Data Factory, Azure Databricks, and PySpark.
• Design and implement robust data models supporting data staging (bronze), refined (silver), and curated (gold) layers.
• Perform ETL/ELT processing of data from multiple sources including flat files, relational databases, and cloud storage.
• Build and deploy tabular models using Azure Analysis Services (SSAS) and support reporting systems with optimized queries.
• Leverage Azure Data Lake Analytics and Azure Data Lake Store to manage and process large-scale data sets.
• Create and automate workflows using Azure Integration Runtime, Event Hubs, and Stream Analytics.
• Collaborate with analysts, data scientists, and business stakeholders to understand requirements and deliver effective data solutions.
• Write efficient and maintainable code in notebooks for data movement and transformation in Databricks.
• Implement data quality, validation, and governance best practices throughout the pipeline lifecycle.
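The bronze/silver/gold layering named in the responsibilities above is the "medallion" pattern common on Databricks. As a rough illustration of what those layers do, here is a minimal plain-Python sketch (stdlib only, so it is self-contained); in the actual role this would be PySpark DataFrames and Delta tables, and all field names and sample values below are hypothetical.

```python
# Conceptual sketch of the medallion (bronze/silver/gold) pattern.
# Bronze lands raw data, silver cleanses and deduplicates,
# gold produces a business-level aggregate.

raw_rows = [
    {"policy_id": "P-001", "premium": "1200.50", "region": "NJ "},
    {"policy_id": "P-002", "premium": "not-a-number", "region": "NY"},
    {"policy_id": "P-001", "premium": "1200.50", "region": "NJ "},  # duplicate
]

def to_bronze(rows):
    """Bronze layer: land the raw records as-is, with ingestion metadata."""
    return [{**r, "_ingested": True} for r in rows]

def to_silver(bronze):
    """Silver layer: type-cast, validate, trim, and deduplicate by key."""
    seen, out = set(), []
    for r in bronze:
        try:
            premium = float(r["premium"])
        except ValueError:
            continue  # drop rows that fail validation
        key = r["policy_id"]
        if key in seen:
            continue  # drop duplicate keys
        seen.add(key)
        out.append({"policy_id": key, "premium": premium,
                    "region": r["region"].strip()})
    return out

def to_gold(silver):
    """Gold layer: curated aggregate (total premium per region)."""
    totals = {}
    for r in silver:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["premium"]
    return totals

gold = to_gold(to_silver(to_bronze(raw_rows)))
print(gold)  # {'NJ': 1200.5}
```

The invalid and duplicate rows are filtered in the silver step, so only the clean P-001 record reaches the gold aggregate.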
Required Technical Skills
Cloud & Data Engineering:
• Azure Data Factory (ADF)
• Azure Databricks
• Azure Data Lake (ADLS, ADLA)
• Azure Analysis Services (SSAS)
• Azure Integration Runtime
• Azure Event Hubs
• Azure Stream Analytics
• DBT (Data Build Tool)
Database & Programming:
• Azure SQL
• MongoDB
• PySpark
ETL & Reporting Tools:
• SQL Server Integration Services (SSIS)
• SQL Server Reporting Services (SSRS)
Preferred Domain Experience
• Familiarity with data from actuarial or insurance domains, including:
  • Client hierarchies, treaty structures, renewal processes
  • Exposure and experience data, pricing models, and program structures
• Experience building pipelines that support analytics, pricing tools, and reporting
Team & Collaboration Skills
• Strong written and verbal communication skills with the ability to explain technical details to non-technical stakeholders.
• Experience working in Agile/Scrum development environments.
• Demonstrated ability to work both independently and in a collaborative team environment.
• Strong decision-making skills and the ability to manage priorities in high-pressure scenarios.
What You’ll Gain
• Opportunity to work on modern data architecture using the latest Azure technologies
• A dynamic and collaborative work culture focused on innovation and problem-solving
• Flexible hybrid work schedule with a base in Princeton, NJ
Skills: sql server integration services, azure analysis services, azure data lake, sql server reporting services, azure integration runtime, microsoft bi/azure bi solutions, azure data factory, azure sql, dbt, pyspark, azure stream analytics, azure databricks, azure event hubs, mongodb