JCW Group

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with a contract length of "unknown" and a pay rate of "unknown." It is a hybrid position in NYC, requiring expertise in Azure data services, Python, SQL, and experience with data governance and CI/CD practices.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 17, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York, NY
-
🧠 - Skills detailed
#Scala #Terraform #Microsoft Power BI #SQL (Structured Query Language) #Datasets #Data Architecture #Infrastructure as Code (IaC) #Strategy #Cloud #ETL (Extract, Transform, Load) #Batch #Data Transformations #PySpark #Azure #Data Quality #Data Pipeline #Data Governance #Monitoring #Data Processing #GIT #Spark (Apache Spark) #Data Strategy #BI (Business Intelligence) #Python #Data Engineering
Role description
JCW Group has partnered with a leading financial services firm to expand its data platform capabilities. They are seeking a Data Engineer to join their team in NYC (hybrid, 3 days/week onsite). In this role, you will design and implement scalable, cloud-native data solutions that support analytics, reporting, and real-time business insights.

Responsibilities:
• Build and maintain Azure and Microsoft Fabric data pipelines, including batch and streaming workflows using Data Factory, Event Hubs, Eventstreams, and Eventhouse
• Develop ETL/ELT processes, data transformations, and dimensional models for analytics consumption
• Support Power BI reporting with views, stored procedures, and semantic-ready datasets
• Implement data quality checks, monitoring, and automated workflows with serverless Azure Functions
• Manage Terraform-based Infrastructure as Code and CI/CD for data pipelines and environments
• Collaborate with Data Architects and stakeholders to ensure alignment with enterprise data strategy and governance

Requirements:
• Hands-on experience with Azure data services and Microsoft Fabric
• Strong Python, PySpark, and SQL skills for data processing and analytics
• Experience with event-driven architectures, incremental ingestion, and medallion/lakehouse patterns
• Familiarity with CI/CD, Git workflows, and Infrastructure as Code
• Knowledge of data governance practices

If this sounds like you, feel free to reach out!
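To give candidates a feel for the "incremental ingestion" pattern named in the requirements, here is a minimal, hedged sketch in plain Python of watermark-based incremental loading (the same idea Azure Data Factory applies with a high-watermark column). All names (`incremental_load`, `updated_at`) are illustrative, not part of the role's actual stack:

```python
# Sketch of watermark-based incremental ingestion: on each run, pull only
# rows modified since the last recorded watermark, then advance the watermark.
# Hypothetical names for illustration; a real pipeline would read from a
# source table and persist the watermark in a control table.

def incremental_load(source_rows, watermark):
    """Return rows newer than the watermark, plus the new watermark value."""
    new_rows = [r for r in source_rows if r["updated_at"] > watermark]
    new_watermark = max((r["updated_at"] for r in new_rows), default=watermark)
    return new_rows, new_watermark

source = [
    {"id": 1, "updated_at": "2026-03-01"},
    {"id": 2, "updated_at": "2026-03-10"},
    {"id": 3, "updated_at": "2026-03-15"},
]

# Only rows after the last watermark are picked up on this run.
rows, wm = incremental_load(source, "2026-03-05")
print(len(rows), wm)  # → 2 2026-03-15
```

In a medallion/lakehouse design, a load like this typically lands raw rows in the bronze layer, with cleansing and dimensional modeling applied downstream in silver and gold.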