

InvestM Technology LLC
Senior Data Engineer (Airflow/DBT/Openshift)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer in Jersey City, NJ, for 12+ months; the pay rate is not disclosed. Required skills include Apache Airflow, dbt Core, Kubernetes/OpenShift, Python, and SQL, with preferred experience in financial services and enterprise migrations.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 26, 2026
-
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Jersey City, NJ
-
🧠 - Skills detailed
#AutoScaling #Kubernetes #Documentation #SQL (Structured Query Language) #Data Engineering #Scala #Data Pipeline #Monitoring #Macros #Data Warehouse #Data Modeling #Datasets #GIT #Logging #Migration #Data Architecture #ETL (Extract, Transform, Load) #Data Processing #Oracle #Airflow #Batch #Python #dbt (data build tool) #Apache Airflow
Role description
Position: Senior Data Engineer – Airflow, dbt Core, Kubernetes/OpenShift
Location: Jersey City, NJ
Duration: 12 months + extension
Job Overview
We are seeking a highly skilled Senior Data Engineer with strong hands-on experience in Apache Airflow, dbt Core, and Kubernetes/OpenShift (on-prem environments).
This role is critical in building, operating, and optimizing scalable data pipelines supporting financial and accounting platforms, including enterprise system migrations and high-volume data processing workloads.
The ideal candidate will combine strategic thinking with hands-on execution, helping improve existing systems while driving new data engineering initiatives.
Key Responsibilities
Data Pipeline & Orchestration
• Design, develop, and maintain complex Airflow DAGs for batch and event-driven pipelines
• Implement best practices for scheduling, retries, SLA monitoring, and alerting (see the sketch after this list)
• Optimize Airflow components for high-concurrency workloads
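As a rough illustration of the orchestration work above, here is a minimal sketch assuming Airflow 2.x (2.4+ for the schedule parameter); the DAG name, schedule, and alert hook are hypothetical placeholders, not details from this posting.

from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python import PythonOperator

def notify_on_failure(context):
    # Placeholder alert hook: forward the failed task's identity to an
    # alerting channel (PagerDuty, Slack, email, etc.).
    print(f"Task failed: {context['task_instance_key_str']}")

default_args = {
    "owner": "data-engineering",
    "retries": 3,                              # retry transient failures
    "retry_delay": timedelta(minutes=5),
    "sla": timedelta(hours=1),                 # flag tasks that run long
    "on_failure_callback": notify_on_failure,  # alert on terminal failure
}

with DAG(
    dag_id="daily_ledger_batch",   # hypothetical pipeline name
    start_date=datetime(2026, 1, 1),
    schedule="0 2 * * *",          # nightly batch at 02:00
    catchup=False,
    default_args=default_args,
) as dag:
    load = PythonOperator(task_id="load", python_callable=lambda: None)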
dbt Core & Data Modeling
• Lead dbt Core implementation, including project structure and CI/CD integration (see the sketch after this list)
• Develop and maintain scalable dbt models (staging, intermediate, marts)
• Implement testing, documentation, and performance optimization
• Optimize transformations for large-scale datasets
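One way dbt Core gets wired into orchestration or CI/CD is via a shell task; a minimal sketch follows, where the project path and selector are hypothetical assumptions. `dbt build` compiles and runs the selected models together with their tests, in dependency order.

from airflow.operators.bash import BashOperator

# (Belongs inside a DAG context, e.g. the one from the earlier sketch.)
# `+marts.finance` selects the finance marts plus all upstream
# staging/intermediate models; `dbt build` runs models, tests,
# snapshots, and seeds together.
dbt_build = BashOperator(
    task_id="dbt_build_finance_marts",
    bash_command=(
        "cd /opt/dbt/analytics && "           # hypothetical project path
        "dbt build --select +marts.finance "  # marts + upstream layers
        "--target prod"
    ),
)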
Kubernetes / OpenShift
• Deploy and manage data workloads on Kubernetes/OpenShift (on-prem)
• Implement workload distribution, autoscaling, and resource optimization strategies (illustrated below)
• Troubleshoot container-level performance and resource issues
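Below is a minimal sketch of pinning resource requests and limits on a containerized task via Airflow's KubernetesPodOperator, assuming a recent cncf-kubernetes provider; the namespace and image are hypothetical. The same V1ResourceRequirements shape applies to any pod spec on OpenShift.

from kubernetes.client import models as k8s
from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator

transform = KubernetesPodOperator(
    task_id="transform_batch",
    name="transform-batch",
    namespace="data-pipelines",         # hypothetical namespace
    image="registry.internal/etl:1.4",  # hypothetical image
    # Explicit requests/limits let the scheduler bin-pack nodes sensibly
    # and keep one runaway task from starving its neighbors.
    container_resources=k8s.V1ResourceRequirements(
        requests={"cpu": "500m", "memory": "1Gi"},
        limits={"cpu": "2", "memory": "4Gi"},
    ),
    get_logs=True,  # stream container logs back into Airflow task logs
)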
Performance & Reliability
• Monitor and optimize end-to-end pipeline performance
• Identify bottlenecks across orchestration, queries, and infrastructure
• Implement logging, monitoring, and alerting frameworks (see the sketch after this list)
• Ensure high availability and resiliency of data pipelines
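As one illustrative (not prescriptive) pattern for the monitoring bullet above: a small timing decorator that emits structured logs and a warning when a step exceeds its latency budget. A real deployment would ship these signals to Prometheus/Grafana or a similar stack; the step name below is hypothetical.

import functools
import logging
import time

logger = logging.getLogger("pipeline.metrics")

def timed(budget_secs: float):
    """Log each step's duration; warn when it exceeds its budget."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            try:
                return fn(*args, **kwargs)
            finally:
                elapsed = time.monotonic() - start
                logger.info("step=%s duration_s=%.1f", fn.__name__, elapsed)
                if elapsed > budget_secs:
                    # Hook point: page or alert here instead of just logging.
                    logger.warning("step=%s exceeded %.0fs budget",
                                   fn.__name__, budget_secs)
        return wrapper
    return decorator

@timed(budget_secs=300)
def load_ledger_batch():
    ...  # hypothetical load step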
Collaboration & Stakeholder Management
• Partner with data architects, platform engineers, and business teams
• Support financial, accounting, and regulatory data use cases
• Enforce data engineering standards and governance
Required Skills & Qualifications
• 8-10+ years of experience in data engineering or related roles
• Strong expertise in Apache Airflow (DAG development & optimization)
• Hands-on experience with dbt Core (modeling, testing, macros)
• Strong proficiency in Python and SQL
• Deep understanding of Kubernetes and/or OpenShift
• Experience working with on-prem or hybrid environments
• Strong communication and problem-solving skills
Preferred Qualifications
• Experience in financial services or accounting platforms
• Exposure to enterprise system migrations
• Experience with data warehouses such as Oracle
• Familiarity with CI/CD pipelines and Git-based workflows