

Aptonet Inc
Lead Data Engineer (No C2C)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Lead Data Engineer in New York, offering a contract of unspecified length at a competitive pay rate. Requires 8+ years in data engineering, strong Snowflake and Airflow expertise, and excellent leadership and communication skills.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 4, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Not specified (role title states No C2C)
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York City Metropolitan Area
-
🧠 - Skills detailed
#BI (Business Intelligence) #GCP (Google Cloud Platform) #Leadership #Data Ingestion #Schema Design #SQL (Structured Query Language) #Apache Airflow #Data Pipeline #ETL (Extract, Transform, Load) #Data Governance #AWS (Amazon Web Services) #Dimensional Modelling #Data Science #Data Warehouse #Version Control #Storage #Vault #Documentation #Observability #Security #Cloud #Data Processing #Automated Testing #Data Quality #Monitoring #Snowflake #Java #Scala #Programming #Python #Logging #Batch #Azure #Airflow #Data Vault #Data Engineering
Role description
Title: Lead Data Engineer
Location: New York (Onsite)
The Role
As the Lead Data Engineer, you will:
• Own the design, implementation, and operation of our core data platform (warehouse, pipelines, ingestion, CDC).
• Lead data platform projects end to end, from scoping and design through delivery and operation.
• Partner with analytics, data science, product, and business stakeholders to translate business needs into data solutions.
• Drive architecture decisions and roadmap for data ingestion, transformation, storage, and serving layers.
• Ensure our platform supports both batch and near-real-time data flows — particularly around change-data-capture for source systems.
• Design and maintain orchestration workflows with Apache Airflow, ensuring reliability, observability, and alerting (a minimal DAG sketch follows this list).
• Develop and manage our data warehouse in Snowflake including modeling, performance tuning, security and governance.
• Define and maintain data quality, lineage, catalog, and documentation practices.
• Establish and track key metrics around data availability, latency, reliability, cost, and processing throughput.
• Champion data engineering best practices: version control, CI/CD, automated testing, monitoring, and cost-efficient cloud usage.
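To make the orchestration expectations concrete, below is a minimal sketch of an Airflow DAG with retries, failure alerting, and catchup enabled for back-fills. The DAG id, schedule, task bodies, and alert address are hypothetical and only illustrate the pattern, not this team's actual workflows.

# Hypothetical Airflow DAG sketch; names and schedule are illustrative only.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

default_args = {
    "owner": "data-platform",
    "retries": 2,                          # retry transient failures
    "retry_delay": timedelta(minutes=5),
    "email": ["data-alerts@example.com"],  # hypothetical alert address
    "email_on_failure": True,              # alert when a task fails
}

def extract_changes(**context):
    """Placeholder: pull CDC records from the source system."""

def load_warehouse(**context):
    """Placeholder: merge staged changes into the warehouse."""

with DAG(
    dag_id="cdc_ingestion_hourly",
    start_date=datetime(2025, 1, 1),
    schedule="@hourly",   # near-real-time batch cadence
    catchup=True,         # allows back-fills over missed intervals
    default_args=default_args,
) as dag:
    extract = PythonOperator(task_id="extract_changes", python_callable=extract_changes)
    load = PythonOperator(task_id="load_warehouse", python_callable=load_warehouse)
    extract >> load       # dependency: load runs only after extract succeeds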
Key Responsibilities
• Architect and lead implementation of a scalable cloud data platform built on Snowflake for analytics, reporting and advanced use cases.
• Build robust ingestion pipelines using CDC techniques to capture changes from transactional systems into the data warehouse or lake (see the MERGE sketch after this list).
• Develop and maintain orchestration workflows in Airflow: schedule management, complex DAGs, dependency handling, back-fills, recoverability, alerting.
• Implement data modeling (star/snowflake schemas, data vault, dimensional modeling) aligned with business domains and evolving requirements.
• Manage data pipeline lifecycles: ingestion → transformation → storage → serving, ensuring SLAs for latency, throughput, and reliability.
• Collaborate cross-functionally with analytics, BI, data science, operations, product and business units to deliver data products.
• Establish and enforce data governance, security and privacy controls: role-based access, masking, encryption, audit logging.
• Monitor platform health: pipeline failures, resource usage, cost, data quality issues; respond and remediate proactively.
• Stay abreast of new data technologies and drive continuous improvement in tooling, architecture, and processes.
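As a concrete illustration of the CDC responsibility above, the sketch below applies a staged batch of changes to a Snowflake target table with a MERGE statement, using the snowflake-connector-python library. The account, table, and column names are hypothetical, not taken from the posting.

# Hypothetical CDC apply step; connection details and object names are illustrative.
import snowflake.connector

MERGE_SQL = """
MERGE INTO analytics.orders AS tgt
USING staging.orders_changes AS src
  ON tgt.order_id = src.order_id
WHEN MATCHED AND src.op = 'D' THEN DELETE            -- source row was deleted
WHEN MATCHED THEN UPDATE SET
  tgt.status = src.status, tgt.updated_at = src.updated_at
WHEN NOT MATCHED AND src.op <> 'D' THEN INSERT
  (order_id, status, updated_at)
  VALUES (src.order_id, src.status, src.updated_at)
"""

def apply_cdc_batch() -> None:
    # Credentials would come from a secrets manager in practice, not code.
    conn = snowflake.connector.connect(
        account="my_account", user="etl_user", password="...",  # placeholders
        warehouse="ETL_WH", database="PROD",
    )
    try:
        # Apply inserts, updates, and deletes from the staged batch in one statement.
        conn.cursor().execute(MERGE_SQL)
    finally:
        conn.close()

An upstream task would land the raw change records, including an operation flag such as src.op here, in the staging table before this step runs.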
Required Qualifications
• 8+ years of experience in data engineering, with at least 2-3 years in a senior or lead capacity (including ownership of architecture decisions).
• Strong hands-on experience with Snowflake: schema design, performance optimization, security (see the governance sketch after this list).
• Proven experience implementing and managing CDC (change-data-capture) ingestion pipelines from transactional systems.
• Proven experience using Airflow to build and maintain complex data workflows.
• Strong programming skills (Python, SQL, or Scala/Java) and proficiency with data processing frameworks.
• Deep understanding of data modeling (relational, dimensional, data vault) and ETL/ELT design patterns.
• Experience working in cloud environments (AWS, Azure or GCP) and managing cost-effective data infrastructure.
• Excellent communication skills: able to interact with both technical and non-technical stakeholders and translate business requirements into technical solutions.
• Excellent leadership skills: mentoring, setting standards, building high-performing teams, driving accountability and results.
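For the Snowflake security qualification above, the sketch below shows the flavor of governance controls involved: a column masking policy plus a role-based grant, issued over an open snowflake-connector connection. All role, table, and policy names are hypothetical.

# Hypothetical governance statements; roles, tables, and policy names are illustrative.
GOVERNANCE_SQL = [
    # Only the PII-cleared role sees raw values; everyone else sees a mask.
    """CREATE MASKING POLICY IF NOT EXISTS pii_mask AS (val STRING)
       RETURNS STRING ->
       CASE WHEN CURRENT_ROLE() IN ('ANALYST_PII') THEN val
            ELSE '***MASKED***' END""",
    "ALTER TABLE analytics.customers MODIFY COLUMN email SET MASKING POLICY pii_mask",
    # Role-based access: analysts may read, not write.
    "GRANT SELECT ON TABLE analytics.customers TO ROLE ANALYST",
]

def apply_governance(conn) -> None:
    # conn is an open snowflake.connector connection (see the CDC sketch above).
    cur = conn.cursor()
    for stmt in GOVERNANCE_SQL:
        cur.execute(stmt)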






