

AVP Data Engineering
Featured Role | Apply directly with Data Freelance Hub
This role is for an AVP Data Engineering on a contract-to-hire basis in Warren, NJ, offering $80-$90/hr. Requires 10+ years in data engineering, expertise in Databricks, Azure, and Airflow, plus strong leadership skills. In-person interview mandatory.
Country
United States
Currency
$ USD
-
Day rate
720
-
Date discovered
August 13, 2025
Project duration
Unknown
-
Location type
Hybrid
-
Contract type
W2 Contractor
-
Security clearance
Unknown
-
Location detailed
Warren, NJ
-
Skills detailed
#Storage #Data Pipeline #Azure Data Platforms #Data Storage #Azure #Data Processing #Programming #Spark SQL #ADLS (Azure Data Lake Storage) #Data Integration #AI (Artificial Intelligence) #Data Design #ETL (Extract, Transform, Load) #Jira #Data Security #Security #Data Privacy #Data Science #Delta Lake #Python #PySpark #Data Engineering #Erwin #Computer Science #Azure DevOps #Databricks #Data Orchestration #ADF (Azure Data Factory) #Data Management #Data Quality #Spark (Apache Spark) #Airflow #Scala #SQL (Structured Query Language) #Kafka (Apache Kafka) #Data Modeling #Data Ingestion #Database Schema #Compliance #Leadership #Data Governance #DevOps #Data Architecture #Agile
Role description
Our client is hiring an AVP, Data Engineering on a contract-to-hire basis.
Job ID 83104
This is a contract-to-hire role.
Location: Warren, NJ (hybrid: 3 days onsite, 2 days remote). Candidates must be local to Warren, NJ; an in-person interview is required.
Seeking an AVP, Data Engineering to join the Information Technology team at the US headquarters in Warren, New Jersey.
Pay Rate is $80-$90/hr W2.
• AVP, Data Engineering (Engineer Role):
• This role is focused primarily on the delivery of data solutions.
• It is a lead, hands-on role centered on building data pipelines.
• Airflow and data orchestration experience is required.
• The candidate should have strong experience in building data pipelines and in data development.
• They will also contribute to the overall platform architecture and design from the delivery side.
• Key Technical Skills: strong knowledge of Databricks and Azure data technologies; the client specifically called out orchestration tools such as Airflow for the engineer role.
This role is responsible for the design, development, and optimization of data ingestion/transformation pipelines, domain data models, and systems that support the acquisition, storage, transformation, and analysis of large volumes of data. You will collaborate with cross-functional teams, including analysts, software engineers, data management, and operations teams, to ensure the availability, reliability, and integrity of data for various business needs, including AI. The candidate should have strong experience and an established track record of building data domains and data solutions leveraging Databricks and Azure data platforms. The candidate must possess strong hands-on data design and engineering skills, analytical skills, communication skills, leadership, and stakeholder management.
Responsibilities:
• Deliver data engineering initiatives and capabilities, and ensure the solutions delivered meet high standards for quality, performance, and scalability.
• Design, develop, and maintain data pipelines for ingesting, transforming, and loading data from various sources into a unified data platform. Ensure data quality and integrity throughout the process.
• Develop transformation processes to clean, aggregate, and enrich raw data, ensuring it is in the appropriate format for downstream analysis and consumption. Integrate data from diverse sources to provide unified data domains.
• Design and implement efficient data models and database schemas that support the storage and retrieval of structured and unstructured data. Optimize data storage and access for performance and scalability.
• Apply modern data processing principles to streamline data ingestion/transformation processes. Leverage modern data pipeline tools to reduce manual intervention and ensure the efficiency and reliability of data ingestion and processing.
• Collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to understand data requirements and deliver effective data solutions. Document data engineering processes, data flows, and system configurations.
• Stay current with the latest trends, tools, and technologies in data engineering. Proactively identify opportunities to improve data engineering practices and contribute to the evolution of the data infrastructure. Mentor and coach the engineering team to raise skill levels.
• Implement data quality controls and validation processes to identify and rectify data anomalies, inconsistencies, and errors. Collaborate with stakeholders to define and enforce data governance standards and policies.
• Identify performance bottlenecks in data pipelines and optimize queries, data structures, and infrastructure configurations to improve overall system performance and scalability.
• Implement appropriate security measures to protect sensitive data and ensure compliance with data privacy regulations. Monitor and address data security vulnerabilities and risks.
• Leverage SAFe Agile practices to plan, prioritize, and deliver data engineering initiatives.
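To make the data-quality responsibility above concrete, here is a minimal sketch in plain Python of the kind of validation gate a pipeline might run before loading records. The field names and rules are hypothetical illustrations, not taken from this posting:

```python
# Minimal data-quality gate: flag records that fail required-field and
# range checks before they are loaded downstream. Field names and rules
# below are hypothetical examples, not from the job posting.

REQUIRED_FIELDS = ("policy_id", "claim_amount")

def validate_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for field in REQUIRED_FIELDS:
        if record.get(field) in (None, ""):
            problems.append(f"missing required field: {field}")
    amount = record.get("claim_amount")
    if isinstance(amount, (int, float)) and amount < 0:
        problems.append("claim_amount must be non-negative")
    return problems

def split_valid_invalid(records):
    """Partition records into (valid, quarantined-with-reasons)."""
    valid, quarantined = [], []
    for rec in records:
        problems = validate_record(rec)
        if problems:
            quarantined.append((rec, problems))
        else:
            valid.append(rec)
    return valid, quarantined

if __name__ == "__main__":
    rows = [
        {"policy_id": "P-1", "claim_amount": 1200.0},
        {"policy_id": "", "claim_amount": -5.0},
    ]
    good, bad = split_valid_invalid(rows)
    print(len(good), len(bad))  # 1 1
```

In a real Databricks/PySpark pipeline the same idea would typically be expressed as DataFrame expectations or constraint checks, with failing rows routed to a quarantine table rather than dropped.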
Qualifications:
• Bachelor's degree (required) or master's degree (preferred) in Computer Science, Information Systems, or a related field.
• 10+ years of overall technology experience in data engineering, data architecture, and the design and development of data solutions and services.
• 5+ years of successful team leadership experience at an enterprise level.
• Strong experience in data engineering, including data pipeline development, ETL/ELT processes, and data modeling.
• Strong experience with programming languages and tools such as Python, PySpark, and SQL.
• In-depth experience with Databricks, Medallion/Delta Lake architecture, and Azure data technologies such as ADLS, ADF, Fabric, SQL, and Power BI.
• Strong experience with orchestration tools such as Airflow and event-driven data integration tools such as Kafka.
• Data modeling with tools like Erwin, and knowledge of insurance industry standards (e.g., ACORD) and insurance data (policy, claims, underwriting, etc.).
• Hands-on knowledge of current technology standards and trends, coupled with a desire to continually expand personal knowledge/skills and mentor teams to raise technical competency.
• Strong experience leading data teams with standards, guidelines, and development practices that deliver consistent results.
• Strong thought leadership in pursuit of modern data architecture principles and technology modernization.
• Experience deploying pipelines via Azure DevOps, including code review, branching strategies, and approvals.
• Self-starter with strong communication and collaboration skills to work effectively with cross-functional teams.
• Ability to establish and maintain relationships with other business and technology leaders, and to manage multiple priorities and meet deadlines in a fast-paced environment.
• Excellent problem-solving skills and attention to detail.
• Recent experience working in a Scaled Agile environment with Agile tools, e.g., Jira, Confluence, etc.
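The orchestration experience the role calls for boils down to running tasks in dependency order, which is the core of any DAG scheduler such as Airflow. A minimal standard-library sketch of that idea (task names are hypothetical, and this is not Airflow's actual API):

```python
# Sketch of DAG-style orchestration using only the standard library:
# each task runs after all of its upstream dependencies complete,
# mirroring the ingest -> transform -> load ordering described in the
# role. Task names are hypothetical; this is not Airflow's API.
from graphlib import TopologicalSorter

# Map each task to the tasks it depends on (its upstreams).
dag = {
    "ingest_policies": set(),
    "ingest_claims": set(),
    "transform_domain": {"ingest_policies", "ingest_claims"},
    "load_warehouse": {"transform_domain"},
}

def run(dag):
    """Execute tasks in a valid topological order and return that order."""
    order = list(TopologicalSorter(dag).static_order())
    for task in order:
        print(f"running {task}")
    return order

if __name__ == "__main__":
    run(dag)
```

Airflow expresses the same dependencies with operators and `>>` chaining, and adds scheduling, retries, and monitoring on top of this ordering logic.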