

Lead Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Lead Data Engineer in Jersey City, NJ, with a contract length of unspecified duration, offering a competitive pay rate. Requires 10+ years of experience in Data Engineering, proficiency in Snowflake, SQL, Python, and AWS, plus insurance industry knowledge.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
August 21, 2025
Project duration
Unknown
Location type
On-site
Contract type
Unknown
Security clearance
Unknown
Location detailed
Jersey City, NJ
Skills detailed
#Aurora #Data Vault #DevOps #Storage #Data Engineering #ETL (Extract, Transform, Load) #Leadership #Data Modeling #Data Quality #Debugging #Data Ingestion #SQL (Structured Query Language) #AWS (Amazon Web Services) #Migration #Python #Data Processing #Big Data #PySpark #Programming #Cloud #S3 (Amazon Simple Storage Service) #AWS Glue #Snowflake #Data Storage #Jenkins #Security #Agile #Data Security #GIT #Data Migration #Spark (Apache Spark) #Aurora RDS #Vault #Compliance #RDS (Amazon Relational Database Service) #Data Pipeline #Data Governance #Computer Science
Role description
Job Title: Lead Data Engineer
Location: Jersey City, NJ
Key Skills: Snowflake, SQL, Python, Spark, AWS Glue, Big Data Concepts
Responsibilities:
• Lead the design, development, and implementation of data solutions using AWS and Snowflake.
• Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
• Develop and maintain data pipelines, ensuring data quality, integrity, and security.
• Optimize data storage and retrieval processes to support data warehousing and analytics.
• Provide technical leadership and mentorship to junior data engineers.
• Work closely with stakeholders to gather requirements and deliver data-driven insights.
• Ensure compliance with industry standards and best practices in data engineering.
• Utilize knowledge of insurance, particularly claims and loss, to enhance data solutions.
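The pipeline responsibilities above emphasize data quality checks before data reaches the warehouse. A minimal sketch of such a check in Python, using sqlite3 as a stand-in for Snowflake (the `claims` table and its columns are hypothetical, chosen to match the insurance domain mentioned in this posting):

```python
import sqlite3

def check_data_quality(conn, table, required_cols):
    """Basic quality checks: table is non-empty and required columns have no NULLs."""
    issues = []
    if conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0] == 0:
        issues.append(f"{table}: table is empty")
    for col in required_cols:
        n = conn.execute(
            f"SELECT COUNT(*) FROM {table} WHERE {col} IS NULL"
        ).fetchone()[0]
        if n:
            issues.append(f"{table}.{col}: {n} NULL value(s)")
    return issues

# Hypothetical claims data with one bad record.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (claim_id TEXT, loss_amount REAL)")
conn.executemany("INSERT INTO claims VALUES (?, ?)",
                 [("C1", 1200.0), ("C2", None), ("C3", 850.5)])
print(check_data_quality(conn, "claims", ["claim_id", "loss_amount"]))
# → ['claims.loss_amount: 1 NULL value(s)']
```

In production such checks would typically run as a pipeline step (e.g., in AWS Glue) before loading into Snowflake, failing the job or quarantining rows when issues are found.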
Must have:
• 10+ years of relevant experience in Data Engineering and delivery.
• 10+ years of relevant work experience with Big Data concepts, including cloud implementations.
• Strong experience with SQL, Python, and PySpark.
• Good understanding of data ingestion and data processing frameworks.
• Good experience with Snowflake, SQL, and AWS (Glue, EMR, S3, Aurora, RDS, AWS architecture).
• Good aptitude, strong problem-solving abilities, analytical skills, and willingness to take ownership as appropriate.
• Able to code, debug, performance-tune, and deploy applications to the production environment.
• Experience working in Agile methodologies.
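To illustrate the SQL-plus-Python skill set listed above, here is a small extract-transform-load step in plain Python, with sqlite3 standing in for the Snowflake/AWS stack (the `policies` table and field names are hypothetical):

```python
import sqlite3

# Extract: raw records as they might arrive from an upstream source.
raw_rows = [
    {"policy": "P-100", "premium": "1250.50"},
    {"policy": "p-101", "premium": "980.00"},
    {"policy": "P-100", "premium": "1250.50"},  # duplicate record
]

# Transform: normalize keys, cast types, drop duplicates.
seen, clean_rows = set(), []
for r in raw_rows:
    key = r["policy"].upper()
    if key not in seen:
        seen.add(key)
        clean_rows.append((key, float(r["premium"])))

# Load: write into a warehouse table, then aggregate with SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE policies (policy TEXT PRIMARY KEY, premium REAL)")
conn.executemany("INSERT INTO policies VALUES (?, ?)", clean_rows)
total = conn.execute("SELECT SUM(premium) FROM policies").fetchone()[0]
print(total)  # → 2230.5
```

At the scale this role describes, the transform stage would more likely be a PySpark job on EMR or Glue, but the extract/transform/load shape is the same.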
Good to have:
• Experience with DevOps tools (Jenkins, Git, etc.) and practices, including continuous integration and delivery (CI/CD) pipelines.
• Experience with cloud implementations, data migration, Data Vault 2.0, etc.
Requirements:
• Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
• Proven experience as a Data Engineer, with a focus on AWS and Snowflake.
• Strong understanding of data warehousing concepts and best practices.
• Excellent communication skills, with the ability to convey complex technical concepts to non-technical stakeholders.
• Experience in the insurance industry, preferably with knowledge of claims and loss processes.
• Proficiency in SQL, Python, and other relevant programming languages.
• Strong problem-solving skills and attention to detail.
• Ability to work independently and as part of a team in a fast-paced environment.
Preferred Qualifications:
• Experience with data modeling and ETL processes.
• Familiarity with data governance and data security practices.
• Certification in AWS or Snowflake is a plus.