Lead Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Lead Data Engineer on a contract-to-hire arrangement, based on-site in Jersey City, NJ. It requires 10+ years in data engineering, expertise in Snowflake, SQL, Python, and AWS, and experience in the insurance industry.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
-
πŸ—“οΈ - Date discovered
September 25, 2025
πŸ•’ - Project duration
Unknown
-
🏝️ - Location type
On-site
-
πŸ“„ - Contract type
Fixed Term
-
πŸ”’ - Security clearance
Unknown
-
πŸ“ - Location detailed
Jersey City, NJ
-
🧠 - Skills detailed
#Security #RDS (Amazon Relational Database Service) #AWS Glue #Migration #Data Vault #Compliance #Storage #Data Engineering #AWS (Amazon Web Services) #Data Governance #Data Pipeline #Data Security #Data Storage #Data Migration #DevOps #Data Ingestion #Aurora #Computer Science #Jenkins #Data Modeling #ETL (Extract, Transform, Load) #Data Processing #Programming #S3 (Amazon Simple Storage Service) #SQL (Structured Query Language) #Agile #Leadership #PySpark #Snowflake #Big Data #Data Quality #Debugging #Spark (Apache Spark) #Cloud #Aurora RDS #GIT #Python #Vault
Role description
Contract-to-hire role
Job Title: Lead Data Engineer
Location: Jersey City (on-site)
Key Skills: Snowflake, SQL, Python, Spark, AWS Glue, big data concepts
Responsibilities:
• Lead the design, development, and implementation of data solutions using AWS and Snowflake.
• Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
• Develop and maintain data pipelines, ensuring data quality, integrity, and security.
• Optimize data storage and retrieval processes to support data warehousing and analytics.
• Provide technical leadership and mentorship to junior data engineers.
• Work closely with stakeholders to gather requirements and deliver data-driven insights.
• Ensure compliance with industry standards and best practices in data engineering.
• Apply knowledge of insurance, particularly claims and loss, to enhance data solutions.
Must have:
• 10+ years of relevant experience in data engineering and delivery.
• 10+ years of relevant work experience with big data concepts, including cloud implementations (AWS in particular).
• Strong experience as a hands-on lead for a team of at least 4 members.
• Strong experience with SQL, Python, and PySpark.
• Good understanding of data ingestion and data processing frameworks.
• Good experience with Snowflake, SQL, and AWS (Glue, EMR, S3, Aurora, RDS, AWS architecture).
• Good aptitude, good communication skills, strong problem-solving and analytical skills, and the ability to take ownership as appropriate.
• Able to code, debug, performance-tune, and deploy applications to the production environment.
• Experience working in Agile methodology.
Good to have:
• Experience with DevOps tools (Jenkins, Git, etc.) and practices, including continuous integration and delivery (CI/CD) pipelines.
• Experience with cloud implementations, data migration, Data Vault 2.0, etc.
Requirements:
• Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
• Proven experience as a Data Engineer, with a focus on AWS and Snowflake.
• Strong understanding of data warehousing concepts and best practices.
• Excellent communication skills, with the ability to convey complex technical concepts to non-technical stakeholders.
• Experience in the insurance industry, preferably with knowledge of claims and loss processes.
• Proficiency in SQL, Python, and other relevant programming languages.
• Strong problem-solving skills and attention to detail.
• Ability to work independently and as part of a team in a fast-paced environment.
Preferred Qualifications:
• Experience with data modeling and ETL processes.
• Familiarity with data governance and data security practices.
• Certification in AWS or Snowflake is a plus.