

Saicon
Lead AWS Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Lead AWS Data Engineer, a 6-month contract position with a pay rate of "$X/hour". Key skills include AWS, Databricks, data modeling, and data integration. A Bachelor's degree in Computer Science or related field is required.
Country: United States
Currency: $ USD
Day rate: 760
Date: November 25, 2025
Duration: Unknown
Location: Unknown
Contract: Unknown
Security: Unknown
Location detailed: Livingston, NJ
Skills detailed
#Data Catalog #Snowflake #RDBMS (Relational Database Management System) #Cloud #SQL (Structured Query Language) #Data Quality #Data Science #Jenkins #Strategy #Data Modeling #Jira #AWS (Amazon Web Services) #Data Lake #GitHub #NoSQL #Databricks #Computer Science #Security #Terraform #ETL (Extract, Transform, Load) #Data Engineering #Data Analysis #Compliance #Data Integration #Scala #SQL Server #Data Ingestion #Oracle #Batch #MongoDB #Replication
Role description
We're looking for a Lead Data Engineer to drive the design and delivery of a modern data hub and marketplace that powers analytics across the organization. In this role, you'll build scalable data integration solutions, partner closely with source system owners, and provide curated, high-quality data to analysts, data scientists, and downstream applications. You'll help define data models, improve data quality, and shape the strategy for a large contact-center data ecosystem. This role is ideal for someone who excels at complex data engineering, enjoys cross-team collaboration, and brings deep expertise in AWS and Databricks.
What You'll Do
• Lead the engineering of a service data hub that supports data ingestion, transformation, and curated delivery to internal data consumers.
• Build and maintain enterprise data models across data lakes, warehouses, and analytic layers.
• Define integration rules and data acquisition strategies for streaming, replication, and batch pipelines.
• Conduct deep data analysis to validate sources, understand lineage, and assess usability for key business use cases.
• Establish and maintain enterprise data taxonomy and support data catalog initiatives.
• Collaborate with data engineers to define workflows, data quality checks, and scalable pipelines.
• Communicate clearly with technical and business stakeholders, providing mentorship and architectural guidance.
What You Bring
• Advanced hands-on experience with cloud-based data engineering, especially AWS and Databricks (Snowflake is also a plus).
• Background with RDBMS platforms (Oracle, SQL Server) and familiarity with NoSQL (e.g., MongoDB).
• Experience building ingestion pipelines for both structured and unstructured data.
• Solid understanding of data modeling, governance, quality, compliance, and security.
• Experience working with GitHub, JIRA, Confluence, and CI/CD pipelines (Jenkins, Terraform, etc.).
• Ability to troubleshoot complex systems and communicate effectively with both business and technical teams.
• Bachelor's degree in Computer Science, IT, or related field.






