

Pure Talent Consulting
AWS Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Data Engineer with an unspecified contract length and an unspecified hourly pay rate. Key skills include AWS data tools, SQL, and Python. Requires a Bachelor's degree and 5+ years of experience in data platforms and architecture.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: January 15, 2026
Duration: Unknown
Location: Unknown
Contract: Unknown
Security: Unknown
Location detailed: United States
Skills detailed: #Business Analysis #Athena #IAM (Identity and Access Management) #Docker #Cloud #Linux #Microsoft SQL #Databases #Security #API (Application Programming Interface) #Deployment #BI (Business Intelligence) #AWS (Amazon Web Services) #Automation #Infrastructure as Code (IaC) #Programming #Microsoft SQL Server #Data Science #ETL (Extract, Transform, Load) #Spark (Apache Spark) #Data Modeling #PySpark #Python #AI (Artificial Intelligence) #ML (Machine Learning) #Pytest #SQL (Structured Query Language) #MS SQL (Microsoft SQL Server) #Data Governance #Scala #GIT #Containers #Data Engineering #S3 (Amazon Simple Storage Service) #Lambda (AWS Lambda) #Data Warehouse #DMS (Data Migration Service) #SQL Server #Data Lake
Role description
No C2C!! No sponsorship available for this role.
AWS Data Engineer:
The Data Engineer will be responsible for the design, development, implementation, and continuous improvement of the reporting platform and architecture. This position owns Business Intelligence (BI) and Data Analytics, providing high-value, timely data solutions to internal and external data stakeholders. Regular collaboration with business, technology, and software development teams will ensure we exceed expectations for data, information, and reporting.
Essential Duties and Responsibilities:
• Can design, extend, and document a modern ELT lakehouse for all company data: S3 Parquet medallion layers, Glue Catalog, clean architecture/docs, flows, and modeling artifacts.
• AWS-first engineering with deep platform skills, not just AWS console work: Glue (PySpark), DMS CDC, AppFlow, Step Functions, Lambda, IAM, CDK.
• Strong with native AWS (NAWS) CI/CD and IaC: can build, extend, and debug pipelines and CDK stacks in single- or multi-account deployments (CodeCommit, CodePipeline, CodeBuild, etc.).
• Owns source-system changes and cross-system integration end to end: schema alignment, SCD2, filling gaps between systems with no shared keys, building composite business keys, entity resolution, and shaping normalized/dimensional Gold models (dim/fact) and a denormalized reporting layer (rpt). A minimal SCD2 sketch follows this list.
• A developer with a proper dev workflow: WSL/Linux, Docker Glue containers, pytest/Spark testing, mypy, ruff, Git flow, and CI/CD pipelines.
• Understands data governance and security patterns and can help harden the platform: Lake Formation permissions, table/column-level controls, catalog organization, and cross-account access patterns.
• Can design normalized → denormalized layers in Gold for BI, apps, and reporting: entity modeling, dimensional design, surrogate keys, star schemas, and wide reporting views. A surrogate-key sketch also follows this list.
• A forward-looking engineer who can grow the platform into streaming and AI: Kinesis/MSK ingestion, Delta/Iceberg, ML-driven enrichment, and scalable Gold models.
• Data Warehouse/Data Lake architecture and development.
• Data modeling and architecture.
• Design and maintain the architecture of reporting platforms, including QuickSight.
• Synthesize many disparate data sources into one centralized data platform.
• Lead efforts to design, develop, and implement database enhancements that improve efficiency and streamline use for analytics and business analysis.
• Scope out and develop data enhancement plans for company initiatives.
• Implement and enforce data validation procedures.
• Make suggestions for improvements to the data team and team processes.
• Produce complete and accurate reports for business entities and users.
• Other duties as assigned.
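To make the SCD2 and composite-key expectations above concrete, here is a minimal PySpark sketch of the pattern as it might run inside a Glue job. All paths, table names, and columns (s3://example-lake/..., source_system, customer_number, name, segment) are illustrative assumptions, not details from this posting.

```python
# Minimal SCD2 upsert sketch for a gold-layer dimension; schemas and paths
# are hypothetical. Assumes the incoming silver snapshot carries exactly the
# business-key and tracked columns, and the dim adds valid_from/valid_to/
# is_current on top of them.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2-sketch").getOrCreate()

incoming = spark.read.parquet("s3://example-lake/silver/customers/")
dim = spark.read.parquet("s3://example-lake/gold/dim_customer/")

key_cols = ["source_system", "customer_number"]  # composite business key
tracked = ["name", "segment"]                    # changes open a new version

current = dim.filter(F.col("is_current"))
join_cond = [F.col(f"src.{c}") == F.col(f"cur.{c}") for c in key_cols]
joined = incoming.alias("src").join(current.alias("cur"), join_cond, "left")

# New keys, or keys whose tracked attributes differ (concat_ws skips nulls,
# so the comparison never evaluates to null).
is_changed = F.col("cur.is_current").isNull() | (
    F.concat_ws("|", *[F.col(f"src.{c}") for c in tracked])
    != F.concat_ws("|", *[F.col(f"cur.{c}") for c in tracked])
)
changed = joined.filter(is_changed).select("src.*")

now = F.current_timestamp()
closed = (  # superseded current rows get an end date
    current.join(changed.select(*key_cols), key_cols, "inner")
    .withColumn("valid_to", now)
    .withColumn("is_current", F.lit(False))
)
opened = (  # new versions become the current rows
    changed.withColumn("valid_from", now)
    .withColumn("valid_to", F.lit(None).cast("timestamp"))
    .withColumn("is_current", F.lit(True))
)
unchanged = current.join(changed.select(*key_cols), key_cols, "left_anti")

result = (
    dim.filter(~F.col("is_current"))  # history rows pass through untouched
    .unionByName(unchanged)
    .unionByName(closed)
    .unionByName(opened)
)
# Write to a new snapshot path; Spark cannot overwrite a path it is reading.
result.write.mode("overwrite").parquet("s3://example-lake/gold/dim_customer_new/")
```

In production this would more likely be a Delta or Iceberg MERGE than a Parquet snapshot swap, but the version-closing logic is the same.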
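The dimensional-design bullet maps to something like the following: a hypothetical PySpark sketch that resolves a fact table's composite business key to a dimension surrogate key (customer_sk, assumed to exist on the dim), with a reserved -1 member for unmatched rows. Again, every name here is invented for illustration.

```python
# Hypothetical surrogate-key resolution for a gold star schema; assumes the
# dimension carries a customer_sk surrogate key alongside its business key.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("star-schema-sketch").getOrCreate()

orders = spark.read.parquet("s3://example-lake/silver/orders/")
dim_customer = spark.read.parquet("s3://example-lake/gold/dim_customer/")

# Fact rows resolve the composite business key to the dimension's surrogate
# key; unmatched (late-arriving) customers fall back to an "unknown" member.
fact_orders = (
    orders.alias("o")
    .join(
        dim_customer.filter(F.col("is_current")).alias("d"),
        (F.col("o.source_system") == F.col("d.source_system"))
        & (F.col("o.customer_number") == F.col("d.customer_number")),
        "left",
    )
    .select(
        F.coalesce(F.col("d.customer_sk"), F.lit(-1)).alias("customer_sk"),
        F.col("o.order_id"),
        F.col("o.order_date"),
        F.col("o.amount"),
    )
)
fact_orders.write.mode("overwrite").parquet("s3://example-lake/gold/fact_orders/")
```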
Qualifications
Education and Experience:
• Bachelor's degree required in Data Science, Software Engineering, Information Technology, or a related field.
• Minimum 5 years of hands-on experience with AWS data tools.
• 2+ years of experience as a technical lead on data and analytics projects.
• 5+ years of experience architecting and maintaining enterprise data platforms.
Required Skills
• Expert level in AWS data tools (including QuickSight).
• Expert-level user of Microsoft SQL Server tools and SQL.
• Expert-level working knowledge of the Python language.
• Experience validating databases for accuracy (a minimal validation sketch follows this list).
• Experience working with API and event-driven architectures.
• Experience working with ETL/ELT pipelines.
• Knowledge of how data entities and elements should be structured to ensure accuracy, performance, and clarity for operational, analytical, reporting, and data science uses.
• Applied knowledge of cloud computing and RPA programming/machine learning.
• Applied knowledge of data modeling principles (e.g., dimensional modeling and star schemas).
• Must be a self-directed professional able to work effectively in a fast-paced, demanding environment, multi-task, problem-solve, and follow through effectively.
• Works collaboratively with internal partners/departments.
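As one concrete reading of the database-validation requirement, the sketch below shows pytest-style checks in the spirit of the pytest/Spark testing the duties section mentions. The conn fixture and helper functions are hypothetical stand-ins for whatever query layer the platform actually uses (pyodbc, SQLAlchemy, Athena), and all table and column names are invented.

```python
# Hypothetical pytest data-validation checks; `conn` is assumed to be a
# DB-API-style connection provided by a fixture defined elsewhere.
def fetch_count(conn, table: str) -> int:
    """Row count for a table (identifiers are hardcoded test constants)."""
    return conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]

def fetch_duplicate_keys(conn, table: str, key: str) -> list:
    """Business-key values that appear more than once."""
    sql = f"SELECT {key} FROM {table} GROUP BY {key} HAVING COUNT(*) > 1"
    return conn.execute(sql).fetchall()

def test_row_counts_reconcile(conn):
    # The reporting table should carry every staged row, no more, no less.
    assert fetch_count(conn, "stg_orders") == fetch_count(conn, "rpt_orders")

def test_business_key_is_unique(conn):
    # Dimension rows must not duplicate the business key.
    assert fetch_duplicate_keys(conn, "dim_customer", "customer_key") == []
```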
Preferred Skills
• Hands-on experience with modern data lake/lakehouse patterns, S3-based pipelines, Glue, Step Functions, Athena, or event-driven orchestration.
• Real-world implementation experience, enough to articulate tradeoffs between architectural approaches such as snapshotting vs. diffing vs. time travel.
• Comfortable working at the infrastructure or developer-tooling level: writing Python-based ELT logic, using SDKs such as Boto3 or the CDK, and contributing to CI/CD automation and platform tooling (a minimal CDK sketch follows this list).
• Able to contribute to foundational platform design or reusable framework development.
• Comfortable working in an ambiguity-heavy environment that requires creating new patterns and processes.
• Ability to collaborate with other data engineers on SQL transformation logic and business rules in parallel with establishing the core pipeline and platform.
• Someone who can help drive the modernization of our platform through IaC, scalability, and NAWS automation.
• Demonstrated self-directed initiative in past roles (e.g., manually deploying CloudFormation templates, standing up automation).
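To ground the IaC and CDK expectations, here is a minimal AWS CDK v2 (Python) sketch of a stack in the spirit the posting describes: a lake bucket plus a Glue job, deployable through a CodePipeline like the one the duties mention. The bucket, role, job name, and script location are illustrative assumptions.

```python
# Minimal CDK v2 stack sketch: lake bucket + Glue job. All resource names
# and the script path are hypothetical.
from aws_cdk import App, Stack, aws_glue as glue, aws_iam as iam, aws_s3 as s3
from constructs import Construct

class LakehouseStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Medallion-layer bucket (bronze/silver/gold prefixes by convention).
        lake = s3.Bucket(self, "LakeBucket", versioned=True)

        # Execution role the Glue job assumes, with access to the lake.
        role = iam.Role(
            self, "GlueJobRole",
            assumed_by=iam.ServicePrincipal("glue.amazonaws.com"),
        )
        lake.grant_read_write(role)

        # L1 (Cfn) Glue job; the script location is a placeholder.
        glue.CfnJob(
            self, "SilverToGoldJob",
            role=role.role_arn,
            command=glue.CfnJob.CommandProperty(
                name="glueetl",
                python_version="3",
                script_location=f"s3://{lake.bucket_name}/scripts/silver_to_gold.py",
            ),
            glue_version="4.0",
        )

app = App()
LakehouseStack(app, "LakehouseStack")
app.synth()
```

`cdk deploy LakehouseStack` would synthesize and apply this; in the multi-account deployments the posting describes, the same stack class would be instantiated once per environment.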