

DataEdge Consulting
Data Engineer [AWS, Python, JSON] [W2 ONLY] [NO IMMIGRATION]
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with 7+ years of experience, focusing on AWS, Python, and JSON. The contract type is contract-to-hire (CTH), and the work location is remote. Key skills include ETL, data pipelines, and AWS services. AWS certifications are desirable.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
March 6, 2026
Duration
Unknown
-
Location
Remote
-
Contract
W2 Contractor
-
Security
Unknown
-
Location detailed
United States
-
Skills detailed
#ETL (Extract, Transform, Load) #AI (Artificial Intelligence) #Data Architecture #Documentation #Programming #Data Pipeline #Data Security #Data Storage #Compliance #DynamoDB #SQL (Structured Query Language) #AWS Glue #Snowflake #Cloud #Security #PostgreSQL #Data Access #Data Modeling #Data Warehouse #DevOps #Scala #Data Engineering #Data Lake #Migration #AWS (Amazon Web Services) #BI (Business Intelligence) #API (Application Programming Interface) #Angular #Agile #Databricks #Terraform #Data Analysis #Lambda (AWS Lambda) #S3 (Amazon Simple Storage Service) #dbt (data build tool) #Data Processing #Athena #AWS Lambda #Storage #Infrastructure as Code (IaC) #JSON (JavaScript Object Notation) #TypeScript #ML (Machine Learning) #Python #NoSQL
Role description
[W2 ONLY]
[REMOTE]
[NO IMMIGRATION SPONSORSHIP]
Sr AWS Data Engineer / Charlotte, NC or REMOTE / CTH
KEY SKILLS: Python, Angular/TypeScript, JSON, AWS services (S3, PostgreSQL, DynamoDB, Athena, Snowflake, Lambda, and Glue).
About our Customer & Role:
Our direct customer, a global Fortune 500 company and a leader in the food services industry, is looking for a "Data Engineering Consultant" who will be responsible for designing and implementing robust, scalable, high-performing data solutions on AWS, and for ensuring the cloud data infrastructure meets the needs of a growing organization.
Qualifications:
• 7+ years of experience in data architecture, engineering, or similar roles.
• Very strong programming skills in Python.
• Expertise in an ETL or data engineering role, building and implementing data pipelines.
• Strong understanding of design best practices for OLTP systems, ODS reporting needs, and dimensional database practices.
• Hands-on experience with AWS Lambda, AWS Glue, and other AWS services.
• Proficient in Python and SQL with the ability to write efficient queries.
• Experience with API-driven data access (API development experience a plus).
• Solid experience with database technologies (SQL, NoSQL) and data modeling.
• Understanding of serverless architecture benefits and challenges.
• Experience working in agile development environments.
• AWS certifications (e.g., AWS Certified Data Analytics - Specialty, AWS Certified Solutions Architect) are highly desirable.
Preferred Skills:
• Experience with modern data stack technologies (e.g., dbt, Snowflake, Databricks).
• Familiarity with machine learning pipelines and AI-driven analytics.
• Background in DevOps practices and Infrastructure as Code (IaC) using tools like Terraform or AWS CloudFormation.
• Knowledge of CI/CD pipelines for data workflows.
Responsibilities:
• Define, build, test, and implement scalable data pipelines.
• Design and implement cloud-native data architectures on AWS, including data lakes, data warehouses, and real-time data processing pipelines.
• Perform the data analysis required to troubleshoot and resolve data-related issues.
• Collaborate with development, analytics, and reporting teams to develop data models that feed business intelligence tools.
• Design and build API integrations to support the needs of analysts and reporting systems.
• Develop, deploy, and manage AWS Lambda functions written in Python.
• Develop, deploy, and manage AWS Glue jobs written in Python.
• Ensure efficient and scalable serverless operations.
• Debug and troubleshoot Lambda functions and Glue jobs.
• Collaborate with other AWS service teams to design and implement robust solutions.
• Optimize data storage, retrieval, and pipeline performance for large-scale distributed systems.
• Ensure data security, compliance, and privacy policies are integrated into solutions.
• Develop and maintain technical documentation and architecture diagrams.
• Stay current with AWS updates and industry trends to continuously evolve the data architecture.
• Mentor and provide technical guidance to junior team members and stakeholders.






