

DataEdge Consulting
Data Engineering Architect [W2 ONLY] [NO IMMIGRATION SPONSORSHIP] [REMOTE]
Featured Role | Apply direct with Data Freelance Hub
This role is for a "Data Engineering Architect" on a contract-to-hire (CTH) basis, paid on a W2 basis only (rate not listed). Key skills include Python, AWS services, and data architecture. Requires 7+ years of experience; AWS certifications are desirable. Remote work location.
Country
United States
Currency
$ USD
Day rate
Unknown
Date
February 5, 2026
Duration
Unknown
Location
Remote
Contract
W2 Contractor
Security
Unknown
Location detailed
United States
Skills detailed
#S3 (Amazon Simple Storage Service) #Documentation #Storage #Data Pipeline #DynamoDB #Data Architecture #Scala #Data Processing #Data Access #Data Modeling #Athena #AWS Glue #Databricks #Snowflake #Data Engineering #ETL (Extract, Transform, Load) #Migration #API (Application Programming Interface) #AWS (Amazon Web Services) #Agile #ML (Machine Learning) #Angular #AWS Lambda #Data Warehouse #PostgreSQL #AI (Artificial Intelligence) #BI (Business Intelligence) #Python #Data Security #TypeScript #Data Storage #Security #Data Analysis #Cloud #NoSQL #dbt (data build tool) #SQL (Structured Query Language) #Compliance #Lambda (AWS Lambda) #DevOps #Data Lake #Infrastructure as Code (IaC) #Programming #Terraform
Role description
[W2 ONLY]
[REMOTE]
[NO IMMIGRATION SPONSORSHIP]
AWS Data Engineer - Architect / Charlotte, NC or REMOTE / CTH
About our Customer & Role:
Our direct customer, a global Fortune 500 company and a leader in the food services industry, is looking for a "Data Engineering Architect" who will be responsible for designing and implementing robust, scalable, and high-performing data solutions on AWS, ensuring the cloud data infrastructure meets the needs of a growing organization.
KEY SKILLS: Python, Angular/TypeScript, AWS services (S3, PostgreSQL, DynamoDB, Athena, Snowflake, Lambda, and Glue).
Qualifications:
• 7+ years of experience in data architecture, engineering, or similar roles.
• Very strong programming skills in Python.
• Expertise in an ETL or data engineering role, building and implementing data pipelines.
• Strong understanding of design best practices for OLTP systems, ODS reporting needs, and dimensional database practices.
• Hands-on experience with AWS Lambda, AWS Glue, and other AWS services.
• Proficient in Python and SQL, with the ability to write efficient queries.
• Experience with API-driven data access (API development experience a plus).
• Solid experience with database technologies (SQL, NoSQL) and data modeling.
• Understanding of serverless architecture benefits and challenges.
• Experience working in agile development environments.
• AWS certifications (e.g., AWS Certified Data Analytics - Specialty, AWS Certified Solutions Architect) are highly desirable.
Preferred Skills:
• Experience with modern data stack technologies (e.g., dbt, Snowflake, Databricks).
• Familiarity with machine learning pipelines and AI-driven analytics.
• Background in DevOps practices and Infrastructure as Code (IaC) using tools like Terraform or AWS CloudFormation.
• Knowledge of CI/CD pipelines for data workflows.
Responsibilities:
• Define, build, test, and implement scalable data pipelines.
• Design and implement cloud-native data architectures on AWS, including data lakes, data warehouses, and real-time data processing pipelines.
• Perform data analysis required to troubleshoot data-related issues and assist in their resolution.
• Collaborate with development, analytics, and reporting teams to develop data models that feed business intelligence tools.
• Design and build API integrations to support the needs of analysts and reporting systems.
• Develop, deploy, and manage AWS Lambda functions written in Python.
• Develop, deploy, and manage AWS Glue jobs written in Python.
• Ensure efficient and scalable serverless operations.
• Debug and troubleshoot Lambda functions and Glue jobs.
• Collaborate with other AWS service teams to design and implement robust solutions.
• Optimize data storage, retrieval, and pipeline performance for large-scale distributed systems.
• Ensure data security, compliance, and privacy policies are integrated into solutions.
• Develop and maintain technical documentation and architecture diagrams.
• Stay current with AWS updates and industry trends to continuously evolve the data architecture.
• Mentor and provide technical guidance to junior team members and stakeholders.
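For candidates unfamiliar with the Lambda-centric responsibilities above, a minimal sketch of an AWS Lambda handler in Python is shown below. The event shape and field names are hypothetical illustrations, not part of the customer's actual stack; a real deployment would be triggered by an AWS event source, but the handler can be invoked locally for testing.

```python
# Minimal sketch of a Python AWS Lambda handler, illustrating the
# "develop, deploy, and manage AWS Lambda functions" responsibility.
# The "records" payload shape below is a hypothetical example.
import json


def handler(event, context):
    # Lambda passes the triggering event as a dict; here we assume a
    # simple payload carrying a list of records to clean and return.
    records = event.get("records", [])
    cleaned = [{"id": r["id"], "value": float(r["value"])} for r in records]
    return {
        "statusCode": 200,
        "body": json.dumps({"count": len(cleaned), "records": cleaned}),
    }


# Local invocation (no AWS required): call the handler directly.
result = handler({"records": [{"id": "a1", "value": "3.5"}]}, None)
print(result["statusCode"])
```

In production the same function would typically be packaged and deployed via IaC (e.g., Terraform or CloudFormation, as listed under Preferred Skills) and wired to an event source such as S3 or API Gateway.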






