

DataEdge Consulting
Data Engineer Architect [W2 ONLY] [NO C2C]
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a “Data Engineering Architect” on a contract of unspecified duration, offered on a W2-only basis (no C2C); the pay rate is not stated. Remote work is available for qualified candidates. Requires 7+ years of experience, strong Python and SQL skills, AWS proficiency, and relevant certifications.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 6, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Python #Data Architecture #NoSQL #Data Security #DevOps #Infrastructure as Code (IaC) #Scala #Data Processing #Snowflake #Data Storage #Data Modeling #SNS (Simple Notification Service) #Data Lake #Data Pipeline #Cloud #Storage #Databricks #AWS Glue #AWS Lambda #AWS (Amazon Web Services) #ML (Machine Learning) #Security #Data Analysis #AI (Artificial Intelligence) #Agile #Documentation #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #S3 (Amazon Simple Storage Service) #Data Extraction #dbt (data build tool) #Terraform #Data Access #Fivetran #API (Application Programming Interface) #BI (Business Intelligence) #Lambda (AWS Lambda) #Compliance #Data Engineering #Programming #Data Warehouse
Role description
W2 ONLY
NO C2C
Remote OK for qualified candidates
Data Engineering Architect / Remote is OK / Our DIRECT Client
About our Customer:
Our premium customer, a global Fortune 500 company and a leader in the Food Services industry, is looking for a “Data Engineering Architect” whose primary purpose is to implement robust, scalable, and high-performing data solutions on AWS.
Qualifications:
• 7+ years of experience in data architecture, engineering, or similar roles.
• Strong programming skills in Python & SQL.
• Proficiency in Python for data extraction from APIs using AWS Lambda, Glue, etc. (a minimal sketch follows this list).
• Strong experience building ETL/ELT data pipelines and broader data engineering solutions.
• Knowledge of design best practices for OLTP systems, ODS reporting, and dimensional data modeling.
• Hands-on experience with AWS Lambda, AWS Glue, S3, SNS, and other AWS services.
• Experience with API-driven data access (API development experience a plus).
• Solid experience with database technologies (SQL, NoSQL) and data modeling.
• Understanding of serverless architecture benefits and challenges.
• Experience working in agile development environments.
• AWS certifications (e.g., AWS Certified Data Analytics - Specialty, AWS Certified Solutions Architect) are highly desirable.
• Experience with Fivetran and dbt is a plus.
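To make the Python/Lambda extraction bullet above concrete, here is a minimal sketch of that pattern: a Lambda handler that pulls JSON from a REST endpoint and lands it in S3 for downstream processing. The endpoint URL, bucket name, and key layout are illustrative assumptions, not details from the posting.

```python
# Minimal sketch: a Lambda handler that pulls JSON from a (hypothetical)
# REST endpoint and lands it in S3 for downstream Glue/ETL processing.
# API_URL and RAW_BUCKET are illustrative placeholders.
import json
import os
import urllib.request
from datetime import datetime, timezone

import boto3

API_URL = os.environ.get("API_URL", "https://api.example.com/v1/orders")
RAW_BUCKET = os.environ.get("RAW_BUCKET", "my-raw-data-lake")

s3 = boto3.client("s3")  # module scope so warm invocations reuse the client


def handler(event, context):
    # Pull one page of records from the source API.
    with urllib.request.urlopen(API_URL, timeout=30) as resp:
        records = json.load(resp)

    # Partition the landing key by date so Glue/Athena can prune scans.
    now = datetime.now(timezone.utc)
    key = f"raw/orders/dt={now:%Y-%m-%d}/{now:%H%M%S}.json"

    s3.put_object(
        Bucket=RAW_BUCKET,
        Key=key,
        Body=json.dumps(records).encode("utf-8"),
        ContentType="application/json",
    )
    return {"written": key, "record_count": len(records)}
```

Creating the boto3 client at module scope is a standard Lambda idiom: warm invocations reuse the connection instead of paying the setup cost on every call.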
Preferred Skills:
• Experience with modern data stack technologies (e.g., dbt, Snowflake, Databricks).
• Familiarity with machine learning pipelines and AI-driven analytics.
• DevOps practices and Infrastructure as Code (IaC) using tools like Terraform or AWS CloudFormation (see the sketch after this list).
• Knowledge of CI/CD pipelines for data workflows.
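For the IaC bullet above, a minimal sketch using the AWS CDK in Python (CDK synthesizes CloudFormation, one of the two tools the posting names; a Terraform equivalent would be a few lines of HCL). All resource names and the runtime version are assumptions for illustration.

```python
# Minimal IaC sketch: provision the landing bucket and the extraction
# Lambda from the previous sketch with the AWS CDK (Python, CDK v2).
from aws_cdk import App, Stack, Duration
from aws_cdk import aws_lambda as _lambda
from aws_cdk import aws_s3 as s3
from constructs import Construct


class DataStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Landing bucket for raw API extracts.
        raw = s3.Bucket(self, "RawBucket")

        # Extraction Lambda, deployed from ./src (hypothetical layout).
        fn = _lambda.Function(
            self,
            "ExtractFn",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="extract.handler",
            code=_lambda.Code.from_asset("src"),
            timeout=Duration.minutes(5),
            environment={"RAW_BUCKET": raw.bucket_name},
        )

        # Grant least-privilege write access instead of a broad S3 policy.
        raw.grant_write(fn)


app = App()
DataStack(app, "DataStack")
app.synth()
```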
Responsibilities:
• Define, build, test, and implement scalable data pipelines.
• Design and implement cloud-native data architectures on AWS, including data lakes, data warehouses, and real-time data processing pipelines.
• Perform the data analysis required to troubleshoot and resolve data-related issues.
• Collaborate with development, analytics, and reporting teams to develop data models that feed business intelligence tools.
• Design and build API integrations to support the needs of analysts and reporting systems.
• Develop, deploy, and manage AWS Lambda functions written in Python.
• Develop, deploy, and manage AWS Glue jobs written in Python (a skeleton follows this list).
• Ensure efficient and scalable serverless operations.
• Debug and troubleshoot Lambda functions and Glue jobs.
• Collaborate with other AWS service teams to design and implement robust solutions.
• Optimize data storage, retrieval, and pipeline performance for large-scale distributed systems.
• Ensure data security, compliance, and privacy policies are integrated into solutions.
• Develop and maintain technical documentation and architecture diagrams.
• Stay current with AWS updates and industry trends to continuously evolve the data architecture.
• Mentor and provide technical guidance to junior team members and stakeholders.
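As referenced in the Glue bullet above, this is the standard shape of a Python (PySpark) Glue job, shown as a hedged sketch; the catalog database, table name, and output path are hypothetical.

```python
# Minimal sketch of a PySpark-based AWS Glue job: read a cataloged source,
# apply a light transform, write partitioned Parquet to the data lake.
# Database, table, and S3 path are hypothetical placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the raw landing table registered in the Glue Data Catalog.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
)

# Drop rows missing the key, dedupe, then work with a Spark DataFrame.
df = dyf.toDF().dropna(subset=["order_id"]).dropDuplicates(["order_id"])

# Write partitioned Parquet for efficient downstream SQL/BI access.
(
    df.write.mode("append")
    .partitionBy("order_date")
    .parquet("s3://my-curated-bucket/orders/")
)

job.commit()
```

The `job.init`/`job.commit` pair is what enables Glue job bookmarks, so incremental runs avoid reprocessing data they have already seen.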