

Data Engineer Architect [W2 ONLY]
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineering Architect with 7+ years of experience, strong Python and SQL skills, and expertise in AWS services. The contract is remote; the pay rate is not listed. AWS certifications are highly desirable.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
🗓️ - Date discovered
September 24, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Remote
-
📄 - Contract type
W2 Contractor
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Data Security #Programming #Data Storage #ETL (Extract, Transform, Load) #AI (Artificial Intelligence) #DevOps #Infrastructure as Code (IaC) #Cloud #Terraform #Data Access #Data Lake #Data Pipeline #Python #ML (Machine Learning) #Compliance #Snowflake #AWS Lambda #SQL (Structured Query Language) #Data Engineering #dbt (data build tool) #AWS Glue #Data Architecture #Agile #Data Modeling #Security #Data Warehouse #Databricks #Scala #NoSQL #Data Processing #Lambda (AWS Lambda) #API (Application Programming Interface) #Data Analysis #Storage #AWS (Amazon Web Services) #BI (Business Intelligence) #Documentation
Role description
W2 ONLY
Data Engineering Architect / Remote is OK / Our DIRECT Client
About our Customer:
Our premium customer, a global Fortune 500 company and a leader in the Food Services industry, is looking for a “Data Engineering Architect” whose primary purpose is to implement robust, scalable, and high-performing data solutions on AWS.
Qualifications:
• 7+ years of experience in data architecture, engineering, or similar roles.
• Strong programming skills in Python & SQL.
• Strong experience building ETL or Data Engineering solutions using data pipelines.
• Knowledge of design best practices for OLTP systems, ODS reporting needs, and dimensional data modeling.
• Hands-on experience with AWS Lambda, AWS Glue, and other AWS services.
• Experience with API-driven data access (API development experience a plus).
• Solid experience with database technologies (SQL, NoSQL) and data modeling.
• Understanding of serverless architecture benefits and challenges.
• Experience working in agile development environments.
• AWS certifications (e.g., AWS Certified Data Analytics - Specialty, AWS Certified Solutions Architect) are highly desirable.
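To give a flavor of the hands-on Lambda and Python skills listed above, here is a minimal sketch of a Lambda-style handler performing a small extract-transform step. The function name, event shape, and field names are illustrative assumptions, not details from the posting; a real job would typically load results into S3, a warehouse, or a queue rather than returning them.

```python
import json

def handler(event, context):
    # Hypothetical event shape (an assumption for illustration):
    # {"records": [{"id": 1, "amount_cents": 1999}, ...]}
    records = event.get("records", [])

    # Transform: convert cents to dollars and keep only the fields
    # that downstream reporting needs.
    transformed = [
        {"id": r["id"], "amount_usd": round(r["amount_cents"] / 100, 2)}
        for r in records
    ]

    # Return an API Gateway-style response envelope.
    return {"statusCode": 200, "body": json.dumps(transformed)}
```

Locally this can be exercised by calling `handler(sample_event, None)`, which is also a common pattern for unit-testing Lambda code before deployment.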
Preferred Skills:
• Experience with modern data stack technologies (e.g., dbt, Snowflake, Databricks).
• Familiarity with machine learning pipelines and AI-driven analytics.
• DevOps practices and Infrastructure as Code (IaC) using tools like Terraform or AWS CloudFormation.
• Knowledge of CI/CD pipelines for data workflows.
Responsibilities:
• Define, build, test, and implement scalable data pipelines.
• Design and implement cloud-native data architectures on AWS, including data lakes, data warehouses, and real-time data processing pipelines.
• Perform data analysis required to troubleshoot data-related issues and assist in the resolution of data issues.
• Collaborate with development, analytics, and reporting teams to develop data models that feed business intelligence tools.
• Design and build API integrations to support the needs of analysts and reporting systems.
• Develop, deploy, and manage AWS Lambda functions written in Python.
• Develop, deploy, and manage AWS Glue jobs written in Python.
• Ensure efficient and scalable serverless operations.
• Debug and troubleshoot Lambda functions and Glue jobs.
• Collaborate with other AWS service teams to design and implement robust solutions.
• Optimize data storage, retrieval, and pipeline performance for large-scale distributed systems.
• Ensure data security, compliance, and privacy policies are integrated into solutions.
• Develop and maintain technical documentation and architecture diagrams.
• Stay current with AWS updates and industry trends to continuously evolve the data architecture.
• Mentor and provide technical guidance to junior team members and stakeholders.
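As a hedged sketch of the "define, build, test" pipeline responsibilities above, the snippet below composes small transform stages over dict records in memory. The stage names and sample data are assumptions for illustration; in practice each stage would map onto a Glue job or Lambda function rather than a local generator.

```python
from typing import Callable, Iterable

# A stage is any callable that transforms a stream of dict records.
Stage = Callable[[Iterable[dict]], Iterable[dict]]

def run_pipeline(rows: Iterable[dict], stages: list[Stage]) -> list[dict]:
    """Apply each stage in order over the record stream."""
    for stage in stages:
        rows = stage(rows)
    return list(rows)

# Illustrative stages: drop invalid rows, then normalize a field.
def drop_missing_ids(rows):
    return (r for r in rows if r.get("id") is not None)

def uppercase_region(rows):
    return ({**r, "region": r["region"].upper()} for r in rows)

sample = [{"id": 1, "region": "us-east"}, {"id": None, "region": "eu"}]
result = run_pipeline(sample, [drop_missing_ids, uppercase_region])
```

Keeping each stage a plain function makes the pipeline easy to unit-test in isolation, which supports the testing and debugging duties the role describes.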