

Data Architect / Engineer: AWS Infrastructure That Enables AI/ML & App Workloads
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Architect/Engineer focused on AWS infrastructure for AI/ML workloads. Contract length is unspecified, with a pay rate of $60-80/hr. Key skills include AWS expertise, data pipeline management, and collaboration. Remote work requires Pacific Time availability.
Country: United States
Currency: $ USD
Day rate: $640
Date discovered: August 15, 2025
Project duration: Unknown
Location type: Remote
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Irvine, CA
Skills detailed: #Storage #Deployment #Data Storage #Automation #Databases #Metadata #ML (Machine Learning) #Aurora #AI (Artificial Intelligence) #Datadog #Prometheus #AWS S3 (Amazon Simple Storage Service) #DynamoDB #Monitoring #Data Pipeline #Lambda (AWS Lambda) #Data Management #AWS (Amazon Web Services) #Cloud #Compliance #Data Architecture #RDS (Amazon Relational Database Service) #Storytelling #Athena #Scala #Knowledge Graph #Data Engineering #Kafka (Apache Kafka) #S3 (Amazon Simple Storage Service) #Data Ingestion #EC2 #Redshift
Role description
NO 3RD PARTY / C2C FIRM CANDIDATES. DO NOT EMAIL TO ASK. AND NO, I CANNOT HIRE YOUR CONSULTANTS ON MY W2.
THIS ROLE CAN BE DONE REMOTELY BUT YOU WOULD BE REQUIRED TO WORK PACIFIC TIME ZONE 5 DAYS A WEEK.
SOMEONE THAT'S LOCAL TO SOUTHERN CALIFORNIA THAT CAN DO ANY AMOUNT OF ONSITE IN IRVINE, CA WOULD BE A BIG PLUS.
KORE1, a nationwide provider of staffing and recruiting solutions, has an immediate opening for a Data Architect / Engineer: AWS infrastructure that enables AI/ML & app workloads.
Role Overview
We are seeking a skilled Data Architect / Engineer to build, own, and optimize the cloud infrastructure and data platform that powers an AI product. You will ensure scalability, uptime, and cost efficiency while empowering AI/ML and application teams to deliver a seamless, high-performance user experience.
Your role is critical in managing data pipelines, cloud environments, and infrastructure components that enable AI/ML workflows and the platform's frontend and backend applications.
Key Responsibilities
• Design, deploy, and manage scalable, reliable cloud infrastructure (primarily AWS) supporting data ingestion, storage, and processing for AI/ML and application workflows
• Manage and optimize data pipelines, including document ingestion, metadata tagging, and content preparation, ensuring uptime and smooth operation (see the sketch after this list)
• Collaborate closely with AI/ML and application teams to ensure infrastructure and data platform reliability and responsiveness
• Monitor system health, implement alerting, and perform capacity planning to meet growing demand and maintain low latency
• Balance cost and performance through cloud resource management, leveraging open-source tools where appropriate to reduce operational expenses
• Oversee secure data storage (e.g., S3 buckets), access controls, and compliance requirements
• Support integration points between data infrastructure and user-facing platforms, including APIs and content upload features
• Continuously evaluate and implement improvements for infrastructure automation, deployment, and scalability
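To make the document ingestion and metadata-tagging responsibility above more concrete, here is a minimal sketch, assuming an S3-triggered AWS Lambda function that records basic document metadata in a DynamoDB table. The bucket, the document-metadata table name, and the metadata fields are illustrative placeholders, not details from this posting.

```python
import urllib.parse

import boto3

# Hypothetical table name for illustration only; the actual platform's
# resources and schema are not specified in the posting.
METADATA_TABLE = "document-metadata"

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(METADATA_TABLE)


def handler(event, context):
    """Triggered by an S3 ObjectCreated notification: record basic
    metadata for each newly ingested document."""
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Read size and content type from the object header rather than
        # downloading the whole document.
        head = s3.head_object(Bucket=bucket, Key=key)

        table.put_item(
            Item={
                "document_id": key,
                "bucket": bucket,
                "size_bytes": head["ContentLength"],
                "content_type": head.get("ContentType", "unknown"),
                "ingested_at": head["LastModified"].isoformat(),
            }
        )

    return {"processed": len(records)}
```

A real pipeline would typically also hand the document off to downstream content-preparation steps (for example via SQS or Step Functions) after recording its metadata, which the responsibilities above allude to.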
Required Skills & Experience
• Bachelor's degree required; a relevant degree is a plus.
• Relevant certifications are a plus.
• 5-8+ years of experience in cloud data engineering / data platform architecture with a focus on AWS technologies (S3, Lambda, Redshift, Glue, RDS, EC2, DynamoDB, Aurora, Athena, AWS Data Pipeline, Kinesis, Kafka, etc.)
• Strong experience managing cloud infrastructure to support AI/ML and application workloads, with an emphasis on reliability, scalability, and cost optimization
• Proficient in building and maintaining data pipelines and workflows, including document ingestion and metadata management
• Experience working with storage solutions like AWS S3 and managing secure access and permissions
• Familiarity with monitoring, alerting, and system health tools (CloudWatch, Datadog, Prometheus, etc.; a sketch follows this list)
• Comfortable collaborating across cross-functional teams, including AI/ML engineers, application developers, and product owners
• Strong problem-solving skills and ability to manage priorities in a dynamic, fast-growing environment
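As one concrete illustration of the monitoring and alerting familiarity listed above, here is a hedged sketch of a CloudWatch alarm on Lambda errors created with boto3. The function name, threshold, and SNS topic ARN are placeholders, not details from the role.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Hypothetical notification target; the actual on-call wiring is not
# specified in the posting.
ALERT_TOPIC_ARN = "arn:aws:sns:us-west-2:123456789012:platform-alerts"

cloudwatch.put_metric_alarm(
    AlarmName="ingest-documents-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "ingest-documents"}],
    Statistic="Sum",
    Period=300,                       # evaluate in 5-minute windows
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",  # no invocations is not an error
    AlarmActions=[ALERT_TOPIC_ARN],   # e.g. an SNS topic wired to on-call
)
```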
Preferred Qualifications
• Familiarity with AI/ML workflows and how to enable them through data infrastructure without direct model building
• Experience with vector databases, RAG environments, or semantic search infrastructure is a plus
• Understanding of metadata tagging, knowledge graphs, or related data organization techniques
• Knowledge of cost management strategies for cloud AI/ML workloads
• Interest or background in storytelling, immersive media, or human-computer interaction
Compensation depends on experience but is typically $60-80/hr W2.