

MLOps Architect with AWS CDK & DataZone :: Local to MN & GA Only
Featured Role | Apply direct with Data Freelance Hub
This role is for an MLOps Architect with AWS CDK & DataZone, based in Atlanta, GA or Minneapolis, MN. It is a contract position requiring 10-12 years of experience, strong AWS CDK skills in TypeScript or Python, and knowledge of data governance frameworks.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: May 30, 2025
Project duration: Unknown
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: Atlanta, GA
Skills detailed: #Security #AWS (Amazon Web Services) #Model Deployment #Metadata #Data Pipeline #Leadership #ML (Machine Learning) #Data Management #TypeScript #Data Catalog #Compliance #ML Ops (Machine Learning Operations) #SageMaker #GDPR (General Data Protection Regulation) #Cloud #Lambda (AWS Lambda) #Scala #Data Engineering #Classification #Python #Data Access #Data Architecture #Monitoring #Redshift #Deployment #DevOps #Data Security #Automation #IAM (Identity and Access Management) #Data Governance #Data Science #Scripting #Consul #Programming #Observability #Consulting #S3 (Amazon Simple Storage Service) #Docker #Data Privacy
Role description
Role: MLOps Architect with AWS CDK & DataZone
Location: Atlanta, GA 30354 or Minneapolis, MN 55450
Contract
Job Description:
We are looking for a highly skilled AWS DataZone and MLOps Architect to design, implement, and oversee modern cloud-native data governance and machine learning operations (MLOps) frameworks, with deep expertise in building scalable, secure, and automated data and ML platforms using AWS CDK. The successful candidate will be responsible for architecting and implementing robust data governance with AWS DataZone and for building MLOps pipelines that leverage services such as SageMaker, Step Functions, and CodePipeline, all provisioned and maintained through CDK-based infrastructure as code.
This role requires strong cloud-native architectural thinking, an automation-first mindset, and hands-on experience with CDK in either TypeScript or Python.
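For illustration of the kind of infrastructure-as-code work described above, here is a minimal CDK sketch in TypeScript that provisions a DataZone domain and its execution role via the L1 CfnDomain construct. The stack name, domain name, and description are hypothetical, and a real implementation would also cover SSO, KMS, projects, and environments.

import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as iam from 'aws-cdk-lib/aws-iam';
import * as datazone from 'aws-cdk-lib/aws-datazone';

// Hypothetical stack sketching a CDK-provisioned DataZone domain.
export class GovernanceStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Role that DataZone assumes to manage resources inside the domain.
    const domainExecutionRole = new iam.Role(this, 'DomainExecutionRole', {
      assumedBy: new iam.ServicePrincipal('datazone.amazonaws.com'),
    });
    // DataZone also expects sts:TagSession in the trust policy.
    domainExecutionRole.assumeRolePolicy?.addStatements(new iam.PolicyStatement({
      actions: ['sts:TagSession'],
      principals: [new iam.ServicePrincipal('datazone.amazonaws.com')],
    }));

    // L1 (CloudFormation-level) construct for the domain itself.
    const domain = new datazone.CfnDomain(this, 'EnterpriseDomain', {
      name: 'enterprise-data-domain',                   // placeholder domain name
      domainExecutionRole: domainExecutionRole.roleArn,
      description: 'Central catalog and governance domain',
    });

    new cdk.CfnOutput(this, 'DataZoneDomainId', { value: domain.attrId });
  }
}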
Key Responsibilities
• Architect and deploy AWS DataZone for enterprise-wide data discovery, cataloging, and governance using AWS CDK.
• Design and build CI/CD-enabled MLOps pipelines to manage the end-to-end ML lifecycle: data prep, training, model deployment, monitoring, and retraining.
• Use AWS CDK to manage infrastructure-as-code across data and ML workflows (e.g., SageMaker, Lambda, S3, Glue, DataZone, Step Functions); see the pipeline sketch after this list.
• Integrate DataZone with Lake Formation, Glue Data Catalog, and Redshift for centralized governance and access control.
• Define policies and automation for data access requests, lineage, and classification using DataZone and IAM roles.
• Ensure security, compliance, and auditability across all components using least-privilege principles and automation.
• Collaborate with Data Engineers, MLOps Engineers, Data Scientists, and Security teams to design end-to-end solutions.
• Drive adoption of reusable CDK constructs/modules for consistent deployment and governance of data and ML services.
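As referenced in the responsibilities above, a minimal sketch (TypeScript CDK) of how an ML workflow could be wired together from S3, Lambda, Step Functions, and a SageMaker training job. The bucket layout, inline Lambda handler, and training image URI are placeholders, and a production pipeline would add model registration, deployment, monitoring, and retraining stages.

import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as sfn from 'aws-cdk-lib/aws-stepfunctions';
import * as tasks from 'aws-cdk-lib/aws-stepfunctions-tasks';

export class MlPipelineStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Encrypted bucket for training data and model artifacts (placeholder layout).
    const dataBucket = new s3.Bucket(this, 'MlDataBucket', {
      encryption: s3.BucketEncryption.S3_MANAGED,
      blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
      enforceSSL: true,
    });

    // Hypothetical data-prep function; real handler code would live elsewhere.
    const prepFn = new lambda.Function(this, 'DataPrepFn', {
      runtime: lambda.Runtime.PYTHON_3_12,
      handler: 'index.handler',
      code: lambda.Code.fromInline('def handler(event, context):\n    return event'),
    });

    const prepStep = new tasks.LambdaInvoke(this, 'PrepareData', {
      lambdaFunction: prepFn,
      outputPath: '$.Payload',
    });

    // SageMaker training job driven by Step Functions; the image URI is a placeholder.
    const trainStep = new tasks.SageMakerCreateTrainingJob(this, 'TrainModel', {
      trainingJobName: sfn.JsonPath.stringAt('$.jobName'),
      algorithmSpecification: {
        trainingImage: tasks.DockerImage.fromRegistry('123456789012.dkr.ecr.us-east-1.amazonaws.com/train:latest'),
      },
      inputDataConfig: [{
        channelName: 'train',
        dataSource: {
          s3DataSource: { s3Location: tasks.S3Location.fromBucket(dataBucket, 'train/') },
        },
      }],
      outputDataConfig: {
        s3OutputLocation: tasks.S3Location.fromBucket(dataBucket, 'models/'),
      },
    });

    new sfn.StateMachine(this, 'MlPipeline', {
      definitionBody: sfn.DefinitionBody.fromChainable(prepStep.next(trainStep)),
    });
  }
}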
Required Skills and Experience
• 7+ years in cloud architecture, DevOps, or data engineering roles.
• Overall 10-12+ years of experience required.
• Strong hands-on experience with AWS CDK in TypeScript or Python (required).
• Deep knowledge of AWS DataZone, SageMaker, Glue, Lake Formation, IAM, Step Functions, CodePipeline, and CloudWatch.
• Experience architecting MLOps pipelines and automating deployments in AWS.
• Proficient in containerization using Docker, ECS, or EKS.
• Working knowledge of data governance, metadata management, and data security/compliance frameworks (e.g., HIPAA, GDPR).
• Strong programming skills in Python and scripting tools for automation.
• Understanding of data privacy, compliance, and enterprise governance frameworks.
• Excellent communication and stakeholder management skills.
• Strong consulting skills required: able to earn the confidence of key stakeholders and provide them with day-to-day guidance on the overall solution spanning DataZone, MLOps, and CDK.
• Model Monitoring and Evaluation: Experience with model performance monitoring, drift detection, and explainability.
Preferred Qualifications
• AWS Certifications (Solutions Architect Professional, DevOps Engineer, or Machine Learning Specialty).
• Experience deploying enterprise-scale data platforms using CDK and reusable infrastructure modules.
• Familiarity with observability/monitoring tools for ML and data pipelines.
• Knowledge of domain-driven data architecture.
• Previous experience with versioned deployment strategies (Blue/Green, Canary) for ML models; see the endpoint deployment sketch after this list.
• Exposure to data observability tools and model monitoring frameworks.
• Communication and Collaboration: Ability to communicate effectively with diverse teams.
• Problem-Solving: Strong analytical and problem-solving skills.
• Leadership: Ability to lead and guide technical teams.
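As referenced in the preferred qualifications above, a hedged sketch of how a canary traffic shift could be declared on a SageMaker endpoint update, using the CloudFormation-level (L1) construct in CDK. The endpoint name, endpoint config name, and rollback alarm are hypothetical, and the deployment policy only takes effect when the endpoint is updated to a new config.

import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as sagemaker from 'aws-cdk-lib/aws-sagemaker';

export class ModelEndpointStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Assumes an endpoint config created elsewhere (e.g., by the training pipeline);
    // 'churn-model-config-v2' and the alarm name below are placeholders.
    new sagemaker.CfnEndpoint(this, 'ModelEndpoint', {
      endpointName: 'churn-model-endpoint',
      endpointConfigName: 'churn-model-config-v2',
      deploymentConfig: {
        blueGreenUpdatePolicy: {
          trafficRoutingConfiguration: {
            type: 'CANARY',
            canarySize: { type: 'CAPACITY_PERCENT', value: 10 }, // shift 10% of traffic first
            waitIntervalInSeconds: 300,                          // bake time before the full shift
          },
          terminationWaitInSeconds: 600,
        },
        autoRollbackConfiguration: {
          alarms: [{ alarmName: 'churn-model-error-rate' }],     // roll back if this alarm fires
        },
      },
    });
  }
}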