Senior Data Engineer (AI & MLOps) Contract

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer (AI & MLOps) on a 6-month contract, offering £300 - £500 per day. Key skills include advanced Python, AWS services, and ML pipeline orchestration. Experience in data engineering and MLOps practices is required. Remote work.
🌎 - Country
United States
💱 - Currency
£ GBP
💰 - Day rate
500
🗓️ - Date discovered
September 17, 2025
🕒 - Project duration
6 months
🏝️ - Location type
Remote
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
European Union
🧠 - Skills detailed
#Lambda (AWS Lambda) #Docker #RDS (Amazon Relational Database Service) #Redshift #Automation #MLOps (Machine Learning Operations) #Data Pipeline #ML (Machine Learning) #Data Engineering #Cloud #Scala #Monitoring #ETL (Extract, Transform, Load) #Version Control #Infrastructure as Code (IaC) #Data Science #Security #EC2 #SageMaker #Kubernetes #AI (Artificial Intelligence) #S3 (Amazon Simple Storage Service) #DynamoDB #Data Wrangling #Compliance #Databases #Python #Deployment #Terraform #AWS (Amazon Web Services)
Role description
Senior Data Engineer (AI & MLOps) – Software – Remote
Day rate: £300 - £500
Duration: 6 months
Start: ASAP

My new client is looking for a Senior Data Engineer with expertise in AI, MLOps, and AWS architecture to design and deliver production-grade machine learning pipelines. The ideal candidate will be passionate about bridging the gap between data science experimentation and scalable production systems, driving automation, and enabling faster innovation cycles.

Key Responsibilities
• Architect, build, and maintain production-grade MLOps pipelines to automate deployment, monitoring, and scaling of machine learning models.
• Collaborate with data scientists and ML engineers to reduce time-to-production for experiments and prototypes.
• Design and optimize data wrangling and transformation workflows using Python.
• Leverage AWS cloud services (EC2, S3, Lambda, SageMaker, RDS, DynamoDB, Redshift, etc.) to build robust, scalable, and cost-effective solutions.
• Apply AIOps practices to enhance monitoring, automation, and resilience of ML systems.
• Implement best practices in data engineering, version control, CI/CD, and infrastructure as code.
• Ensure the security, reliability, and compliance of data pipelines and deployed ML solutions.

Required Qualifications
• Proven experience as a Senior Data Engineer, MLOps Engineer, or similar role.
• Strong background in data structures, algorithms, and software engineering principles.
• Advanced proficiency in Python for data wrangling, pipeline automation, and ML workflows.
• Expertise in AWS services, including databases (RDS, DynamoDB, Redshift) and machine learning/AI (SageMaker, AI/ML frameworks).
• Hands-on experience with ML pipeline orchestration, CI/CD, and deployment automation.
• Deep understanding of MLOps practices, including monitoring, scaling, and retraining strategies.
• Familiarity with AIOps concepts and tools for operational automation.
Preferred Skills
• Experience with data science and machine learning model development.
• Knowledge of containerization (Docker, Kubernetes, EKS).
• Exposure to infrastructure-as-code (Terraform, CloudFormation).
• Strong problem-solving, communication, and collaboration skills.
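The responsibilities above centre on orchestrating ML pipeline stages with monitoring hooks. As a rough illustration of that pattern only (not the client's actual stack or any framework named in the posting), the sketch below chains data-wrangling stages and times each one; `Stage`, `run_pipeline`, and the toy record data are all hypothetical names invented for this example:

```python
"""Minimal sketch of a staged pipeline with per-stage monitoring.

Illustrative only: Stage, run_pipeline, and the toy data are
hypothetical, not part of any framework mentioned in the role.
"""
from dataclasses import dataclass
from typing import Any, Callable
import time


@dataclass
class Stage:
    name: str
    fn: Callable[[Any], Any]


def run_pipeline(stages, data, monitor=print):
    """Run stages in order, timing each and reporting via `monitor`."""
    for stage in stages:
        start = time.perf_counter()
        data = stage.fn(data)
        elapsed = time.perf_counter() - start
        monitor(f"{stage.name}: {elapsed:.4f}s")
    return data


# Toy "clean -> scale" flow over a list of records.
stages = [
    Stage("clean", lambda rows: [r for r in rows if r.get("value") is not None]),
    Stage("scale", lambda rows: [{**r, "value": r["value"] / 100} for r in rows]),
]
result = run_pipeline(stages, [{"value": 50}, {"value": None}, {"value": 200}])
```

In a production setting the `monitor` callback would feed a metrics system rather than printing, which is where the monitoring and AIOps practices listed above come in.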