

Data Engineer - (AWS/MLOps/Python/PySpark/SageMaker/ECS/GitLab/CI/CD/Banking/Fintech)
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with expertise in AWS, MLOps, Python, and PySpark, focused on data pipeline development; the position is hybrid, based in Knutsford. Key skills include ECS, SageMaker, and CI/CD practices, along with banking/fintech experience. Contract length and pay rate are unspecified.
Country: United Kingdom
Currency: £ GBP
Day rate: Unspecified
Date discovered: August 27, 2025
Project duration: Unknown
Location type: Hybrid
Contract type: Unknown
Security clearance: Unknown
Location detailed: Knutsford, England, United Kingdom
Skills detailed: #Big Data #Cloud #Spark (Apache Spark) #ML (Machine Learning) #Model Deployment #Airflow #SageMaker #AI (Artificial Intelligence) #GitLab #AWS (Amazon Web Services) #Docker #MLflow #Data Engineering #Jenkins #Monitoring #Flask #Deployment #PySpark #Kubernetes #HTML (Hypertext Markup Language) #Data Science #Python #Data Pipeline #Streamlit
Role description
Job Title: Data Engineer - (AWS/MLOps/Python/PySpark/SageMaker/ECS/GitLab/CI/CD/Banking/Fintech)
Location: Knutsford (Hybrid)
Job Description:
We are seeking an experienced Data Engineer with strong expertise in AWS, MLOps, and data pipeline development. The ideal candidate will have hands-on experience in deploying, monitoring, and maintaining machine learning models in cloud environments, as well as proficiency in big data ecosystems and CI/CD pipelines.
Key Responsibilities:
• Design, develop, and optimize data pipelines to support AI/ML workloads (see the PySpark sketch after this list).
• Build and manage solutions using AWS services including ECS and SageMaker.
• Implement MLOps practices with tools such as MLflow, Airflow, Docker, and Kubernetes.
• Collaborate with data scientists and engineers to streamline the machine learning lifecycle.
• Integrate backend services via RESTful APIs and support front-end frameworks (HTML, Streamlit, Flask); see the Flask sketch after this list.
• Ensure CI/CD best practices using GitLab, Jenkins, and related tools.
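For illustration, a minimal PySpark sketch of the pipeline responsibility above, assuming a hypothetical transactions dataset landing in S3 as Parquet; the bucket paths, column names, and aggregations are placeholders, not details from this role:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("feature-pipeline").getOrCreate()

# Read raw events (hypothetical S3 path and schema).
raw = spark.read.parquet("s3://example-raw-bucket/transactions/")

# Derive simple per-customer features for a downstream ML model.
features = (
    raw.filter(F.col("amount") > 0)
       .groupBy("customer_id")
       .agg(
           F.count("*").alias("txn_count"),
           F.avg("amount").alias("avg_amount"),
           F.max("event_ts").alias("last_seen"),
       )
)

# Write partitioned output for training jobs to consume.
features.write.mode("overwrite").parquet("s3://example-feature-bucket/customer_features/")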
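Likewise, a minimal Flask sketch of the RESTful serving responsibility, assuming a hypothetical pre-trained scikit-learn model saved with joblib; the artifact path, endpoint, port, and payload shape are illustrative:

from flask import Flask, jsonify, request
import joblib

app = Flask(__name__)

# Hypothetical pre-trained model artifact; the path is a placeholder.
model = joblib.load("model.joblib")

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()
    features = payload["features"]  # expects a flat list of numeric features
    prediction = model.predict([features])[0]
    return jsonify({"prediction": float(prediction)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)

A client would POST JSON such as {"features": [1.0, 2.0]} to /predict; the same service could be containerised with Docker and run on ECS.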
Key Skills:
• Primary Skills: AWS Data Engineering, ML Engineering, MLOps, ECS, SageMaker, GitLab, Jenkins, CI/CD, AI Lifecycle, Front-end (HTML, Streamlit, Flask), Cloud model deployment/monitoring
• Technical Skills: Python, PySpark, Big Data ecosystems
• MLOps Tools: MLflow, Airflow, Docker, Kubernetes (see the MLflow sketch after this list)
• Secondary Skills: RESTful APIs, Backend integration
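As a pointer to the MLOps tooling listed above, a minimal MLflow tracking sketch; the tracking URI, experiment name, parameter, and metric value are all placeholders:

import mlflow

# Illustrative tracking server; in practice this would point at a shared
# MLflow deployment rather than a local instance.
mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("example-experiment")

with mlflow.start_run():
    mlflow.log_param("max_depth", 6)     # hypothetical hyperparameter
    mlflow.log_metric("auc", 0.91)       # placeholder evaluation metric
    # mlflow.sklearn.log_model(model, "model")  # log the fitted estimator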