

ML Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Full Stack ML Engineer based in Charlotte, NC or Malvern, PA (hybrid). The contract runs 6 months and focuses on AWS, Python, SQL, and personalization in financial services. Requires 10 years of experience.
Country
United States
Currency
Unknown
Day rate
Unknown
Date discovered
August 20, 2025
Project duration
More than 6 months
Location type
Hybrid
Contract type
Unknown
Security clearance
Unknown
Location detailed
Charlotte, NC
Skills detailed
#Apache Airflow #MLflow #SageMaker #IAM (Identity and Access Management) #Databases #ML (Machine Learning) #AI (Artificial Intelligence) #S3 (Amazon Simple Storage Service) #Python #Data Engineering #Docker #TensorFlow #Lambda (AWS Lambda) #Monitoring #AWS (Amazon Web Services) #Cloud #SQL (Structured Query Language) #Data Manipulation #Data Science #Data Modeling #Automation #ECR (Elastic Container Registry) #PyTorch #Airflow #ETL (Extract, Transform, Load) #Data Pipeline
Role description
Location: Charlotte, NC or Malvern, PA (hybrid; 3 days/week from the office)
Duration: 6 months
Years of experience: 10

Overview:
We are seeking Full Stack ML Engineers to support the Hyper Personalization program for our Wealth client, a key initiative aimed at enhancing personalization within financial services. This role requires strong, delivery-focused individuals with a deep understanding of the AWS tech stack and of personalization in financial services.

Responsibilities:
Integrate AI/ML models with multiple data sources: Ensure seamless data flow in and out of models.
Fine-tune existing models: Optimize performance and adapt models to evolving requirements.
Build and maintain data pipelines: Design and implement ETL processes to support model integration (a sketch of such a pipeline follows this list).
Monitor and manage ML models in production: Implement MLOps practices for model monitoring, tracking, and maintenance.
Collaborate with cross-functional teams: Work closely with data scientists, data engineers, and other stakeholders to deliver robust ML solutions.
Drive architecture and engineering best practices: Lead efforts to establish and enforce best practices in building the integration framework.
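To make the pipeline responsibility above concrete, here is a minimal sketch of an Apache Airflow DAG wiring ETL steps to a model-scoring step. It assumes Airflow 2.x; the DAG id, task names, daily schedule, and the empty placeholder callables are illustrative assumptions, not taken from the role description.

# Minimal Airflow 2.x DAG sketch: ETL feeding a model-scoring step.
# All task logic and names are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    """Pull raw records from a source system (placeholder)."""


def transform():
    """Clean and reshape the extracted data (placeholder)."""


def load():
    """Write model-ready features to a feature store (placeholder)."""


def score():
    """Run the ML model against the freshly loaded features (placeholder)."""


with DAG(
    dag_id="hypothetical_personalization_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # Airflow >= 2.4; use schedule_interval on older versions
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_score = PythonOperator(task_id="score", python_callable=score)

    # Linear dependency chain: ETL first, model scoring last.
    t_extract >> t_transform >> t_load >> t_score

In practice each callable would hold real extract/transform/load logic, and the same shape works with AWS Step Functions if that orchestrator is preferred.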
Technical Skills:
Proficiency in Python and SQL databases: Essential for data manipulation and integration tasks.
Experience with AWS cloud services: Including but not limited to SageMaker, Lambda, Glue, S3, IAM, CodeCommit, CodePipeline, and Bedrock.
Experience with data pipeline and workflow management tools: Such as Apache Airflow or AWS Step Functions.
Understanding of ETL techniques, data modeling, and data warehousing concepts: To build efficient data pipelines.
Familiarity with AI/ML platforms and tools: Including TensorFlow, PyTorch, MLflow, and others (an MLflow tracking sketch follows this list).
Knowledge of MLOps practices: Including model monitoring, data drift detection, and pipeline automation (see the drift-detection sketch after this list).
Experience with Docker and AWS ECR: For containerization of ML applications (see the ECR push sketch after this list).
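As a hedged illustration of the MLflow item, the snippet below tracks a fine-tuning run with MLflow's standard tracking API. The experiment name, hyperparameter, and metric value are placeholder assumptions, not details from the role.

# Sketch: track a fine-tuning run with MLflow (placeholder names/values).
import mlflow

mlflow.set_experiment("hypothetical-personalization")  # placeholder experiment name

with mlflow.start_run():
    mlflow.log_param("learning_rate", 1e-4)  # illustrative hyperparameter
    mlflow.log_metric("val_auc", 0.87)       # illustrative metric value
    # In a real run the trained model would also be logged as an artifact,
    # e.g. mlflow.sklearn.log_model(model, "model").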
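For the MLOps item, here is a minimal data drift detection sketch using a per-feature two-sample Kolmogorov-Smirnov test. It assumes pandas DataFrames with overlapping numeric columns and a scipy dependency; the function name, the 0.05 threshold, and the column-wise approach are illustrative assumptions.

# Minimal drift-detection sketch: compare live feature distributions
# against a training-time reference with a two-sample KS test (scipy).
import pandas as pd
from scipy.stats import ks_2samp


def detect_drift(reference: pd.DataFrame, live: pd.DataFrame, alpha: float = 0.05) -> dict:
    """Return the columns whose live distribution diverges from the reference.

    A small p-value means the two samples are unlikely to come from the
    same distribution, i.e. the feature may have drifted.
    """
    drifted = {}
    for col in reference.columns.intersection(live.columns):
        stat, p_value = ks_2samp(reference[col].dropna(), live[col].dropna())
        if p_value < alpha:
            drifted[col] = {"ks_statistic": stat, "p_value": p_value}
    return drifted

In production such a check would typically run on a schedule (for example as an Airflow task) and feed alerting or a retraining trigger.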
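Finally, a hedged sketch of the Docker/ECR item: building a local image and pushing it to Amazon ECR with boto3 and the Docker SDK for Python. The region, repository name, and tag are placeholders; it also presumes the ECR repository already exists, a local Docker daemon is running, and AWS credentials are configured.

# Sketch: build an ML service image and push it to Amazon ECR.
# REGION and REPO_NAME are placeholder values, not details from the role.
import base64

import boto3
import docker

REGION = "us-east-1"       # placeholder region
REPO_NAME = "my-ml-model"  # placeholder repository name

# Fetch a temporary registry credential from ECR.
ecr = boto3.client("ecr", region_name=REGION)
auth = ecr.get_authorization_token()["authorizationData"][0]
username, password = base64.b64decode(auth["authorizationToken"]).decode().split(":")
registry = auth["proxyEndpoint"].removeprefix("https://")

# Build the image from a local Dockerfile and tag it for the registry.
client = docker.from_env()
image, _ = client.images.build(path=".", tag=f"{REPO_NAME}:latest")
image.tag(f"{registry}/{REPO_NAME}", tag="latest")

# Authenticate and push, streaming progress lines as they arrive.
client.login(username=username, password=password, registry=registry)
for line in client.images.push(f"{registry}/{REPO_NAME}", tag="latest", stream=True, decode=True):
    print(line)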