

AI/ML Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for an AI/ML Engineer on a W2 contract basis, hybrid in Reston. Requires a degree, 3+ years in machine learning, proficiency in Python and AWS services, and experience with foundation models and MLOps practices. Pay rate is unspecified.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: July 11, 2025
Project duration: Unknown
Location type: Hybrid
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Reston, VA
Skills detailed: #DevOps #MLflow #Docker #Scala #Libraries #Cloud #Monitoring #AWS (Amazon Web Services) #Computer Science #ML (Machine Learning) #SageMaker #Data Science #Lambda (AWS Lambda) #Python #TensorFlow #AI (Artificial Intelligence) #S3 (Amazon Simple Storage Service) #PyTorch #Containers
Role description
Hi,
Hope you are doing well!
Role: AI/ML Engineer
Locations: Hybrid Reston
Type of Hiring: Contract-W2
Job Description:
• Architect and develop scalable machine learning pipelines using AWS services (e.g., SageMaker, Lambda, Step Functions).
• Build and fine-tune LLM applications using Amazon Bedrock with foundation models such as Anthropic Claude, Meta LLaMA, and Amazon Titan.
• Implement and optimize ML models in Python for performance, scalability, and reusability.
• Integrate ML solutions into production environments using AWS-native tools and infrastructure-as-code practices.
• Develop APIs and interfaces for AI services to interact with downstream applications.
• Work closely with data scientists, software engineers, and product managers to translate business needs into technical solutions.
• Stay updated on the latest advancements in generative AI, foundation models, and cloud-native AI services.
• Implement monitoring tools for model performance, data drift, and cost control in cloud environments.
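For context on the Bedrock responsibilities above, here is a minimal sketch of building a request body for an Anthropic Claude model behind Amazon Bedrock's Messages API. The model ID, region, and prompt are placeholder assumptions, and the actual invocation (commented out) requires AWS credentials and boto3; this is an illustration, not part of the posting.

```python
import json

def build_claude_request(prompt: str, max_tokens: int = 512) -> str:
    """Return a JSON request body in the Bedrock Messages API format
    used by Anthropic Claude models."""
    payload = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [
            {"role": "user", "content": [{"type": "text", "text": prompt}]},
        ],
    }
    return json.dumps(payload)

# Actual call (requires AWS credentials; model ID is a placeholder example):
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# response = client.invoke_model(
#     modelId="anthropic.claude-3-sonnet-20240229-v1:0",
#     body=build_claude_request("Summarize this document."),
# )
```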
Required:
• Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
• 3+ years of experience in machine learning engineering.
• Proficiency in Python, with hands-on experience using libraries such as PyTorch, TensorFlow, or Scikit-learn.
• Deep understanding of AWS services, especially Amazon Bedrock, SageMaker, Lambda, CloudWatch, and S3.
• Experience building and deploying machine learning models in production environments.
• Familiarity with RESTful APIs, containers (Docker), and DevOps practices (CI/CD).
• Experience working with foundation models and prompt engineering.
• Knowledge of MLOps best practices and tools such as MLflow, Kubeflow, or Amazon SageMaker Pipelines.
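The responsibilities also mention monitoring for data drift. One commonly used screening metric is the population stability index (PSI); the pure-Python sketch below is illustrative (the thresholds in the docstring are conventional rules of thumb, not from this posting).

```python
import math

def population_stability_index(expected, actual, eps=1e-6):
    """PSI between two binned distributions (proportions summing to ~1).
    Rule of thumb: PSI < 0.1 is stable, 0.1-0.25 is moderate drift,
    and > 0.25 suggests significant drift worth investigating."""
    psi = 0.0
    for e, a in zip(expected, actual):
        e = max(e, eps)  # floor to avoid log(0) / division by zero
        a = max(a, eps)
        psi += (a - e) * math.log(a / e)
    return psi
```

Identical training and production distributions give a PSI of 0; the further production proportions shift from the training baseline, the larger the index.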