Machine Learning Engineer-MHP 2537

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Machine Learning Engineer with a contract length of "unknown," offering a pay rate of "unknown." Key skills include MLOps, Python, Scala, and AWS experience. A BS/MS/PhD in Computer Science or equivalent and 3-5 years of relevant experience are required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
🗓️ - Date discovered
September 3, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Unknown
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Mountain View, CA
-
🧠 - Skills detailed
#Kubernetes #Airflow #AWS (Amazon Web Services) #Java #SageMaker #Spark (Apache Spark) #Docker #Data Processing #GitHub #Distributed Computing #Version Control #ML (Machine Learning) #ML Ops (Machine Learning Operations) #Apache Airflow #Data Science #Pandas #Classification #Cloud #Monitoring #TensorFlow #Scala #GIT #NLTK (Natural Language Toolkit) #Computer Science #Data Management #NumPy #Deployment #Model Evaluation #AI (Artificial Intelligence) #A/B Testing #Batch #SQL (Structured Query Language) #Keras #Data Pipeline #AWS SageMaker #DevOps #Regression #Data Wrangling #Clustering #HBase #Python
Role description
As a Machine Learning Engineer (ML Ops), you will have a strong AI/ML background and be an integral part of a vibrant team of data scientists and machine learning engineers. You will help architect, code, optimize, and deploy machine learning models at scale using the latest industry tools and techniques, and you will contribute to automating, delivering, monitoring, and improving machine learning solutions. Important skills for this role include software development, systems engineering, data wrangling, feature engineering, architecture, MLOps, and testing.
Responsibilities:
• Design and build scalable, usable, and high-performance machine learning systems.
• Collaborate cross-functionally with product managers, data scientists, and engineers to understand, implement, refine, and design machine learning and other algorithms.
• Effectively communicate results to peers and leaders.
• Explore state-of-the-art technologies and apply them to deliver customer benefits.
• Discover, access, import, clean, and prepare data for machine learning.
• Work with AI scientists to create and refine features from underlying data and build pipelines to train and deploy models.
• Run regular A/B tests, gather data, perform statistical analysis, and draw conclusions on the impact of your models.
• Explore new technology shifts to determine their potential connection with desired customer benefits.
• Interact with various data sources, collaborating with peers and partners to refine features from the underlying data and build end-to-end pipelines.
• Model productionalization: partner with data scientists to productionalize prototype models for customer use at scale, which may involve increasing training data, automating training and prediction, and orchestrating data for continuous prediction. You will also provide metrics (such as precision and recall) for model comparison.
• Model enhancement: work on existing codebases to enhance model prediction performance or reduce training time, understanding the specifics of the algorithm implementation. This can be exploratory or directed work based on data science team proposals.
• Machine learning tools: develop tools to address pain points in the data science process, such as speeding up training, simplifying data processing, or improving data management tooling.
• Model Context Protocol (MCP) server:
  • Experience building and maintaining services that manage and serve the contextual information required by large models.
  • Familiarity with creating and managing data pipelines for real-time and batch-based context enrichment.
  • Skills in designing scalable, low-latency APIs for models to access and retrieve necessary context during inference.
• Implement robust monitoring and alerting for deployed models to ensure continuous performance and detect anomalies.
• Manage model versions, dependencies, and deployment workflows using MLOps best practices.
Qualifications:
• BS, MS, or PhD in Computer Science or a related field, or equivalent practical experience.
• 3 to 5 years of experience, with hands-on proficiency in Scala, Java, and Python.
• Strong computer science fundamentals: data structures, algorithms, performance complexity, and the implications of computer architecture on software performance (e.g., I/O and memory tuning).
• Solid software engineering fundamentals: experience with version control systems (Git, GitHub) and workflows, and the ability to write production-ready code.
• Knowledge of machine learning and data science languages, tools, and frameworks, including SQL, Scikit-learn, NLTK, NumPy, Pandas, TensorFlow, and Keras.
• Understanding of machine learning techniques (e.g., classification, regression, clustering) and principles (e.g., training, validation, and testing).
• Experience with data processing tools and distributed computing systems such as Spark, Hive, and Flink.
• Familiarity with cloud technologies, including AWS SageMaker and Amazon Bedrock.
• Understanding of DevOps concepts, including CI/CD.
• Experience with software container technology, such as Docker and Kubernetes.
• In-depth knowledge of MLOps principles and tools for model lifecycle management, including experiment tracking, model registries, and serving infrastructure.
• Experience with workflow orchestration tools (e.g., Apache Airflow, Kubeflow Pipelines).
• Familiarity with model explainability (XAI) and fairness techniques.
• Proficiency in optimizing machine learning models for performance, efficiency, and resource utilization.
• Experience with A/B testing frameworks and statistical analysis for model evaluation.
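As a rough illustration of two evaluation tasks named above (reporting precision and recall for model comparison, and statistical analysis of an A/B test), here is a minimal stdlib-only Python sketch. All labels and counts below are illustrative examples, not figures from the posting:

```python
# Sketch: precision/recall for model comparison, plus a two-proportion
# z-test for analyzing an A/B test. Stdlib only; data is illustrative.
from math import erf, sqrt

def precision_recall(y_true, y_pred):
    """Precision and recall for binary labels (1 = positive class)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic and two-sided p-value for H0: rate_a == rate_b."""
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (success_b / n_b - success_a / n_a) / se
    # Two-sided p-value via the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Model comparison on a toy held-out set.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
prec, rec = precision_recall(y_true, y_pred)
print(f"precision={prec:.2f} recall={rec:.2f}")  # precision=0.75 recall=0.75

# A/B test: control model A vs candidate B, conversions out of 1000 each.
z, p = two_proportion_z(success_a=120, n_a=1000, success_b=160, n_b=1000)
print(f"z={z:.2f} p={p:.4f}")
if p < 0.05:
    print("B's lift over A is significant at the 5% level")
```

In practice a library such as scikit-learn or statsmodels would supply these routines; the point here is only the shape of the analysis behind "provide metrics for model comparison" and "perform statistical analysis" on A/B tests.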