

Machine Learning Specialist
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a "Machine Learning Specialist" on a long-term contract, offering a competitive pay rate. Key skills include extensive experience in Machine Learning, Big Data, AWS, and Python. Experience in NLP and computer vision is also required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
May 30, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Remote
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#NumPy #Django #AWS (Amazon Web Services) #Mathematics #Regression #Spark SQL #Big Data #SQL (Structured Query Language) #GIT #REST (Representational State Transfer) #Spark (Apache Spark) #Data Pipeline #Leadership #API (Application Programming Interface) #ML (Machine Learning) #Reinforcement Learning #HBase #Data Processing #NoSQL #Cloud #Scala #Classification #Predictive Modeling #Flask #Java #Linux #Pandas #Python #REST API #RDBMS (Relational Database Management System) #Web Services #Sqoop (Apache Sqoop) #Calculus #Clustering #Cloudera #Deployment #Datasets #Deep Learning #AI (Artificial Intelligence) #Automation #Hadoop #PyTorch #Matplotlib #Statistics #SciPy #Data Science #Keras #Jupyter #Unsupervised Learning #Consul #Kubernetes #Programming #Eclipse #Docker #Supervised Learning #TensorFlow #Bash #Unix #NLP (Natural Language Processing)
Role description
Position: Machine Learning AI Engineer
Location: Remote
Duration: Long Term
Must Have Skills:
• Strong project experience in Machine Learning, Big Data, NLP, Deep Learning, and RDBMS is a must.
• Strong project experience with Amazon Web Services and Cloudera Data Platform is a must.
• 4-5 years of experience building data pipelines using Python, MLlib, PyTorch, TensorFlow, NumPy/SciPy/Pandas, Spark, and Hive.
• 4-5 years of programming experience with AWS, Linux, and data science notebooks is a must.
• Strong experience with REST API development using Python frameworks (Django, Flask, etc.); a brief illustrative sketch follows this list.
• Microservices/web service development experience using the Spring framework is highly desirable.
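To make the REST API requirement concrete, here is a minimal, hypothetical sketch of a Flask endpoint that serves predictions from a serialized scikit-learn model. The route name, model path, and payload shape are illustrative assumptions, not details taken from this posting.

```python
# Hypothetical sketch only: "model.pkl", the /predict route, and the
# "features" payload field are placeholders, not part of this posting.
from flask import Flask, jsonify, request
import joblib  # assumes a scikit-learn model serialized with joblib

app = Flask(__name__)
model = joblib.load("model.pkl")  # placeholder model artifact

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)           # e.g. {"features": [1.0, 2.0, 3.0]}
    prediction = model.predict([payload["features"]])  # scikit-learn expects a 2D array
    return jsonify({"prediction": prediction.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

A Django (or Django REST Framework) view could serve the same purpose; the posting accepts either framework.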
TECHNICAL KNOWLEDGE AND SKILLS:
Consultant resources shall possess most of the following technical knowledge and experience:
· Provide technical leadership, develop the vision, gather requirements, and translate client requirements into technical architecture.
· 4-5 years of strong programming experience in Python, Java, Scala, and SQL.
· Proficient in machine learning algorithms: supervised learning (regression, classification, SVM, decision trees, etc.), unsupervised learning (clustering), and reinforcement learning.
· Strong hands-on experience building, deploying, and productionizing ML models using MLlib, TensorFlow, PyTorch, Keras, scikit-learn, etc.
· Hands-on experience building data pipelines using Hadoop ecosystem components: Sqoop, Hive, Spark, Spark SQL, and HBase (an illustrative sketch follows this list).
· Data processing and analysis experience with Pandas, NumPy, Matplotlib/Seaborn, etc., and with Big Data technologies (Hadoop/Spark).
· Must have Natural Language Processing (NLP) and Computer Vision experience.
· Ability to evaluate and choose the best-suited ML algorithms, perform feature engineering, and optimize machine learning models is mandatory.
· Strong fundamentals in algorithms, data structures, statistics, predictive modeling, and distributed systems are a must.
· Strong experience with data science notebooks and IDEs such as Jupyter, Zeppelin, RStudio, and PyCharm.
· Strong mathematics and statistics background (linear algebra, calculus, probability, and statistics).
· 4+ years of hands-on development, deployment, and production support experience in a Hadoop environment.
· Proficient in Big Data, SQL, and relational and NoSQL databases for data retrieval and analysis.
· Must have experience developing HiveQL queries and UDFs for analyzing semi-structured/structured datasets.
· Expertise in Unix/Linux environments: writing scripts and scheduling/executing jobs.
· Experience with AWS and other cloud platforms.
· Experience using Git and Eclipse.
· Experience creating and managing RESTful APIs using Python and Java frameworks.
· Experience with Docker and Kubernetes containerization.
· Hands-on experience ingesting and processing various file formats such as Avro, Parquet, SequenceFile, and text files.
· Successful track record of building automation scripts/code using Java, Bash, Python, etc., and experience with the production support issue resolution process.
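As referenced in the data-pipeline bullet above, the following is a minimal, hypothetical PySpark sketch that reads a Hive table and trains a Spark MLlib model. The database, table, column names, and output path are placeholder assumptions, not details from this posting.

```python
# Hypothetical sketch only: the Hive table, columns, and S3 path below are
# placeholders; a real pipeline would depend on the client's data.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = (SparkSession.builder
         .appName("example-pipeline")
         .enableHiveSupport()          # read source data registered in Hive
         .getOrCreate())

# Pull a (hypothetical) Hive table and assemble features for MLlib
df = spark.sql("SELECT label, f1, f2, f3 FROM analytics.training_data")
assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
lr = LogisticRegression(labelCol="label", featuresCol="features")

model = Pipeline(stages=[assembler, lr]).fit(df)
model.write().overwrite().save("s3a://example-bucket/models/lr")  # placeholder output path
spark.stop()
```

The same pipeline could just as well feed a PyTorch or TensorFlow model instead of Spark MLlib, matching the frameworks listed above.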
PREFERRED SKILLS:
· Machine Learning, Big Data, NLP, Deep Learning, Python, MLlib, PyTorch, TensorFlow, NumPy/SciPy/Pandas, Spark, Hive, Data Science Notebooks, SQL, API, Unix/Linux, AWS