

JMD Technologies Inc.
ML Ops Data Scientist
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an ML Ops Data Scientist with 5-10 years of experience, offering a contract length of "X months" and a pay rate of "$X/hour." Key skills include Python, SQL, ETL tools, and data visualization.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 25, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Automation #Unix #Shell Scripting #Snowflake #Data Processing #SQL (Structured Query Language) #Storage #Data Science #Talend #Visualization #ML Ops (Machine Learning Operations) #DataStage #Scripting #A/B Testing #Data Manipulation #Data Storage #Microsoft Power BI #ML (Machine Learning) #Time Series #ETL (Extract, Transform, Load) #Python #Databases #Libraries #BI (Business Intelligence) #Data Ingestion #Oracle #Tableau
Role description
Description: (5-10 years experience)
• Develop and implement data-driven solutions to business challenges.
• Utilize Python, SQL, and Unix Shell Scripting for data manipulation and analysis.
• Apply machine learning frameworks and data visualization libraries for insights.
• Work with ETL tools like Talend, DataStage, or MoveIT Automation for data processing.
• Manage and optimize data storage using Snowflake or Oracle databases.
• Implement CI/CD pipelines to streamline data workflows.
• Design and evaluate experiments, including A/B testing and statistical methodologies.
• Collaborate across teams on data ingestion, transformation, and integration processes.
• Leverage data visualization tools such as Tableau and Power BI for reporting.
• Explore advanced techniques like time series modeling and product recommendation systems.
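The experiment-design bullet above mentions A/B testing and statistical methodologies. As a minimal sketch of that kind of work (the conversion counts below are made-up illustrative numbers, not from this posting), here is a two-proportion z-test in plain Python:

```python
# Hedged illustration: two-proportion z-test comparing conversion
# rates between an A and a B variant. Sample sizes and conversion
# counts are hypothetical.
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z statistic, two-sided p-value) for H0: p_a == p_b."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

z, p = two_proportion_z_test(conv_a=120, n_a=2400, conv_b=150, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

In practice a role like this would typically use a statistics library (e.g. scipy or statsmodels) rather than hand-rolling the test; the stdlib version above just keeps the sketch self-contained.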






