

Technology Hub Inc
Senior Data Scientist (W2)
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Scientist (W2) requiring 8+ years of experience in Data Science and Machine Learning. The contract length is unspecified and the pay rate is unknown. Key skills include Azure, Kubernetes, and NoSQL databases.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 21, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Deployment #Kubernetes #DataProcessing #DataScience #Scala #Azure #DynamoDB #DataPipeline #AI (Artificial Intelligence) #DataEngineering #Monitoring #Python #AutoScaling #TensorFlow #Logging #AzureCosmosDB #NoSQL #PyTorch #ML (Machine Learning) #NumPy #Pandas #Microservices #DataModeling #Libraries #ApacheKafka #Databricks #MongoDB #Synapse #MicrosoftAzure #Cloud #Databases #Kafka (Apache Kafka) #DevOps
Role description
Required Skills
• 8+ years of hands-on experience in Data Science, Machine Learning, and AI development.
• Strong experience in building and deploying real-time ML models and data pipelines.
• Proven experience with streaming platforms such as Azure Event Hub, Apache Kafka, or similar.
• Expertise in cloud platforms, preferably Microsoft Azure (Azure ML, Databricks, Synapse).
• Hands-on experience with Kubernetes for deploying and scaling ML models (MLOps).
• Strong knowledge of NoSQL databases such as Azure Cosmos DB, MongoDB, or DynamoDB.
• Proficiency in Python, along with libraries like TensorFlow, PyTorch, Scikit-learn, Pandas, and NumPy.
• Experience with real-time inference systems and model serving frameworks.
• Strong understanding of data modeling, feature engineering, and ML optimization techniques.
• Familiarity with CI/CD pipelines for ML (MLOps), monitoring, and automated retraining.
Key Responsibilities
• Design, develop, and deploy machine learning models for real-time data processing and analytics.
• Build and maintain streaming ML pipelines using Azure Event Hub (or Kafka-like systems).
• Architect and implement scalable ML microservices deployed on Kubernetes with auto-scaling.
• Develop and optimize feature stores and data pipelines integrated with NoSQL databases (Cosmos DB, MongoDB, DynamoDB).
• Apply advanced techniques in feature engineering, model selection, and hyperparameter tuning.
• Implement real-time model inference systems with low latency and high throughput.
• Ensure production readiness by embedding monitoring, logging, drift detection, and self-healing mechanisms.
• Lead model lifecycle management, including versioning, deployment, and retraining strategies.
• Handle production support, including troubleshooting model performance issues and pipeline failures.
• Collaborate with data engineers, DevOps, and cloud architects to ensure seamless integration and scalability.
• Drive adoption of best practices in AI/ML, MLOps, and responsible AI.
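To make the responsibilities above concrete, here is a minimal, hypothetical sketch of the kind of real-time scoring loop the role describes: events arrive from a stream (Azure Event Hub or Kafka in production; a plain list stands in here), a model scores each one, and a rolling average flags possible score drift. The weights, threshold, and window size are illustrative placeholders, not details from the posting.

```python
from collections import deque
from statistics import mean

# Hypothetical model weights standing in for a trained model.
WEIGHTS = [0.4, 0.6]

def predict(features):
    """Score one event with the stand-in linear model."""
    return sum(w * x for w, x in zip(WEIGHTS, features))

def run_pipeline(events, drift_window=3, drift_threshold=5.0):
    """Consume events in order, emit (score, drift_flag) pairs.

    Drift detection here is deliberately simple: flag when the rolling
    mean score over the last `drift_window` events exceeds a threshold.
    """
    recent = deque(maxlen=drift_window)
    results = []
    for features in events:
        score = predict(features)
        recent.append(score)
        drifting = len(recent) == drift_window and mean(recent) > drift_threshold
        results.append((score, drifting))
    return results

# Simulated event stream (in production this would be an Event Hub
# or Kafka consumer feeding the loop).
stream = [[1.0, 1.0], [2.0, 2.0], [10.0, 10.0], [12.0, 11.0]]
for score, drifting in run_pipeline(stream):
    print(f"score={score:.2f} drift={drifting}")
```

In a production version of this pattern, the queue would be a managed consumer, the scorer a served model behind a Kubernetes deployment, and the drift check a proper statistical test wired into monitoring and retraining triggers.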






