

Altak Group Inc.
Data Science Manager
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Science Manager with an 8+ year background in AI/ML and data engineering and experience managing teams, offered as a 12-month engagement at a pay rate of "X", working remotely. Key skills include the AWS AI/ML stack, vector search technologies, and MLOps best practices.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
December 2, 2025
Duration
Unknown
-
Location
Unknown
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
United States
-
Skills detailed
#ML (Machine Learning) #AutoScaling #Scala #S3 (Amazon Simple Storage Service) #Leadership #Data Governance #Databricks #Forecasting #Lambda (AWS Lambda) #Langchain #Classification #AI (Artificial Intelligence) #Azure #NLP (Natural Language Processing) #Data Engineering #Observability #Documentation #Predictive Modeling #AWS (Amazon Web Services) #Batch #VPC (Virtual Private Cloud) #Indexing #IAM (Identity and Access Management) #Redshift #Security #Compliance #Kafka (Apache Kafka) #Strategy #Data Science #SageMaker #Snowflake #OpenSearch #A/B Testing #Metadata #Deployment #Cloud
Role description
Role summary
We're looking for a Manager, Data Scientist to lead the design and delivery of advanced AI/ML solutions, from problem framing and model development to deployment strategy, security, and cost management. You will manage a team of data scientists and collaborate closely with solution architects to ensure AI/ML efforts align with enterprise goals. You will direct model selection, validation, infrastructure integration, and governance to deliver scalable, secure, and cost-effective AI services.
What you'll do
• Lead end-to-end AI/ML project execution, including predictive modeling, NLP, computer vision, and retrieval-augmented generation (RAG) initiatives. Define best practices, reference architectures, and operational patterns (real-time, batch, online/offline inference).
• Oversee model lifecycle management: select appropriate LLMs/foundation models and classical machine learning algorithms, establish evaluation metrics (quality, latency, safety), and ensure guardrails and fallback mechanisms are in place.
• Manage collaboration with cloud engineering teams on AWS AI/ML services (Amazon Bedrock, SageMaker Studio and Training, S3/Lake Formation, Kendra, OpenSearch, Lambda, EKS/ECS, etc.) for solution deployment and orchestration.
• Guide vector search and retrieval system design, including embedding strategies, indexing, metadata filtering, and operational considerations using technologies such as Kendra, Pinecone, and OpenSearch.
• Drive development of RAG and agentic AI patterns, such as retrieval pipelines, prompt orchestration, tool integration, persona management, caching, and safety filtering (a minimal retrieval-and-prompting sketch follows this list).
• Design and implement environment strategies across development, testing, and production, including GPU/accelerator resource allocation and CI/CD pipelines for models and prompts.
• Balance cost optimization and performance tuning by forecasting resource needs, managing budgets, autoscaling, and applying techniques such as quantization and caching.
• Enforce security, privacy, and compliance standards covering data governance, encryption, key management, and mitigation of prompt injection and other threats.
• Partner with ML engineers and data scientists on feature stores, experiment tracking, evaluation pipelines, and A/B testing to continuously improve model performance and reliability.
• Establish operational readiness with defined SLOs/SLIs, telemetry, incident response plans, and capacity planning.
• Mentor and develop your team, author documentation and best practices, and enable cross-functional teams through training and reference solutions.
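For illustration only, here is a minimal sketch of the retrieval-and-prompting step behind a RAG workflow like those described above. It is not this team's implementation: the embed() function, the in-memory corpus, and the cosine-similarity ranking are hypothetical stand-ins for a managed embedding model and a vector store such as OpenSearch or Pinecone.

```python
"""Minimal RAG retrieval sketch: embed a query, filter by metadata,
rank chunks by cosine similarity, and assemble a grounded prompt.
All names (embed, Chunk, the corpus) are illustrative placeholders."""
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Chunk:
    text: str
    vector: np.ndarray
    metadata: dict = field(default_factory=dict)

def embed(text: str) -> np.ndarray:
    # Placeholder: in practice this would call an embedding model
    # (e.g. a Bedrock or SageMaker endpoint). A deterministic random
    # unit vector stands in for the real embedding here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

def retrieve(query: str, corpus: list[Chunk], k: int = 3,
             must_match: dict | None = None) -> list[Chunk]:
    """Return the top-k chunks by cosine similarity after metadata filtering."""
    q = embed(query)
    candidates = [
        c for c in corpus
        if not must_match
        or all(c.metadata.get(key) == val for key, val in must_match.items())
    ]
    ranked = sorted(candidates, key=lambda c: float(np.dot(q, c.vector)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, chunks: list[Chunk]) -> str:
    """Assemble a grounded prompt with retrieved context and a simple guardrail instruction."""
    context = "\n\n".join(f"[{i + 1}] {c.text}" for i, c in enumerate(chunks))
    return (
        "Answer using only the context below. If the answer is not in the "
        f"context, say so.\n\nContext:\n{context}\n\nQuestion: {query}"
    )

if __name__ == "__main__":
    corpus = [Chunk(t, embed(t), {"source": "kb"}) for t in
              ["Refund policy ...", "Shipping times ...", "Warranty terms ..."]]
    top = retrieve("How long does shipping take?", corpus, k=2, must_match={"source": "kb"})
    print(build_prompt("How long does shipping take?", top))
```

In a production setting, the ranking and filtering shown here would typically be pushed down into the vector store, with re-ranking, caching, and safety filters layered on top.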
Required qualifications
• 8+ years of experience in data science, AI/ML, and data engineering, with 3+ years managing teams and architecting production-scale AI solutions in enterprise environments.
• Demonstrated leadership in delivering LLM-driven applications (chatbots, agents, RAG) and machine learning services (forecasting, classification, ranking) in production.
• Deep familiarity with the AWS AI/ML stack (Bedrock, SageMaker, S3, Glue, Redshift, Lambda, EKS, VPC, IAM, KMS).
• Strong expertise in vector search technologies, embeddings, retrieval design, re-ranking, and metadata, including experience managing at least one managed vector store or search service.
• Hands-on knowledge of MLOps best practices, including CI/CD, model registries, automated evaluation, versioning, rollbacks, and observability (a simple registry-and-promotion sketch follows this list).
• Solid understanding of security, compliance, and governance for regulated data (PHI/PII), including encryption, access controls, and audit trails.
• Strong financial acumen in managing compute and token budgets and optimizing latency, throughput, and cost in AI/ML workloads.
• Excellent communication skills to lead teams and collaborate with stakeholders.
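As a hedged illustration of one MLOps pattern named above, the sketch below shows model registration with gated promotion: a candidate version replaces production only if automated evaluation clears a threshold, otherwise the last good version is kept. The Registry class and metric values are hypothetical stand-ins for a real model registry (such as SageMaker Model Registry or MLflow) and evaluation pipeline.

```python
"""Minimal sketch of gated model promotion with an in-memory registry.
All classes and scores here are illustrative, not a real registry API."""
from dataclasses import dataclass

@dataclass
class ModelVersion:
    name: str
    version: int
    metric: float  # e.g. an offline evaluation score

class Registry:
    def __init__(self) -> None:
        self.versions: list[ModelVersion] = []
        self.production: ModelVersion | None = None

    def register(self, mv: ModelVersion) -> None:
        # Record every candidate so older versions remain available for rollback.
        self.versions.append(mv)

    def promote_if_better(self, mv: ModelVersion, min_gain: float = 0.01) -> bool:
        """Promote the candidate only if it beats production by min_gain;
        otherwise keep the current production version."""
        baseline = self.production.metric if self.production else float("-inf")
        if mv.metric >= baseline + min_gain:
            self.production = mv
            return True
        return False

if __name__ == "__main__":
    reg = Registry()
    v1 = ModelVersion("churn-classifier", 1, metric=0.81)
    reg.register(v1)
    reg.promote_if_better(v1)            # first version becomes production
    v2 = ModelVersion("churn-classifier", 2, metric=0.79)  # regression
    reg.register(v2)
    promoted = reg.promote_if_better(v2)
    print("promoted v2" if promoted else f"kept v{reg.production.version}")
```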
Preferred qualifications
• Experience with Azure AI and hybrid/multi-cloud architectures.
• Familiarity with the Snowflake Cortex, Databricks, and MosaicML ecosystems.
• Knowledge of agent frameworks (LangChain, LlamaIndex) and productionizing them on cloud platforms.
• Experience with event-driven streaming (Kafka/MSK), online feature stores, and real-time inference.
• FinOps mindset with prior responsibility for large budgets involving GPU and token usage.
• Background in regulated industries such as healthcare or financial services; FedRAMP or equivalent compliance experience is a plus.
• Relevant cloud/AI certifications (e.g., AWS ML Specialty, AWS Solutions Architect Professional).






