

neteffects
Data Scientist
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Scientist on a 6-month contract at a day rate of $560 USD. It requires strong AI/ML engineering skills, experience with GCP and AWS, and proficiency in Python, SQL, and model deployment.
Country: United States
Currency: $ USD
Day rate: 560
Date: December 20, 2025
Duration: Unknown
Location: Unknown
Contract: Unknown
Security: Unknown
Location detailed: United States
Skills detailed: #VPC (Virtual Private Cloud) #Forecasting #BigQuery #A/B Testing #Documentation #Deep Learning #Plotly Dash #Classification #Streamlit #Lambda (AWS Lambda) #TensorFlow #Automated Testing #S3 (Amazon Simple Storage Service) #Regression #Data Pipeline #Compliance #Data Quality #ML (Machine Learning) #AI (Artificial Intelligence) #AWS (Amazon Web Services) #PyTorch #Scala #Security #Data Science #Statistics #SageMaker #SQL (Structured Query Language) #Plotly #GitHub #Cloud #GCP (Google Cloud Platform) #Python #Monitoring #Deployment #Langchain #AWS S3 (Amazon Simple Storage Service) #Computer Science #IAM (Identity and Access Management)
Role description
Not available for C2C engagements | Third-party vendors who contact us regarding this role will be blocked.
Candidates must be eligible for employment in the US without need for sponsorship now or in the future.
We're hiring a hands-on Data Scientist with strong AI/ML engineering skills to help design and build AI/ML solutions across cloud platforms. You'll work with business and engineering partners to translate business problems into measurable, scalable AI/ML services, including GenAI agents, production ML models, and lightweight internal apps.
What you'll do
• Partner with business and engineering teams to translate problems into measurable AI/ML solutions and success metrics.
• Design, train, validate, and deploy models for classification, regression, recommendation, and time-series forecasting; pick algorithms, features, and evaluation strategies that match business goals.
• Develop and evaluate GenAI agent applications using frameworks like Langchain and Google ADK, leveraging techniques such as RAG, prompt engineering, and vector DB integration.
• Build and operate reliable data pipelines and model inference endpoints across GCP and AWS (BigQuery, Vertex AI, Cloud Run, S3, Lambda, SageMaker, etc.).
• Implement CI/CD, automated testing, and monitoring for ML/data projects (GitHub Actions, Cloud Build, CodeBuild/CodePipeline).
• Create lightweight dashboards and internal apps (Streamlit, Plotly Dash) to deliver models and insights to stakeholders.
• Write clear model documentation: problem formulation, modeling approach, validation, data needs, and deployment steps.
• Advocate for coding best practices, reproducibility, and shared documentation across a global data science organization.
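As a rough illustration of the model lifecycle the bullets above describe (train, validate, report a metric tied to the business goal), here is a minimal scikit-learn sketch. The synthetic dataset and the choice of logistic regression are illustrative placeholders, not part of the role's actual stack beyond the libraries named in the posting.

```python
# Minimal sketch of a train/validate loop for a classification problem.
# The dataset is synthetic; in practice it would come from BigQuery or S3.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a business classification problem.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Train, then evaluate on a held-out set with a metric chosen up front.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"holdout AUC: {auc:.3f}")
```

The same structure (fit on training data, score on a holdout with a pre-agreed metric) carries over to regression, recommendation, and forecasting tasks; only the estimator and metric change.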
Required qualifications
• Master's degree plus 1+ years of experience, or a PhD, in Data Science, Computer Science, Applied Math/Statistics, Econometrics, or a related quantitative field.
• Strong Python and SQL skills; experience with scikit-learn, XGBoost/LightGBM, and a deep learning framework (PyTorch or TensorFlow).
• Hands-on experience building GenAI/LLM applications and agentic workflows.
• Practical experience building reliable data pipelines and ensuring data quality/lineage on GCP (BigQuery).
• CI/CD experience for ML/data projects (e.g., GitHub Actions, Cloud Build, AWS CodeBuild/CodePipeline).
• Practical experience building internal dashboards/apps with Streamlit and/or Plotly Dash.
• Cloud experience with GCP (BigQuery, Vertex AI, Cloud Run, Gemini Enterprise) and AWS (S3, Lambda, ECS, SageMaker).
• Solid understanding of the mechanisms behind widely used AI/ML algorithms, including their intuition, assumptions, statistical theory, computational complexity, strengths and weaknesses, and when to use each.
Preferred qualifications
• Security and compliance best practices: IAM, secrets management, VPC/networking.
• Experience building APIs for model/agent inference.
• Containerization and orchestration.
• Workflow orchestration.
• Background in causal inference, statistics, or econometrics.
• A/B testing and experimentation design.
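To illustrate the kind of experimentation design mentioned in the last bullet, here is a minimal two-proportion z-test sketch using only the Python standard library. The conversion counts are made-up numbers for demonstration.

```python
# Rough sketch of a two-proportion z-test for an A/B experiment.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z statistic, two-sided p-value) for a difference in rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis of no difference.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Illustrative counts: control converts 120/1000, variant 150/1000.
z, p = two_proportion_z(conv_a=120, n_a=1000, conv_b=150, n_b=1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

In practice a production experimentation setup would also cover power analysis, sequential-testing corrections, and guardrail metrics, but the core comparison looks like this.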






