

ZonForce Technology
Data Scientist
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Scientist with a focus on the energy domain, offering a remote contract. Candidates must have strong Python, machine learning, and generative AI skills, along with experience in model development and performance monitoring.
Country
United States
Currency
$ USD
Day rate
Unknown
Date
November 5, 2025
Duration
Unknown
Location
Remote
Contract
W2 Contractor
Security
Unknown
Location detailed
United States
Skills detailed
#ML (Machine Learning) #Databricks #Libraries #AI (Artificial Intelligence) #Data Exploration #NumPy #Pandas #Regression #Python #TensorFlow #Data Science #Classification #Clustering #Visualization #ADF (Azure Data Factory) #Deployment #Data Analysis #Matplotlib #PyTorch #Azure Data Factory #Monitoring #Compliance #Azure #Data Manipulation #Hugging Face
Role description
ONLY W2 CANDIDATES FOR THIS OPPORTUNITY.
Title - Data Scientist (Energy Domain Preferred)
Location - Remote
Responsibilities:
• Python Proficiency: Strong hands-on experience in Python for data manipulation, analysis, and model development using libraries like Pandas, NumPy, and Scikit-learn.
• Machine Learning Expertise: Advanced knowledge of traditional ML techniques (e.g., regression, classification, clustering) and frameworks like TensorFlow or PyTorch.
• Generative AI: Experience in developing and fine-tuning generative AI models (e.g., GPT, LLMs) using frameworks like OpenAI or Hugging Face.
• Prompt Engineering: Proven ability to design and optimize prompts for generative AI models to enhance AI agent performance.
• Model Development and Evaluation: Expertise in building, testing, and optimizing models with robust evaluation techniques (e.g., F1-score, AUC-ROC, BLEU scores) and hyperparameter tuning.
• AI Agent Development: Hands-on experience in creating AI agents integrating machine learning, reasoning, and heuristics for prioritization and decision-making.
• Data Exploration: Strong skills in exploratory data analysis (EDA) and feature engineering using visualization libraries like Matplotlib and Seaborn.
• Performance Monitoring: Experience in setting up pipelines to monitor model performance and ensure accuracy, efficiency, and ethical compliance.
• Pipeline Development: Proven ability to design, build, and automate end-to-end pipelines for data preparation, model training, evaluation, and deployment using tools like Databricks, Azure Data Factory, or similar orchestration frameworks.
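As a rough illustration of the model-evaluation and hyperparameter-tuning skills listed above, the following is a minimal scikit-learn sketch. The dataset is a synthetic stand-in (no energy-domain data is implied), and the choice of classifier and search grid is purely illustrative:

```python
# Minimal sketch: fit a classifier, tune one hyperparameter with a
# cross-validated grid search, then score held-out data with F1 and AUC-ROC.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for a real dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Hyperparameter tuning: search over tree depth, scored by AUC-ROC.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"max_depth": [3, 5, None]},
    scoring="roc_auc",
    cv=3,
)
search.fit(X_train, y_train)

# Robust evaluation of the tuned model on held-out data.
best = search.best_estimator_
f1 = f1_score(y_test, best.predict(X_test))
auc = roc_auc_score(y_test, best.predict_proba(X_test)[:, 1])
print(f"F1={f1:.3f}  AUC-ROC={auc:.3f}")
```

The same pattern extends to the other evaluation metrics the posting names (BLEU for generative outputs) and to larger search spaces.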





