
Data Scientist (W2 Position) - Hybrid (3 Days Onsite in Washington, DC)
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Scientist (W2 Position) with a contract length of "unknown" and a pay rate of "unknown." It requires 3+ years of experience, proficiency in Python, SQL, and Power BI, and the ability to obtain a Public Trust clearance.
Country: United States
Currency: Unknown
Day rate: Unknown
Date discovered: September 26, 2025
Project duration: Unknown
Location type: Hybrid
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Washington, DC 20032
Skills detailed: #Deployment #Security #GraphQL #Visualization #NLP (Natural Language Processing) #Deep Learning #Libraries #Data Processing #Classification #TensorFlow #Batch #Data Analysis #PyTorch #Pandas #Data Management #ETL (Extract, Transform, Load) #ML (Machine Learning) #Splunk #Neural Networks #Regression #Microsoft Power BI #Scripting #BI (Business Intelligence) #Databases #Monitoring #Matplotlib #R #Cybersecurity #Automation #SQL (Structured Query Language) #Python #NoSQL #Data Pipeline #DAX #Scala #Computer Science #NumPy #Programming #AI (Artificial Intelligence) #Forecasting #Data Science #Statistics #Data Aggregation #API (Application Programming Interface) #Data Mining #Storage
Role description
Clearance and Location Requirements
Must be able to obtain a Public Trust clearance.
This position requires working onsite 3 days per week in Washington, DC.
About the Role:
The overall objective of the Data Scientist position is to harness complex data and transform it into actionable insights that drive strategic decision-making across the organization. The role entails applying advanced analytical and technical skills with Python, APIs, SQL, and Power BI to develop predictive models, optimize data processes, and produce robust data-driven solutions.
The Data Scientist will work with the enterprise cybersecurity department in a practical data-driven role, collaborating with various teams to understand business requirements and ensure that data analytics efforts are aligned with organizational goals.
Technical Skills
Programming Proficiency: Strong experience with open-source programming languages such as R or Python for data retrieval, transformation, manipulation, consolidation, querying, analysis, and database connections.
Artificial Intelligence & Machine Learning: Practical experience building, training, and deploying machine learning models (regression, classification, forecasting, NLP, and neural networks), with exposure to deep learning frameworks (TensorFlow, PyTorch, or similar).
API Development & Integration: Ability to design, consume, and automate RESTful and GraphQL APIs for seamless data exchange and workflow integration across platforms.
Python Automation: Proven ability to automate data processing, reporting, and analysis tasks using Python (e.g., scripting ETL workflows, building reusable libraries, and integrating with APIs to reduce manual effort and increase efficiency).
Threat Hunting & Security Analytics: Hands-on experience performing threat hunting in Splunk and other SIEM platforms, including developing searches, building dashboards, and correlating endpoint (EDR) and network (IDS/Firewall) events to identify anomalies and detect malicious activity.
Data Management: Overseeing the collection, storage, management, quality, and protection of data; ability to create and programmatically work with and maintain SQL and NoSQL databases.
Data Visualization and Analysis Tools: Proficiency with Power BI (DAX, Power Query) and/or equivalent tools for data analysis, visualization, and creating interactive data-driven dashboards.
Data Pipeline Optimization: Ability to build and optimize scalable, automated data pipelines in Python to support real-time and batch analytics, ensuring performance, quality, and integrity (see the sketch after this list).
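To make the Python automation, API-integration, and pipeline expectations above more concrete, here is a minimal, illustrative ETL sketch: it pulls JSON records from a REST endpoint, cleans them with pandas, and loads them into a SQL table. The endpoint URL, field names, database, and table name are hypothetical placeholders, not details from this posting.

```python
# Minimal ETL sketch: extract from a (hypothetical) REST API, transform with
# pandas, load into SQLite. Endpoint, fields, and table name are placeholders.
import sqlite3

import pandas as pd
import requests


def extract(url: str) -> list[dict]:
    """Pull JSON records from a REST endpoint (assumed to return a list of dicts)."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.json()


def transform(records: list[dict]) -> pd.DataFrame:
    """Normalize raw records: parse timestamps, drop duplicates and bad rows."""
    df = pd.DataFrame(records)
    df["event_time"] = pd.to_datetime(df["event_time"], errors="coerce")
    return df.drop_duplicates().dropna(subset=["event_time"])


def load(df: pd.DataFrame, db_path: str, table: str) -> None:
    """Append the cleaned frame to a SQL table (SQLite here for simplicity)."""
    with sqlite3.connect(db_path) as conn:
        df.to_sql(table, conn, if_exists="append", index=False)


if __name__ == "__main__":
    # Hypothetical endpoint and destinations, used only for illustration.
    raw = extract("https://api.example.com/v1/security-events")
    clean = transform(raw)
    load(clean, "analytics.db", "security_events")
```

In practice, a script like this would be scheduled (for example via cron or an orchestration tool) and hardened with logging, retries, and data-quality checks.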
Analytical Skills
Data Science and Analytics: 2+ years of experience in data visualization, statistical analysis, predictive analytics, and machine learning.
Data Mining and Wrangling: Hands-on experience preparing, cleaning, and transforming data for model consumption.
Model Development Lifecycle: Understanding the AI/ML lifecycle from data aggregation and feature engineering to model training, deployment, and monitoring.
Python Libraries for Data Analysis: Proficiency with libraries such as Pandas, NumPy, Scikit-learn, and Matplotlib, plus familiarity with automation frameworks (a brief sketch follows this list).
Innovation in AI: Ability to evaluate emerging AI/automation solutions and integrate them into business workflows for efficiency and scalability.
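As a rough illustration of the model-development lifecycle and Python libraries listed above, the sketch below builds a synthetic feature table with Pandas and NumPy, then trains and evaluates a simple Scikit-learn classifier. The features, labels, and thresholds are fabricated purely for the example.

```python
# Minimal model-lifecycle sketch: synthetic data -> train -> evaluate.
# The feature table and labels are randomly generated purely for illustration.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=42)

# Fabricated features standing in for aggregated security telemetry.
df = pd.DataFrame(
    {
        "failed_logins": rng.poisson(2, size=1_000),
        "bytes_out_mb": rng.gamma(2.0, 50.0, size=1_000),
        "distinct_hosts": rng.integers(1, 20, size=1_000),
    }
)
# Toy label: flag rows with unusually high activity as "suspicious".
y = ((df["failed_logins"] > 4) & (df["bytes_out_mb"] > 120)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df, y, test_size=0.2, random_state=42, stratify=y
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
```

A production workflow would continue from here with model versioning, deployment, and ongoing monitoring for drift, per the lifecycle item above.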
Collaborative Skills and Educational Background
Cross-Functional Collaboration: Partner with Data Scientists, Cybersecurity Engineers, Program Leads, and Data Owners to understand requirements and deliver efficient AI-powered, API-driven solutions for exploration, analysis, and modeling.
Educational Background and Work Experience: 3+ years of experience and a bachelor's degree in Data Science, Statistics, Computer Science, or a related quantitative discipline (advanced degree preferred).
Job Type: Contract
Work Location: Hybrid remote in Washington, DC 20032