Ascii Group, LLC

Senior Data Scientist - W2 Only

โญ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Scientist in Pleasanton, CA, for 12+ months at a W2 pay rate. Key skills include Python, SQL, Azure Data Factory, Spark/PySpark, and machine learning frameworks. Requires 9+ years of relevant experience.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
๐Ÿ—“๏ธ - Date
February 18, 2026
🕒 - Duration
More than 6 months
-
๐Ÿ๏ธ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
๐Ÿ“ - Location detailed
Pleasanton, CA
-
🧠 - Skills detailed
#Data Cleaning #MongoDB #ML (Machine Learning) #Big Data #SQL (Structured Query Language) #SQL Server #Normalization #Deep Learning #AWS S3 (Amazon Simple Storage Service) #Model Evaluation #ETL (Extract, Transform, Load) #Transformers #Airflow #Regression #Data Wrangling #BI (Business Intelligence) #S3 (Amazon Simple Storage Service) #AWS (Amazon Web Services) #MySQL #Forecasting #NLP (Natural Language Processing) #Storytelling #Azure Data Factory #Cloud #Model Deployment #ADF (Azure Data Factory) #Flask #EC2 #Kubernetes #Matplotlib #Synapse #Tableau #Azure #Data Engineering #Data Science #Databricks #TensorFlow #Azure Synapse Analytics #Logistic Regression #PyTorch #Hadoop #FastAPI #Data Pipeline #NoSQL #Programming #Keras #PostgreSQL #Databases #PySpark #Docker #SageMaker #Plotly #Data Processing #GCP (Google Cloud Platform) #Spark (Apache Spark) #Visualization #AI (Artificial Intelligence) #Data Framework #Microsoft Power BI #Deployment #Python
Role description
The following requirement is open with our client.

Title: Senior Data Scientist (W2)
Location: Pleasanton, CA
Duration: 12+ Months
Relevant Experience (in Yrs.): 9+

Job Description:
1. Programming Languages
• Strong hands-on Python skills for data prep, modeling, and building ML components
• SQL skills: joins, window functions, CTEs, query optimization
2. Machine Learning
• Linear/Logistic Regression
• Decision Trees, Random Forest, XGBoost, LightGBM
• SVM, KNN
• Model evaluation: Precision/Recall, F1, ROC-AUC, MSE, RMSE
• Model tuning: grid search, randomized search, cross-validation
3. Deep Learning
• Frameworks: TensorFlow, Keras, PyTorch
• CNNs, RNNs, LSTMs, Transformers
• Use cases: NLP, computer vision, time-series forecasting
4. Data Wrangling & Preprocessing
• Missing data handling
• Feature engineering
• Data cleaning
• Outlier detection
• Normalization/standardization
5. Data Visualization & BI Tools
• Python: Matplotlib, Seaborn, Plotly
• Tools: Tableau, Power BI
• Dashboards, reporting, storytelling with data
6. Big Data & Cloud Tools (needed for production-scale roles)
• Big Data frameworks: Spark, Hadoop
• Cloud platforms (any one, strongly): AWS (S3, EC2, SageMaker), Azure (Data Factory, Databricks, ML Studio), GCP (BigQuery, Vertex AI)
7. Deployment Skills (advanced roles)
• Model deployment: Flask, FastAPI
• Docker, Kubernetes (optional)
• CI/CD basics
8. Databases & Data Engineering Basics
• Relational: MySQL, PostgreSQL, SQL Server
• NoSQL: MongoDB, Cassandra
• Data pipelines: Airflow, Prefect (optional)

Desirable Skills: Azure Data Factory, Databricks, Azure Synapse Analytics, Python, SQL, PySpark
Must-Have Skills:
• Azure Data Factory
• Python
• SQL
• Spark / PySpark (big data processing)
• Azure/AWS/GCP
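For candidates gauging the expected depth of the model-evaluation skills listed above, here is a minimal, illustrative sketch (not part of the client's requirements) showing how precision, recall, and F1 are derived from a confusion matrix, using only the Python standard library:

```python
# Illustrative only: hand-computed versions of the evaluation metrics
# named in the posting (precision, recall, F1) for binary classification.
def precision_recall_f1(y_true, y_pred):
    """Return (precision, recall, F1) for 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy labels and predictions, chosen only for illustration.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
p, r, f = precision_recall_f1(y_true, y_pred)
print(p, r, f)  # 0.75 0.75 0.75
```

In practice these metrics would come from a library such as scikit-learn (`precision_score`, `recall_score`, `f1_score`), but the arithmetic above is what those calls compute.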