EPITEC

Data Scientist I - Data Science & Machine Learning

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Scientist I in Austin, TX, on a 6-month contract (with extensions) at $59.50–$63.00/hr. Key skills include Python ML model development, SQL, Spark, Databricks, and strong communication. MS or PhD preferred; experience with large-scale data processing is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
504
-
🗓️ - Date
February 17, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Austin, TX
-
🧠 - Skills detailed
#Geospatial Analysis #Spark (Apache Spark) #JavaScript #ML (Machine Learning) #Indexing #NLP (Natural Language Processing) #Java #Databricks #Scala #Attribute Analysis #Tableau #Visualization #Data Engineering #Regression #Monitoring #Unsupervised Learning #AI (Artificial Intelligence) #Normalization #Supervised Learning #Data Science #Data Modeling #Data Processing #Object Detection #Model Validation #SQL (Structured Query Language) #Python
Role description
Location: Austin, TX (Onsite)
Contract Length: 6 Months (with extensions)
Compensation: $59.50–$63.00/hr (W2 with benefits)

Job Description
We are seeking a Data Scientist to join an Intelligent Services team working with petabyte-scale machine and agronomic data to deliver production-ready analytics and machine learning solutions. This role partners closely with Product and Data Engineering teams to translate complex data into actionable insights that drive real-world outcomes. The ideal candidate is hands-on, technically strong, and able to clearly communicate findings to both technical and non-technical stakeholders.

Education
• MS preferred; PhD strongly preferred
• Bachelor’s degree considered with strong applied data science experience

Top Skills & Requirements
• Production ML model development in Python (object-oriented)
• Large-scale data processing with SQL, Spark, and Databricks
• End-to-end ML or analytics solution delivery
• KPI definition and performance measurement
• Strong communication and collaboration skills
• Comfortable working onsite in a cross-functional team

Technical Skills
• ML techniques: regression, supervised/unsupervised learning, probabilistic models, NLP
• Data modeling and quality assessment (normalization, coverage, attribute analysis)
• Model validation, bias detection, and drift monitoring
• Visualization tools: Tableau, Kepler.gl, QGIS
• Experience with structured, unstructured, time-series, and geo-tagged data

Nice to Haves
• Geospatial analysis (vector/raster data, geo-indexing)
• Remote sensing, GIS, and satellite imagery
• Computer vision (object detection, segmentation, SAM)
• Advanced AI solutions (RAG, agentic systems, model fine-tuning, monitoring)
• Additional languages (Java, JavaScript, Scala)
• Simulation methods (Monte Carlo, Gibbs sampling)
• Publications, patents, or project portfolio

Key Responsibilities
• Develop and deploy ML models using high-resolution machine and agronomic data
• Translate analysis into actionable insights and recommendations
• Define and track KPIs tied to customer and product success
• Partner with Data Engineering on scalable analytics solutions
• Communicate results, methodology, and trade-offs to stakeholders
• Contribute to best practices for model development and monitoring

#INDOEM