Data Scientist

⭐ - Featured Role | Apply directly with Data Freelance Hub
This is a W2 contract Data Scientist position in Urbandale, IA, focused on automation for off-road vehicles. Key skills include AWS, big data, and production coding; experience with cloud solutions, MLOps, and infrastructure as code is required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 22, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
On-site
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Urbandale, IA
🧠 - Skills detailed
#IAM (Identity and Access Management) #Migration #S3 (Amazon Simple Storage Service) #PostgreSQL #Big Data #Terraform #Databricks #SageMaker #Databases #Cloud #PySpark #SQL (Structured Query Language) #Debugging #Monitoring #Docker #REST (Representational State Transfer) #AWS (Amazon Web Services) #Lambda (AWS Lambda) #Spark (Apache Spark) #Data Science #AWS Lambda #Data Analysis #Datadog #Deployment #RDS (Amazon Relational Database Service) #MLflow #Infrastructure as Code (IaC) #GitHub #Scala #Automation
Role description

Data Scientist

Location: On-site, Urbandale, IA 50322

Job Type: W2, Contract

General Description:

We are looking for a highly technical engineer or scientist to create features and support the development of automation and autonomy products for complex off-road vehicles and related control systems, using a cloud-based solutions stack. We are open to early- or advanced-career candidates with strong examples of novel contributions and highly independent work in a fast-paced software delivery environment.

Essential Attributes/Experience:

   • Excellent coding skills, including production software deployment experience.

   • Big data experience (terabyte- or petabyte-scale data sources).

   • Core understanding of cloud computing (e.g., AWS services like IAM, Lambda, S3, RDS).

   • Very strong communication skills, including the ability to clearly articulate your experience.

Example Responsibilities (including but not limited to):

   • Architect and propose new AWS/Databricks solutions and updates to existing backend systems that process terabyte- and petabyte-scale data.

   • Work closely with the product management team and end users to understand the customer experience and system requirements, build the backlog, and prioritize work.

   • Build infrastructure as code (e.g., Terraform).

   • Improve system scalability (faster runtimes) and optimize workflows to reduce cloud costs.

   • Create and update REST APIs and backend processes running on AWS Lambda.

   • Build/support solutions involving containerization (e.g., Docker) and databases (e.g., PostgreSQL/PostGIS).

   • MLOps (e.g., deploying CVML models via SageMaker and MLflow) and data analysis (AWS/Databricks stack with SQL/PySpark).

   • Migrate CI/CD pipelines to GitHub Actions.

   • Enhance monitoring and alerting for multiple systems (e.g., Datadog).

   • Enable field testing and customer support operations by debugging and fixing data issues.

   • Work with data scientists to fetch and manipulate large data sets at scale for model building and analysis (a minimal sketch of this kind of task follows below).
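
For illustration only: a minimal PySpark sketch of the kind of large-scale data task described above, assuming a hypothetical telemetry dataset stored as Parquet in S3. The bucket path and column names (vehicle_id, event_timestamp, speed_mps) are placeholders, not details from this posting.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical example: summarize off-road vehicle telemetry stored in S3.
# Bucket path and column names are illustrative placeholders.
spark = SparkSession.builder.appName("telemetry-daily-stats").getOrCreate()

# Read a (potentially terabyte-scale) Parquet dataset; Spark parallelizes the scan.
telemetry = spark.read.parquet("s3://example-bucket/telemetry/")

# Daily per-vehicle aggregates; a narrow groupBy keeps the shuffle manageable at scale.
daily_stats = (
    telemetry
    .withColumn("day", F.to_date("event_timestamp"))
    .groupBy("vehicle_id", "day")
    .agg(
        F.avg("speed_mps").alias("avg_speed_mps"),
        F.count("*").alias("n_events"),
    )
)

# Write results back to S3 for downstream modeling and analysis.
daily_stats.write.mode("overwrite").parquet("s3://example-bucket/derived/daily_stats/")
spark.stop()
```

The same aggregation could equally be expressed in Spark SQL on the AWS/Databricks stack the posting mentions; the choice between the DataFrame API and SQL is largely a team convention.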