360 Technology

Data Scientist

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Scientist with 5+ years of experience, focusing on model development in AWS SageMaker using Python and PySpark. Requires expertise in SQL, MLOps, and AWS services. Contract length and pay rate are unspecified.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
October 14, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Dallas, TX
-
🧠 - Skills detailed
#Data Wrangling #Libraries #SQL (Structured Query Language) #Classification #Clustering #NumPy #PyTorch #Athena #Data Science #Data Quality #ML (Machine Learning) #Cloud #Deployment #Pandas #AI (Artificial Intelligence) #Model Evaluation #Jenkins #Airflow #NLTK (Natural Language Toolkit) #Lambda (AWS Lambda) #SageMaker #Docker #GitHub #AWS (Amazon Web Services) #PySpark #Python #Data Mining #AWS Lambda #TensorFlow #Terraform #Spark (Apache Spark) #SciPy #AWS SageMaker #Databricks #Visualization #ETL (Extract, Transform, Load) #Tableau #Redshift
Role description
• 5+ years' experience as a data scientist with an understanding of MLOps.
• Responsible for developing models and prototypes in AWS SageMaker notebooks using Python and PySpark.
• Perform detailed analysis of data sources using AWS Athena, Redshift, or Databricks SQL, covering data quality and feature engineering. Expert in SQL querying, including window functions.
• Perform data assessment and other data preparation needed for the use case.
• Good understanding of linear and nonlinear model algorithms such as Random Forest and XGBoost; aware of different model evaluation techniques.
• Responsible for performing MLOps operations end to end (development, validation, deployment, and visualization) using AWS Lambda and Step Functions, along with Jenkins and Terraform.
• Good understanding of cloud platforms and their services.
• Proficient in coding logic with Python, with experience in libraries such as NumPy, Pandas, SciPy, NLTK, and scikit-learn, plus data wrangling packages.
• Good to have: hands-on experience with visualization tools such as Power BI and Tableau.
• Expert-level knowledge of advanced data mining and statistical modelling: predictive modelling, classification, clustering, and text mining techniques.

Candidate MUST HAVE:
- Hands-on model building with Python / PySpark ML
- AWS experience (SageMaker, Athena, etc.)
- Databricks experience
- Experience with TensorFlow or PyTorch (Transformer models)
- Experience with the scikit-learn Python package
- Expert in SQL querying, including window functions
- GitHub experience

Good to have:
- Airflow
- Docker
- AWS Bedrock
- Power BI
- Gen AI experience
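To make the model-building and evaluation requirements above concrete, here is a minimal sketch in scikit-learn. The synthetic dataset, parameter values, and chosen metric are illustrative assumptions, not part of the posting:

```python
# Minimal sketch: build a Random Forest (one of the nonlinear algorithms
# the posting names) and evaluate it with cross-validated AUC.
# The dataset is synthetic; all parameters are illustrative, not prescribed.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic binary-classification data standing in for a prepared feature set
X, y = make_classification(n_samples=500, n_features=10, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)

# 5-fold cross-validated ROC AUC as one example of a model evaluation technique
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Mean AUC: {scores.mean():.3f}")
```

In a SageMaker notebook the same pattern applies, with `make_classification` replaced by features assembled from Athena, Redshift, or Databricks SQL queries.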
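The posting repeatedly asks for SQL window functions. A small sketch of what that skill looks like, shown here with Python's built-in sqlite3 (requires SQLite ≥ 3.25; the table, columns, and data are invented for illustration — Athena, Redshift, and Databricks SQL use the same `OVER (PARTITION BY … ORDER BY …)` syntax):

```python
# Sketch of SQL window functions: per-user row numbering and a running total.
# Table and data are illustrative; requires SQLite >= 3.25 for window support.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (user_id INT, ts INT, amount REAL)")
con.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(1, 1, 10.0), (1, 2, 20.0), (2, 1, 5.0)],
)

rows = con.execute("""
    SELECT user_id, ts, amount,
           ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY ts) AS rn,
           SUM(amount)  OVER (PARTITION BY user_id ORDER BY ts) AS running_total
    FROM events
    ORDER BY user_id, ts
""").fetchall()

for row in rows:
    print(row)
```

`PARTITION BY` restarts the window per user, so `rn` and `running_total` reset between users instead of accumulating across the whole table.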