

Ket Software
Data Science Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Science Engineer position in Boston, MA, offering a 12+ month contract at a pay rate of "TBD." Key skills include AWS, Python, Spark, and machine learning. Requires 4+ years of AWS experience and 2+ years in machine learning.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
December 10, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Boston, MA
-
🧠 - Skills detailed
#Data Ingestion #Python #S3 (Amazon Simple Storage Service) #Programming #Spark (Apache Spark) #Storage #ML (Machine Learning) #Compliance #Data Quality #AWS Glue #Data Science #Data Warehouse #Data Lake #SageMaker #Scala #Snowflake #AWS (Amazon Web Services) #Data Governance #Data Lifecycle #Deployment #Visualization #Web Services #Monitoring #Data Pipeline #ETL (Extract, Transform, Load) #Security #Data Analysis
Role description
Role: Data Science Engineer
Location: Boston, MA (local candidates only)
Note: In-person interview required
This is a 12+ month, ongoing contract with our insurance client in Boston, MA, hybrid four days per week, with a mandatory final onsite interview.
We are seeking a talented Data Science Engineer to join our team and contribute to the development and implementation of advanced data solutions using technologies such as AWS Glue, Python, Spark, Snowflake Data Lake, S3, SageMaker, and machine learning (ML).
Overview: As a Data Science Engineer, you will play a crucial role in designing, building, and optimizing data pipelines, machine learning models, and analytics solutions. You will work closely with cross-functional teams to extract actionable insights from data and drive business outcomes.
• Develop and maintain ETL pipelines using AWS Glue for data ingestion, transformation, and integration from various sources (see the Glue/Spark sketch after this list).
• Utilize Python and Spark for data preprocessing, feature engineering, and model development.
• Design and implement data lake architecture using Snowflake Data Lake, Snowflake data warehouse, and S3 for scalable, efficient storage and processing of structured and unstructured data.
• Leverage SageMaker for model training, evaluation, deployment, and monitoring in production environments (see the SageMaker sketch after this list).
• Collaborate with data scientists, analysts, and business stakeholders to understand requirements, develop predictive models, and generate actionable insights.
• Conduct exploratory data analysis (EDA) and data visualization to communicate findings and trends effectively.
• Stay updated with advancements in machine learning algorithms, techniques, and best practices to enhance model performance and accuracy.
• Ensure data quality, integrity, and security throughout the data lifecycle by implementing robust data governance and compliance measures.
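To make the Glue/Spark responsibilities above concrete, here is a minimal sketch of a Glue ETL job that reads raw data from S3, applies a simple Spark transformation, and writes curated output back to S3. The bucket names, paths, and column names are hypothetical illustrations, not details from this posting:

    # Minimal AWS Glue ETL sketch (PySpark). All S3 paths and column
    # names below are hypothetical; adapt them to the actual sources.
    import sys
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext
    from pyspark.sql import functions as F

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext.getOrCreate())
    spark = glue_context.spark_session
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Ingest raw records from S3 (hypothetical bucket and prefix).
    raw = spark.read.parquet("s3://example-bucket/raw/policies/")

    # Light cleaning and feature engineering in Spark.
    clean = (
        raw.dropDuplicates(["policy_id"])
           .withColumn("premium_per_year",
                       F.col("total_premium") / F.col("term_years"))
           .filter(F.col("premium_per_year").isNotNull())
    )

    # Land curated data back in S3 for Snowflake / SageMaker to consume.
    clean.write.mode("overwrite").parquet("s3://example-bucket/curated/policies/")
    job.commit()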
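And a hedged sketch of the SageMaker training-and-deployment flow, using the SageMaker Python SDK; the IAM role ARN, entry-point script, instance types, and S3 path are hypothetical placeholders:

    # SageMaker training + deployment sketch (SageMaker Python SDK).
    # Role ARN, script name, and S3 paths are hypothetical.
    import sagemaker
    from sagemaker.sklearn.estimator import SKLearn

    session = sagemaker.Session()
    role = "arn:aws:iam::123456789012:role/ExampleSageMakerRole"  # hypothetical

    # Train a scikit-learn model defined in a separate entry-point script.
    estimator = SKLearn(
        entry_point="train.py",            # hypothetical training script
        framework_version="1.2-1",
        instance_type="ml.m5.xlarge",
        role=role,
        sagemaker_session=session,
    )
    estimator.fit({"train": "s3://example-bucket/curated/policies/"})

    # Deploy the trained model behind a real-time inference endpoint.
    predictor = estimator.deploy(
        initial_instance_count=1,
        instance_type="ml.m5.large",
    )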
Requirements added by the job poster
• 4+ years of work experience with Amazon Web Services (AWS)
• 2+ years of work experience with Machine Learning
• Willingness to undergo a background check
• 3+ years of work experience with Python (Programming Language)
• Willingness to work in a hybrid setting