
Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This is a Data Engineer contract position in Boston, MA, lasting more than 6 months, with a pay rate of $145,212.42 - $174,879.48 per year. It requires 6+ years of experience and strong AWS, Spark, Scala, and SQL skills, particularly with Redshift.
Country: United States
Currency: $ USD
Day rate: $794.90
Date discovered: August 24, 2025
Project duration: More than 6 months
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: Boston, MA 02118
Skills detailed: #Data Pipeline #Debugging #Big Data #Documentation #SNS (Simple Notification Service) #Python #GIT #Spark (Apache Spark) #SQL (Structured Query Language) #Redshift #SQS (Simple Queue Service) #AWS Glue #ETL (Extract, Transform, Load) #Lambda (AWS Lambda) #Data Engineering #Databases #Data Warehouse #Scala #AWS (Amazon Web Services) #Deployment #Cloud #Unit Testing #Snowflake
Role description
Position Name: Data Engineer
Industry: IT
Location: Onsite - Boston, MA (USA)
Job Description:
We are looking for experienced Data Engineers with strong expertise in designing and developing enterprise-wide big data solutions. The ideal candidate will have deep knowledge of AWS services, Spark, and modern data engineering practices, along with the ability to contribute to design, development, deployment, and production support.
Role and Responsibilities:
Design, develop, and implement end-to-end enterprise big data solutions.
Build and optimize big data pipelines using Spark, Scala, AWS Glue, Lambda, SNS, SQS, and CloudWatch (see the sketch after this list).
Develop and maintain scalable applications with Scala/Python.
Work extensively with SQL databases (preferably Redshift) and data warehouse technologies.
Design and implement ETL/ELT processes and frameworks.
Collaborate with teams to create integration and technical design documentation.
Conduct peer reviews of design and code to ensure adherence to best practices and standards.
Perform configuration, development, test case preparation, and unit testing (a test sketch follows the Required Skills list below).
Identify and resolve complex defects during testing and perform root cause analysis.
Support and execute performance testing of solutions.
Utilize Git repositories and participate in CI/CD deployment pipelines.
Provide production support, troubleshoot issues, and tune environments for optimal performance.
Ensure best practices and coding standards are followed across all phases of the project lifecycle.
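To ground the pipeline responsibilities above, here is a minimal sketch of the kind of job this role describes: extract from S3, transform in Spark/Scala, load into Redshift, and publish a completion notice to SNS. It is a sketch under assumptions, not this team's actual stack: the bucket names, table names, connection string, and topic ARN are all hypothetical, and the spark-redshift-community connector is just one of several Redshift writers in use.

```scala
import org.apache.spark.sql.{SparkSession, functions => F}
import software.amazon.awssdk.services.sns.SnsClient
import software.amazon.awssdk.services.sns.model.PublishRequest

object OrdersEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("orders-etl").getOrCreate()

    // Extract: raw JSON events from S3 (hypothetical bucket and path).
    val raw = spark.read.json("s3://example-raw-bucket/orders/2025/08/")

    // Transform: keep completed orders, aggregate revenue per day.
    val daily = raw
      .filter(F.col("status") === "COMPLETED")
      .withColumn("order_date", F.to_date(F.col("created_at")))
      .groupBy("order_date")
      .agg(
        F.sum("amount").as("daily_revenue"),
        F.count("*").as("order_count")
      )

    // Load: write to Redshift over JDBC, staging through S3.
    // Option names follow the spark-redshift-community connector;
    // adjust for whatever connector the team actually uses.
    daily.write
      .format("io.github.spark_redshift_community.spark.redshift")
      .option("url", "jdbc:redshift://example-cluster:5439/dev?user=etl&password=" +
        sys.env.getOrElse("REDSHIFT_PASSWORD", ""))
      .option("dbtable", "analytics.daily_revenue")
      .option("tempdir", "s3://example-temp-bucket/redshift-staging/")
      .option("forward_spark_s3_credentials", "true")
      .mode("append")
      .save()

    // Notify downstream consumers that the load finished (hypothetical topic ARN).
    val sns = SnsClient.create()
    sns.publish(
      PublishRequest.builder()
        .topicArn("arn:aws:sns:us-east-1:123456789012:etl-notifications")
        .message("orders-etl completed successfully")
        .build()
    )

    spark.stop()
  }
}
```

On AWS Glue the same shape would run inside a GlueContext/Job wrapper rather than a bare main method, with driver logs landing in CloudWatch automatically.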
Required Skills & Experience:
6+ years of experience in designing, developing, and delivering enterprise big data solutions.
Strong expertise in Spark, Scala, AWS Glue, Lambda, SNS, SQS, and CloudWatch.
Proficiency in Scala and Python application development.
Advanced SQL skills with hands-on experience in Redshift (preferred).
Solid experience in building and maintaining ETL/ELT pipelines.
Familiarity with Git and CI/CD processes.
Strong debugging, problem-solving, and performance tuning skills.
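On the unit-testing side (referenced from the responsibilities above), a common pattern is to factor each transformation into a pure DataFrame => DataFrame function and exercise it against a local SparkSession. A minimal ScalaTest sketch, using the same hypothetical columns as the pipeline above:

```scala
import org.apache.spark.sql.{DataFrame, SparkSession, functions => F}
import org.scalatest.funsuite.AnyFunSuite

// Transformation factored out of the job so it can be tested in isolation.
object Transformations {
  def dailyRevenue(orders: DataFrame): DataFrame =
    orders
      .filter(F.col("status") === "COMPLETED")
      .withColumn("order_date", F.to_date(F.col("created_at")))
      .groupBy("order_date")
      .agg(F.sum("amount").as("daily_revenue"))
}

class DailyRevenueSpec extends AnyFunSuite {
  private val spark = SparkSession.builder()
    .master("local[2]")
    .appName("unit-tests")
    .getOrCreate()
  import spark.implicits._

  test("sums only completed orders per day") {
    val input = Seq(
      ("2025-08-01 10:00:00", "COMPLETED", 10.0),
      ("2025-08-01 11:00:00", "CANCELLED", 99.0),
      ("2025-08-01 12:00:00", "COMPLETED", 5.0)
    ).toDF("created_at", "status", "amount")

    val result = Transformations.dailyRevenue(input).collect()

    assert(result.length == 1)
    assert(result.head.getAs[Double]("daily_revenue") == 15.0)
  }
}
```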
Preferred / Nice-to-Have:
Experience with the Snowflake data platform (see the sketch after this list).
Exposure to enterprise-scale performance testing and optimization.
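For the Snowflake nice-to-have, loading the same aggregate is largely a matter of swapping the writer. A minimal sketch using the Snowflake Spark connector (spark-snowflake); the account, credentials, and object names are all hypothetical:

```scala
// Connection properties for the Snowflake Spark connector (all values hypothetical).
val sfOptions = Map(
  "sfURL"       -> "example_account.snowflakecomputing.com",
  "sfUser"      -> "etl_user",
  "sfPassword"  -> sys.env.getOrElse("SNOWFLAKE_PASSWORD", ""),
  "sfDatabase"  -> "ANALYTICS",
  "sfSchema"    -> "PUBLIC",
  "sfWarehouse" -> "ETL_WH"
)

// `daily` is the aggregate DataFrame from the pipeline sketch above.
daily.write
  .format("net.snowflake.spark.snowflake")
  .options(sfOptions)
  .option("dbtable", "DAILY_REVENUE")
  .mode("append")
  .save()
```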
Job Type: Contract
Pay: $145,212.42 - $174,879.48 per year
Benefits:
401(k)
Ability to Commute:
Boston, MA 02118 (Required)
Ability to Relocate:
Boston, MA 02118: Relocate before starting work (Required)
Work Location: In person