Jobs via Dice

Data Engineer (Hybrid Onsite - W2 Contract)

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer on a long-term W2 contract in Quincy, MA, requiring 5+ years of cloud development experience. Key skills include Python/PySpark, Spark, Scala, Databricks, and AWS. A Bachelor's degree and AWS/Databricks certification are preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 10, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Quincy, MA
-
🧠 - Skills detailed
#Programming #BigData #ML (Machine Learning) #S3 (Amazon Simple Storage Service) #Kafka (Apache Kafka) #Cloud #Kubernetes #Databricks #Computer Science #PySpark #Python #RDS (Amazon Relational Database Service) #EC2 #Java #API (Application Programming Interface) #Lambda (AWS Lambda) #BI (Business Intelligence) #Data Engineering #Hadoop #Deployment #Spark (Apache Spark) #Airflow #RDBMS (Relational Database Management System) #SQL (Structured Query Language) #Migration #AWS (Amazon Web Services) #Database Architecture #Docker #Scala
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Xoriant Corporation, is seeking the following. Apply via Dice today!

Job Title: Data Engineer
Location: Quincy, MA (hybrid, 3 days a week onsite)
Duration: Long-term contract
Contract: W2

Key Skills: Python/PySpark, Spark, Scala, Databricks, AWS

Education and Experience:
• Bachelor's degree in Computer Science, Information Systems, or equivalent education or work experience
• 5+ years of experience as a developer on cloud technologies
• Any AWS and/or Databricks certification is a plus

Roles & Responsibilities:
• Assess the current application infrastructure and suggest new approaches to improve performance
• Document the best practices and strategies associated with application deployment and infrastructure support
• Produce reusable, efficient, and scalable programs, as well as cost-effective migration strategies
• Develop data engineering and ML pipelines in Databricks and across AWS services, including S3, EC2, API, RDS, Kinesis/Kafka, and Lambda, to build serverless applications (see the Lambda sketch after this description)
• Work jointly with the IT team and other departments to migrate data engineering and ML applications to Databricks/AWS
• Be comfortable working to tight timelines when required

Skill Sets Required:
• Good decision-making and problem-solving skills
• Solid understanding of Databricks fundamentals and architecture, with hands-on experience setting up Databricks clusters and working across the Databricks modules (Data Engineering, ML, and SQL Warehouse)
• Knowledge of the medallion architecture, Delta Live Tables (DLT), and Unity Catalog within Databricks (see the medallion sketch below)
• Experience migrating data from on-prem Hadoop to Databricks/AWS
• Understanding of core AWS services, their uses, and AWS architecture best practices
• Hands-on experience across domains such as database architecture, business intelligence, machine learning, advanced analytics, and big data
• Solid knowledge of Airflow (see the DAG sketch below)
• Solid knowledge of CI/CD pipelines on AWS technologies
• Experience with application migration: RDBMS, Java/Python applications, model code, Elastic, etc.
• Solid programming background in Scala and Python
• Experience with Docker and Kubernetes is a plus
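For candidates gauging the serverless responsibility above, here is a minimal sketch of a Python Lambda handler consuming a Kinesis stream. The event shape follows the standard Kinesis-to-Lambda integration; the processing step itself is a hypothetical placeholder, not anything specified by the posting.

```python
import base64
import json


def handler(event, context):
    """Process a batch of Kinesis records delivered to Lambda.

    Kinesis payloads arrive base64-encoded under each record's
    ["kinesis"]["data"] key; this sketch assumes they are JSON documents.
    """
    processed = 0
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # Placeholder: real code would validate and land the payload,
        # e.g. write it to S3 or RDS as the responsibilities suggest.
        print(payload)
        processed += 1
    return {"processed": processed}
```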
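For the medallion-architecture requirement, the sketch below shows a minimal bronze/silver/gold flow in PySpark with Delta tables. It assumes a Databricks runtime (or local Spark with the Delta Lake package configured); the bucket, paths, and column names (`event_id`, `event_ts`, `event_type`) are hypothetical. In production, DLT and Unity Catalog would typically manage these tables rather than the manual reads and writes shown here.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: ingest raw events as-is, preserving source fidelity.
bronze = spark.read.json("s3://example-bucket/raw/events/")  # hypothetical bucket
bronze.write.format("delta").mode("overwrite").save("/tmp/medallion/bronze")

# Silver: cleanse and conform (deduplicate, enforce types, drop bad rows).
silver = (
    spark.read.format("delta").load("/tmp/medallion/bronze")
    .dropDuplicates(["event_id"])
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .filter(F.col("event_id").isNotNull())
)
silver.write.format("delta").mode("overwrite").save("/tmp/medallion/silver")

# Gold: aggregate into a business-level table for BI consumption.
gold = (
    spark.read.format("delta").load("/tmp/medallion/silver")
    .groupBy(F.window("event_ts", "1 day"), "event_type")
    .agg(F.count("*").alias("event_count"))
)
gold.write.format("delta").mode("overwrite").save("/tmp/medallion/gold")
```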
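And for the Airflow item, a minimal DAG sketch, assuming Airflow 2.4+ (where `schedule` replaced `schedule_interval`). The DAG id and task bodies are hypothetical placeholders for the kind of extract/transform orchestration the role describes.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw files from S3")  # placeholder for real extraction logic


def transform():
    print("run PySpark transform")  # placeholder, e.g. triggering a Databricks job


with DAG(
    dag_id="daily_events_pipeline",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # run transform only after extract succeeds
```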