

Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer in Boston, MA, with a contract length of unspecified duration and a pay rate of "TBD." Candidates must have 5+ years of experience with SQL, Spark, Java/Scala, Unix/Shell scripting, and Databricks.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
July 24, 2025
Project duration
Unknown
Location type
On-site
Contract type
W2 Contractor
Security clearance
Unknown
Location detailed
Boston, MA
Skills detailed
#NoSQL #Automation #Data Science #Linux #Data Engineering #Programming #Unix #Scrum #Databricks #Cloud #Scripting #AWS (Amazon Web Services) #Agile #Spark (Apache Spark) #Apache Spark #BI (Business Intelligence) #Data Quality #Data Extraction #SQL Queries #Java #Scala #Azure #Storage #Distributed Computing #SQL (Structured Query Language) #ETL (Extract, Transform, Load) #Data Modeling #Data Processing #Data Pipeline #Security #Data Analysis #Shell Scripting
Role description
Emp Type: W2 or 1099 (No C2C)
Visa: H1B, H4EAD, GCEAD, L2, OPT, CPT, Green Card, US Citizens (USA applicants only)
Workplace Type: Onsite - Boston, MA
Experience: 5+ years
Job Summary:
We are looking for a highly skilled Data Engineer with hands-on experience in SQL, Spark, Java/Scala, Unix/Shell scripting, and Databricks. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines, performing data analysis, and supporting data modeling efforts. You will work in an agile delivery environment and collaborate with cross-functional teams to enable high-quality data solutions.
________________________________________
Key Responsibilities:
• Develop complex and optimized SQL queries for data extraction, transformation, and loading (ETL).
• Design and build Spark-based data processing pipelines using Java or Scala in a distributed computing environment (a brief sketch follows this list).
• Perform exploratory data analysis and build data models to support analytics and business intelligence use cases.
• Develop and maintain Unix/Shell scripts for automation and job orchestration.
• Work within Databricks notebooks and workflows for collaborative data processing and transformation.
• Participate in Agile ceremonies such as sprint planning, daily stand-ups, retrospectives, and demonstrations.
• Collaborate with data analysts, data scientists, and business users to gather requirements and deliver data solutions.
• Ensure data quality, integrity, and security across data pipelines and storage layers.
• Troubleshoot performance issues in distributed data environments.
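For orientation, here is a minimal sketch in Scala of the kind of Spark ETL pipeline these responsibilities describe. All paths, column names, and business rules below are hypothetical illustrations, not this team's actual systems.

```scala
// Illustrative sketch only: source/target paths, columns, and rules are hypothetical.
import org.apache.spark.sql.{SparkSession, functions => F}

object DailyOrdersEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-orders-etl")
      .getOrCreate()

    // Extract: read raw order events from a (hypothetical) landing zone
    val raw = spark.read.parquet("/data/raw/orders")

    // Transform: drop bad records, derive a date column,
    // and aggregate revenue per customer per day
    val daily = raw
      .filter(F.col("order_total") > 0) // simple data-quality guard
      .withColumn("order_date", F.to_date(F.col("order_ts")))
      .groupBy("customer_id", "order_date")
      .agg(F.sum("order_total").as("daily_revenue"))

    // Load: write a partitioned, query-friendly curated table
    daily.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("/data/curated/daily_orders")

    spark.stop()
  }
}
```

A job like this would typically be compiled into a JAR and launched via spark-submit from a shell script, or scheduled as a Databricks job.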
________________________________________
Required Qualifications:
• 5+ years of experience in writing complex SQL queries and optimizing query performance (a brief example follows this list).
• Proficiency in Apache Spark with programming experience in Java or Scala.
• Strong experience in Unix/Linux environments and Shell scripting.
• Hands-on experience with Databricks, including notebook development, job scheduling, and integration with cloud data platforms (e.g., Azure, AWS).
• Good understanding of data modeling concepts (relational, dimensional, and NoSQL).
• Experience working in Agile/Scrum delivery environments.
• Strong analytical and problem-solving skills.
• Excellent communication and collaboration skills.
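As a hedged illustration of the SQL and data-quality expectations above, the sketch below (again Scala, reusing the hypothetical curated table from the earlier example) runs a Spark SQL check and exits non-zero on failure so that a shell wrapper or scheduler can flag the run.

```scala
// Illustrative sketch only: the view name, columns, and failure rule
// reuse the hypothetical output of the pipeline sketch above.
import org.apache.spark.sql.SparkSession

object DailyOrdersQualityCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-orders-dq-check")
      .getOrCreate()

    // Expose the curated data to Spark SQL
    spark.read.parquet("/data/curated/daily_orders")
      .createOrReplaceTempView("daily_orders")

    // Flag dates whose revenue is missing or negative
    val failures = spark.sql(
      """SELECT order_date, COUNT(*) AS bad_rows
        |FROM daily_orders
        |WHERE daily_revenue IS NULL OR daily_revenue < 0
        |GROUP BY order_date""".stripMargin)

    if (failures.count() > 0) {
      failures.show(truncate = false)
      sys.exit(1) // non-zero exit signals failure to the calling script/scheduler
    }
    spark.stop()
  }
}
```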
Please forward your resume and contact details to krithik_r@surgetechinc.com or kaviya_t@surgetechinc.com.