

Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a long-term W2 contract in San Jose, CA, requiring onsite work 3 days a week. Key skills include Python, Hadoop ecosystem, ETL, and Unix/Linux shell scripting.
Country
United States
Currency
$ USD
-
Day rate
480
-
Date discovered
August 6, 2025
Project duration
Unknown
-
Location type
On-site
-
Contract type
W2 Contractor
-
Security clearance
Unknown
-
Location detailed
San Jose, CA
-
Skills detailed
#Data Engineering #Scripting #Shell Scripting #Programming #Spark (Apache Spark) #Spark SQL #"ETL (Extract, Transform, Load)" #HDFS (Hadoop Distributed File System) #Scala #Automation #Big Data #Data Pipeline #Data Processing #Linux #Python #SQL (Structured Query Language) #Unix #Hadoop
Role description
Job Description:
We are seeking a skilled Data Engineer with expertise in Big Data technologies to join our client team in San Jose, CA. This is a long-term W2 contract requiring an onsite presence 3 days per week.
Key Responsibilities:
• Design, develop, and maintain scalable data pipelines and ETL workflows.
• Work extensively with Big Data technologies including Hadoop, HDFS, Hive, and Spark SQL.
• Develop and optimize Python scripts for data processing and automation.
• Write and maintain Unix/Linux shell scripts for data workflows and automation.
• Collaborate with cross-functional teams to support data-driven decision making.
Required Skills:
• Strong experience in Python programming.
• Proficiency with the Hadoop ecosystem: Hadoop, HDFS, Hive, Spark SQL.
• Solid understanding of Unix/Linux operating systems and shell scripting.
• Extensive ETL experience.
• Ability to work onsite in San Jose, CA at least 3 days per week.