

Big Data Developer (12+ Years Only) - W2 Position
Featured Role | Apply directly with Data Freelance Hub
This role is for a Big Data Developer in San Francisco, CA, for 12 months at a competitive pay rate. Key skills include Java or Scala, SQL, NoSQL, Apache Spark, and cloud platforms. Minimum qualifications require 2+ years in relevant technologies.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
May 30, 2025
Project duration
More than 6 months
Location type
Hybrid
Contract type
W2 Contractor
Security clearance
Unknown
Location detailed
Sunnyvale, CA
Skills detailed
#AWS (Amazon Web Services) #SQL (Structured Query Language) #Big Data #Data Warehouse #Data Pipeline #Spark (Apache Spark) #Teradata #BigQuery #Apache Spark #NoSQL #Cloud #NiFi (Apache NiFi) #Scala #Data Engineering #Kafka (Apache Kafka) #Data Modeling #Java #Python #Redshift #Deployment #Data Quality #Unit Testing #Azure #Impala #Programming #Airflow
Role description
Title: BIG DATA ENGINEER - Senior/Middle Data Software Engineer
Location: San Francisco, CA - Hybrid
Duration: 12 Months
We can accept an H-1B transfer if the candidate is willing to relocate on their own.
Interviews: 4 rounds
Mandatory Skills: Either Java or Scala, SQL and NoSQL, Apache Spark, Cloud Platforms
Data Engineer Responsibilities:
• Perform system development and maintenance to meet the team's service-level agreements, delivering solutions with innovation, cost-effectiveness, high quality, and fast time to market.
• Support code versioning and code deployments for data pipelines.
• Ensure test coverage for unit testing, and support integration and performance testing.
• Contribute ideas to help ensure that required standards and processes are in place.
• Research and evaluate current and upcoming technologies and frameworks.
• Build data expertise and own data quality for your areas.
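To illustrate the "unit testing" and "own data quality" responsibilities above, here is a minimal sketch (not part of the posting; all names are hypothetical) of the kind of data-quality gate a pipeline developer in this role might write and unit-test:

```python
def check_data_quality(records, required_fields):
    """Split records into (passed, rejected) by a basic completeness check.

    A record passes if every required field is present and non-empty.
    Illustrative only; production pipelines would typically run such
    checks at scale in Spark rather than plain Python.
    """
    passed, rejected = [], []
    for record in records:
        if all(record.get(field) not in (None, "") for field in required_fields):
            passed.append(record)
        else:
            rejected.append(record)
    return passed, rejected


records = [
    {"id": 1, "name": "alpha", "ts": "2025-05-30"},
    {"id": 2, "name": "", "ts": "2025-05-30"},   # rejected: empty name
    {"id": 3, "ts": "2025-05-30"},               # rejected: missing name
]
passed, rejected = check_data_quality(records, ["id", "name", "ts"])
```

A function shaped like this is easy to cover with unit tests, which is exactly the test-coverage expectation the responsibilities describe.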
Minimum Qualifications:
• 2+ years of Java/Scala development experience.
• 2+ years of SQL and NoSQL experience (handling structured and unstructured data).
• 1+ years of extensive experience with the Spark processing engine.
• 1+ years of experience with big data / streaming tools and technologies (Hive, Impala, Oozie, Airflow, NiFi, Kafka).
• 1+ years' experience with data modeling.
• Experience analyzing data to discover opportunities and address gaps.
• Experience working with a cloud or on-prem big data platform (e.g., Netezza, Teradata, AWS Redshift, Google BigQuery, Azure Data Warehouse, or similar).
Nice to have (not mandatory):
• Programming experience in Python.
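The Spark qualification above centers on transformations such as grouped aggregations. As a hedged, dependency-free sketch (plain Python standing in for Spark's `df.groupBy(key).count()`; the function and data are invented for illustration):

```python
from collections import defaultdict


def group_count(rows, key):
    """Mimic the semantics of Spark's df.groupBy(key).count()
    on a plain list of dict rows."""
    counts = defaultdict(int)
    for row in rows:
        counts[row[key]] += 1
    return dict(counts)


rows = [
    {"user": "a", "event": "click"},
    {"user": "b", "event": "view"},
    {"user": "a", "event": "view"},
]
print(group_count(rows, "user"))  # {'a': 2, 'b': 1}
```

In actual Spark the same aggregation distributes across a cluster, but the logical result per key is identical.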