

Harvey Nash
Senior Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer on a 12-month hybrid contract in San Francisco, CA, offering $109.68/hr. It requires 2+ years with Databricks, 3+ years with Python/PySpark, and experience with AWS and modern data stacks.
Country: United States
Currency: $ USD
Day rate: 872
Date: December 2, 2025
Duration: More than 6 months
Location: Hybrid
Contract: W2 Contractor
Security: Unknown
Location detailed: San Francisco, CA
Skills detailed: #Agile #NoSQL #Scala #S3 (Amazon Simple Storage Service) #Databricks #PySpark #Unit Testing #Data Engineering #Spark (Apache Spark) #Python #AWS (Amazon Web Services) #Monitoring #Jupyter #Data Warehouse #ETL (Extract, Transform, Load) #Airflow #Databases #Redshift #Data Pipeline #Collibra #Snowflake #Cloud
Role description
Harvey Nash USA has been engaged to find a talented Data Engineer - III.
• Must be legally authorized to work in the United States for any employer without sponsorship.
As a Data Engineer, you will be responsible for collecting, parsing, managing, analyzing, and visualizing large sets of data to turn information into actionable insights. You will work across multiple platforms to ensure that data pipelines are scalable, repeatable, secure, and capable of serving multiple users.
Key Responsibilities:
• Design, develop, and maintain robust and efficient data pipelines to ingest, transform, catalog, and deliver curated, trusted, quality data from disparate sources into our Common Data Platform (see the PySpark sketch after this list).
• Deploy monitoring and alerting for data pipelines and data stores, implementing auto-remediation where possible to ensure system availability and reliability.
• Deliver high-quality data products and services following SAFe (Scaled Agile) practices.
• Actively participate in Agile rituals and follow Scaled Agile processes as set forth by the CDP Program team.
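A minimal PySpark sketch of the ingest/transform/deliver pattern these responsibilities describe, assuming a Databricks-style environment with Delta tables; the bucket path, table name, column names, and quality check are hypothetical illustrations, not taken from the posting.

```python
# Hypothetical PySpark pipeline sketch: ingest raw events from S3,
# apply a basic transformation, run a simple quality gate, and write
# a curated Delta table. Paths and names are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("cdp_ingest_sketch").getOrCreate()

RAW_PATH = "s3://example-raw-bucket/events/"   # hypothetical source
CURATED_TABLE = "cdp.curated_events"           # hypothetical target

# Ingest: read raw JSON events from object storage.
raw = spark.read.json(RAW_PATH)

# Transform: derive a date column, drop records without a key, deduplicate.
curated = (
    raw.withColumn("event_date", F.to_date("event_ts"))
       .dropna(subset=["event_id"])
       .dropDuplicates(["event_id"])
)

# Simple quality gate: fail fast (so monitoring/alerting can fire)
# if the batch unexpectedly produces no rows.
if curated.count() == 0:
    raise ValueError("No curated rows produced; aborting write")

# Deliver: write the curated, deduplicated data as a Delta table.
curated.write.format("delta").mode("overwrite").saveAsTable(CURATED_TABLE)
```

On Databricks the Delta format and saveAsTable call work out of the box; outside Databricks the delta-lake package would need to be configured separately.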
Job Title: Data Engineer - III
Location: San Francisco, CA (Hybrid)
Duration: 12 Months Contract
Required Skills & Experience
• 2+ years' experience with tools such as Databricks, Collibra, and Starburst.
• 3+ years' experience with Python and PySpark.
• Experience using Jupyter notebooks, including coding and unit testing (a test sketch follows this list).
• 2+ years of experience with a modern data stack (object stores such as S3, Spark, Airflow, lakehouse architectures, real-time databases) and cloud data warehouses such as Redshift and Snowflake (an orchestration sketch also follows this list).
• Recent accomplishments working with relational and NoSQL data stores, methods, and approaches (STAR, Dimensional Modeling).
• Data engineering experience in AWS (any CFS2/EDS), highlighting the services/tools used.
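A minimal sketch of the kind of unit testing referenced above, assuming pytest and a local SparkSession; the dedupe_events function and its columns are hypothetical examples, not part of the role description.

```python
# Hypothetical pytest sketch: unit-testing a small PySpark transformation
# with a local SparkSession. Function and column names are illustrative.
import pytest
from pyspark.sql import SparkSession


def dedupe_events(df):
    """Drop rows missing event_id, then deduplicate on it."""
    return df.dropna(subset=["event_id"]).dropDuplicates(["event_id"])


@pytest.fixture(scope="module")
def spark():
    session = SparkSession.builder.master("local[1]").appName("tests").getOrCreate()
    yield session
    session.stop()


def test_dedupe_events(spark):
    rows = [("a", 1), ("a", 2), (None, 3)]
    df = spark.createDataFrame(rows, ["event_id", "value"])
    result = dedupe_events(df)
    # Only one row for event_id "a" should remain; the null id is dropped.
    assert result.count() == 1
    assert result.first()["event_id"] == "a"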
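And a minimal Airflow 2.x sketch of how a daily run of such a job might be orchestrated within a modern data stack; the DAG id, task id, and ingest placeholder are hypothetical, not taken from the posting.

```python
# Hypothetical Airflow 2.x DAG sketch: scheduling a daily ingest task.
# DAG id, task id, and the ingest call are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_ingest(ds, **_):
    # Placeholder: in practice this would trigger the PySpark/Databricks job.
    print(f"Running CDP ingest for {ds}")


with DAG(
    dag_id="cdp_daily_ingest",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",   # use schedule_interval on Airflow versions before 2.4
    catchup=False,
) as dag:
    PythonOperator(task_id="ingest_events", python_callable=run_ingest)
```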
A reasonable, good-faith estimate of the hourly wage for this position is $109.68/hr on W2 (the rate may differ based on your current location). Benefits will be available; details are at the following links:
Benefits Details: https://britehr.app/HarveyNashContractorsNH2025
401K Plan: Our employees work hard, which is why Harvey Nash is proud to contribute to their hard-earned savings with a 401(k) retirement plan that includes a 25% company match on all deferrals. We also offer a Roth 401(k) for even more flexibility. Employees who are 21 years of age or older and have completed 3 months of service are eligible to participate.





