

Senior Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer in Sunnyvale, CA, on a long-term contract. Key skills include GCP, data pipeline development, Apache tools, and programming in Python, Java, or Scala. Experience with data warehouses and Agile methodologies is required.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: August 13, 2025
Project duration: Unknown
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: Sunnyvale, CA
Skills detailed: #Data Pipeline #Apache Kafka #BigQuery #Physical Data Model #Apache Airflow #Data Processing #Automated Testing #Jenkins #Apache Hive #Data Lake #Programming #Hadoop #Apache Spark #Scripting #Java #Jira #Data Warehouse #Python #RDBMS (Relational Database Management System) #Big Data #Data Engineering #Scrum #Spark (Apache Spark) #Airflow #Perl #Scala #Kafka (Apache Kafka) #BitBucket #GCP (Google Cloud Platform) #Agile
Role description
Job Title: GCP Data Engineer
Location: Sunnyvale, CA (Onsite)
Long Term Contract
Job description:
Responsibilities:
• Design and develop big data applications using the latest open-source technologies.
• Experience working in an offshore, managed-outcome delivery model is desired.
• Develop logical and physical data models for big data platforms.
• Automate workflows using Apache Airflow (a minimal orchestration sketch follows this list).
• Create data pipelines using Apache Hive, Apache Spark, and Apache Kafka.
• Provide ongoing maintenance and enhancements to existing systems and participate in rotational on-call support.
• Learn our business domain and technology infrastructure quickly and share your knowledge freely and actively with others in the team.
• Mentor junior engineers on the team.
• Lead daily standups and design reviews.
• Groom and prioritize the backlog using JIRA.
• Act as the point of contact for your assigned business domain.
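To give a concrete picture of the Airflow and pipeline responsibilities above, here is a minimal sketch of a daily DAG that submits a Spark batch job and then refreshes a Hive partition. It assumes Airflow 2.4+ in Python; the DAG id, script path, and table names are hypothetical and not part of this role's actual stack.

# Minimal Airflow sketch (assumes Airflow 2.4+); all identifiers are illustrative.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="example_events_pipeline",   # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    spark_transform = BashOperator(
        task_id="spark_transform",
        # Submit a PySpark batch job; the script path is an assumption.
        bash_command="spark-submit /opt/jobs/transform_events.py --date {{ ds }}",
    )
    hive_refresh = BashOperator(
        task_id="hive_refresh",
        # Register the new date partition on a hypothetical Hive table.
        bash_command=(
            "hive -e \"ALTER TABLE analytics.events "
            "ADD IF NOT EXISTS PARTITION (dt='{{ ds }}')\""
        ),
    )
    spark_transform >> hive_refresh

In practice a DAG like this would sit alongside data-quality checks, alerting, and Kafka ingestion jobs, but the pattern of small, dependency-ordered tasks is the same.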
Requirements:
• GCP experience.
• Experience building data pipelines in GCP.
• Experience with GCP Dataproc, GCS, and BigQuery (see the pipeline sketch after this list).
• Hands-on experience developing data warehouse solutions and data products.
• Hands-on experience developing a distributed data processing platform with Hadoop, Hive, or Spark, plus Airflow or another workflow orchestration solution, is required.
• Hands-on experience modeling and designing schemas for data lakes or RDBMS platforms.
• Experience with programming languages: Python, Java, Scala, etc.
• Experience with scripting languages: Perl, Shell, etc.
• Practical experience working with, processing, and managing large data sets (multi-TB/PB scale).
• Exposure to test-driven development and automated testing frameworks.
• Background in Scrum/Agile development methodologies.
• Capable of delivering on multiple competing priorities with little supervision.
• Excellent verbal and written communication skills.
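To illustrate the Dataproc, GCS, and BigQuery requirement, below is a minimal PySpark sketch of a batch step that reads raw events from GCS and writes a daily aggregate to BigQuery. The bucket, dataset, and table names are hypothetical, and it assumes the spark-bigquery connector that ships on standard Dataproc images.

# Minimal PySpark sketch of a GCS -> BigQuery batch step; names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("gcs_to_bigquery_example").getOrCreate()

# Read raw JSON events from a hypothetical GCS landing bucket.
events = spark.read.json("gs://example-landing-bucket/events/dt=2025-08-13/")

# Simple daily aggregate: event counts per user.
daily_counts = (
    events.groupBy("user_id")
          .agg(F.count("*").alias("event_count"))
          .withColumn("dt", F.lit("2025-08-13"))
)

# Write to a hypothetical BigQuery table, staging through a GCS bucket.
(daily_counts.write
    .format("bigquery")
    .option("temporaryGcsBucket", "example-staging-bucket")
    .mode("append")
    .save("example_dataset.daily_user_counts"))

On Dataproc, a script like this would typically be submitted with gcloud dataproc jobs submit pyspark and scheduled from Airflow.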
Candidates will also have experience in the following:
• Gitflow
• Atlassian products: BitBucket, JIRA, Confluence, etc.
• Continuous Integration tools such as Bamboo, Jenkins, or TFS