

Data Engineer
Featured Role | Apply direct with Data Freelance Hub
Country
United States
Currency
$ USD
Day rate
480
Date discovered
September 9, 2025
Project duration
Unknown
Location type
Unknown
Contract type
Unknown
Security clearance
Unknown
Location detailed
San Jose, CA
Skills detailed
#SQL (Structured Query Language) #Scala #Data Architecture #Computer Science #Data Modeling #Hadoop #Spark (Apache Spark) #ETL (Extract, Transform, Load) #Agile #GCP (Google Cloud Platform) #Looker #Cloud #BigQuery #Data Engineering #Data Extraction #SQL Queries #Data Quality
Role description
Job Description: Data Engineer (GCP & BigQuery)
We are seeking a highly skilled Data Engineer with strong expertise in Google Cloud Platform (GCP), particularly BigQuery. This role is ideal for someone who thrives in a fast-paced environment, enjoys solving complex data challenges, and can translate business needs into scalable technical solutions.
Key Responsibilities
• Design, develop, test, and maintain ETL pipelines and data solutions within GCP, focusing on BigQuery.
• Write and optimize complex SQL queries for data extraction, transformation, and analysis.
• Discover and integrate raw data from diverse sources into consistent, scalable architectures.
• Manage cloud data ingress and egress using tools such as the gcloud, gsutil, and bq command-line utilities (see the sketch after this list).
• Collaborate with business stakeholders to understand analytical needs and translate them into technical requirements.
• Ensure data quality, integrity, and performance across all data processes.
• Work closely with cross-functional teams to support data-driven decision-making.
• Utilize LookML to build and maintain views, explores, dashboards, and persistent derived tables.
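For illustration, a minimal sketch of the ingress-and-transform flow described above, using the standard gsutil and bq commands. The bucket, dataset, table, and column names are hypothetical placeholders, not taken from the posting:

    # Stage a raw CSV export into Cloud Storage (hypothetical bucket and paths).
    gsutil cp ./orders.csv gs://example-raw-data/orders/orders.csv

    # Load it into a BigQuery staging table, autodetecting the schema.
    bq load --source_format=CSV --autodetect \
        analytics_raw.orders gs://example-raw-data/orders/orders.csv

    # Transform with standard SQL and materialize the result for reporting.
    bq query --use_legacy_sql=false \
        --destination_table=analytics_mart.daily_orders \
        'SELECT DATE(order_ts) AS order_date,
                COUNT(*)       AS orders,
                SUM(amount)    AS revenue
         FROM analytics_raw.orders
         GROUP BY order_date'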
Required Skills & Experience
• Proven experience with Google Cloud Platform, especially BigQuery.
• Strong proficiency in SQL and data modeling.
• Hands-on experience with ETL development and cloud data architecture.
• Familiarity with GCP command-line tools (gcloud, gsutil, bq).
• Excellent analytical and problem-solving skills.
• Strong communication and collaboration abilities.
• Experience with Looker and LookML development (a LookML sketch follows this list).
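As a rough illustration of the LookML work this role calls for, a minimal view and explore over the hypothetical daily_orders table from the earlier sketch; all field and table names are assumptions, and in practice the explore would live in a model file rather than alongside the view:

    # A minimal LookML view over the hypothetical daily_orders table.
    view: daily_orders {
      sql_table_name: analytics_mart.daily_orders ;;

      # Date dimension group built from the materialized order_date column.
      dimension_group: order {
        type: time
        datatype: date
        timeframes: [date, week, month]
        sql: ${TABLE}.order_date ;;
      }

      # Simple aggregate measures for dashboard tiles.
      measure: order_count {
        type: sum
        sql: ${TABLE}.orders ;;
      }

      measure: total_revenue {
        type: sum
        sql: ${TABLE}.revenue ;;
      }
    }

    # Expose the view for ad-hoc exploration and dashboards.
    explore: daily_orders {}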
Nice to Have
• Experience with Hadoop, Spark/Hive, and UC4 scheduling tools.
• Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field.
• Experience working in agile environments and with cross-functional teams.
Compensation:
$55-$60/hour