

GCP BigQuery Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a GCP BigQuery Data Engineer with a contract length of "unknown," offering a pay rate of "unknown." Key skills include ETL, Python, SQL, and GCP tools. Requires 7+ years of GCP experience and expertise in data pipeline creation and optimization.
Country
United States
Currency
$ USD
Day rate
400
Date discovered
July 4, 2025
Project duration
Unknown
Location type
Unknown
Contract type
Unknown
Security clearance
Unknown
Location detailed
Nashville, TN
Skills detailed
#Dataflow #Clustering #Data Analysis #Teradata SQL #YAML (YAML Ain't Markup Language) #"ETL (Extract, Transform, Load)" #Programming #Teradata #Python #Data Lake #Data Integrity #SQL (Structured Query Language) #Batch #SQL Queries #Airflow #Apache Airflow #Security #Data Warehouse #Data Engineering #Data Lakehouse #BI (Business Intelligence) #Jira #Cloud #GCP (Google Cloud Platform) #BigQuery #Storage #Scala #GitHub
Role description
Job Description:
As a Google Data Engineer, the individual is tasked with designing, building, and maintaining scalable data infrastructure and pipelines to enable data analytics and business intelligence efforts. This role includes collaborating with cross-functional teams to ensure efficient data movement, transformation, and storage from multiple sources, while upholding data integrity, quality, and security. It demands a blend of technical expertise, analytical abilities, and business insight to engage with stakeholders, gather requirements, and deliver effective data-driven solutions.
Key Responsibilities:
• Design and implement scalable ETL/ELT pipelines to ingest and transform large volumes of data using tools such as Python, SQL, and cloud-native services (a minimal sketch follows this list).
• Develop and maintain data models and data warehouse solutions using platforms like GCP.
• Collaborate with data analysts, scientists, and business teams to gather requirements and deliver data engineering solutions.
• Monitor and optimize data workflows to ensure performance, reliability, and cost-efficiency.
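
For illustration only, the sketch below shows one minimal way such a pipeline could look with the google-cloud-bigquery Python client: a file is batch-loaded from Cloud Storage into a staging table and then transformed with a single SQL statement. All project, bucket, dataset, and table names are hypothetical placeholders, not part of this posting.

# Minimal ETL sketch: load a CSV from Cloud Storage into a staging table,
# then transform it into a reporting table with one SQL statement.
# All project/dataset/table names below are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # assumes default credentials

# Extract + Load: ingest the raw file into a staging table.
load_job = client.load_table_from_uri(
    "gs://example-bucket/raw/orders_2025-07-04.csv",
    "example-project.staging.orders_raw",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    ),
)
load_job.result()  # wait for the load job to finish

# Transform: aggregate the staged rows into the warehouse table.
transform_sql = """
INSERT INTO `example-project.warehouse.daily_order_totals` (order_date, total_amount)
SELECT DATE(order_ts), SUM(amount)
FROM `example-project.staging.orders_raw`
GROUP BY DATE(order_ts)
"""
client.query(transform_sql).result()

A production pipeline would typically add schema management, error handling, and orchestration (for example via Cloud Composer), which are omitted here for brevity.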
Key Skills:
• ETL, Data Warehouse, and Data Lakehouse
• Google Cloud Platform: Cloud Storage; Cloud Pub/Sub; Cloud Dataflow; Apache Airflow; Cloud Functions; BigQuery; Cloud Composer
• Programming skills: Python, BigQuery SQL, and Teradata SQL
• Tools: Jira, GitHub, Confluence
• Excellent problem-solving and communication skills
• 7+ years of GCP experience required
• Experience creating ETL pipelines using Dataflow and BigQuery, and designing high-performing tables with partitioning and clustering enabled, following best practices (see the first sketch after this list)
• Write complex SQL queries with execution cost in mind
• Hands-on experience defining DAGs using YAML or Airflow (Python) (see the DAG sketch below)
• Knowledge of Cloud Composer
• Load data into BigQuery from files or by streaming one record at a time (see the loading sketch below)
• Create, load, and query partitioned tables for daily batch data
• Implement fine-grained access control using roles and authorized views (see the authorized-view sketch below)
• In-depth understanding of BigQuery architecture, table partitioning, clustering, and best practices
• Know how to reduce BigQuery costs by limiting the amount of data your queries process
• Able to speed up queries using denormalized data structures, with or without nested and repeated fields
• Explore and prepare data using BigQuery
• Implement ETL jobs using BigQuery
• Knowledge of Python
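
To illustrate the partitioning, clustering, and cost-control points above, the first sketch below creates a date-partitioned, clustered table and then estimates a query's cost with a dry run before it is executed. Project, dataset, table, and column names are hypothetical.

# Sketch: create a partitioned, clustered table and query it cost-consciously.
# Table and column names are hypothetical placeholders.
import datetime

from google.cloud import bigquery

client = bigquery.Client(project="example-project")

# Partition by the event timestamp's date and cluster by user_id;
# require_partition_filter guards against accidental full-table scans.
client.query("""
CREATE TABLE IF NOT EXISTS `example-project.analytics.events`
(
  event_id STRING,
  user_id  STRING,
  event_ts TIMESTAMP,
  amount   NUMERIC
)
PARTITION BY DATE(event_ts)
CLUSTER BY user_id
OPTIONS (require_partition_filter = TRUE, partition_expiration_days = 90)
""").result()

# Cost-aware query: select only the needed columns and filter on the partition column.
query = """
SELECT user_id, SUM(amount) AS total
FROM `example-project.analytics.events`
WHERE DATE(event_ts) = @run_date
GROUP BY user_id
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("run_date", "DATE", datetime.date(2025, 7, 4))
    ],
    dry_run=True,  # estimate bytes scanned before paying for the query
)
estimate = client.query(query, job_config=job_config)
print(f"Estimated bytes processed: {estimate.total_bytes_processed}")

Filtering on the partition column and selecting only the required columns are the two main levers for reducing the amount of data a BigQuery query processes.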
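
The DAG sketch below shows a hypothetical daily batch job defined in Airflow (Python), as it might run on Cloud Composer. It assumes Airflow 2.x with the Google provider package installed; the DAG id, schedule, and table names are illustrative only.

# Sketch: a daily Airflow DAG that runs a BigQuery batch transform.
# Assumes Airflow 2.x with apache-airflow-providers-google installed.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_events_rollup",          # hypothetical DAG id
    start_date=datetime(2025, 7, 1),
    schedule_interval="@daily",             # Airflow 2.x parameter name
    catchup=False,
) as dag:
    rollup = BigQueryInsertJobOperator(
        task_id="rollup_events",
        configuration={
            "query": {
                "query": """
                    INSERT INTO `example-project.analytics.daily_totals` (day, total)
                    SELECT DATE(event_ts), SUM(amount)
                    FROM `example-project.analytics.events`
                    WHERE DATE(event_ts) = DATE '{{ ds }}'
                    GROUP BY DATE(event_ts)
                """,
                "useLegacySql": False,
            }
        },
        location="US",
    )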
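
The loading sketch below contrasts the two ingestion paths mentioned above: a batch load from a file in Cloud Storage and a streaming insert of a single record. Again, all names are placeholders.

# Sketch: batch-loading a file versus streaming one record at a time.
# Bucket and table names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")
table_id = "example-project.analytics.events"

# 1) Batch load from a newline-delimited JSON file in Cloud Storage.
client.load_table_from_uri(
    "gs://example-bucket/events/2025-07-04.json",
    table_id,
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    ),
).result()

# 2) Streaming insert: one record at a time via the insertAll API.
errors = client.insert_rows_json(
    table_id,
    [{"event_id": "abc123", "user_id": "u42",
      "event_ts": "2025-07-04T12:00:00Z", "amount": 9.99}],
)
if errors:
    raise RuntimeError(f"Streaming insert failed: {errors}")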
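
Finally, the authorized-view sketch below shows one common pattern for fine-grained access control: a view exposing only selected columns is authorized against the source dataset, so consumers can query the view without read access to the underlying table. Names are hypothetical.

# Sketch: fine-grained access control with an authorized view.
# Dataset, view, and table names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

# Create a view that exposes only non-sensitive columns.
client.query("""
CREATE VIEW IF NOT EXISTS `example-project.shared_views.events_public` AS
SELECT event_id, DATE(event_ts) AS event_date, amount
FROM `example-project.analytics.events`
WHERE DATE(event_ts) >= DATE '2025-01-01'
""").result()

# Authorize the view against the source dataset so it can read on callers' behalf.
source_dataset = client.get_dataset("example-project.analytics")
view = client.get_table("example-project.shared_views.events_public")
entries = list(source_dataset.access_entries)
entries.append(bigquery.AccessEntry(None, "view", view.reference.to_api_repr()))
source_dataset.access_entries = entries
client.update_dataset(source_dataset, ["access_entries"])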