

Data Engineer (GCP, PySpark, BigQuery, Databricks)
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer (GCP, PySpark, BigQuery, Databricks) on a 12-month contract in Dallas, TX, offering competitive pay. It requires 8+ years of experience and expertise in GCP, Databricks, SQL, and Python; a GCP Professional Data Engineer certification is preferred.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: June 12, 2025
Project duration: More than 6 months
Location type: On-site
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Dallas, TX
Skills detailed
#Data Processing #Scala #BigQuery #Infrastructure as Code (IaC) #GCP (Google Cloud Platform) #Batch #PySpark #Cloud #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Dataflow #Storage #Databricks #Indexing #dbt (data build tool) #Terraform #Data Engineering #Delta Lake #MLflow #Spark (Apache Spark) #Python #Data Pipeline
Role description
Job Title: Data Engineer (GCP, PySpark, BigQuery, Databricks) - CONTRACT W2
Location: Dallas, TX - Onsite
Duration: 12 months
Job Summary
We are seeking a skilled Data Engineer with expertise in Google Cloud Platform (GCP), PySpark, BigQuery, and Databricks to design, build, and optimize scalable data pipelines. The ideal candidate will have hands-on experience in migrating data workflows from Databricks to GCP, ensuring seamless integration and performance optimization.
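For a sense of the day-to-day work, here is a minimal PySpark sketch of the kind of Databricks-to-GCP migration step this summary describes. It is illustrative only: every project, bucket, and table name is a hypothetical placeholder, and it assumes the Delta Lake and spark-bigquery connector libraries are available on the cluster (both ship with recent Dataproc images).

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("delta-to-bigquery-migration")
    .getOrCreate()
)

# Read a Delta table exported from Databricks into Cloud Storage.
# "gs://example-bucket/delta/events" is a made-up path.
events = spark.read.format("delta").load("gs://example-bucket/delta/events")

# A representative transformation step: derive a date column and
# filter out non-positive amounts.
daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .filter(F.col("amount") > 0)
)

# Write to BigQuery via the spark-bigquery connector; the temporary
# GCS bucket stages the data for the BigQuery load job.
(
    daily.write.format("bigquery")
    .option("temporaryGcsBucket", "example-staging-bucket")
    .mode("overwrite")
    .save("example-project.analytics.daily_events")
)
```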
Required Skills & Qualifications
• 8+ years of experience as a Data Engineer with expertise in GCP, PySpark, and BigQuery.
• Hands-on experience with Databricks (Spark, Delta Lake, DBFS) and migrating workloads to GCP.
• Strong proficiency in SQL, Python, and PySpark for data transformation.
• Experience with GCP services: BigQuery, Dataflow, Dataproc, Cloud Storage, Pub/Sub.
• Knowledge of data warehousing concepts, partitioning, and indexing in BigQuery (a minimal sketch follows this list).
• Familiarity with CI/CD pipelines, Terraform, and infrastructure-as-code (IaC) for GCP.
• Experience with real-time & batch data processing.
• Strong problem-solving skills and ability to optimize large-scale data pipelines.
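As a hedged illustration of the partitioning point above, the sketch below uses the google-cloud-bigquery Python client to create a day-partitioned, clustered table; the project, dataset, and field names are invented for the example.

```python
from google.cloud import bigquery

# Hypothetical project/dataset/table names for illustration only.
client = bigquery.Client(project="example-project")

schema = [
    bigquery.SchemaField("event_ts", "TIMESTAMP"),
    bigquery.SchemaField("user_id", "STRING"),
    bigquery.SchemaField("amount", "NUMERIC"),
]

table = bigquery.Table("example-project.analytics.events", schema=schema)

# Partition by day on the event timestamp so queries that filter on
# event_ts scan only the relevant partitions.
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="event_ts",
)

# Cluster within each partition by user_id to prune scans further;
# clustering is BigQuery's closest analogue to a traditional index.
table.clustering_fields = ["user_id"]

client.create_table(table, exists_ok=True)
```

Partition pruning plus clustering is what typically stands in for the row-level indexes the bullet alludes to, since BigQuery does not use conventional B-tree indexes.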
Preferred Qualifications
• GCP Professional Data Engineer certification.
• Experience with Databricks Unity Catalog, MLflow, or Databricks workflows.
• Knowledge of dbt (data build tool) for transformation in BigQuery.
• Exposure to Data Mesh or Data Fabric architectures.