

Sr. Data Engineer With Strong GCP Background (12+ Yrs Minimum Experience Required)
Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr. Data Engineer with a strong GCP background, requiring 12+ years of experience. It offers a hybrid work location in Philadelphia, PA, with a focus on skills in PySpark, Big Data, and Google Cloud technologies.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: July 30, 2025
Project duration: Unknown
Location type: Hybrid
Contract type: Unknown
Security clearance: Unknown
Location detailed: Philadelphia, PA
Skills detailed: #SQL (Structured Query Language) #Dataflow #Batch #Big Data #Cloud #Security #Data Security #Terraform #Apache Kafka #Databases #Data Science #Looker #Data Governance #Data Mart #Apache Beam #Storage #ETL (Extract, Transform, Load) #Tableau #Hadoop #PySpark #Kafka (Apache Kafka) #ML (Machine Learning) #Version Control #BigQuery #GCP (Google Cloud Platform) #Data Engineering #Data Modeling #GitHub #Python #Scala #Data Warehouse #Programming #Airflow #Data Pipeline #Google Analytics #JavaScript #Spark (Apache Spark) #DevOps #GIT #Infrastructure as Code (IaC) #Datalakes #BI (Business Intelligence)
Role description
Title: Sr. Data Engineer with Strong GCP Background
Experience: 12+ Years
Location: Onsite/Remote from Philadelphia, PA (PA locals only; candidate must be based in PA)
Primary Skills: PySpark, Spark, Python, Big Data, GCP, Apache Beam, Dataflow, Airflow, Kafka, and BigQuery
Good to Have: GFO, Google Analytics; JavaScript is a must
Job Description
• 12+ years of experience as a data warehouse engineer/architect designing and deploying data systems in a startup environment
• Mastery of database and data warehouse methodologies and techniques, from transactional databases to dimensional data modeling to wide denormalized data marts
• Deep understanding of SQL-based Big Data systems and experience with modern ETL tools
• Expertise in designing data warehouses using Google BigQuery
• Experience developing data pipelines in Python (a minimal illustrative sketch appears after this list)
• A firm believer in data-driven decision-making, with extensive experience developing highly and elastically scalable, 24x7x365 high-availability digital marketing or e-commerce systems
• Hands-on experience with data computing, storage, and security components, and with the Big Data technologies provided by cloud platforms (preferably GCP)
• Hands-on experience with real-time stream processing as well as high-volume batch processing; skilled in advanced SQL, GCP BigQuery, Apache Kafka, data lakes, etc.
• Hands-on experience with Big Data technologies (Hadoop, Hive, and Spark) and an enterprise-scale Customer Data Platform (CDP)
• Experience in at least one programming language (Python strongly preferred), cloud computing platforms (e.g., GCP), big data tools such as Spark/PySpark, columnar datastores (BigQuery preferred), DevOps processes/tooling (CI/CD, GitHub Actions), infrastructure-as-code frameworks (Terraform), BI tools (e.g., DOMO, Tableau, Looker), and pipeline orchestration (e.g., Airflow; a bare-bones DAG sketch appears after this list)
• Fluency in data science/machine learning basics (model types, data prep, training process, etc.)
• Experience using version control systems (Git is strongly preferred)
• Experience with data governance and data security
• Strong analytical, problem-solving, and interpersonal skills, a hunger to learn, and the ability to operate in a self-guided manner in a fast-paced, rapidly changing environment
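As an illustration of the Python/PySpark-to-BigQuery pipeline work referenced above, here is a minimal sketch of a batch job that aggregates raw order events into a denormalized daily mart and writes it to BigQuery. It assumes the spark-bigquery connector is available on the cluster; the bucket, project, dataset, table, and column names are hypothetical placeholders, not details from this posting.

```python
from pyspark.sql import SparkSession, functions as F

# Minimal illustrative sketch only: paths, project/dataset/table names,
# bucket, and columns are hypothetical placeholders.
spark = SparkSession.builder.appName("daily-orders-mart").getOrCreate()

# Read raw order events from a data-lake location on GCS.
orders = spark.read.parquet("gs://example-datalake/raw/orders/")

# Aggregate into a wide, denormalized daily mart.
daily_mart = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "customer_id")
    .agg(
        F.count("*").alias("order_count"),
        F.sum("order_total").alias("revenue"),
    )
)

# Write to BigQuery via the spark-bigquery connector (assumed installed).
(
    daily_mart.write
    .format("bigquery")
    .option("table", "example-project.analytics.daily_orders_mart")
    .option("temporaryGcsBucket", "example-temp-bucket")
    .mode("overwrite")
    .save()
)
```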
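Similarly, for the pipeline orchestration requirement, below is a bare-bones Airflow DAG skeleton that could schedule such a batch job daily. The DAG id, schedule, and callable are assumptions for illustration; a real deployment might instead use a Dataflow or BigQuery operator from the Google provider package.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def build_daily_mart(**context):
    # Placeholder: in practice this might submit the PySpark job above
    # or trigger a Dataflow/BigQuery job for the given execution date.
    print(f"Building daily mart for {context['ds']}")


with DAG(
    dag_id="daily_orders_mart",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # Airflow 2.4+ keyword; older versions use schedule_interval
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
):
    PythonOperator(
        task_id="build_daily_mart",
        python_callable=build_daily_mart,
    )
```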