Sr. Data Engineer With Strong GCP Background

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr. Data Engineer with a strong GCP background, offered as a remote contract position. It requires 13 years of experience and expertise in PySpark, Big Data, and Google BigQuery, with a focus on e-commerce data engineering.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 23, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Remote
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Data Engineering #Data Modeling #Airflow #ETL (Extract, Transform, Load) #PySpark #BI (Business Intelligence) #GCP (Google Cloud Platform) #BigQuery #Google Analytics #Databases #Infrastructure as Code (IaC) #ML (Machine Learning) #Looker #Python #Terraform #Batch #Dataflow #Big Data #GitHub #Tableau #Scala #Kafka (Apache Kafka) #Programming #Apache Beam #Apache Kafka #Hadoop #Data Warehouse #Version Control #Spark (Apache Spark) #Data Science #SQL (Structured Query Language) #DevOps #Data Pipeline #Datalakes #Data Mart #GIT #Security #Storage #Data Security #Cloud #Data Governance
Role description

Role: Sr. Data Engineer With Strong GCP Background

Location: Remote

Mode: Contract

Primary Skills: PySpark, Spark, Python, Big Data, GCP, Apache Beam, Dataflow, Airflow, Kafka, and BigQuery

Good to Have: GFO, Google Analytics

GCP data engineering and e-commerce experience is a must

Job Description:

   • 13 years' experience as a data warehouse engineer/architect designing and deploying data systems in a startup environment

   • Mastery of database and data warehouse methodologies and techniques, from transactional databases to dimensional data modeling to wide, denormalized data marts

   • Deep understanding of SQL-based Big Data systems and experience with modern ETL tools

   • Expertise in designing data warehouses using Google BigQuery

   • Experience developing data pipelines in Python

   • A firm believer in data-driven decision-making, with extensive experience developing highly and elastically scalable, 24x7x365 high-availability digital marketing or e-commerce systems

   • Hands-on experience with data computing, storage, and security components, using Big Data technologies provided by cloud platforms (preferably GCP)

   • Hands-on experience with real-time stream processing as well as high-volume batch processing; skilled in advanced SQL, GCP BigQuery, Apache Kafka, data lakes, etc.

   • Hands-on experience with Big Data technologies (Hadoop, Hive, and Spark) and an enterprise-scale Customer Data Platform (CDP)

   • Experience in at least one programming language (Python strongly preferred), cloud computing platforms (e.g., GCP), big data tools such as Spark/PySpark, columnar datastores (BigQuery preferred), DevOps processes/tooling (CI/CD, GitHub Actions), infrastructure-as-code frameworks (Terraform), BI tools (e.g., DOMO, Tableau, Looker), and pipeline orchestration (e.g., Airflow)

   • Fluency in data science/machine learning basics (model types, data prep, training process, etc.)

   • Experience using version control systems (Git is strongly preferred)

   • Experience with data governance and data security

   • Strong analytical, problem-solving, and interpersonal skills, a hunger to learn, and the ability to operate in a self-guided manner in a fast-paced, rapidly changing environment