

GCP Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a GCP Data Engineer in Dearborn, MI, with a contract length of "unknown" and a pay rate of "unknown." Candidates must have 4+ years of experience in Data Engineering, proficiency in GCP, ETL, Apache Spark, Python, and SQL. A Bachelor's Degree is required.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: September 25, 2025
Project duration: Unknown
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: Dearborn, MI
Skills detailed:
#Security #Apache Spark #Normalization #Data Science #Data Conversion #Storage #Data Architecture #Data Transformations #Data Engineering #Data Pipeline #Data Security #Data Storage #Data Warehouse #Data Ingestion #Data Analysis #Consulting #ETL (Extract, Transform, Load) #GitHub #Data Processing #SQL (Structured Query Language) #Apache Beam #Code Reviews #Data Lake #Dataflow #Scala #Big Data #Data Lakehouse #Kafka (Apache Kafka) #IAM (Identity and Access Management) #AI (Artificial Intelligence) #Data Quality #Automation #ML (Machine Learning) #Clustering #Spark (Apache Spark) #Cloud #Datasets #Java #Data Integrity #GCP (Google Cloud Platform) #GIT #Python #Logging
Role description
Job Description
Stefanini Group is hiring!
Stefanini is looking for a Data Engineer in Dearborn, MI.
For quick apply, please reach out to Pawan Rawat at 248-213-3605 / pawansingh.rawat@stefanini.com.
We are looking for a candidate who will be responsible for designing, building, and maintaining data solutions, including data infrastructure and pipelines, for collecting, storing, processing, and analyzing large volumes of data efficiently and accurately.
Responsibilities
• Design, build, and maintain reliable, efficient, and scalable data infrastructure for data collection, storage, transformation, and analysis
• Plan, design, build, and maintain scalable data solutions, including data pipelines, data models, and applications, for efficient and reliable data workflows
• Design, implement, and maintain existing and future data platforms, such as data warehouses, data lakes, and data lakehouses, for structured and unstructured data
• Design and develop analytical tools, algorithms, and programs to support data engineering activities, such as writing scripts and automating tasks
Job Requirements
Experience Required
• Required: 4+ years of Data Engineering work experience
• Google Cloud Platform, ETL, Apache Spark, Data Architecture, Python, SQL, Kafka
• Data Pipeline Architecture & Development: Design, build, and maintain highly scalable, fault-tolerant, and performant data pipelines to ingest and process data from 10+ siloed sources, including both structured and unstructured formats.
• ML-Driven ETL Implementation: Operationalize ETL pipelines for intelligent data ingestion, automated cataloging, and sophisticated normalization of diverse datasets.
• Unified Data Model Creation: Architect and implement a unified data model capable of connecting all relevant data elements across various sources, optimized for efficient querying and insight generation by AI agents and chatbot interfaces.
• Big Data Processing: Use distributed processing frameworks (Apache Beam, Apache Spark, Google Cloud Dataflow) to handle large-scale data transformations and data flow (see the Beam sketch after this list).
• Cloud-Native Data Infrastructure: Leverage GCP services to build and manage robust data storage, processing, and orchestration layers.
• Data Quality, Governance & Security: Implement rigorous data quality gates, validation rules, bad-record handling, and comprehensive logging. Ensure strict adherence to data security policies, IAM role management, and GCP perimeter security.
• Automation & Orchestration: Develop shell scripts and Cloud Build YAMLs, and use Cloud Scheduler/Pub/Sub for end-to-end automation of data pipelines and infrastructure provisioning (see the Pub/Sub sketch below).
• Collaboration with AI/ML Teams: Work closely with AI/ML engineers, data scientists, and product managers to understand data requirements, integrate data solutions with multi-agent systems, and optimize data delivery for chatbot functionalities.
• Testing & CI/CD: Implement robust testing strategies, maintain high code quality through active participation in Git/GitHub, perform code reviews, and manage CI/CD pipelines via Cloud Build.
• Performance Tuning & Optimization: Continuously monitor, optimize, and troubleshoot data pipelines and BigQuery performance using techniques such as table partitioning, clustering, and sharding (see the BigQuery sketch below).
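
To make the pipeline and data-quality bullets above concrete, here is a minimal Apache Beam sketch in Python. It reads newline-delimited JSON, routes records through a simple validation gate, and quarantines bad records on a separate tagged output, a common bad-record-handling pattern. The file paths, the 'id' field, and the validation rule are hypothetical placeholders, not part of this role's actual stack; the same pipeline runs on Google Cloud Dataflow by switching the runner.

    import json

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    class ValidateRecord(beam.DoFn):
        """Route each input line to the 'good' or 'bad' output."""
        def process(self, line):
            try:
                record = json.loads(line)
                if record.get("id"):  # hypothetical rule: require a non-empty 'id'
                    yield record
                else:
                    yield beam.pvalue.TaggedOutput("bad", line)
            except json.JSONDecodeError:
                yield beam.pvalue.TaggedOutput("bad", line)

    def run():
        # DirectRunner for local testing; switch to DataflowRunner (plus
        # project/region/temp_location options) to run on Cloud Dataflow.
        opts = PipelineOptions(runner="DirectRunner")
        with beam.Pipeline(options=opts) as p:
            results = (
                p
                | "Read" >> beam.io.ReadFromText("events.jsonl")  # hypothetical input
                | "Validate" >> beam.ParDo(ValidateRecord()).with_outputs("bad", main="good")
            )
            # Good records continue downstream; bad ones are quarantined for review.
            (
                results.good
                | "Format" >> beam.Map(json.dumps)
                | "WriteGood" >> beam.io.WriteToText("out/good")
            )
            results.bad | "WriteBad" >> beam.io.WriteToText("out/bad")

    if __name__ == "__main__":
        run()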
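
For the automation and orchestration bullet, one common GCP pattern is a Cloud Scheduler job that publishes to a Pub/Sub topic, whose subscribers kick off the pipeline. A minimal publish-side sketch using the google-cloud-pubsub client follows; the project ID, topic name, and message payload are hypothetical.

    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()
    # Hypothetical project and topic; in the Scheduler pattern, Cloud Scheduler
    # publishes a message like this on a cron schedule.
    topic_path = publisher.topic_path("my-project", "pipeline-triggers")

    # The payload tells subscribers which pipeline to run.
    future = publisher.publish(topic_path, b'{"job": "daily_ingest"}')
    print(future.result())  # blocks until Pub/Sub acknowledges; returns the message ID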
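
And for the performance-tuning bullet, a sketch of table partitioning and clustering with the google-cloud-bigquery client. The dataset, table, and column names are hypothetical; the point is that partitioning by day and clustering on a frequently filtered column lets BigQuery prune the bytes it scans.

    from google.cloud import bigquery

    client = bigquery.Client()  # uses application-default credentials

    # Hypothetical dataset/table/columns. Day partitioning on event_ts plus
    # clustering on source_system prunes scanned data for queries that filter
    # on time ranges and source.
    ddl = """
    CREATE TABLE IF NOT EXISTS analytics.events (
      id STRING,
      source_system STRING,
      event_ts TIMESTAMP
    )
    PARTITION BY DATE(event_ts)
    CLUSTER BY source_system
    """
    client.query(ddl).result()  # wait for the DDL job to complete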
Experience Preferred
• Java, PowerShell, Data Acquisition, Data Analysis, Data Collection, Data Conversion, Data Integrity, Data/Analytics dashboards
Education Required
• Bachelor's Degree
• Listed salary ranges may vary based on experience, qualifications, and local market. Also, some positions may include bonuses or other incentives.
• Stefanini takes pride in hiring top talent and developing relationships with our future employees. Our talent acquisition teams will never make an offer of employment without having a phone conversation with you. Those conversations will involve a description of the job for which you have applied. We will also speak with you about the process, including interviews and job offers.
About Stefanini Group
The Stefanini Group is a global provider of offshore, onshore, and nearshore outsourcing, IT digital consulting, systems integration, application, and strategic staffing services to Fortune 1000 enterprises around the world. We have a presence across the Americas, Europe, Africa, and Asia, and serve more than four hundred clients across a broad spectrum of markets, including financial services, manufacturing, telecommunications, chemical services, technology, public sector, and utilities. Stefanini is a CMM Level 5 IT consulting company with a global presence.