

Data Architect / Engineer – GCP
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Architect/Engineer – GCP, requiring 10+ years of experience. It offers a hybrid position in Dearborn, MI, with a pay rate of "N/A." Key skills include GCP, PySpark, ETL, and data lakehouse engineering.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
July 2, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Hybrid
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Dearborn, MI
🧠 - Skills detailed
#Spark (Apache Spark) #Data Science #Data Pipeline #Data Lakehouse #REST API #BI (Business Intelligence) #Scripting #Bash #Data Engineering #ML (Machine Learning) #Airflow #Python #Schema Design #ETL (Extract, Transform, Load) #Data Governance #Data Architecture #Storage #PySpark #Visualization #Groovy #AI (Artificial Intelligence) #Monitoring #Data Lake #Tableau #Cloud #API (Application Programming Interface) #REST (Representational State Transfer) #Data Modeling #Unix #Microsoft Power BI #PostgreSQL #Deployment #Scala #GCP (Google Cloud Platform) #BigQuery
Role description
Epitec is seeking a Data Engineer/Architect to join our leading automotive client's team.
• It's a W2 role; no C2C / 1099 accepted.
• It's a hybrid role in Dearborn, MI; candidates outside Michigan must relocate.
• It's a mid-to-senior role; candidates with fewer than 8 years of experience will not be considered.
• Candidates must have GCP experience.
Summary:
This position centers on designing and implementing modern data solutions, particularly in cloud environments like Google Cloud Platform (GCP). The role blends traditional data architecture with modern data engineering, analytics, and AI/ML capabilities.
Core Responsibilities
Cloud-Based Data Architecture
• Design scalable, secure, and high-performance data solutions in cloud or hybrid environments.
• Migrate legacy systems to modern architectures with a focus on performance and reliability.
ETL and Data Pipelines
• Build and optimize ETL jobs and data pipelines using tools like PySpark and Airflow (see the sketch after this list).
• Ensure data workflows are efficient and support downstream analytics and reporting.
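As a rough, hedged illustration of this kind of pipeline work, here is a minimal PySpark ETL sketch; the bucket paths, column names, and schema are hypothetical placeholders, not details from the posting.

```python
# Minimal PySpark ETL sketch: extract raw CSV, transform, and write partitioned Parquet.
# All paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw CSV files from a (hypothetical) Cloud Storage bucket.
raw = spark.read.option("header", True).csv("gs://example-bucket/raw/orders/")

# Transform: cast types, drop incomplete rows, derive a partition column.
cleaned = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropna(subset=["order_id", "order_ts"])
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write Parquet partitioned by date for downstream analytics and reporting.
cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "gs://example-bucket/curated/orders/"
)
```

In practice a job like this would typically be scheduled from an Airflow DAG (for example via a Dataproc submit-job operator), in line with the tooling named above.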
Data Lakehouse Engineering
• Develop solutions using GCP-native services (e.g., BigQuery, Dataproc, Cloud Storage); a minimal load sketch follows this list.
• Work with both structured (relational) and unstructured (vector) data in lakehouse environments.
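As a hedged sketch of the lakehouse loading step, the snippet below loads curated Parquet from Cloud Storage into BigQuery using the google-cloud-bigquery client; the project, dataset, table, and bucket names are hypothetical.

```python
# Sketch: load curated Parquet from Cloud Storage into a BigQuery table.
# Project, dataset, table, and bucket names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.PARQUET,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

load_job = client.load_table_from_uri(
    "gs://example-bucket/curated/orders/*.parquet",
    "example-project.analytics.orders",
    job_config=job_config,
)
load_job.result()  # block until the load job finishes

table = client.get_table("example-project.analytics.orders")
print(f"Loaded {table.num_rows} rows into {table.full_table_id}")
```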
BI and Visualization
• Design and support dashboards and reports using tools like Power BI or Tableau.
• Set up data pipelines that feed into these visualizations.
AI/ML and MLOps
• Collaborate with data scientists to build and deploy machine learning models.
• Design MLOps pipelines for scalable and repeatable model training and deployment.
Technical Support and Integration
• Address technical inquiries related to integration, customization, and enterprise architecture.
• Provide guidance on best practices and tool usage.
Required Skills
• Strong experience with PySpark, especially using RDDs and DataFrames (contrasted in the sketch after this list).
• Deep understanding of data modeling, schema design, and data governance.
• Hands-on experience with GCP services and building data lakehouse solutions.
• Proficiency in scripting (Python, Bash, Groovy).
• Familiarity with PostgreSQL and REST API development.
• Experience with Unix-based systems and application monitoring.
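Since the skills list singles out both RDDs and DataFrames, here is a brief sketch contrasting the two PySpark APIs; the sample records are invented for illustration.

```python
# Contrast of the PySpark RDD and DataFrame APIs; sample data is invented.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdd-vs-dataframe").getOrCreate()
records = [("vehicle_a", 120), ("vehicle_b", 340), ("vehicle_a", 80)]

# RDD API: functional transformations over raw Python tuples.
rdd_totals = spark.sparkContext.parallelize(records).reduceByKey(lambda a, b: a + b)
print(rdd_totals.collect())  # e.g. [('vehicle_a', 200), ('vehicle_b', 340)]

# DataFrame API: named columns with Catalyst-optimized aggregation.
df = spark.createDataFrame(records, ["vehicle", "miles"])
df.groupBy("vehicle").sum("miles").show()
```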
Experience Level
• 10+ years of experience in data engineering, architecture, or related fields.