

Senior Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer with 10+ years of experience, offering a hybrid work location in Dearborn, MI. It requires expertise in GCP, ETL, data architecture, and machine learning, along with a Bachelor's degree in a relevant field.
Country: United States
Currency: $ USD
Day rate: 640
Date discovered: July 9, 2025
Project duration: Unknown
Location type: Hybrid
Contract type: Unknown
Security clearance: Unknown
Location detailed: Dearborn, MI
Skills detailed: #Computer Science #Data Warehouse #BI (Business Intelligence) #Spark (Apache Spark) #Databases #Migration #Groovy #Visualization #Scripting #Scala #Data Pipeline #AI (Artificial Intelligence) #API (Application Programming Interface) #Unix #Data Engineering #Bash #"ETL (Extract, Transform, Load)" #ML (Machine Learning) #Data Science #GCP (Google Cloud Platform) #Data Lake #REST (Representational State Transfer) #Database Administration #REST API #PySpark #Data Migration #PostgreSQL #Monitoring #Datasets #Data Architecture #Python #Data Lakehouse #Cloud
Role description
Senior Data Engineer
Location: Hybrid (Dearborn, MI)
Experience Level: 10+ Years
Education: Bachelor's Degree in Computer Science, Computer Information Systems, or equivalent experience (Master's in Data Science preferred)
Position Overview:
We are seeking a highly experienced and innovative Senior Data Engineer to lead the design and implementation of advanced data solutions across cloud and on-premises environments. This role requires deep expertise in data architecture, cloud platforms (especially GCP), ETL pipelines, and modern data science practices.
Key Responsibilities:
• Design and implement scalable data solutions using the latest cloud and on-premises technologies.
• Lead the migration of legacy data environments with a focus on performance, reliability, and minimal disruption.
• Contribute to data architecture by assessing data sources, models, schemas, and workflows.
• Design and optimize ETL jobs, data pipelines, and data workflows.
• Develop and support BI solutions, including dynamic dashboards and reporting pipelines.
• Architect and implement machine learning and AI applications, including MLOps pipelines.
• Provide technical guidance on customization, integration, and enterprise architecture of data products.
• Build and manage data lakehouse solutions in GCP, including relational and vector databases, data warehouses, and distributed systems.
• Utilize PySpark APIs for processing resilient distributed datasets (RDDs) and data frames (a brief sketch follows this list).
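To make the PySpark expectation concrete, the sketch below touches both the DataFrame and RDD APIs. It is a minimal illustration only: the app name, columns, and sample values are placeholders, not details taken from this posting.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# DataFrame API: build a small frame and aggregate it (sample data is illustrative)
df = spark.createDataFrame(
    [("2025-07-01", "sensor_a", 12.5), ("2025-07-01", "sensor_b", 7.3)],
    ["event_date", "source", "value"],
)
daily = df.groupBy("event_date").agg(F.avg("value").alias("avg_value"))
daily.show()

# RDD API: the same rows as key/value pairs, reduced by key
rdd = df.rdd.map(lambda row: (row["source"], row["value"]))
totals = rdd.reduceByKey(lambda a, b: a + b)
print(totals.collect())

spark.stop()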
Required Skills:
• Proven experience designing data solutions in cloud and on-prem environments.
• Strong background in legacy data migration and modernization.
• Expertise in data architecture, ETL, and data pipeline design.
• Proficiency in BI tools and data visualization strategies.
• Experience with machine learning, AI, and MLOps practices.
• Hands-on experience with GCP data services and lakehouse architecture (see the BigQuery sketch after this list).
• Advanced knowledge of PySpark, RDDs, and data frames.
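As one way to read the GCP data services requirement, the hedged sketch below runs a query through Google's official BigQuery Python client. The project, dataset, and table names are placeholders, and credentials are assumed to come from the environment (for example GOOGLE_APPLICATION_CREDENTIALS).

from google.cloud import bigquery

# Client picks up the project and credentials from the environment
client = bigquery.Client()

sql = """
    SELECT event_date, COUNT(*) AS events
    FROM `my-project.analytics.events`  -- placeholder table
    GROUP BY event_date
    ORDER BY event_date
"""

# Submit the query job and iterate over the result rows
for row in client.query(sql).result():
    print(row.event_date, row.events)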
Preferred Skills:
• Scripting in Bash, Python, and Groovy.
• Experience with VM-based application installation, performance monitoring, and log analysis on Unix systems.
• PostgreSQL database administration.
• REST API development using Python frameworks (a minimal example follows).
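To illustrate the REST API item, here is a minimal sketch using Flask, one common Python framework. The routes, payload shape, and in-memory store are hypothetical and stand in for a real service backed by a database such as PostgreSQL.

from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store used only for this sketch; a real service would use a database
PIPELINES = {"daily_sales": {"status": "active"}}

@app.get("/pipelines/<name>")
def get_pipeline(name):
    pipeline = PIPELINES.get(name)
    if pipeline is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(pipeline)

@app.post("/pipelines/<name>")
def upsert_pipeline(name):
    # Replace or create the named pipeline with the posted JSON body
    PIPELINES[name] = request.get_json(force=True)
    return jsonify(PIPELINES[name]), 201

if __name__ == "__main__":
    app.run(debug=True)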