

Brooksource
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer; the contract length and hourly pay rate are not specified. Work is remote. Requires 3+ years of experience in data engineering, strong SQL and programming skills, and familiarity with cloud platforms such as AWS, Azure, or GCP.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
February 4, 2026
🕒 - Duration
Unknown
🏝️ - Location
Unknown
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Dearborn, MI
🧠 - Skills detailed
#GIT #Programming #Kafka (Apache Kafka) #Python #Data Quality #AWS (Amazon Web Services) #Migration #Snowflake #Databricks #Agile #Data Migration #Monitoring #Data Pipeline #Data Engineering #Java #MDM (Master Data Management) #Data Processing #JSON (JavaScript Object Notation) #SQL (Structured Query Language) #Redshift #Data Catalog #GCP (Google Cloud Platform) #Spark (Apache Spark) #Deployment #BigQuery #Version Control #Docker #Kubernetes #Synapse #Airflow #ETL (Extract, Transform, Load) #Security #Dataflow #Azure #Scala #Data Modeling #Metadata #Cloud #Data Architecture
Role description
Data Engineer
Overview
We are seeking a Data Engineer to support large‑scale data initiatives, build modern data pipelines, and help transform legacy data systems into scalable, cloud‑based platforms. The ideal candidate has strong experience with ETL/ELT development, cloud technologies, big‑data processing, and enterprise data models. This role partners closely with architects, product teams, and business stakeholders to deliver high‑quality, governed, and reliable data solutions.
Responsibilities
• Design, build, and maintain scalable data pipelines to ingest, transform, and deliver data across multiple sources and environments.
• Migrate data from legacy/on‑prem systems to modern cloud data platforms.
• Develop ETL/ELT workflows using tools such as Databricks, Spark, Glue, Airflow, Dataflow, or similar technologies (see the illustrative sketch after this list).
• Build and optimize data models to support analytics, reporting, and application use cases.
• Work with structured, semi‑structured, and unstructured data (CSV, JSON, Parquet, APIs, streaming data).
• Collaborate with data architects and engineers to implement best practices in data architecture, quality, governance, and security.
• Troubleshoot and optimize data pipelines for performance, reliability, and cost.
• Implement data quality checks, monitoring, and alerting to ensure trust and consistency across environments.
• Support CI/CD pipelines for data engineering workflows and automate deployment processes.
• Participate in Agile ceremonies and work closely with product owners, analysts, and business partners.
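For illustration only: the snippet below is a minimal sketch of the kind of ETL/ELT and data quality work described in the responsibilities above, assuming a PySpark environment. The file paths, column names, and schema are hypothetical placeholders, not details taken from this role.

# Minimal illustrative PySpark ETL sketch. All paths, column names, and
# thresholds are hypothetical placeholders, not taken from this posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl_sketch").getOrCreate()

# Extract: read semi-structured JSON from a hypothetical landing area.
raw = spark.read.json("/tmp/landing/orders/")

# Transform: normalize types and derive a partition column.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
)

# Data quality check: fail fast if the business key is missing.
missing_keys = orders.filter(F.col("order_id").isNull()).count()
if missing_keys > 0:
    raise ValueError(f"Data quality check failed: {missing_keys} rows missing order_id")

# Load: write partitioned Parquet for downstream analytics and reporting.
orders.write.mode("overwrite").partitionBy("order_date").parquet("/tmp/curated/orders/")

In practice, a step like this would typically be packaged as a task in an orchestrator such as Airflow and wired into the CI/CD and monitoring processes noted above.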
Required Qualifications
• 3+ years of professional experience in data engineering, or in software engineering with a strong data focus.
• Hands‑on experience with ETL/ELT pipelines, big‑data processing frameworks, and data modeling.
• Strong proficiency in SQL and one programming language (Python, Java, or Scala).
• Experience working with at least one major cloud platform (AWS, Azure, or GCP).
• Familiarity with data warehousing concepts, distributed systems, and pipeline orchestration tools.
• Experience with version control tools (Git) and CI/CD pipelines.
• Strong understanding of data quality, lineage, metadata, and governance.
• Ability to troubleshoot complex data issues and work in a fast-paced, collaborative environment.
Preferred Qualifications
• Experience with Databricks, Spark, Snowflake, BigQuery, Redshift, or Synapse.
• Background in large-scale data migrations or modernizing legacy systems.
• Experience with streaming technologies (Kafka, Pub/Sub, Kinesis, EventHub).
• Exposure to MDM, data cataloging, and enterprise governance frameworks.
• Experience in highly regulated industries (automotive, finance, healthcare, etc.).
• Familiarity with containerization tools (Docker, Kubernetes).






