

Brooksource
Senior Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer (5–10 years) on a contract basis in Birmingham, AL. Pay rate is competitive. Key skills include SQL, Azure, Databricks, and big data technologies. Experience with data pipelines and cloud platforms is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 7, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Birmingham, AL
-
🧠 - Skills detailed
#BI (Business Intelligence) #Data Modeling #Big Data #Agile #Databricks #DevOps #Data Quality #Synapse #Databases #Data Engineering #SSAS (SQL Server Analysis Services) #Docker #SSIS (SQL Server Integration Services) #Data Lake #Azure Databricks #ML (Machine Learning) #Oracle GoldenGate #Azure Data Factory #Data Processing #Batch #Datasets #Monitoring #Oracle #Spark (Apache Spark) #Microsoft Azure #Microsoft Power BI #Scala #Azure #SQL Server #Web Services #AI (Artificial Intelligence) #Data Pipeline #Hadoop #Informatica #ETL (Extract, Transform, Load) #NoSQL #Cloud #Schema Design #Python #Vault #R #ADF (Azure Data Factory) #SQL (Structured Query Language)
Role description
Data Engineer III
Location: Birmingham, AL (Onsite – The Energy Center area)
Experience Level: Senior (5–10 years)
Employment Type: Contract
Overview
We are seeking a senior‑level Data Engineer to support enterprise data analytics and engineering initiatives. This role focuses on building scalable data pipelines, optimizing data models, and enabling analytics and AI/ML solutions across large, complex datasets. The ideal candidate has strong experience working with both on‑prem and cloud‑based data platforms, particularly within Azure and Databricks environments.
Responsibilities
• Design, build, and maintain scalable batch and real‑time data pipelines
• Normalize and model data to ensure performance, quality, and usability
• Combine and transform data from multiple sources into reliable, machine‑readable formats
• Develop datasets that support analytics, reporting, and business intelligence needs
• Create functional and technical designs for data engineering and analytics solutions
• Implement data quality frameworks and monitoring processes
• Support data sourcing, enrichment, and delivery using APIs and web services
• Collaborate within Agile teams and contribute to CI/CD and DevOps practices
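To make the pipeline responsibilities above concrete, here is a minimal, illustrative sketch of a batch step that combines two raw sources, normalizes them with SQL, and emits a machine-readable dataset. It uses Python's built-in sqlite3 purely as a stand-in for the SQL/Databricks tooling named in this posting; the table names and fields are invented for illustration.

```python
import sqlite3

def build_customer_dataset(customers, orders):
    """Toy batch pipeline step: combine two raw sources into one
    normalized, machine-readable dataset (a list of dicts).
    Schema and names are hypothetical, not from the posting."""
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (id INTEGER PRIMARY KEY,
                             customer_id INTEGER, amount REAL);
    """)
    # Ingest the two raw sources
    conn.executemany("INSERT INTO customers VALUES (?, ?)", customers)
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", orders)
    # Transform: join and aggregate into an analytics-ready shape
    rows = conn.execute("""
        SELECT c.name, COUNT(o.id), SUM(o.amount)
        FROM customers c JOIN orders o ON o.customer_id = c.id
        GROUP BY c.id ORDER BY c.name
    """).fetchall()
    conn.close()
    return [{"name": n, "orders": cnt, "total": t} for n, cnt, t in rows]

result = build_customer_dataset(
    customers=[(1, "Acme"), (2, "Birmingham Co")],
    orders=[(1, 1, 20.0), (2, 1, 30.0), (3, 2, 5.0)],
)
print(result)
```

In a production Databricks environment the same combine-and-transform pattern would typically run as a Spark SQL or Delta Lake job rather than in-process SQLite; the shape of the work (ingest, join, aggregate, publish a clean dataset) is what the bullets above describe.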
Required Qualifications
• 5–10 years of experience manipulating and engineering data in a software engineering or data engineering capacity
• Strong expertise in SQL, data modeling, and schema design
• Hands‑on experience with big data technologies including Spark, Hive, Hadoop
• Proven experience building Databricks pipelines across multiple data sources
• Experience with batch and real‑time data processing frameworks
• Working knowledge of relational databases, NoSQL systems, and data lakes
• Experience with data quality tools and best practices
• Experience designing and implementing data models across diverse data sources
On‑Premise Technology Experience
• 5+ years working with tools such as:
   • MSBI (SSIS, SSAS)
   • Informatica
   • Oracle GoldenGate
   • Oracle, SQL Server
Cloud & Modern Data Stack Experience
• 3+ years hands‑on experience with the Microsoft Azure data stack and related tooling, including:
   • Azure Data Lake
   • Azure Data Factory
   • Azure Databricks
   • Azure Synapse
   • Azure Key Vault
   • Python
   • Power BI
Additional Technical Experience
• Hands‑on development of statistical models and/or AI/ML solutions using Python and/or R
• Experience with containerization technologies (Docker, OpenShift, etc.)
• Familiarity with Agile methodologies, DevOps, and CI/CD pipelines
Why This Role
• Work on complex, high‑impact data initiatives
• Collaborate with experienced analytics and engineering teams
• Gain exposure to modern cloud data platforms and advanced analytics solutions