Azure Data Engineer Level 3

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an Azure Data Engineer Level 3, offering a contract of unknown length with a competitive pay rate. Requires 5+ years in data engineering, expertise in Azure, Databricks, and SQL, along with relevant certifications and experience in behavioral data management.
🌎 - Country
United States
πŸ’± - Currency
$ USD
πŸ’° - Day rate
Unknown
πŸ—“οΈ - Date discovered
September 12, 2025
πŸ•’ - Project duration
Unknown
🏝️ - Location type
Unknown
πŸ“„ - Contract type
Unknown
πŸ”’ - Security clearance
Unknown
πŸ“ - Location detailed
Sharonville, OH
🧠 - Skills detailed
#"ETL (Extract #Transform #Load)" #Data Pipeline #Compliance #Data Architecture #Data Transformations #Data Catalog #Databricks #Data Processing #SQL (Structured Query Language) #Azure #Jira #Scala #Data Management #Metadata #Synapse #Data Science #Alation #Datasets #Delta Lake #Semantic Models #Data Governance #Microsoft Power BI #Cloud #Data Engineering #AI (Artificial Intelligence) #Spark (Apache Spark) #Data Ingestion #Data Mart #Agile #BI (Business Intelligence) #Documentation #Azure Databricks #ML (Machine Learning)
Role description
Job Description

Position Overview
• We are looking for a highly skilled, hands-on Senior Data Engineer to join our Data & Analytics team.
• In this role, you will play a key part in building and scaling our behavioral and Ecommerce data platform, enabling trusted analytics and making AI-ready data available across the organization.
• You will be responsible for designing and implementing robust, scalable data pipelines, modeling complex, high-volume datasets, and delivering high-quality, well-structured data products to power business insights and future AI capabilities.
• This is a delivery-focused engineering position demanding deep technical expertise in Azure, Databricks, and Synapse, combined with experience managing large-scale behavioral data in enterprise environments.

Required Qualifications
• 5+ years of experience in data engineering or similar roles, with hands-on delivery of cloud-based data solutions.
• Preferred certifications: Microsoft Certified: Azure Data Engineer Associate and Databricks Certified Data Engineer Professional.
• Strong expertise in Databricks and Azure Synapse, with practical experience in Spark-based data processing.
• Proficient in modern data architectures (Lakehouse, ELT/ETL pipelines, real-time data processing).
• Advanced SQL skills for data transformation and performance optimization.
• Proven ability to model and manage large-scale, complex behavioral and Ecommerce datasets.
• Expert in BI and best practices for data enablement and self-service analytics.
• Hands-on experience with Unity Catalog and data cataloging tools (e.g., Alation) for governance and metadata management.
• Working knowledge of behavioral analytics platforms (Adobe Analytics, Adobe Customer Journey Analytics).
• Strong collaboration and communication skills.
• Experience operating in agile delivery environments, balancing speed, scalability, and solution quality.

Key Responsibilities
• Design, build, and maintain robust, scalable ELT/ETL pipelines and data transformations using Databricks, Spark, and Synapse (see the pipeline sketch after this list).
• Model high-volume, complex event-level datasets (digital behavior, Ecommerce transactions, marketing interactions) to support dashboards, experimentation, ML models, and marketing activation.
• Enforce data governance, discoverability, and stewardship using Unity Catalog and Alation, ensuring compliance and lineage tracking (see the governance sketch below).
• Validate and reconcile data pipelines against established behavioral datasets such as Adobe Customer Journey Analytics (CJA) and Adobe Analytics.
• Partner with data architects, analysts, data scientists, and marketing teams to deliver trusted, reusable, and well-structured datasets that power BI dashboards and decision-making.
• Mature the data ingestion, processing, orchestration, and curation capabilities, leveraging Delta Lake optimization, Databricks Workflows, and Synapse for analytical consumption.
• Support and optimize semantic models and data marts that enable self-service analytics through AI/BI Dashboards and Power BI.
• Participate in agile delivery processes (sprint planning, backlog refinement, documentation), collaborating through Jira and Confluence.
• Document data assets, transformations, and pipelines for discoverability, transparency, and long-term maintainability.
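As a rough illustration of the pipeline responsibility above, here is a minimal sketch of one ELT step on Databricks, assuming PySpark with Delta Lake. The storage path, table, and column names are hypothetical, not taken from the posting.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a `spark` session is provided; this line covers standalone runs.
spark = SparkSession.builder.getOrCreate()

# Extract: raw clickstream events landed as JSON in ADLS (path is illustrative).
raw = spark.read.json("abfss://landing@examplelake.dfs.core.windows.net/events/")

# Transform: parse timestamps, derive a partition column, and de-duplicate
# replayed events so downstream models see one row per interaction.
events = (
    raw.withColumn("event_ts", F.to_timestamp("event_time"))
       .withColumn("event_date", F.to_date("event_ts"))
       .select("user_id", "session_id", "event_type", "event_ts", "event_date")
       .dropDuplicates(["user_id", "session_id", "event_ts"])
)

# Load: append to a partitioned Delta table for analytical consumption.
(events.write
       .format("delta")
       .mode("append")
       .partitionBy("event_date")
       .saveAsTable("analytics.behavioral.events"))

# Periodic Delta Lake optimization: compact small files and co-locate rows by
# user to speed up the user-level queries behind dashboards and ML features.
spark.sql("OPTIMIZE analytics.behavioral.events ZORDER BY (user_id)")
```

In practice a step like this would run as a scheduled Databricks Workflows task, with partitioning and Z-ordering chosen to match the dominant query patterns.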
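Likewise, the Unity Catalog governance responsibility typically comes down to grants and documentation on the catalog/schema/table hierarchy. A minimal sketch, assuming a Unity Catalog-enabled workspace; the catalog, schema, table, and group names are illustrative.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Grant an analyst group read-only access down the hierarchy: they can see the
# catalog and schema, and query the curated table, but cannot write to it.
spark.sql("GRANT USE CATALOG ON CATALOG analytics TO `data-analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA analytics.behavioral TO `data-analysts`")
spark.sql("GRANT SELECT ON TABLE analytics.behavioral.events TO `data-analysts`")

# Table comments surface in Catalog Explorer and in cataloging tools such as
# Alation, supporting the discoverability goals named in the posting.
spark.sql(
    "COMMENT ON TABLE analytics.behavioral.events IS "
    "'Curated event-level behavioral data; one row per user interaction.'"
)
```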