Lead Data Engineer - Azure and SAP

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Lead Data Engineer with a contract length of "unknown" and a pay rate of "unknown." Key skills include Azure Data Factory, Databricks, SQL, and Python. Requires 5+ years of data engineering experience and leadership in enterprise projects.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
September 5, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
San Jose, CA
🧠 - Skills detailed
#Data Pipeline #Computer Science #SAP #Python #Azure #Agile #ADF (Azure Data Factory) #DevOps #Azure Data Factory #VPN (Virtual Private Network) #Compliance #Data Engineering #Leadership #SQL (Structured Query Language) #SAP Hana #Jira #Data Lake #Cloud #ETL (Extract, Transform, Load) #Scala #Fivetran #Data Ingestion #Data Governance #Data Integrity #Microsoft Azure #Security #Migration #Data Lakehouse #Azure DevOps #Databricks #Scrum
Role description
We are seeking a seasoned Lead Data Engineer to architect and execute scalable data engineering and migration strategies. You will be responsible for the end-to-end migration of data from legacy systems to modern cloud platforms, ensuring data integrity, minimal downtime, and robust data governance. This role requires a technical leader who can drive excellence, mentor teams, and deliver optimized data pipelines that enable advanced analytics.
Key Responsibilities
• Architect, develop, and implement high-performance ETL solutions utilizing Azure Data Factory (ADF) and SAP Data Services.
• Lead the migration of data logic, including the analysis and conversion of stored procedures from SAP HANA to Databricks using SQL/PL-SQL.
• Utilize Fivetran to automate and manage data ingestion pipelines from source systems like SAP S4 into the data lakehouse.
• Design, build, and maintain complex, scalable data pipelines using Python and Databricks (see the illustrative sketch below).
• Champion data governance, security, and compliance standards across all data engineering initiatives.
• Provide technical leadership, mentorship, and guidance to data engineering teams on best practices and architecture.
• Proactively identify, troubleshoot, and resolve performance bottlenecks and network/VPN-related data flow issues.
• Collaborate with stakeholders to translate business requirements into technical solutions and provide regular project updates.
• Document data flows, pipeline designs, and lineage to ensure clarity and maintainability.
• Actively participate in Agile/Scrum ceremonies using tools like Jira or Azure DevOps.
Required Experience & Skills
• 5+ years of professional experience in data engineering and platform development, with proven leadership/architecture responsibilities.
• Must have led a minimum of 3 end-to-end enterprise data projects utilizing the Microsoft Azure tech stack and Databricks.
• 5+ years of hands-on experience building ETL/ELT pipelines with Azure Data Factory (ADF).
• Demonstrable expertise in SQL and experience migrating logic from platforms like SAP HANA.
• Practical experience with Fivetran for automated data ingestion.
• Solid understanding of networking concepts and experience resolving VPN and data flow issues.
• Familiarity with data governance, security protocols, and compliance frameworks.
• Proficiency in Python for data pipeline development.
• Strong interpersonal and communication skills, with the ability to collaborate effectively with both technical teams and business stakeholders.
• Bachelor's or Master's degree (BS/MS) in Computer Science, Information Systems, or a related field.
• Prior experience working in an Agile/Scrum environment with tools like Jira or Azure DevOps.
Skills: data engineering, SQL, SAP, 10 years, Azure, Databricks, pipelines, architecture
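For context, the sketch below illustrates the kind of Python/Databricks pipeline step this role describes: raw data landed in cloud storage by an ingestion tool (such as Fivetran) is lightly curated and written to a Delta table in the lakehouse. This is a minimal sketch only, not part of the employer's specification; the storage path, column names, and target table (lakehouse.sales_curated) are hypothetical placeholders, and it assumes a Databricks runtime where Spark and Delta Lake are already available.

```python
# Minimal sketch of an ingestion-to-lakehouse step on Databricks.
# All paths, column names, and table names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sap-to-lakehouse-sketch").getOrCreate()

# Read raw data landed in cloud storage by an ingestion tool (e.g. Fivetran).
raw_sales_path = "abfss://raw@examplelake.dfs.core.windows.net/sap/sales/"
raw_df = spark.read.format("parquet").load(raw_sales_path)

# Light curation: rename a source field and stamp the load time.
curated_df = (
    raw_df
    .withColumnRenamed("VBELN", "sales_document")   # hypothetical SAP column
    .withColumn("loaded_at", F.current_timestamp())
)

# Write to a Delta table in the lakehouse (assumes the "lakehouse" schema exists).
(
    curated_df.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("lakehouse.sales_curated")
)
```

In practice a step like this would typically be orchestrated by Azure Data Factory or a Databricks job rather than run standalone, with governance and lineage handled by the surrounding platform.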