

E-Solutions
Azure Data Lead
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an Azure Data Lead with 12-15 years of data engineering experience, focusing on data platform strategy, ETL processes, and real-time processing using PySpark and Azure. Remote work, competitive pay, SQL and NoSQL expertise required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 4, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Spark (Apache Spark) #Oracle #Data Migration #SQL (Structured Query Language) #Data Pipeline #ETL (Extract, Transform, Load) #PySpark #Data Management #NoSQL #MySQL #Migration #Data Quality #Snowflake #MongoDB #Apache Spark #Strategy #Azure #Synapse #Agile #Scala #Data Engineering
Role description
Role: Azure Data Lead
Remote (PST)
Data Engineer (PySpark and Synapse)
Job Description:
• Candidate should have 12-15 years of experience in data engineering and architecture
• Data platform strategy, Data migration strategy, Data validation strategy.
• Designing, creating, testing and maintaining the complete data management & processing systems.
• Working closely with the stakeholders
• Contribute to defining the platform roadmap, architecture, and solution design; build POCs and prototypes; run technical evaluations to finalize the tech stack; and establish guiding principles and best practices
• Understand business problems, then identify and propose the best technical solutions
• Design and implement the data warehouse, data models, data pipelines, and analytics; handle non-functional requirements (NFRs) and benchmarking
• Create data models that reduce system complexity, increasing efficiency and reducing cost
• Introducing new data management tools & technologies into the existing system to make it more efficient.
• Ensuring architecture meets the business requirements.
• Building highly scalable, robust & fault-tolerant systems.
• Taking care of the complete ETL process.
• Must have knowledge of and working experience with a real-time processing framework (Apache Spark), PySpark, and Azure
• Must have experience with SQL-based technologies (e.g. MySQL, Oracle DB) and NoSQL technologies (e.g. Cassandra, MongoDB)
• Experience in Snowflake
• Discovering data acquisition opportunities
• Finding ways and methods to extract value from existing data
• Improving the data quality, reliability, and efficiency of individual components and the complete system
• Setting and achieving individual as well as team goals
• Problem-solving mindset; comfortable working in an agile environment
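As a flavor of the "complete ETL process" and data-quality responsibilities listed above, here is a minimal, illustrative ETL sketch in plain Python. It is not part of the posting: in the actual role these steps would run at scale in PySpark on Azure, and the table and column names here are invented for the example.

```python
import csv
import io
import sqlite3

# Hypothetical raw input; row 2 fails a basic data-quality check (missing amount).
RAW_CSV = """id,amount
1,10.5
2,
3,7.25
"""

def run_etl(conn: sqlite3.Connection) -> int:
    # Extract: parse the raw CSV rows.
    rows = list(csv.DictReader(io.StringIO(RAW_CSV)))
    # Transform: validate, dropping rows with a missing amount.
    clean = [(int(r["id"]), float(r["amount"])) for r in rows if r["amount"]]
    # Load: write the validated rows into the target table.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sales (id INTEGER PRIMARY KEY, amount REAL)"
    )
    conn.executemany("INSERT INTO sales VALUES (?, ?)", clean)
    conn.commit()
    return len(clean)

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    print(run_etl(conn))  # 2 of the 3 rows pass validation
```

In a production pipeline the same extract/transform/load structure would be expressed as Spark DataFrame operations, with validation rules and benchmarks defined per the strategies the role owns.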
