

Madison-Davis, LLC
Azure Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an Azure Data Engineer with a contract length of "unknown" and a pay rate of $600/day. Key skills include Azure Databricks, Python, PySpark, SQL, and experience in cybersecurity. Strong data engineering experience and collaboration skills are required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
600
-
🗓️ - Date
May 5, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York City Metropolitan Area
-
🧠 - Skills detailed
#Data Lakehouse #ETL (Extract, Transform, Load) #REST (Representational State Transfer) #Compliance #Azure #Azure Databricks #Spark (Apache Spark) #Databricks #Data Engineering #FastAPI #Data Pipeline #NoSQL #Databases #Data Integration #REST API #Cybersecurity #Data Quality #Python #Scala #Agile #PySpark #API (Application Programming Interface) #Synapse #Security #Data Processing #SQL (Structured Query Language) #Data Lake
Role description
Job Description:
A leading enterprise organization is seeking an Azure Data Engineer to support the buildout of a centralized data lakehouse environment focused on cybersecurity and enterprise data integration. This role will play a key part in designing and developing scalable data pipelines that ingest and transform large volumes of structured and unstructured data.
You will work closely with cross-functional teams including data owners, security engineering, and architecture to ensure reliable, high-performance data delivery while supporting modernization initiatives across the platform.
What You’ll Tackle
• Design and build automated data pipelines using Azure and Databricks
• Develop ingestion frameworks for API-based and external data sources
• Optimize and tune pipeline performance for large-scale data processing
• Implement and maintain data lake and lakehouse architectures
• Collaborate with stakeholders across engineering and security teams
• Ensure data quality, reliability, and compliance with internal standards
• Support agile delivery cycles and sprint-based development
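As an illustration only (not part of the posting), the pipeline work described above typically follows an extract-transform-load shape. The sketch below is a minimal, hypothetical example in plain Python: every function name is invented, and the "API" is simulated with an in-memory JSON payload in place of a live REST call; a real pipeline would use PySpark and write to a lakehouse table rather than a Python list.

```python
import json
from typing import Callable, Iterable

def extract(fetch: Callable[[], str]) -> list[dict]:
    # Hypothetical extract step: fetch a raw JSON payload
    # (e.g., from a REST API) and parse it into records.
    return json.loads(fetch())

def transform(records: Iterable[dict]) -> list[dict]:
    # Example transform: drop malformed records (missing "id")
    # and normalize the "event" field to lowercase.
    return [
        {"id": r["id"], "event": r.get("event", "unknown").lower()}
        for r in records
        if "id" in r
    ]

def load(records: list[dict], sink: list) -> int:
    # Stand-in load step: append to an in-memory sink; a real
    # pipeline would write to a data lake / Delta table instead.
    sink.extend(records)
    return len(records)

# Simulated API response in place of a live REST call.
fake_fetch = lambda: json.dumps([
    {"id": 1, "event": "LOGIN"},
    {"id": 2, "event": "Alert"},
    {"event": "malformed"},  # dropped by transform: no "id"
])

sink: list = []
loaded = load(transform(extract(fake_fetch)), sink)
print(loaded)  # 2
```

The same three-stage structure maps directly onto a Databricks job: the extract step becomes an API reader, the transform a PySpark DataFrame operation, and the load a write to lakehouse storage.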
What You Bring
• Proven data engineering experience
• Strong expertise in Python, PySpark, and SQL
• Hands-on experience with Azure Databricks
• Experience with Azure services (Data Lake Gen2, Data Factory, APIs)
• Proven experience building pipelines using REST API data sources
• Experience with performance tuning and optimization
• Strong understanding of relational and/or NoSQL databases
• Experience building APIs using Python frameworks (e.g., FastAPI)
• Strong communication and stakeholder collaboration skills
Nice to Have
• Experience in cybersecurity or security data environments
• Familiarity with Azure Synapse or Microsoft Fabric
• Experience with enterprise-scale data lakehouse platforms
• Azure certifications






