

Drillo.AI
Azure Data Engineer with Snowflake Experience
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an Azure Data Engineer with Snowflake experience, onsite in New Jersey or New York. Contract length and pay rate are unspecified. Requires 5+ years in data engineering and expertise in Azure, Databricks, and Snowflake.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 3, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Berkeley Heights, NJ
-
🧠 - Skills detailed
#ETL (Extract, Transform, Load) #Snowflake #Azure Data Factory #Data Processing #Security #Compliance #PySpark #Cloud #Azure cloud #Version Control #GIT #Scala #Data Science #Python #Data Engineering #Databricks #Azure #SQL (Structured Query Language) #DevOps #Data Transformations #Data Governance #Spark (Apache Spark) #SQL Queries #Azure DevOps #ADF (Azure Data Factory) #Storage #Data Lake #Data Pipeline #Azure Databricks #Data Modeling
Role description
This role is onsite only and open solely to residents of New Jersey and New York. At this time, only Green Card holders and US Citizens are being considered.
About the Role
The Azure Data Engineer will design, build, and optimize scalable data pipelines and solutions using Azure data services, Databricks, and Snowflake. This position requires deep technical expertise in the Azure cloud, strong ETL/data pipeline development experience with Databricks, and hands-on administration and optimization of Snowflake for analytics and reporting.
Responsibilities
• Design, develop, and maintain scalable data pipelines using Azure Data Factory, Azure Databricks, and Snowflake.
• Administer, configure, and monitor Databricks clusters, workspaces, integrations, RBAC, ACLs, and manage workloads for optimal performance and cost.
• Administer, configure, and optimize Snowflake warehouses, users, policies, roles, RBAC/permissions, resource monitors, storage, and compute.
• Develop ETL and ELT solutions in Databricks with Spark, Python (PySpark), and SQL for large data sets.
• Implement data modeling techniques and maintain data structures in Snowflake (star schema, Data Cubes, etc.).
• Build and troubleshoot complex SQL queries; tune and optimize data transformations for performance.
• Collaborate with business users, data scientists, and analysts to understand requirements and deliver robust, secure data solutions.
• Ensure data governance, security, privacy, and compliance across Azure and Snowflake platforms.
• Monitor and tune data workflows for efficiency and cost effectiveness; proactively identify and resolve platform issues.
• Document data workflows, configurations, and provide knowledge transfer to peers and stakeholders.
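To illustrate the data modeling and SQL tuning responsibilities above, here is a minimal, hypothetical sketch of a star-schema aggregation of the kind this role would build and optimize in Snowflake. All table and column names are invented for illustration, and SQLite stands in for Snowflake only so the example is self-contained and runnable:

```python
import sqlite3

# Hypothetical star schema: one fact table keyed to one dimension table,
# standing in for the Snowflake models (star schema) described above.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
    CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE fact_sales (
        sale_id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES dim_customer(customer_id),
        amount REAL
    );
    -- Index on the join key, analogous to tuning access paths in a warehouse.
    CREATE INDEX idx_fact_customer ON fact_sales(customer_id);

    INSERT INTO dim_customer VALUES (1, 'NJ'), (2, 'NY');
    INSERT INTO fact_sales VALUES (10, 1, 100.0), (11, 1, 50.0), (12, 2, 75.0);
""")

# Aggregate fact rows by a dimension attribute -- the shape of query this
# role would tune in Snowflake (warehouse sizing, clustering, pruning).
rows = cur.execute("""
    SELECT d.region, SUM(f.amount) AS total
    FROM fact_sales AS f
    JOIN dim_customer AS d USING (customer_id)
    GROUP BY d.region
    ORDER BY d.region
""").fetchall()

print(rows)  # [('NJ', 150.0), ('NY', 75.0)]
```

In Snowflake the same query shape would be tuned with virtual warehouse sizing, clustering keys, and resource monitors rather than a local index.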
Qualifications
• 5+ years of experience in data engineering or related data platform roles.
• Demonstrated expertise in Azure Data Lake, Databricks, and Data Factory.
• Expert-level skills in Spark (PySpark) and Python for building scalable ETL workflows.
• Strong experience with Snowflake: user/role management, warehouse configuration, query optimization, resource control.
• Deep knowledge of data modeling/design and warehouse concepts (star, snowflake schema).
• Solid working knowledge of SQL optimization and data processing on cloud platforms.
• Familiarity with DevOps, CI/CD, version control (Azure DevOps, Git, pipelines in Azure).
• Strong troubleshooting, communication, and collaboration skills.






