

Snowflake Data Engineer with Databricks Expertise
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a contract position for a Snowflake Data Engineer with Databricks expertise, located in Iselin, NJ (Hybrid – 3 days onsite). Key skills include Databricks, Snowflake, Azure services, and PySpark. Relevant certifications are preferred. Pay rate is DOE (depends on experience).
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
🗓️ - Date discovered
July 31, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Hybrid
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Iselin, NJ
-
🧠 - Skills detailed
#SnowSQL #Compliance #PySpark #Delta Lake #Version Control #Azure Cloud #Azure #Big Data #Spark (Apache Spark) #Databases #GitLab #Snowflake #Scala #Deployment #Data Security #Datasets #Data Pipeline #Cloud #Databricks #ETL (Extract, Transform, Load) #ADLS (Azure Data Lake Storage) #Data Engineering #Azure Blob Storage #SnowPipe #Storage #BI (Business Intelligence) #Data Processing #Data Integrity #Security #SQL (Structured Query Language)
Role description
Job Title: Snowflake Data Engineer with Databricks Expertise
Location: Iselin, NJ (Hybrid – 3 Days Onsite per Week)
Type: Contract
Rate: DOE
Job Summary:
We are looking for a highly skilled Snowflake Data Engineer with strong experience in Databricks to join our dynamic data team. The ideal candidate will be responsible for designing, developing, and optimizing large-scale data pipelines, ensuring scalability, performance, and reliability. This role requires close collaboration with cross-functional teams and business stakeholders to deliver enterprise-grade data solutions.
Key Responsibilities:
Data Pipeline Development
• Build and maintain scalable ETL/ELT pipelines using Databricks.
• Leverage PySpark/Spark and SQL to process and transform large-scale datasets.
• Integrate data from multiple sources including Azure Blob Storage, ADLS, and other relational/non-relational databases (see the PySpark sketch below).
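A minimal PySpark sketch of such a pipeline, assuming hypothetical storage paths, table names, and columns (none of these identifiers come from the posting):

```python
# Hedged ETL sketch for Databricks; all paths, table, and column names are
# illustrative assumptions. In a Databricks notebook, `spark` already exists.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Extract: read raw JSON files landed in ADLS (abfss URI is a placeholder).
raw = spark.read.json("abfss://raw@examplestore.dfs.core.windows.net/orders/")

# Transform: enforce types, drop records missing the key, stamp the load date.
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .dropna(subset=["order_id"])
       .withColumn("load_date", F.current_date())
)

# Load: append into a Delta table that BI tools query downstream.
clean.write.format("delta").mode("append").saveAsTable("analytics.orders_clean")
```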
Collaboration & Analysis
• Collaborate with teams to prepare data for dashboards and BI tools.
• Work closely with stakeholders to understand requirements and provide tailored data solutions.
Performance & Optimization
• Optimize Databricks workloads for performance and cost-efficiency (see the tuning sketch below).
• Monitor and troubleshoot pipelines to ensure data integrity and reliability.
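One hedged illustration of this kind of tuning: the Delta table from the sketch above could be compacted and pruned (the table name and retention window are assumptions):

```python
# `spark` is the SparkSession available in a Databricks notebook.
# OPTIMIZE compacts small files; ZORDER clusters rows on a common filter column.
spark.sql("OPTIMIZE analytics.orders_clean ZORDER BY (order_ts)")

# VACUUM removes snapshots older than the retention window to cut storage cost.
spark.sql("VACUUM analytics.orders_clean RETAIN 168 HOURS")
```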
Governance & Security
• Implement data security, access controls, and governance using Unity Catalog (see the grants sketch below).
• Ensure compliance with organizational and regulatory data policies.
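A short sketch of what Unity Catalog grants can look like; the catalog, schema, table, and group names here are assumptions:

```python
# Standard three-level Unity Catalog privileges, issued from a notebook
# (under Unity Catalog, tables use catalog.schema.table names).
spark.sql("GRANT USE CATALOG ON CATALOG main TO `bi_analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA main.analytics TO `bi_analysts`")
spark.sql("GRANT SELECT ON TABLE main.analytics.orders_clean TO `bi_analysts`")
```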
Deployment & Best Practices
• Use Databricks Asset Bundles to deploy jobs, notebooks, and configurations (a sample bundle file follows below).
• Manage version control (e.g., GitLab) and follow development best practices.
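For reference, a minimal Databricks Asset Bundle definition might look like the following databricks.yml; the bundle name, job, notebook path, and workspace host are all placeholders:

```yaml
# Hypothetical databricks.yml; every name and URL here is illustrative.
bundle:
  name: orders_pipeline

resources:
  jobs:
    orders_daily:
      name: orders_daily
      tasks:
        - task_key: build_orders
          notebook_task:
            notebook_path: ./notebooks/build_orders.py

targets:
  dev:
    workspace:
      host: https://adb-1234567890123456.7.azuredatabricks.net
```

Such a bundle would typically be checked and shipped with `databricks bundle validate` and `databricks bundle deploy -t dev`.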
Technical Skills:
• Expertise in Databricks (Delta Lake, Unity Catalog, Lakehouse Architecture, Delta Live Tables, etc.)
• Strong hands-on experience with Snowflake, Snowpipe, and SnowSQL (see the Snowpipe sketch after this list)
• Proficiency in Azure Cloud Services
• Solid understanding of Spark/PySpark for big data processing
• Experience working with relational databases
• Knowledge of Databricks Asset Bundles and GitLab
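As a hedged example of the Snowflake side, a Snowpipe that auto-ingests staged files could be created through the Python connector; the account, credentials, stage, and table names are assumptions, not details from this posting:

```python
# Hypothetical Snowpipe setup via the snowflake-connector-python package.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",   # placeholder account locator
    user="etl_user",
    password="***",              # prefer key-pair auth or a secrets manager
    database="ANALYTICS",
    schema="RAW",
    warehouse="LOAD_WH",
)

# AUTO_INGEST lets cloud storage events trigger loads from the stage.
conn.cursor().execute("""
    CREATE PIPE IF NOT EXISTS orders_pipe AUTO_INGEST = TRUE AS
      COPY INTO raw.orders_landing
      FROM @raw.orders_stage
      FILE_FORMAT = (TYPE = 'JSON')
""")
```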
Preferred Qualifications:
• Familiarity with Databricks Runtime configurations
• Experience with Spark Streaming and real-time data solutions (see the streaming sketch after this list)
• Relevant certifications such as:
  ◦ Azure Data Engineer Associate
  ◦ Databricks Certified Data Engineer Associate
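For the streaming qualification, a brief Structured Streaming sketch using Databricks Auto Loader; the paths, checkpoint and schema locations, and table name are assumptions:

```python
# `spark` is the SparkSession available in a Databricks notebook.
# Auto Loader ("cloudFiles") incrementally picks up new files as they land.
stream = (
    spark.readStream.format("cloudFiles")
         .option("cloudFiles.format", "json")
         .option("cloudFiles.schemaLocation", "/tmp/schemas/orders")
         .load("abfss://raw@examplestore.dfs.core.windows.net/orders/")
)

# Write continuously to a Delta table, checkpointing for exactly-once delivery.
(stream.writeStream
       .option("checkpointLocation", "/tmp/checkpoints/orders")
       .toTable("analytics.orders_stream"))
```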