Snowflake Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
πŸ—“οΈ - Date discovered
September 9, 2025
🕒 - Project duration
More than 6 months
-
🏝️ - Location type
Unknown
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
πŸ“ - Location detailed
Iselin, NJ
-
🧠 - Skills detailed
#SQL (Structured Query Language) #Azure #Data Security #SnowPipe #Data Processing #Snowflake #Spark (Apache Spark) #Scala #Delta Lake #Big Data #Databases #Compliance #Databricks #Datasets #Azure cloud #Cloud #GitLab #BI (Business Intelligence) #Data Pipeline #Deployment #ETL (Extract, Transform, Load) #Storage #Version Control #PySpark #ADLS (Azure Data Lake Storage) #Triggers #SnowSQL #Security #Data Engineering #Azure Blob Storage
Role description
Job Title: Snowflake Data Engineer
Location: Iselin, NJ (Hybrid, 3 days a week)
Duration: 12+ months contract

Job Description

We are seeking a highly skilled Snowflake Data Engineer with strong expertise in Databricks to join our data team. The ideal candidate will design, implement, and optimize large-scale data pipelines, ensuring scalability, reliability, and performance. This role involves working closely with multiple teams and business stakeholders to deliver cutting-edge data solutions.

Responsibilities:

Data Pipeline Development:
• Build and maintain scalable ETL/ELT pipelines using Databricks.
• Leverage PySpark/Spark and SQL to transform and process large datasets.
• Integrate data from multiple sources, including Azure Blob Storage, ADLS, and other relational/non-relational systems (a minimal PySpark sketch appears after this description).

Collaboration & Analysis:
• Work closely with multiple teams to prepare data for dashboards and BI tools.
• Collaborate with cross-functional teams to understand business requirements and deliver tailored data solutions.

Performance & Optimization:
• Optimize Databricks workloads for cost efficiency and performance.
• Monitor and troubleshoot data pipelines to ensure reliability and accuracy.

Governance & Security:
• Implement and manage data security, access controls, and governance standards using Unity Catalog (see the Unity Catalog sketch below).
• Ensure compliance with organizational and regulatory data policies.

Deployment:
• Leverage Databricks Asset Bundles for seamless deployment of Databricks jobs, notebooks, and configurations across environments.
• Manage version control for Databricks artifacts and collaborate with the team to maintain development best practices.

Technical Skills:
• Strong expertise in Databricks (Delta Lake, Unity Catalog, Lakehouse architecture, tables, triggers, Delta Live Tables pipelines, Databricks Runtime, etc.).
• Experience with Snowflake, Snowpipe, and SnowSQL (see the Snowpipe sketch below).
• Proficiency in Azure cloud services.
• Solid understanding of Spark and PySpark for big data processing.
• Experience with relational databases.
• Knowledge of Databricks Asset Bundles and GitLab.

Preferred Experience:
• Familiarity with Databricks Runtimes and advanced configurations.
• Knowledge of streaming frameworks such as Spark Streaming.
• Experience developing real-time data solutions.

Certifications:
• Azure Data Engineer Associate or Databricks Certified Data Engineer Associate (optional).
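For illustration, here is a minimal sketch of the kind of pipeline described under Data Pipeline Development: reading raw files from ADLS with PySpark, applying simple transformations, and writing a Delta Lake table. The storage account, container, column names, and target table are hypothetical placeholders, and the cluster is assumed to already have credentials configured for the storage account.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession already exists as `spark`; getOrCreate() reuses it.
spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Read raw CSV files landed in ADLS (storage account and container are placeholders).
raw = (
    spark.read
    .option("header", "true")
    .csv("abfss://raw@examplestorage.dfs.core.windows.net/orders/")
)

# Basic cleansing: typed columns and de-duplication on the business key.
cleaned = (
    raw.withColumn("order_date", F.to_date("order_date"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])
)

# Persist as a Delta table so dashboards and BI tools can query it directly.
cleaned.write.format("delta").mode("overwrite").saveAsTable("analytics.orders_clean")
```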
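The Unity Catalog governance duties largely come down to SQL grants run against catalog objects. A minimal sketch from a Databricks notebook, assuming the `analytics.orders_clean` table from the previous example and a hypothetical `bi_analysts` group:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical example: give a BI analyst group read-only access to the curated table.
spark.sql("GRANT SELECT ON TABLE analytics.orders_clean TO `bi_analysts`")

# Revoke the grant again if the group no longer needs access.
spark.sql("REVOKE SELECT ON TABLE analytics.orders_clean FROM `bi_analysts`")
```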
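On the Snowflake side, Snowpipe continuous ingestion is defined with a CREATE PIPE statement wrapping a COPY INTO. Below is a rough sketch using the snowflake-connector-python package; the connection parameters, notification integration, stage, pipe, and table names are all assumptions for illustration, not details from the posting.

```python
import snowflake.connector

# Connection parameters are placeholders; in practice they come from a secrets manager.
conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password="example_password",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # A pipe that auto-ingests new Parquet files as they land on an external Azure stage,
    # using a (placeholder) notification integration for Event Grid messages.
    cur.execute("""
        CREATE PIPE IF NOT EXISTS orders_pipe
          AUTO_INGEST = TRUE
          INTEGRATION = 'AZURE_ORDERS_NOTIFICATION_INT'
          AS COPY INTO orders_clean
             FROM @azure_orders_stage
             FILE_FORMAT = (TYPE = 'PARQUET')
    """)
finally:
    conn.close()
```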