

High Trail
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer in Iselin, NJ (Hybrid) with a contract length of "unknown" and a pay rate of $520 per day. Key skills include Databricks, Snowflake, Azure Cloud, and Spark. Experience in banking and relevant certifications are preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
520
-
🗓️ - Date
October 16, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Iselin, NJ
-
🧠 - Skills detailed
#Data Lake #Triggers #Scala #SnowPipe #Data Modeling #Deployment #Cloud #Data Pipeline #Snowflake #Data Quality #Data Ingestion #Databases #Delta Lake #Spark (Apache Spark) #GitLab #Version Control #Business Analysis #Databricks #Synapse #SnowSQL #Data Engineering #ETL (Extract, Transform, Load) #PySpark #Azure #Programming #SQL (Structured Query Language) #Data Architecture #Agile #Azure Cloud #Data Processing #Automation
Role description
Title: Data Engineer
Location: Iselin, NJ (Hybrid)
Job Summary:
We are seeking an experienced Snowflake Data Engineer to join our team supporting a major banking client in Iselin, NJ. The ideal candidate will have deep expertise in Databricks, Snowflake, and Azure Cloud, with a proven track record of building and optimizing scalable data pipelines, lakehouse architectures, and real-time analytics solutions.
Key Responsibilities:
• Design, develop, and maintain scalable data pipelines and ETL processes using Databricks and Snowflake.
• Implement Delta Lake architectures, including Delta Live Tables pipelines and Databricks Unity Catalog (see the pipeline sketch after this list).
• Build and manage data ingestion frameworks using Snowpipe and SnowSQL.
• Integrate and orchestrate data across Azure Cloud Services (e.g., Data Lake, Synapse, Data Factory).
• Optimize Spark and PySpark jobs for performance and reliability.
• Collaborate with data architects, business analysts, and other engineering teams to ensure data quality and governance.
• Utilize GitLab for version control and Databricks Asset Bundles for deployment automation.
• Develop and implement best practices for data modeling, testing, and productionization.
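As referenced in the list above, a Delta Live Tables pipeline with a built-in data-quality expectation might look like the following minimal sketch. It assumes a Databricks DLT runtime, where the spark session is provided; the table names, landing path, and validation rule are hypothetical.
```python
# Minimal Delta Live Tables sketch; runs inside a Databricks DLT pipeline,
# where the `spark` session is provided. Table names, the landing path,
# and the quality rule are illustrative only.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw transactions landed from cloud storage (bronze).")
def transactions_bronze():
    # Auto Loader incrementally picks up new files from the landing zone.
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/mnt/landing/transactions/")  # hypothetical path
    )

@dlt.table(comment="Validated, timestamped transactions (silver).")
@dlt.expect_or_drop("valid_amount", "amount IS NOT NULL AND amount >= 0")
def transactions_silver():
    # Rows failing the expectation are dropped and counted in pipeline metrics.
    return (
        dlt.read_stream("transactions_bronze")
        .withColumn("ingested_at", F.current_timestamp())
    )
```
Publishing such a pipeline to a Unity Catalog schema is one typical way the governance responsibilities above are layered on top.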
Technical Skills Required:
• Databricks Expertise: Delta Lake, Unity Catalog, Lakehouse Architecture, Tables, Triggers, Delta Live Tables pipelines, Databricks Runtime.
• Snowflake Expertise: Snowpipe, SnowSQL, data ingestion, and performance optimization (see the Snowpipe sketch after this list).
• Cloud: Proficiency in Azure Cloud Services (Data Lake, Synapse, Data Factory).
• Programming: Strong understanding of Spark and PySpark for large-scale data processing.
• Databases: Solid experience with relational databases and SQL performance tuning.
• Version Control & CI/CD: Knowledge of GitLab and Databricks Asset Bundles.
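As a concrete illustration of the Snowpipe item above, continuous ingestion is defined with SQL statements, which the sketch below issues through the snowflake-connector-python package. The connection parameters and the pipe, stage, and table names are all assumptions.
```python
# Sketch: defining a Snowpipe for continuous ingestion via the Python connector.
# All connection parameters and object names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # hypothetical account identifier
    user="etl_user",        # hypothetical user
    password="...",         # use a secrets manager in practice
    warehouse="INGEST_WH",
    database="ANALYTICS",
    schema="RAW",
)

ddl = """
CREATE PIPE IF NOT EXISTS raw.transactions_pipe
  AUTO_INGEST = TRUE
AS
  COPY INTO raw.transactions
  FROM @raw.transactions_stage
  FILE_FORMAT = (TYPE = 'JSON')
"""

with conn.cursor() as cur:
    cur.execute(ddl)
    # SYSTEM$PIPE_STATUS returns a JSON document describing the pipe's state.
    cur.execute("SELECT SYSTEM$PIPE_STATUS('raw.transactions_pipe')")
    print(cur.fetchone()[0])
conn.close()
```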
Preferred Qualifications:
• Familiarity with Databricks Runtimes and advanced configurations.
• Experience developing real-time streaming solutions using Spark Streaming or equivalent frameworks (see the streaming sketch after this list).
• Strong analytical, problem-solving, and communication skills.
• Ability to work effectively in cross-functional teams in an Agile environment.
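For the streaming qualification above, a minimal Spark Structured Streaming job reading from Kafka into a Delta table is sketched below. The broker address, topic, and paths are placeholders, and the Kafka source requires the spark-sql-kafka package (bundled on Databricks runtimes).
```python
# Sketch: Spark Structured Streaming from Kafka into a Delta table.
# Broker, topic, and paths are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("txn-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "transactions")               # hypothetical topic
    .load()
    # Kafka delivers bytes; cast the payload to string for downstream parsing.
    .select(F.col("value").cast("string").alias("payload"), "timestamp")
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/txn")  # placeholder path
    .outputMode("append")
    .start("/mnt/delta/transactions_raw")                  # placeholder path
)
query.awaitTermination()
```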
Certifications (Preferred, Not Required):
• Microsoft Certified: Azure Data Engineer Associate
• Databricks Certified: Data Engineer Associate