

Compunnel Inc.
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer; the contract length and pay rate are both unknown. It requires expertise in Snowflake, PySpark, and SQL, plus banking domain experience, with a focus on data pipelines, ETL processes, and compliance standards.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 6, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York, United States
-
🧠 - Skills detailed
#Python #Databricks #Data Architecture #Scala #Cloud #Data Science #Kafka (Apache Kafka) #Snowflake #SQL (Structured Query Language) #Azure #Compliance #Data Engineering #ETL (Extract, Transform, Load) #Security #Data Pipeline #Deployment #Scripting #Spark (Apache Spark) #Apache Kafka #Version Control #Data Governance #SQL Queries #PySpark #DevOps #Data Quality #Data Analysis #Data Processing #Automation #Documentation
Role description
Key Responsibilities:
• Design, develop, and maintain scalable and efficient data pipelines using Snowflake, PySpark, and SQL (a minimal pipeline sketch follows this list).
• Write optimized and complex SQL queries to extract, transform, and load data.
• Develop and implement data models, schemas, and architecture that support banking domain requirements.
• Collaborate with data analysts, data scientists, and business stakeholders to gather data requirements.
• Automate data workflows and ensure data quality, accuracy, and integrity.
• Manage and coordinate release processes for data pipelines and analytics solutions.
• Monitor, troubleshoot, and optimize the performance of data systems.
• Ensure compliance with data governance, security, and privacy standards within the banking domain.
• Maintain documentation of data architecture, pipelines, and processes.
• Stay updated with the latest industry trends and incorporate best practices.
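For illustration only: a minimal sketch of the kind of batch pipeline the responsibilities above describe. The paths, table names, connector options, and cleansing rule are all hypothetical, and in practice credentials would come from a secrets manager rather than the job itself.

```python
# Illustrative batch pipeline: raw Parquet -> SQL cleanse -> Snowflake.
# Paths, table names, and options below are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("txn_pipeline").getOrCreate()

# Hypothetical Spark-Snowflake connector options; a real job would read
# these from a secrets manager rather than hardcoding them.
sf_options = {
    "sfURL": "<account>.snowflakecomputing.com",
    "sfUser": "<user>",
    "sfPassword": "<password>",
    "sfDatabase": "BANKING",
    "sfSchema": "CURATED",
    "sfWarehouse": "ETL_WH",
}

# Extract: raw transactions staged as Parquet (placeholder path).
raw = spark.read.parquet("s3://raw-zone/transactions/")
raw.createOrReplaceTempView("raw_txn")

# Transform: basic cleansing and typing, expressed in SQL.
curated = spark.sql("""
    SELECT txn_id,
           account_id,
           CAST(amount AS DECIMAL(18, 2)) AS amount,
           TO_DATE(txn_ts)                AS txn_date
    FROM raw_txn
    WHERE txn_id IS NOT NULL AND amount IS NOT NULL
""")

# Load: append into a curated table via the Spark-Snowflake connector.
(curated.write
    .format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    .option("dbtable", "FACT_TRANSACTIONS")
    .mode("append")
    .save())
```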
Required Skills and Experience:
• Proven experience as a Data Engineer or in a similar role with a focus on Snowflake, Python, PySpark, and SQL.
• Strong understanding of data warehousing concepts and cloud data platforms, especially Snowflake.
• Hands-on experience with release management, deployment, and version control practices.
• Solid understanding of banking and financial services industry data and compliance requirements.
• Proficiency in Python scripting and PySpark for data processing and automation (see the data-quality sketch after this list).
• Experience with ETL/ELT processes and tools.
• Knowledge of data governance, security, and privacy standards.
• Excellent problem-solving and analytical skills.
• Strong communication and collaboration abilities.
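As an illustration of the Python/PySpark automation called for above, here is a sketch of a simple data-quality gate. The column names, rules, and failure behavior are assumptions, not part of the posting.

```python
# Illustrative data-quality gate; rules and column names are made up.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_check").getOrCreate()
df = spark.read.parquet("s3://curated-zone/transactions/")  # placeholder path

total = df.count()
# Rule 1: the primary key must be unique.
duplicates = total - df.select("txn_id").distinct().count()
# Rule 2: no negative amounts (a made-up business rule).
negatives = df.filter(F.col("amount") < 0).count()

# Fail loudly so the scheduler (Airflow, ADF, etc.) can alert on it.
if duplicates or negatives:
    raise ValueError(
        f"DQ failed: {duplicates} duplicate txn_ids, {negatives} negative amounts"
    )
print(f"DQ passed for {total} rows")
```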
Preferred Qualifications:
• Good knowledge of Azure and Databricks is highly preferred.
• Knowledge of Apache Kafka or other streaming technologies (a streaming ingestion sketch follows this list).
• Familiarity with DevOps practices and CI/CD pipelines.
• Prior experience working in the banking or financial services industry.
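Since Kafka is listed as preferred rather than required, the following is only a rough sketch of consuming a topic with PySpark Structured Streaming; the broker address, topic, and message schema are placeholders.

```python
# Sketch of consuming a Kafka topic with PySpark Structured Streaming.
# Requires the spark-sql-kafka-0-10 connector package on the classpath;
# the broker, topic, and schema below are placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka_ingest").getOrCreate()

schema = StructType([
    StructField("txn_id", StringType()),
    StructField("account_id", StringType()),
    StructField("amount", DoubleType()),
])

stream = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "transactions")
    .load()
    # Kafka delivers bytes; decode and parse the JSON payload.
    .select(F.from_json(F.col("value").cast("string"), schema).alias("txn"))
    .select("txn.*"))

# Console sink keeps the sketch self-contained; a real job would write
# to Snowflake or a lake table with exactly-once checkpointing.
query = (stream.writeStream
    .format("console")
    .option("checkpointLocation", "/tmp/checkpoints/txn")
    .start())
query.awaitTermination()
```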