

zAnswer LLC
ETL Developer - No C2C - Onsite Role - No Relo
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer focused on ETL/ELT pipeline development within AWS, requiring deep Banking/Financial Services experience. Contract length and pay rate are unspecified; the position is onsite in Jersey City, NJ. Key skills include AWS Glue, PySpark, and SQL.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 23, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown (No C2C)
-
🔒 - Security
Unknown
-
📍 - Location detailed
Jersey City, NJ
-
🧠 - Skills detailed
#Data Lake #Airflow #Data Engineering #Data Quality #PySpark #SQL (Structured Query Language) #Data Encryption #ETL (Extract, Transform, Load) #Amazon RDS (Amazon Relational Database Service) #VPC (Virtual Private Cloud) #Data Storage #Monitoring #AWS (Amazon Web Services) #Data Warehouse #Documentation #RDS (Amazon Relational Database Service) #Python #Scala #Data Catalog #Data Governance #DevOps #Security #Databases #Spark (Apache Spark) #Amazon Redshift #AWS Glue #AWS Lambda #Storage #Data Integration #IAM (Identity and Access Management) #Data Ingestion #Web Services #Redshift #Lambda (AWS Lambda) #Compliance #S3 (Amazon Simple Storage Service) #Apache Airflow
Role description
Position Summary
We are seeking a highly skilled Senior Data Engineer to design, develop, and manage robust Extract, Transform, Load (ETL) and Extract, Load, Transform (ELT) pipelines entirely within the Amazon Web Services (AWS) ecosystem. This role is crucial for integrating diverse financial data sources into performant data warehouses and data lakes, ensuring scalability, security, and data quality. Deep domain experience within the Banking/Financial Services industry is mandatory.
Core Responsibilities
• Pipeline Architecture: Architect and implement robust, scalable ETL/ELT pipelines leveraging native AWS services for optimal data ingestion and processing.
• Data Integration: Integrate data from a variety of sources, including external APIs, transactional databases, and flat files, into centralized AWS-based data platforms.
• Transformation Development: Develop complex data transformation logic utilizing PySpark, Python, and SQL, primarily executed within AWS Glue and AWS Lambda environments (a minimal Glue PySpark sketch appears after this list).
• Orchestration & Monitoring: Establish, monitor, and maintain workflow orchestration using tools such as AWS Step Functions, Glue Workflows, or Apache Airflow on Amazon MWAA.
• Data Governance & Quality: Ensure stringent data quality, consistency, and lineage tracking, utilizing services like the AWS Glue Data Catalog and AWS Lake Formation.
• Performance Optimization: Proactively identify and execute optimizations for ETL performance and cost-efficiency through techniques like partitioning, parallelism, and resource tuning.
• Security & Compliance: Implement and enforce security best practices, including data encryption, management of IAM roles, and precise VPC configurations.
• Collaboration & Documentation: Partner closely with data engineers, analysts, and DevOps teams to support critical analytics and reporting needs. Maintain comprehensive documentation of ETL processes, data flows, and architectural diagrams.
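To give a concrete sense of the transformation work described above, here is a minimal sketch of an AWS Glue PySpark job that reads a cataloged source, applies simple cleansing, and writes partitioned Parquet to S3. The database, table, and bucket names are placeholders for illustration, not details from this posting.
```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job bootstrap
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a raw table registered in the Glue Data Catalog (hypothetical names)
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="raw_banking",     # hypothetical catalog database
    table_name="transactions",  # hypothetical source table
)
df = dyf.toDF()

# Example transformation logic in PySpark: basic cleansing and derivation
curated = (
    df.filter(F.col("amount").isNotNull())
      .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
      .withColumn("trade_date", F.to_date("trade_ts"))
)

# Write partitioned Parquet to the S3 data lake (hypothetical bucket)
(curated.write
    .mode("overwrite")
    .partitionBy("trade_date")
    .parquet("s3://example-datalake/curated/transactions/"))

job.commit()
```
Partitioning by date at write time is one of the simple levers the Performance Optimization bullet refers to, since downstream queries against the lake can prune partitions instead of scanning the full dataset.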
Required Technical Stack
The candidate must demonstrate expert proficiency in developing scalable data solutions using the following AWS services and technologies:
• Serverless ETL: AWS Glue for serverless data integration and AWS Lambda for lightweight, real-time transformations.
• Data Storage: Amazon S3 for building and managing the core Data Lake.
• Data Warehousing: Experience with Amazon Redshift or Amazon RDS for relational data warehousing needs.
• Orchestration: Experience with AWS Step Functions, AWS Glue Workflows, or Apache Airflow on Amazon MWAA (a minimal Airflow DAG sketch follows this list).
• Transformation Languages: Expert-level skills in PySpark, Python, and SQL.
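As an illustration of the orchestration options listed above, the following is a minimal Apache Airflow DAG, as it might run on Amazon MWAA, that triggers a Glue job daily. It assumes the apache-airflow-providers-amazon package is installed; the DAG id and job name are placeholders, and operator arguments can vary between provider versions.
```python
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

with DAG(
    dag_id="daily_transactions_curation",  # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
    tags=["etl", "glue"],
) as dag:
    # Trigger the Glue curation job and wait for it to finish
    run_curation = GlueJobOperator(
        task_id="run_curation_job",
        job_name="curate-transactions",  # hypothetical Glue job name
        wait_for_completion=True,
    )
```
The same pattern extends to multi-step pipelines by chaining additional tasks (for example, a data quality check after the Glue job) with Airflow's task dependencies.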