

New York Technology Partners
AWS Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior AWS Data Engineer in Newark, NJ, on a contract basis. The position requires expertise in AWS Glue, PySpark, Redshift, and CI/CD deployment. Preferred certifications include AWS Certified Data Engineer – Associate.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
October 14, 2025
🕒 - Duration
Unknown
🏝️ - Location
On-site
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Newark, NJ
🧠 - Skills detailed
#Data Integration #Monitoring #SQL (Structured Query Language) #DMS (Data Migration Service) #S3 (Amazon Simple Storage Service) #Security #Cloud #Deployment #Data Catalog #SaaS (Software as a Service) #Lambda (AWS Lambda) #AWS S3 (Amazon Simple Storage Service) #AWS Glue #Data Engineering #GitHub #AWS (Amazon Web Services) #PySpark #Python #Automation #SAP #Data Lake #Spark (Apache Spark) #DevOps #Metadata #Programming #ETL (Extract, Transform, Load) #Redshift
Role description
Senior AWS Data Engineer (Glue, PySpark, Redshift)
• Job Title: Senior AWS Data Engineer (Glue, PySpark, Redshift)
• Location: Newark, NJ
• Client: CTS/PSEG
Job Summary
We are seeking a highly skilled Senior AWS Data Engineer to design, develop, and deploy robust ELT/ETL pipelines within the AWS environment. The core focus is end-to-end data integration from diverse enterprise sources (SAP, Salesforce, OKTA) through AWS S3 into Redshift for analytical use. The candidate must be proficient with AWS Glue, Step Functions, and Lambda and possess strong automation skills in Python/PySpark and Redshift SQL. Expertise in CI/CD deployment via GitHub and adherence to PSEG governance standards are required.
Key Focus Areas & Required Technical Skills
• Pipeline Development: Develop and maintain ELT/ETL pipelines using AWS Glue, Step Functions, Lambda, DMS, and AppFlow (a minimal Glue/PySpark sketch follows this list).
• Programming & Automation: Automate transformations and model refresh using Python, PySpark, and SQL.
• Data Flow: Implement end-to-end source-to-target data integration across the entire data lake architecture: Data Source → AWS S3 Raw → Curated → Redshift.
• Data Warehousing: Use Redshift SQL for transformations, optimization, and model refresh (see the refresh sketch after this list).
• Source Integration: Experience integrating with on-prem and SaaS data sources, such as SAP (via Simplement), Salesforce, OKTA, MuleSoft, and JAMS.
• DevOps & Governance: Implement CI/CD deployment using GitHub and deploy code following PSEG governance and change control processes.
• Metadata & Security: Manage metadata and lineage via AWS Glue Data Catalog. Familiarity with CloudWatch, CloudTrail, and Secrets Manager for monitoring and security.
Preferred Certifications
• AWS Certified Data Engineer – Associate
• AWS Certified Developer – Associate