Intellectt Inc

Sr. AWS Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr. AWS Data Engineer in New Jersey (Hybrid) with a long-term contract. Requires 15+ years of experience, strong Terraform skills, and expertise in AWS services, ETL/ELT pipelines, and data governance.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
December 5, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
New Jersey, United States
-
🧠 - Skills detailed
#PySpark #Data Storage #Data Governance #Metadata #Kafka (Apache Kafka) #Spark (Apache Spark) #Apache Iceberg #Cloud #Qlik #SQL Server #SQL (Structured Query Language) #SSIS (SQL Server Integration Services) #Storage #Programming #Aurora #Lambda (AWS Lambda) #AWS (Amazon Web Services) #GitHub #Infrastructure as Code (IaC) #Redshift #S3 (Amazon Simple Storage Service) #Data Quality #Python #Data Modeling #ETL (Extract, Transform, Load) #NoSQL #Schema Design #Terraform #Data Engineering
Role description
Role: Sr. AWS Data Engineer
Location: New Jersey (Hybrid)
Duration: Long Term
Must Have: 15+ years of experience; strong Terraform skills

1. Cloud Services & Infrastructure
• Data Storage & Processing: S3, Redshift, Aurora Postgres, Glue, EMR, Lambda
• Orchestration & Workflow: Step Functions, CloudWatch
• Infrastructure as Code: Terraform, Terraform Enterprise & HCP
• CI/CD Pipelines: Concourse, GitHub Actions

2. Data Engineering Foundations
• ETL/ELT Pipelines: designing and optimizing pipelines using Glue, PySpark, and Kafka; prior experience developing ETLs using SSIS and SQL Server
• Data Modeling: dimensional and NoSQL modeling, schema design
• Data Governance: data quality, lineage, and stewardship practices

3. Programming & Tools
• Languages: Python, SQL, PySpark
• Tools: GitHub Actions, Concourse, Qlik Replicate

4. Performance & Cost Optimization
• Efficient use of EMR and PySpark to reduce compute costs
• Metadata-driven upserts using Apache Iceberg for historical data (see the sketch after this list)
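For illustration, here is a minimal sketch of the kind of metadata-driven Iceberg upsert the last bullet describes, assuming a Spark session already configured with an Iceberg catalog (named glue_catalog here); the control table, paths, and column names are hypothetical and not taken from the posting.

```python
# Minimal sketch: metadata-driven upsert into Apache Iceberg tables with PySpark.
# Assumes a Spark session configured with an Iceberg catalog called "glue_catalog";
# the pipeline_metadata control table and its columns are illustrative only.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("iceberg-upsert-sketch").getOrCreate()

# Hypothetical control table that drives which sources, targets, and keys to merge.
meta = spark.sql(
    "SELECT source_path, target_table, merge_keys "
    "FROM glue_catalog.etl.pipeline_metadata"
).collect()

for row in meta:
    # Load the incremental batch and expose it to Spark SQL.
    updates = spark.read.parquet(row.source_path)
    updates.createOrReplaceTempView("updates")

    # Build the join condition from the comma-separated key list in the metadata.
    on_clause = " AND ".join(f"t.{k} = s.{k}" for k in row.merge_keys.split(","))

    # Iceberg exposes MERGE INTO through Spark SQL; snapshots preserve history.
    spark.sql(f"""
        MERGE INTO {row.target_table} t
        USING updates s
        ON {on_clause}
        WHEN MATCHED THEN UPDATE SET *
        WHEN NOT MATCHED THEN INSERT *
    """)
```

The point of the pattern is that the control table, not the job code, defines targets and keys, so one job can upsert many tables while Iceberg snapshots keep the historical versions queryable.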