PamTen Inc

Sr Data Engineer with AWS

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr Data Engineer with AWS, offered as a hybrid position in Newark, NJ. Contract length and pay rate are unspecified. Requires 3+ years of AWS data engineering experience and strong Python, SQL, and ETL skills; AWS certifications are preferred.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
April 11, 2026
🕒 - Duration
Unknown
🏝️ - Location
Hybrid
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Newark, NJ
🧠 - Skills detailed
#Data Engineering #AWS Lambda #Programming #ETL (Extract, Transform, Load) #Python #Aurora #DynamoDB #Data Warehouse #RDS (Amazon Relational Database Service) #Data Ingestion #S3 (Amazon Simple Storage Service) #Data Modeling #Athena #Data Quality #Data Governance #Storage #BI (Business Intelligence) #Lambda (AWS Lambda) #AWS (Amazon Web Services) #DevOps #Compliance #Spark (Apache Spark) #Code Reviews #Scala #Shell Scripting #IAM (Identity and Access Management) #Scripting #Kafka (Apache Kafka) #Databases #Data Pipeline #Data Lake #API (Application Programming Interface) #SQS (Simple Queue Service) #Unit Testing #Cloud #Data Science #Redshift #Elasticsearch #Terraform #Security #SQL (Structured Query Language) #Big Data
Role description
Are you passionate about building scalable data solutions and working with modern cloud technologies? This could be your next big move! We are seeking a skilled and motivated Sr Data Engineer with AWS to design, build, and maintain scalable data pipelines and data solutions that support analytics and business intelligence needs.

Role: Sr Data Engineer (AWS)
Location: Newark, NJ, Hybrid (onsite presence required)
Note: We are not sponsoring visas at this time.

Job Summary:
We are seeking a talented AWS Data Engineer to join our dynamic Data Engineering team. The ideal candidate will design, develop, and maintain scalable data pipelines and architectures in the AWS cloud environment, collaborating closely with data scientists, analysts, and other business stakeholders to deliver robust data solutions.

Key Responsibilities:
• Design, build, and maintain efficient, reusable, and reliable architecture and code for data pipelines and data applications on AWS.
• Build robust data ingestion pipelines (from on-prem to AWS and within AWS) using AWS services such as Glue, Redshift, S3, Lambda, EMR/Spark, Kinesis, and SQS.
• Develop and manage ETL/ELT processes to collect, process, and store data from multiple sources, ensuring data quality, integrity, and security.
• Architect and implement end-to-end data solutions (ingestion, storage, integration, processing, access) on AWS, with a focus on data lakes and data warehouses.
• Participate in architecture and system design discussions for high-scale data engineering projects.
• Independently perform hands-on development and unit testing, and participate in code reviews to ensure adherence to best practices.
• Implement serverless applications using AWS Lambda, API Gateway, Step Functions, and other AWS technologies (a sketch of this kind of work follows the required qualifications below).
• Migrate data from traditional relational databases, file systems, and APIs to AWS-based data lakes (S3), RDS, Aurora, and Redshift.
• Implement high-velocity streaming solutions using Amazon Kinesis, SQS, and Kafka (preferred).
• Architect and implement CI/CD strategies for enterprise data platforms.
• Collaborate with product, operations, QA, and cross-functional teams throughout the software development cycle.
• Stay abreast of new technology developments, implement POCs for new tools and technologies, and onboard them for real-world use cases.
• Identify and resolve performance issues and continuously optimize for cost, reliability, and scalability.

Required Qualifications:
• 3+ years of experience implementing and supporting data lakes, data warehouses, and data applications on AWS for large enterprises.
• Strong programming experience with Python, shell scripting, and SQL.
• Solid experience with AWS services: CloudFormation, S3, Athena, Glue, EMR/Spark, RDS, Redshift, DynamoDB, Lambda, Step Functions, IAM, KMS, Secrets Manager.
• Experience with serverless application development and data pipeline orchestration.
• Experience in system analysis, design, development, and implementation of data ingestion pipelines in AWS.
• Knowledge of ETL/ELT, data modeling, and big data technologies.
• Familiarity with data warehousing concepts and cloud-based architecture.
• Strong problem-solving skills and attention to detail.
• Excellent communication and teamwork abilities.
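To give candidates a concrete sense of the serverless ingestion work described above, here is a minimal, illustrative sketch: an AWS Lambda handler that validates an incoming batch of records and lands it in an S3 data lake as date-partitioned JSON. The bucket name, event shape, and partitioning scheme are hypothetical placeholders, not specifics of this role.

```python
# Minimal sketch of a serverless ingestion step: an AWS Lambda handler
# that lands incoming records in S3 as date-partitioned JSON.
# LANDING_BUCKET and the event shape are hypothetical placeholders.
import datetime
import json
import uuid

import boto3

s3 = boto3.client("s3")

LANDING_BUCKET = "example-data-lake-landing"  # hypothetical bucket name


def handler(event, context):
    """Entry point invoked by, e.g., API Gateway or an SQS trigger."""
    records = event.get("records", [])
    if not records:
        return {"statusCode": 400, "body": "no records supplied"}

    # Partition by ingestion date so downstream Glue/Athena tables can
    # prune partitions instead of scanning the whole prefix.
    today = datetime.date.today().isoformat()
    key = f"raw/ingest_date={today}/{uuid.uuid4()}.json"

    s3.put_object(
        Bucket=LANDING_BUCKET,
        Key=key,
        Body=json.dumps(records).encode("utf-8"),
        ContentType="application/json",
    )
    return {
        "statusCode": 200,
        "body": json.dumps({"written": len(records), "key": key}),
    }
```

In practice a handler like this would also carry schema validation and dead-letter handling; the sketch shows only the core write path.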
Preferred Qualifications:
• Experience with additional AWS services: API Gateway, Elasticsearch, SQS.
• Experience with infrastructure-as-code tools (e.g., Terraform, CloudFormation).
• Experience with DevOps practices and CI/CD pipelines.
• Experience implementing end-to-end streaming solutions (Amazon Kinesis, SQS, Kafka); see the streaming sketch below.
• AWS Solutions Architect or AWS Developer certification.
• Understanding of Lakehouse/data cloud architecture.
• Knowledge of data governance and compliance standards.
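For a flavor of the high-velocity streaming work mentioned in the responsibilities and preferred qualifications, here is a minimal illustrative Python sketch of a producer pushing events onto an Amazon Kinesis data stream via boto3. The stream name and event payload are hypothetical; a production producer would batch records (put_records) and handle retries and throttling.

```python
# Minimal sketch of a Kinesis producer for a high-velocity event stream.
# STREAM_NAME and the event payload are hypothetical placeholders.
import json
import time

import boto3

kinesis = boto3.client("kinesis")

STREAM_NAME = "example-clickstream"  # hypothetical stream name


def publish_event(event: dict) -> None:
    """Send one event to Kinesis.

    The partition key determines which shard receives the record, so a
    high-cardinality key (here, user_id) spreads load across shards
    while preserving per-user ordering.
    """
    kinesis.put_record(
        StreamName=STREAM_NAME,
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=str(event.get("user_id", "anonymous")),
    )


if __name__ == "__main__":
    publish_event({"user_id": 42, "action": "page_view", "ts": time.time()})
```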