

PeakIT
AWS Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Data Engineer, remote for 12+ months, offering competitive pay. Requires 3–8+ years in data engineering, expertise in AWS data services (S3, Glue, Redshift), SQL proficiency, and experience with orchestration tools like Airflow. AWS certifications preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 21, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Databases #Data Warehouse #Datasets #Scala #IAM (Identity and Access Management) #Amazon EMR (Amazon Elastic MapReduce) #Amazon Redshift #Hadoop #Airflow #AWS IAM (AWS Identity and Access Management) #Cloud #Redshift #Data Modeling #Database Replication #Compliance #Spark (Apache Spark) #Batch #Data Lake #AWS Glue #Data Integration #Lambda (AWS Lambda) #Security #Data Quality #Database Migration #JSON (JavaScript Object Notation) #DMS (Data Migration Service) #S3 (Amazon Simple Storage Service) #Amazon QuickSight #AWS DMS (AWS Database Migration Service) #Data Science #BI (Business Intelligence) #SQL (Structured Query Language) #Storage #Data Governance #Data Access #SaaS (Software as a Service) #Replication #Apache Airflow #Data Ingestion #AWS Lambda #Migration #Data Pipeline #DevOps #Observability #Deployment #EDW (Enterprise Data Warehouse) #Amazon CloudWatch #OpenSearch #DynamoDB #"ETL (Extract #Transform #Load)" #Data Catalog #AWS (Amazon Web Services) #Data Processing #Athena #Data Engineering
Role description
We are looking for an AWS Data Engineer for our client. Please review the details below and let me know if you are interested.
Job Title: AWS Data Engineer
Location: Remote – EST time zone support
Duration: 12+ months
The ideal candidate will have hands-on expertise in AWS-native data services, data lake architecture, ETL/ELT development, orchestration frameworks, and data governance best practices.
Key Responsibilities
1. Data Ingestion & Integration
• Design and implement real-time and batch data ingestion pipelines using:
• Amazon Kinesis (Data Streams and Data Firehose)
• AWS Database Migration Service (DMS) for CDC and database replication
• AWS AppFlow for SaaS data integration
• Ingest structured and unstructured data from applications, databases, SaaS platforms, and event streams.
• Ensure data quality, reliability, and performance of ingestion frameworks.
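To illustrate one reliability concern in the ingestion work above: Kinesis `PutRecords` accepts at most 500 records per request, 1 MiB per record, and 5 MiB per request, so a producer has to batch accordingly. A minimal sketch of that batching logic (the record shape is hypothetical; the actual send would go through a Kinesis client such as boto3's `put_records`):

```python
import json

# Kinesis PutRecords service limits: at most 500 records per request
# and 5 MiB of payload per request (1 MiB per individual record).
MAX_RECORDS_PER_BATCH = 500
MAX_BATCH_BYTES = 5 * 1024 * 1024

def batch_records(records):
    """Group serialized records into PutRecords-sized batches."""
    batches, current, current_bytes = [], [], 0
    for record in records:
        payload = json.dumps(record).encode("utf-8")
        # Flush the current batch if adding this record would exceed
        # either the record-count or the total-bytes limit.
        if current and (len(current) >= MAX_RECORDS_PER_BATCH
                        or current_bytes + len(payload) > MAX_BATCH_BYTES):
            batches.append(current)
            current, current_bytes = [], 0
        current.append(payload)
        current_bytes += len(payload)
    if current:
        batches.append(current)
    return batches
```

Each batch can then be sent in one API call, with failed records from the response retried individually.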
2. Data Lake & Storage Architecture
• Build and manage scalable data lake architectures using Amazon S3 as the core storage layer.
• Design optimized storage formats (Parquet, ORC, Avro, JSON, CSV) for performance and cost efficiency.
• Implement enterprise data warehouse solutions using Amazon Redshift, including Redshift Serverless where appropriate.
• Support operational and low-latency use cases leveraging Amazon DynamoDB.
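A common layout choice for the S3 data lake described above is Hive-style partitioning, where key prefixes encode partition columns that Glue crawlers and Athena recognize automatically. A small sketch (bucket and table names are hypothetical):

```python
def partition_path(bucket, table, **partitions):
    """Build a Hive-style S3 key prefix, e.g. year=2026/month=02/,
    so query engines can prune partitions by prefix."""
    parts = "/".join(f"{key}={value}" for key, value in partitions.items())
    return f"s3://{bucket}/{table}/{parts}/"
```

Writing Parquet files under prefixes like this lets Athena scan only the partitions a query actually filters on, which directly drives the cost efficiency mentioned above.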
3. Data Processing & Transformation
• Develop scalable ETL/ELT pipelines using AWS Glue (Glue Jobs, Crawlers, Data Catalog).
• Design and manage distributed processing workloads on Amazon EMR for complex Spark/Hadoop-based transformations.
• Implement lightweight, event-driven transformations using AWS Lambda.
• Optimize job performance, cost efficiency, and pipeline reliability.
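The "lightweight, event-driven transformations" bullet typically means small Lambda-style handlers that clean a payload in flight. A minimal sketch in that style (the field names and quality rule are hypothetical):

```python
import json

def handler(event, context=None):
    """Lambda-style event handler: parse an incoming JSON payload,
    normalize field names, and drop records that fail a basic
    quality check (here, a missing user_id)."""
    records = json.loads(event["body"])
    cleaned = [
        {"user_id": r["user_id"], "amount_usd": round(float(r["amount"]), 2)}
        for r in records
        if r.get("user_id")  # data-quality filter
    ]
    return {"statusCode": 200, "body": json.dumps(cleaned)}
```

Heavier transformations with joins or large shuffles belong on Glue or EMR instead; Lambda fits per-event cleanup within its runtime and memory limits.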
4. Analytics & Query Enablement
• Enable serverless analytics using Amazon Athena integrated with the Glue Data Catalog.
• Implement hybrid data lake/warehouse architectures using Amazon Redshift Spectrum.
• Support log analytics and search use cases using Amazon OpenSearch Service.
• Deliver optimized datasets to BI platforms and analytics teams.
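As a sketch of the Athena enablement above: queries against a partitioned Glue table should filter on the partition columns so only the matching S3 prefixes are scanned. The database, table, and column names here are hypothetical; in practice the query string would be submitted via the Athena `StartQueryExecution` API:

```python
def athena_query(database, table, year, month):
    """Compose an Athena SQL query that prunes by Hive-style
    partition columns (year, month) to limit data scanned."""
    return (
        f'SELECT user_id, SUM(amount_usd) AS total '
        f'FROM "{database}"."{table}" '
        f"WHERE year = '{year}' AND month = '{month}' "
        f"GROUP BY user_id"
    )
```

Since Athena bills by bytes scanned, partition pruning like this is usually the single biggest cost lever.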
5. Orchestration & Workflow Management
• Develop and manage data workflows using:
• AWS Step Functions
• Amazon Managed Workflows for Apache Airflow (MWAA)
• Build DAG-based pipeline orchestration for complex, multi-stage data processing environments.
• Ensure fault tolerance, retry logic, dependency management, and operational visibility.
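The fault-tolerance and retry requirements above are things Step Functions and Airflow provide declaratively; the underlying pattern is exponential backoff, sketched here in plain Python (the injectable `sleep` is just for testability):

```python
import time

def run_with_retries(task, max_attempts=3, base_delay=1.0, sleep=time.sleep):
    """Retry a flaky pipeline task with exponential backoff:
    wait base_delay, then 2x, 4x, ... between attempts."""
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception:
            if attempt == max_attempts:
                raise  # exhausted: surface the failure to the orchestrator
            sleep(base_delay * 2 ** (attempt - 1))
```

In MWAA the equivalent is the `retries` and `retry_exponential_backoff` task arguments; in Step Functions it is a `Retry` block with `BackoffRate`.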
6. Data Governance, Security & Observability
• Implement fine-grained data access controls using AWS Lake Formation.
• Design secure IAM role structures using AWS Identity and Access Management (IAM).
• Monitor pipelines and system health using Amazon CloudWatch.
• Ensure compliance with enterprise security, audit, and data governance policies.
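As one concrete shape of the IAM work above, a least-privilege policy for read-only access to a single data-lake prefix looks roughly like this (bucket and prefix names are hypothetical):

```python
def s3_read_policy(bucket, prefix):
    """Sketch of a least-privilege IAM policy document: GetObject on
    one prefix, plus ListBucket restricted to that prefix."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                # Object-level permissions attach to object ARNs.
                "Resource": f"arn:aws:s3:::{bucket}/{prefix}/*",
            },
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                # ListBucket attaches to the bucket ARN, scoped by prefix.
                "Resource": f"arn:aws:s3:::{bucket}",
                "Condition": {"StringLike": {"s3:prefix": [f"{prefix}/*"]}},
            },
        ],
    }
```

Lake Formation can layer column- and row-level permissions on top of this for Athena, Redshift Spectrum, and Glue consumers.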
7. Business Intelligence & Data Consumption
• Deliver curated datasets to analytics stakeholders using Amazon QuickSight.
• Optimize data models for reporting, dashboards, and advanced analytics.
• Partner with data scientists, analysts, and business teams to support analytical use cases.
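"Curated datasets" in the BI bullets above usually means pre-aggregated tables shaped for a dashboard rather than raw events. A toy sketch of that curation step (the row schema is hypothetical):

```python
from collections import defaultdict

def curate_for_dashboard(rows):
    """Aggregate raw rows into a pre-summarized dataset a QuickSight
    dashboard could read directly: one row per (region, month)."""
    totals = defaultdict(float)
    for row in rows:
        totals[(row["region"], row["month"])] += row["amount_usd"]
    return [
        {"region": region, "month": month, "total_usd": round(total, 2)}
        for (region, month), total in sorted(totals.items())
    ]
```

At scale this aggregation would run in Redshift or Glue on a schedule, with the output registered as a QuickSight dataset or SPICE import.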
Required Qualifications
• 3–8+ years of experience in data engineering or cloud data platform development.
• Strong hands-on experience with AWS data services, particularly S3, Glue, Redshift, Athena, and Kinesis.
• Experience designing scalable batch and/or streaming data pipelines.
• Proficiency in SQL and distributed processing frameworks (e.g., Spark).
• Experience with workflow orchestration tools such as Airflow (MWAA) or Step Functions.
• Strong understanding of data modeling, partitioning strategies, and performance optimization.
• Knowledge of IAM-based security and data governance controls.
Preferred Qualifications
• AWS certifications (e.g., AWS Certified Data Analytics – Specialty or Solutions Architect).
• Experience supporting real-time streaming architectures.
• Experience building enterprise-scale data lakes and hybrid warehouse architectures.
• Familiarity with DevOps and CI/CD practices for infrastructure-as-code deployments.