

Raas Infotek
AWS Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Data Engineer with a contract length of "unknown" and a pay rate of "unknown." Required skills include AWS, Glue, Python, and SQL. Candidates must have 13+ years of experience and hold USC/Green Card status.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: February 26, 2026
Duration: Unknown
Location: Remote
Contract: Unknown
Security: Unknown
Location detailed: United States
Skills detailed: #Data Engineering #Data Ingestion #SQS (Simple Queue Service) #Storage #NumPy #Databases #Cloud #Data Storage #PySpark #Pandas #Python #Data Pipeline #Spark (Apache Spark) #SQL (Structured Query Language) #MongoDB #Data Architecture #Data Lake #NoSQL #Kafka (Apache Kafka) #Libraries #DynamoDB #AWS (Amazon Web Services) #Data Access #Scala #SQL Server #SNS (Simple Notification Service) #ETL (Extract, Transform, Load) #Datasets
Role description
Position: AWS Data Engineer
Location: USA (Remote)
Contract
Implementation partner
Visa: USC / Green Card; 13+ years of experience
Job description:
Please share with your submission.
Primary Skills
AWS
Glue
SNS/SQS
Python
PySpark
Data Lake
CloudWatch
CloudTrail
DB Design
SQL
Detailed job description:
• Lead the team technically to complete milestones on time.
• Understand the complete requirements, create the architecture, and keep all stakeholders updated.
• Create POCs.
• Manage delivery/releases to the customer.
• Develop services to enable data ingestion from, and synchronization with, systems that expose the required data access mechanisms, ensuring near-real-time updates.
• Ingest data from multiple sources using Python and other ETL tools.
• Design and implement an event-driven architecture using AWS EventBridge, Kafka, or SNS/SQS for real-time data streaming.
• Design, implement, and maintain scalable data pipelines that integrate both on-prem and AWS cloud environments.
• Develop efficient Python scripts and applications using libraries such as pandas and NumPy to handle and process large datasets.
• Work with various NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB) to support high-performance data storage and retrieval.
• Develop and deploy applications in a cloud-native architecture, leveraging modern cloud technologies for scalability and resilience.
• Continuously monitor data workflows and systems, troubleshoot issues, and optimize performance for reliability and scalability.
• Transition the existing pipeline to Microsoft SQL Server.
• Collaborate with the business application owner on the existing data architecture, including data ingestion, data pipelines, business logic, data consumption patterns, and analytics requirements.
• Design and document the target data architecture, pipelines, processing, and analytics architecture.
• Identify opportunities for optimization and consolidation.
• Collaborate with the data team on decomposition of business logic and data transformation patterns.
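To illustrate the kind of work described above, here is a minimal, hedged sketch of an event-driven ingestion step in Python. An in-memory `queue.Queue` stands in for SQS (in a real deployment, messages would arrive via boto3's SQS client after SNS fan-out), and pandas performs a simple normalization; the field names (`Qty`, `Price`) and the derived `total` column are illustrative assumptions, not part of the posting.

```python
import json
import queue

import pandas as pd


def publish_event(q: queue.Queue, record: dict) -> None:
    """Producer side: serialize a record and enqueue it (SNS -> SQS stand-in)."""
    q.put(json.dumps(record))


def drain_and_transform(q: queue.Queue) -> pd.DataFrame:
    """Consumer side: drain pending messages and normalize them with pandas."""
    rows = []
    while not q.empty():
        rows.append(json.loads(q.get()))
    df = pd.DataFrame(rows)
    # Example transformations: standardize column names, derive a total.
    df.columns = [c.lower() for c in df.columns]
    df["total"] = df["qty"] * df["price"]
    return df


# Simulate two upstream events, then drain and transform them.
events = queue.Queue()
publish_event(events, {"Qty": 2, "Price": 9.5})
publish_event(events, {"Qty": 1, "Price": 4.0})
frame = drain_and_transform(events)
print(frame["total"].tolist())  # [19.0, 4.0]
```

In production, the drain loop would typically be replaced by a long-polling SQS consumer (or a Lambda trigger), with the pandas step writing to the data lake.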
Ritesh Rawat
Raas Infotek Corporation
262 Chapman Road, Suite 105A, Newark, DE 19702
Phone: 302-286-9831 Ext. 142
Email: ritesh.rawat@raasinfotek.com
Website: raasinfotek.com
