

Sibitalent Corp
Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 4+ years of experience, focusing on AWS Redshift and EMR. It is a remote, W2 position requiring strong skills in Python, ETL processes, and Agile methodologies. Contract length and pay rate are unspecified.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: November 12, 2025
Duration: Unknown
Location: Remote
Contract: W2 Contractor
Security: Unknown
Location detailed: New York, United States
Skills detailed: ETL (Extract, Transform, Load), AWS (Amazon Web Services), AWS Lambda, Redshift, DynamoDB, Data Lake, Data Ingestion, Data Pipeline, Data Processing, Data Storage, Data Integrity, Data Management, Data Engineering, Databases, Storage, Python, Agile, Scrum, Jira
Role description
Role - Data Engineering expertise in AWS (Data Lake).
It's remote.
No C2C; W2 only.
Overview:
Seeking a skilled Data Engineer to join a dynamic team responsible for building and maintaining a Data Lake. The team leverages AWS Redshift to consume and store data from various sources, making it available to both internal and external stakeholders. The core of the work revolves around establishing system-to-system connections via ETL processes and transforming data files for integration into the Data Lake. The primary database is Redshift, supported by DynamoDB for configuration management and EMR as the processing framework. Postgres is also in use, and Python scripts along with Lambda functions are the key tools for data ingestion and transformation. The team operates in an Agile environment, following Scrum methodology with two-week sprints. This role requires a comprehensive understanding of the full technology stack, as team members are expected to contribute across all areas of the data management ecosystem.
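As a rough illustration of the ingestion pattern described above, here is a minimal sketch of a Lambda handler that loads a newly landed S3 file into Redshift via the Redshift Data API. The cluster, database, table, secret, and IAM role names are hypothetical; the posting does not specify them.

```python
import urllib.parse

import boto3

# Hypothetical names -- the posting does not identify the actual cluster,
# database, staging table, secret, or IAM role.
CLUSTER_ID = "example-datalake-cluster"
DATABASE = "datalake"
SECRET_ARN = "arn:aws:secretsmanager:us-east-1:123456789012:secret:redshift-example"
STAGING_TABLE = "staging.raw_events"
COPY_ROLE_ARN = "arn:aws:iam::123456789012:role/example-redshift-copy"

redshift_data = boto3.client("redshift-data")


def lambda_handler(event, context):
    """Triggered by an S3 PUT event; issues a COPY of the new file into Redshift."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["object"]["key"])

    copy_sql = (
        f"COPY {STAGING_TABLE} "
        f"FROM 's3://{bucket}/{key}' "
        f"IAM_ROLE '{COPY_ROLE_ARN}' "
        "FORMAT AS JSON 'auto'"
    )

    # The Redshift Data API runs statements asynchronously, which fits
    # short-lived Lambda invocations.
    response = redshift_data.execute_statement(
        ClusterIdentifier=CLUSTER_ID,
        Database=DATABASE,
        SecretArn=SECRET_ARN,
        Sql=copy_sql,
    )
    return {"statementId": response["Id"], "loaded": f"s3://{bucket}/{key}"}
```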
Responsibilities:
• Develop and maintain ETL processes to load data into the Data Lake using Python, Lambda functions, and other AWS services
• Design and implement data ingestion pipelines that handle various file types and formats, ensuring they are parsed and prepared for consumption
• Work with AWS Redshift to manage data storage and retrieval, ensuring optimal performance and data integrity
• Utilize EMR for data processing tasks and DynamoDB for configuration management (see the configuration-lookup sketch after this list)
• Collaborate with team members in a full-stack environment, contributing to all aspects of data engineering from ingestion to storage
• Participate in Agile ceremonies, contributing to sprint planning, daily stand-ups, and retrospective meetings
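A minimal sketch of what the DynamoDB configuration-management piece might look like in practice; the table name and attribute layout are assumptions, not details from the posting.

```python
import boto3

# Hypothetical table holding per-source pipeline settings; the posting
# does not describe the actual schema.
dynamodb = boto3.resource("dynamodb")
config_table = dynamodb.Table("pipeline_config")


def get_source_config(source_name: str) -> dict:
    """Fetch ingestion settings (file format, target table, etc.) for one source."""
    response = config_table.get_item(Key={"source_name": source_name})
    item = response.get("Item")
    if item is None:
        raise KeyError(f"No pipeline configuration found for source {source_name!r}")
    return item


# Example usage: a Lambda or EMR job would look up its settings at startup.
# cfg = get_source_config("vendor_feed_a")
# target_table, file_format = cfg["target_table"], cfg["file_format"]
```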
Required Skills:
• 4+ years of experience in data engineering
• 2+ years of experience working with AWS Redshift and EMR
• Strong experience with AWS Lambda and Python for building data pipelines
• Experience with Postgres databases
• Knowledge of DynamoDB for configuration management
• Strong understanding of ETL processes and data lake architecture
• Familiarity with Agile/Scrum methodologies, particularly in a JIRA-managed environment






