

Crea services LLC
ETL Developer
Featured Role | Apply direct with Data Freelance Hub
This role is for an ETL Developer; the contract length and pay rate are unspecified. Key skills include Ab Initio, Kafka, Python, and AWS services (S3, Lambda, Glue). A bachelor's degree in a related field is required.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: October 3, 2025
Duration: Unknown
Location: Unknown
Contract: Unknown
Security: Unknown
Location detailed: Chicago, IL
Skills detailed: #Data Processing #Computer Science #ETL (Extract, Transform, Load) #AWS Glue #Kafka (Apache Kafka) #AWS S3 (Amazon Simple Storage Service) #Data Manipulation #Lambda (AWS Lambda) #Python #AWS (Amazon Web Services) #S3 (Amazon Simple Storage Service) #Data Architecture #Data Integration #Scripting #AWS Lambda #Ab Initio
Role description
Key Responsibilities:
• Design, develop, and maintain data processing pipelines using Ab Initio to handle large volumes of data.
• Integrate and process data from Kafka queues and APIs, ensuring efficient data flow and real-time processing (a minimal sketch follows this list).
• Develop and implement data processing solutions in AWS, leveraging services such as AWS S3, AWS Lambda, and AWS Glue.
• Write Python scripts to support data transformation and manipulation tasks.
• Collaborate with data architects, analysts, and other stakeholders to understand data requirements and deliver solutions that meet business needs.
• Monitor and optimize data processing workflows for performance and reliability.
• Troubleshoot and resolve issues related to data processing and integration.
• Document processes, designs, and code to ensure maintainability and knowledge sharing.
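
The Kafka and AWS responsibilities above combine naturally into one flow. Below is a minimal sketch, assuming hypothetical topic, broker, and bucket names, of consuming JSON records from Kafka, applying a toy Python transformation, and landing the results in S3 with kafka-python and boto3. Ab Initio has no public Python API, so plain Python stands in for the transform step here.

```python
import json

import boto3
from kafka import KafkaConsumer  # kafka-python

# Hypothetical names for illustration only; not details from this posting.
TOPIC = "orders-raw"
BROKERS = ["localhost:9092"]
BUCKET = "example-etl-landing"


def transform(record: dict) -> dict:
    """Toy transformation step: drop null fields from one record."""
    return {k: v for k, v in record.items() if v is not None}


def run() -> None:
    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers=BROKERS,
        group_id="etl-demo",
        auto_offset_reset="earliest",
        enable_auto_commit=True,
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    )
    s3 = boto3.client("s3")
    for message in consumer:
        cleaned = transform(message.value)
        # One S3 object per message keeps the demo simple; a real pipeline
        # would batch records before writing.
        key = f"raw/{message.topic}/{message.partition}-{message.offset}.json"
        s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(cleaned).encode("utf-8"))


if __name__ == "__main__":
    run()
```

A production pipeline would batch writes, handle deserialization errors, and commit offsets only after a successful S3 put; this sketch keeps one object per message for clarity.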
Qualifications:
• Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
• Proven experience with Ab Initio, including developing and deploying data processing workflows.
• Strong experience with Kafka for data streaming and real-time data processing.
• Proficiency in Python for scripting and data manipulation.
• Hands-on experience with AWS services such as S3, Lambda, Glue, and related tools (see the Lambda sketch after this list).
• Familiarity with data integration and ETL processes.
• Excellent problem-solving skills and attention to detail.
• Strong communication skills and the ability to work collaboratively in a team environment.
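
As referenced in the AWS qualification above, here is a minimal sketch of an S3-triggered AWS Lambda handler of the kind this role involves. The raw/ and processed/ prefixes and the one-JSON-object-per-file layout are assumptions for illustration; heavier transforms would typically run in AWS Glue instead.

```python
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")


def handler(event, context):
    """Entry point for an S3 put-event trigger (assumed bucket layout)."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # S3 event notifications deliver URL-encoded object keys.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        data = json.loads(body)
        # Toy transformation: drop null fields and lowercase the keys.
        cleaned = {k.lower(): v for k, v in data.items() if v is not None}
        out_key = key.replace("raw/", "processed/", 1)
        s3.put_object(
            Bucket=bucket,
            Key=out_key,
            Body=json.dumps(cleaned).encode("utf-8"),
            ContentType="application/json",
        )
```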