

Net2Source Inc.
Talend Architect
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Talend Architect in Reading, PA, with a contract length of "unknown" and a pay rate of "unknown." Key skills include Talend Cloud (8.0), PySpark, AWS services, and experience in ETL/ELT workflows and data pipeline development.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 21, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Reading, PA
-
🧠 - Skills detailed
#Cloud #Data Pipeline #Spark (Apache Spark) #AWS S3 (Amazon Simple Storage Service) #ETL (Extract, Transform, Load) #Lambda (AWS Lambda) #Athena #Data Engineering #PySpark #Datasets #AWS (Amazon Web Services) #Data Processing #S3 (Amazon Simple Storage Service) #SQL (Structured Query Language) #Talend #Data Quality #Spark SQL #Scala #Data Lake
Role description
Title: Data Engineer
Location: Reading, PA (Onsite)
The Role
As a Senior Data Engineer, you will design, build, and maintain end-to-end data pipelines leveraging Talend Cloud (8.0), PySpark, and AWS services. You will play a key role in ingesting, transforming, and optimizing large-scale structured and unstructured datasets while ensuring scalability, performance, and data quality across the platform.
Key responsibilities include:
• Designing and developing ETL/ELT workflows using Talend 8.0 on Cloud
• Integrating data from APIs, flat files, and streaming sources
• Ingesting and managing data in AWS S3–based data lakes
• Developing PySpark jobs for large-scale data processing and transformations
• Implementing Spark SQL for complex transformations and schema management
• Building and supporting cloud-native data pipelines using AWS services such as Glue, Lambda, Athena, and EMR
• Applying performance tuning and optimization techniques for Spark workloads