

Talend Developer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Talend Developer in Newport, OH (hybrid, 12 months). It requires 3–6 years of experience in data ingestion and proficiency in Apache NiFi, Talend, SQL, and cloud platforms (AWS, Azure). A Bachelor's degree in Computer Science or a related field is required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
June 17, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Hybrid
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Newport, OH
🧠 - Skills detailed
#Talend #Databases #Data Management #Cloud #Azure Data Factory #Synapse #StreamSets #Redshift #Version Control #AWS Glue #Scripting #Kafka (Apache Kafka) #Lambda (AWS Lambda) #Data Architecture #Delta Lake #Data Extraction #Snowflake #Dataflow #Data Governance #Apache NiFi #Kubernetes #SQL (Structured Query Language) #XML (eXtensible Markup Language) #Scala #Python #AWS (Amazon Web Services) #Data Quality #Docker #JSON (JavaScript Object Notation) #BigQuery #Spark (Apache Spark) #Metadata #Data Mapping #GCP (Google Cloud Platform) #GIT #Informatica #Data Ingestion #Azure #Java #Programming #Data Lake #ETL (Extract, Transform, Load) #NiFi (Apache NiFi) #Data Engineering #Data Warehouse #Computer Science #ADF (Azure Data Factory) #Data Integrity #Airflow
Role description
Role: Talend Developer
Work location: Newport, OH (hybrid, 2–3 days/week in office)
Duration: 12 months
We are looking for a skilled Data Ingestion Developer to join our data engineering team. The ideal candidate will be responsible for building and maintaining scalable, efficient data ingestion pipelines that enable seamless data flow from various sources into our data platform. You will work closely with data engineers, architects, and business stakeholders to support data-driven initiatives and ensure data quality, performance, and reliability.
Key Responsibilities:
• Design, develop, and manage data ingestion pipelines from structured and unstructured data sources (APIs, databases, files, streaming platforms, etc.).
• Develop ETL/ELT processes to transform and load data into data warehouses or data lakes.
• Integrate data from cloud and on-premises sources using tools like Apache NiFi, Kafka, AWS Glue, Azure Data Factory, or similar.
• Ensure high performance, availability, and data integrity of the ingestion workflows.
• Collaborate with data architects and analysts to understand data requirements and ensure proper data mapping and transformations.
• Monitor and troubleshoot data ingestion failures and performance issues.
• Implement data quality checks, validation, and error handling mechanisms.
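To give a flavor of the pipeline work listed above, here is a minimal Python sketch of one ingestion step that pulls records from a JSON API, applies a basic data quality check, and sets invalid records aside for review. The endpoint URL, field names, and validation rules are hypothetical placeholders, not details of this project's actual stack.

```python
import logging
from typing import Iterable

import requests

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ingestion")

# Hypothetical source endpoint and required fields -- placeholders only.
SOURCE_URL = "https://example.com/api/orders"
REQUIRED_FIELDS = ("order_id", "amount", "created_at")


def extract(url: str) -> Iterable[dict]:
    """Pull a batch of records from a JSON API source."""
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.json()


def validate(record: dict) -> bool:
    """Basic data quality check: required fields present, amount is a non-negative number."""
    if not all(field in record for field in REQUIRED_FIELDS):
        return False
    return isinstance(record["amount"], (int, float)) and record["amount"] >= 0


def ingest(url: str) -> None:
    good, bad = [], []
    try:
        for record in extract(url):
            (good if validate(record) else bad).append(record)
    except requests.RequestException as exc:
        log.error("Extraction failed: %s", exc)
        raise
    log.info("Accepted %d valid records, rejected %d", len(good), len(bad))
    # In a real pipeline, `good` would be loaded into the warehouse/lake
    # and `bad` routed to a dead-letter table for review.


if __name__ == "__main__":
    ingest(SOURCE_URL)
```

In practice a step like this would live inside a NiFi, Talend, or Glue flow rather than a standalone script, but the extract–validate–route shape is the same.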
Required Skills and Qualifications:
• Bachelor's degree in Computer Science, Information Systems, or a related field.
• 3–6 years of experience in data ingestion or data engineering roles.
• Hands-on experience with ingestion tools like Apache NiFi, Talend, Informatica, StreamSets, or custom ingestion frameworks.
• Experience with cloud platforms such as AWS (Glue, Kinesis, Lambda), Azure (ADF, Synapse), or GCP (Dataflow, Pub/Sub).
• Strong programming/scripting skills in Python, Java, or Scala.
• Proficient in writing SQL for data extraction and transformation.
• Experience with data formats like JSON, Parquet, Avro, XML, CSV.
• Familiarity with version control systems (e.g., Git) and CI/CD pipelines.
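As a small illustration of the format-handling skills above, the sketch below reads newline-delimited JSON and writes it out as Parquet using pyarrow. The file names and the assumption of a flat, schema-consistent record set are illustrative only.

```python
import json
from pathlib import Path

import pyarrow as pa
import pyarrow.parquet as pq

# Illustrative input/output paths (placeholder names).
SOURCE = Path("orders.jsonl")
TARGET = Path("orders.parquet")


def jsonl_to_parquet(source: Path, target: Path) -> None:
    """Read newline-delimited JSON records and write them as a columnar Parquet file."""
    records = [json.loads(line) for line in source.read_text().splitlines() if line.strip()]
    table = pa.Table.from_pylist(records)  # schema inferred from the records
    pq.write_table(table, target)          # columnar output for warehouse/lake loading


if __name__ == "__main__":
    jsonl_to_parquet(SOURCE, TARGET)
    print(pq.read_table(TARGET).schema)
```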
Preferred Qualifications:
• Experience with real-time data ingestion using Kafka, Spark Streaming, or Flink.
• Knowledge of data governance, cataloging, and metadata management.
• Familiarity with data lake and data warehouse technologies (e.g., Snowflake, Redshift, BigQuery, Delta Lake).
• Exposure to containerization (Docker, Kubernetes) and orchestration tools (Airflow, Prefect).
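For the orchestration tools mentioned above, the following is a bare-bones Airflow DAG sketch that schedules a single daily ingestion task. The DAG name and task body are hypothetical, and the parameter names assume Airflow 2.4 or newer.

```python
# Minimal Airflow DAG sketch (assumes Airflow 2.4+; parameter names differ in older releases).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_daily_batch(**context):
    """Placeholder task body -- a real task would call the ingestion job for the run date."""
    print(f"Ingesting batch for {context['ds']}")


with DAG(
    dag_id="daily_ingestion",      # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(
        task_id="ingest_daily_batch",
        python_callable=ingest_daily_batch,
    )
```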