

Net2Source Inc.
Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Lead Data Engineer with a contract length of "unknown" and a day rate of $600 USD, located in Reading, PA (Hybrid). Key skills include Apache Kafka, Apache Spark Structured Streaming, and AWS expertise. Experience with IoT platforms is preferred.
Country
United States
Currency
$ USD
Day rate
$600
Date
October 28, 2025
Duration
Unknown
Location
Hybrid
Contract
Unknown
Security
Unknown
Location detailed
Reading, PA
Skills detailed
#Programming #Cloud #Java #IoT (Internet of Things) #Data Lake #Lambda (AWS Lambda) #DevOps #PySpark #Scala #AWS Kinesis #S3 (Amazon Simple Storage Service) #Storage #Redshift #Data Processing #Spark (Apache Spark) #Data Architecture #Leadership #Debugging #Data Engineering #Python #Apache Spark #Data Ingestion #AWS (Amazon Web Services) #Kafka (Apache Kafka) #Apache Kafka
Role description
Job Title: Lead Data Engineer – Real-Time Streaming & Event-Driven Systems (prior work on similar real-time streaming projects is a plus)
Location: Reading, PA (Hybrid)
Role Overview:
We are looking for a seasoned Lead Data Engineer with deep hands-on expertise in designing and delivering event-driven architectures and real-time streaming systems. The ideal candidate will have extensive experience with Apache Kafka, Apache Spark Structured Streaming, Apache Flink, and messaging queues, and a strong background in building highly resilient IoT data platforms on AWS.
Key Responsibilities:
Architecture & Design
• Design event-driven systems using Kafka, Flink, and Spark Structured Streaming.
• Define data models, schemas, and integration patterns for IoT and telemetry data.
Technical Leadership
• Lead the technical direction of the data engineering team, ensuring best practices in streaming architecture and cloud-native design.
• Provide hands-on guidance in coding, debugging, and performance tuning of streaming applications.
• Collaborate with product, engineering, and DevOps teams to align data architecture with business needs.
Implementation & Delivery
• Build and deploy real-time data processing solutions using Apache Flink and Spark Structured Streaming.
• Integrate messaging systems (Kafka, Kinesis, RabbitMQ, etc.) with cloud-native services on AWS.
• Ensure high availability, scalability, and resilience of data platforms supporting IoT and telemetry use cases.
Innovation & Optimization
• Continuously evaluate and improve system performance, latency, and throughput.
• Explore emerging technologies in stream processing, edge computing, and cloud-native data platforms.
• Apply DevOps, CI/CD, and infrastructure-as-code practices.
Required Technical Skills:
• Mandatory Expertise:
• Apache Flink (real-time stream processing)
• Apache Spark Structured Streaming
• Apache Kafka or equivalent messaging queues (e.g., RabbitMQ, AWS Kinesis)
• Event-driven architecture design
• AWS services: S3, Lambda, Kinesis, EMR, Glue, Redshift
• Additional Skills:
• Strong programming skills in PySpark, Java, or Python
• Experience with containerization (e.g., OpenShift)
• Familiarity with IoT protocols and resilient data ingestion patterns
• Knowledge of data lake and lakehouse architectures (e.g., Apache Iceberg on S3 storage)
Preferred Qualifications:
• Experience in building large-scale IoT platforms or telemetry systems.
• AWS Certified Data Analytics or Solutions Architect certification.
