

Eton Solution
Staff Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Staff Data Engineer with 8+ years of experience, specializing in Flink SQL, to design and maintain real-time data pipelines. Contract length is unspecified, pay rate is competitive, and the work location is hybrid.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: November 18, 2025
Duration: Unknown
Location: Hybrid
Contract: Unknown
Security: Unknown
Location detailed: Bellevue, WA
Skills detailed: #Azure #Apache Kafka #Libraries #Migration #AWS Kinesis #SQL (Structured Query Language) #GCP (Google Cloud Platform) #Kubernetes #SQL Queries #AWS (Amazon Web Services) #Observability #Data Processing #"ETL (Extract, Transform, Load)" #Cloud #Kafka (Apache Kafka) #Scala #Docker #Deployment #JSON (JavaScript Object Notation) #Spark (Apache Spark) #Data Engineering #Data Pipeline
Role description
• Immigration sponsorship is not available for this role.
• We are looking for an experienced Data Engineer (8+ years of experience) with deep expertise in Flink SQL to join our engineering team. This role is ideal for someone who thrives on building robust real-time data processing pipelines and has hands-on experience designing and optimizing Flink SQL jobs in a production environment.
You'll work closely with data engineers, platform teams, and product stakeholders to create scalable, low-latency data solutions that power intelligent applications and dashboards.
⸻
Key Responsibilities:
• Design, develop, and maintain real-time streaming data pipelines using Apache Flink SQL (see the sketch after this list).
• Collaborate with platform engineers to scale and optimize Flink jobs for performance and reliability.
• Build reusable data transformation logic and deploy it to production-grade Flink clusters.
• Ensure high availability and correctness of real-time data pipelines.
• Work with product and analytics teams to understand requirements and translate them into Flink SQL jobs.
• Monitor and troubleshoot job failures, backpressure, and latency issues.
• Contribute to internal tooling and libraries that improve Flink developer productivity.
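To make these responsibilities concrete, here is a minimal Flink SQL sketch of such a pipeline. It is an illustration only, not part of the posting: the Kafka topics (page_views, views_per_minute), the broker address, and all field names are assumptions.

-- Hypothetical source: a Kafka topic of page-view events (all names are illustrative).
CREATE TABLE page_views (
  user_id    STRING,
  url        STRING,
  event_time TIMESTAMP(3),
  -- Event-time watermark: tolerate up to 5 seconds of out-of-order events.
  WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND
) WITH (
  'connector' = 'kafka',
  'topic' = 'page_views',
  'properties.bootstrap.servers' = 'localhost:9092',
  'scan.startup.mode' = 'latest-offset',
  'format' = 'json'
);

-- Hypothetical sink for per-minute view counts.
CREATE TABLE views_per_minute (
  window_start TIMESTAMP(3),
  url          STRING,
  view_count   BIGINT
) WITH (
  'connector' = 'kafka',
  'topic' = 'views_per_minute',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'json'
);

-- One-minute tumbling windows over event time; window-TVF aggregates
-- emit append-only results, so a plain Kafka sink works here.
INSERT INTO views_per_minute
SELECT window_start, url, COUNT(*) AS view_count
FROM TABLE(
  TUMBLE(TABLE page_views, DESCRIPTOR(event_time), INTERVAL '1' MINUTES))
GROUP BY window_start, window_end, url;

Running a job like this as a long-lived service on a Flink cluster, and watching it for backpressure and watermark lag, is the shape of the day-to-day work described above.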
Required Qualifications:
• Deep hands-on experience with Flink SQL and the Apache Flink ecosystem.
• Strong understanding of event-time vs. processing-time semantics, watermarks, and state management.
• 3+ years of experience in data engineering, with a strong focus on real-time/streaming data.
• Experience writing complex Flink SQL queries, UDFs, and windowing operations (illustrated in the sketch after this list).
• Proficiency in working with streaming data formats such as Avro, Protobuf, or JSON.
• Experience with messaging systems such as Apache Kafka or Pulsar.
• Familiarity with containerized deployments (Docker, Kubernetes) and CI/CD pipelines.
• Solid understanding of distributed system design and performance optimization.
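A second sketch, again with invented names, illustrates the UDF and event-time skills listed above: it registers a hypothetical Java scalar function (com.example.udf.MaskEmail, packaged in a JAR on the cluster classpath) and applies it inside an event-time tumbling window. The events table is assumed to declare an event_time watermark, as in the earlier sketch.

-- Register a hypothetical scalar UDF implemented in Java.
CREATE FUNCTION mask_email AS 'com.example.udf.MaskEmail' LANGUAGE JAVA;

-- Apply the UDF inside a 10-minute event-time tumbling window.
SELECT
  window_start,
  mask_email(email) AS masked_email,
  COUNT(*) AS event_count
FROM TABLE(
  TUMBLE(TABLE events, DESCRIPTOR(event_time), INTERVAL '10' MINUTES))
GROUP BY window_start, window_end, mask_email(email);

Because the window is keyed on event time rather than arrival time, late events within the watermark bound still land in the correct window, which is the event-time vs. processing-time distinction the qualification refers to.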
Nice to Have:
• Experience with other stream processing frameworks (e.g., Spark Structured Streaming, Kafka Streams).
• Familiarity with cloud-native data stacks (AWS Kinesis, GCP Pub/Sub, Azure Event Hubs).
• Experience building internal tooling for observability or schema evolution.
• Prior contributions to the Apache Flink community or similar open-source projects.
Why Join Us:
• Work on cutting-edge real-time data infrastructure that powers critical business use cases.
• Be part of a high-caliber engineering team with a culture of autonomy and excellence.
• Flexible working arrangements with competitive compensation.