

MM International, LLC
Data Engineer – Apache Flink
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer specializing in Apache Flink, offering a hybrid contract in Chicago, IL, at $40-45/hour. Key skills include expertise in Apache Flink, Kafka, SQL, and cloud platforms (AWS, Azure, GCP) within a financial services environment.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
360
-
🗓️ - Date
February 26, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Chicago, IL
-
🧠 - Skills detailed
#Data Engineering #Storage #Databases #Cloud #Batch #Python #DevOps #Data Pipeline #Spark (Apache Spark) #Data Quality #GCP (Google Cloud Platform) #SQL (Structured Query Language) #Kafka (Apache Kafka) #Azure #Scala #AWS (Amazon Web Services) #Hadoop #ETL (Extract, Transform, Load) #Data Modeling
Role description
Job Description
We are seeking an experienced Data Engineer with strong expertise in Apache Flink to design and build scalable real-time and batch data pipelines within a financial services environment.
This role involves working with high-volume streaming data systems and distributed architectures. The ideal candidate will have hands-on experience building production-grade streaming pipelines and collaborating with cross-functional analytics and engineering teams.
Key Responsibilities:
• Design, develop, and maintain real-time and batch data pipelines using Apache Flink
• Process and transform large volumes of structured and unstructured data
• Integrate with Kafka, databases, and cloud storage systems
• Ensure data quality, performance, and reliability
• Collaborate with analytics, product, and engineering stakeholders
Required Skills:
• Strong hands-on experience with Apache Flink
• Experience with Kafka and event-driven streaming architectures
• Solid SQL and data modeling knowledge
• Experience with Hadoop, Spark, or similar distributed frameworks
• Cloud platform experience (AWS, Azure, or GCP)
Preferred Skills:
• Python or Scala
• Data warehousing experience
• CI/CD and DevOps exposure
Location: Hybrid – Chicago, IL
Contract Rate: $40-45/hour on W2 (inclusive of all taxes)
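The responsibilities above center on Flink pipelines that consume Kafka streams and produce quality-checked aggregates. As a rough illustration of that kind of work, here is a minimal Flink SQL sketch; the table name, topic, fields, and broker address are all illustrative assumptions, not details from this role:

```sql
-- Illustrative Flink SQL sketch: read trade events from a Kafka topic
-- and compute a one-minute average price per symbol.
-- Topic, field names, and broker address are assumed for the example.
CREATE TABLE trades (
  trade_id STRING,
  symbol   STRING,
  price    DOUBLE,
  ts       TIMESTAMP(3),
  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND  -- tolerate 5s of out-of-order events
) WITH (
  'connector' = 'kafka',
  'topic' = 'trades',                              -- assumed topic name
  'properties.bootstrap.servers' = 'broker:9092',  -- assumed broker address
  'format' = 'json',
  'scan.startup.mode' = 'earliest-offset'
);

-- Tumbling one-minute window aggregate, keyed by symbol.
SELECT
  symbol,
  TUMBLE_START(ts, INTERVAL '1' MINUTE) AS window_start,
  AVG(price) AS avg_price
FROM trades
GROUP BY symbol, TUMBLE(ts, INTERVAL '1' MINUTE);
```

Candidates comfortable writing this kind of event-time windowed query, and its DataStream-API equivalent in Python or Scala, would match the required-skills list above.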