Flink with DataStream API

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Flink with DataStream API specialist on a contract of unspecified length, with the pay rate listed as "unknown." It requires 10 years in back-end development and 5 years with Kafka, Flink, and Azure. Remote work location.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 25, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Dallas, TX
🧠 - Skills detailed
#Azure ADLS (Azure Data Lake Storage) #Disaster Recovery #Dynatrace #MongoDB #Kafka (Apache Kafka) #Storage #SQL (Structured Query Language) #Integration Testing #Azure #Cloud #API (Application Programming Interface) #Monitoring #Java #Data Lake #Databases #Logging #ADLS (Azure Data Lake Storage) #Argo #GitHub #Deployment
Role description

Company Description

   • Katalyst Healthcares & Life Sciences is hiring entry-level candidates for several contract research positions in clinical trials of drugs, biologics, and medical devices.

   • We have a few immediate job opportunities available in the Drug Safety, Pharmacovigilance, and Clinical Research fields. We work with university hospitals, pharmaceutical companies, and recruiting partners.

Job Description

Resource Requirements:

   • Proficient in writing and supporting both the functional and non-functional aspects of Flink applications built with the DataStream API.
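
To make the requirement concrete, here is a minimal DataStream API sketch in Java (not taken from the posting; the class name, job name, and in-memory source are hypothetical). It wires a toy source through a keyed running count:

```java
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

public class WordCountJob {
    public static void main(String[] args) throws Exception {
        // Every DataStream program starts from an execution environment.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Toy in-memory source for illustration; a real job would use a Kafka/EventHub connector.
        DataStream<String> lines = env.fromElements("flink datastream api", "flink on aks");

        // Split lines into words, key by word, and keep a running count per key.
        DataStream<Tuple2<String, Integer>> counts = lines
            .flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
                @Override
                public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
                    for (String word : line.split("\\s+")) {
                        out.collect(Tuple2.of(word, 1));
                    }
                }
            })
            .keyBy(t -> t.f0)
            .sum(1);

        counts.print();
        env.execute("word-count");
    }
}
```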

Flink Functional Requirements:

   • Expertise in Flink APIs (DataStream, process functions, etc.); see the Java sketch after this list.

   • Competence in state management (checkpoints and savepoints) with local storage.

   • Configuration of connectors like EventHub, Kafka, and MongoDB.

   • Implementation of Flink API Aggregators.

   • Handling watermarks for out-of-order events.

   • Management of state using Azure Data Lake Storage (ADLS).
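
Several of the items above can be illustrated in one hedged Java sketch: a Kafka source, a bounded-out-of-orderness watermark strategy for out-of-order events, a windowed AggregateFunction, and checkpoint storage pointed at ADLS. The broker address, topic, group id, abfss:// path, and the assumed "key,payload" record format are all placeholders, not details from the posting:

```java
import java.time.Duration;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.functions.AggregateFunction;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class EventAggregationJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Checkpoint every 60s; the ADLS checkpoint path (abfss scheme) is a placeholder.
        env.enableCheckpointing(60_000);
        env.getCheckpointConfig().setCheckpointStorage(
            "abfss://checkpoints@mystorageaccount.dfs.core.windows.net/flink");

        // Kafka source; broker address and topic name are placeholders.
        KafkaSource<String> source = KafkaSource.<String>builder()
            .setBootstrapServers("broker:9092")
            .setTopics("events")
            .setGroupId("event-aggregator")
            .setStartingOffsets(OffsetsInitializer.earliest())
            .setValueOnlyDeserializer(new SimpleStringSchema())
            .build();

        // Bounded-out-of-orderness watermarks tolerate events arriving up to 10s late.
        DataStream<String> events = env.fromSource(
            source,
            WatermarkStrategy.<String>forBoundedOutOfOrderness(Duration.ofSeconds(10)),
            "kafka-events");

        // Count events per key in one-minute event-time tumbling windows.
        events
            .keyBy(value -> value.split(",")[0])   // assumes "key,payload" records
            .window(TumblingEventTimeWindows.of(Time.minutes(1)))
            .aggregate(new CountAggregate())
            .print();

        env.execute("event-aggregation");
    }

    /** Simple AggregateFunction that counts the elements in each window. */
    static class CountAggregate implements AggregateFunction<String, Long, Long> {
        @Override public Long createAccumulator() { return 0L; }
        @Override public Long add(String value, Long acc) { return acc + 1; }
        @Override public Long getResult(Long acc) { return acc; }
        @Override public Long merge(Long a, Long b) { return a + b; }
    }
}
```

Running this needs the flink-connector-kafka dependency; with KafkaSource, the event-time windows use the timestamps carried on the Kafka records.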

Flink Non-Functional Requirements:

   • Set up a private Flink cluster within a designated AKS environment.

   • Configure both session-mode and application-mode deployments.

   • Define and provision nodes and task slots.

   • Manage and configure JobManagers and TaskManagers.

   • Establish necessary connectors, e.g., external storage for the Flink Cluster.

   • Configure heap memory and RocksDB for state management (a configuration sketch follows this list).

   • Define and set up checkpoints and savepoints for state recovery.

   • Enable Auto-Pilot capabilities.

   • Integrate network resources, such as Azure EventHub and external databases like MongoDB.

   • Implement integration with Argo CD for job submissions.

   • Install LTM agents for logging and Dynatrace agents for monitoring purposes.

   • Provide access to the Flink Dashboard.

   • Establish High Availability (HA) and Disaster Recovery (DR) configurations.
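
As a sketch of how a few of these non-functional items (RocksDB for state, checkpointing, and externalized checkpoint retention for DR) might be set up inside a Java job, assuming state is checkpointed to ADLS; the storage path and intervals below are placeholder values:

```java
import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.CheckpointConfig;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StatefulJobSetup {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // RocksDB keeps operator state on local disk so state can exceed heap memory;
        // enabling incremental checkpoints uploads only changed SST files.
        env.setStateBackend(new EmbeddedRocksDBStateBackend(true));

        // Checkpoint every 30s with exactly-once semantics.
        env.enableCheckpointing(30_000, CheckpointingMode.EXACTLY_ONCE);

        CheckpointConfig cc = env.getCheckpointConfig();
        // Durable checkpoint storage; the ADLS container/path below is a placeholder.
        cc.setCheckpointStorage("abfss://checkpoints@mystorageaccount.dfs.core.windows.net/flink");
        // Retain the last checkpoint on cancellation so the job can be restored for DR.
        cc.setExternalizedCheckpointCleanup(
            CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
        cc.setMinPauseBetweenCheckpoints(10_000);
        cc.setCheckpointTimeout(120_000);

        // ... build the job graph here, then:
        // env.execute("stateful-job");
    }
}
```

Savepoints, by contrast, are typically triggered from outside the job, e.g. with the Flink CLI: flink savepoint <jobId> <targetDirectory>; a job is then restored with flink run -s <savepointPath>.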

Experience:

   • 10 years of hands-on design and Java coding experience in back-end system development.

   • 5 years of hands-on experience with Kafka, Flink, cloud platforms, unit/functional/integration testing, SQL or kSQL, Java, GitHub Actions, Dynatrace, code scanning tools, and MongoDB.

Additional details:

Delivery of a fully configured Confluent Cloud infrastructure with Flink integration and an automated CI/CD pipeline to deploy to all environments (Test, Stage, Prod).

Additional Information

All your information will be kept confidential according to EEO guidelines.