Big Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Big Data Engineer with a contract length of "unknown" and a pay rate of "unknown." Key skills include Java, Apache Flink, Kafka, Docker, Kubernetes, and Terraform. Experience in streaming data pipelines and distributed systems is essential.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
🗓️ - Date discovered
September 5, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Unknown
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Texas, United States
-
🧠 - Skills detailed
#Version Control #AWS (Amazon Web Services) #Data Pipeline #GIT #API (Application Programming Interface) #Deployment #Data Engineering #Kafka (Apache Kafka) #Docker #Big Data #Metadata #Programming #Terraform #Java #Kubernetes #Debugging #Automation
Role description
Responsibilities:
• Excellent expertise in object-oriented programming with Java and APIs.
• Experience building streaming data pipelines using Apache Flink.
• Experience with distributed systems, especially Kafka and its ecosystem (gateways, publishers, consumers).
• Good understanding of containerization using Docker, Kubernetes, and EKS.
• Good experience with deployment tools such as Terraform.
• Understanding of version control (Git) and CI/CD workflows.
• Understanding of file formats such as Parquet and Avro.
• Debugging and troubleshooting skills.
• Good communication and excellent collaboration skills for working with internal customers to gather requirements.
Deliverables:
• Automate cluster setup in Machinery for the PFA customer.
• Deploy services (Kafka, Schema Registry, Metadata Service, Data Platform Gateway, and Kafka Admin Operator) in new AWS accounts.
• Integrate PFA with Kafka Automation, Schema Automation, and Journal Automation.
• Integrate PFA with Apple Ingest Control Plane.
• Implement the required enhancements to existing streaming pipelines developed using Apache Flink and Java (a minimal sketch of such a pipeline follows below).
• Onboard topics and Flink jobs for journaling into Iceberg format.
• Monitor and address user requests and issues as they arise.
• Support Kafka infrastructure and Flink streaming pipelines to meet defined SLAs and SLOs.
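The deliverables centre on Kafka-to-Flink streaming work. As a rough illustration only (not part of the posting), the sketch below shows a minimal Flink streaming job in Java that consumes records from a Kafka topic; the broker address, topic, group ID, and class names are hypothetical placeholders, and a real journaling job would write to an Iceberg table rather than print.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

// Hypothetical sketch of the kind of pipeline the deliverables describe:
// read events from a Kafka topic with Flink's DataStream API.
public class JournalingPipelineSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Consume string records from a Kafka topic (broker, topic, and group are placeholders).
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("kafka-broker:9092")
                .setTopics("events-topic")
                .setGroupId("journaling-job")
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        DataStream<String> events =
                env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source");

        // A production job would transform records and sink them to an Iceberg table;
        // printing stands in for that sink in this sketch.
        events.print();

        env.execute("journaling-pipeline-sketch");
    }
}
```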