Data Engineer - Apache Iceberg SME

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer - Apache Iceberg SME; the contract length and pay rate are unknown. Key skills include Apache Iceberg, Python, Scala, and Spark, along with experience transitioning from Hadoop to open-source solutions. A Bachelor's degree in Computer Science or a related field is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
πŸ—“οΈ - Date discovered
August 28, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Unknown
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
πŸ“ - Location detailed
United States
-
🧠 - Skills detailed
#Scala #Java #Trino #NoSQL #Apache Iceberg #Airflow #SQL (Structured Query Language) #DynamoDB #Data Management #Spark (Apache Spark) #Programming #Kafka (Apache Kafka) #Data Engineering #Computer Science #Metadata #Hadoop #Apache Kafka #Databases #Apache Airflow #JDBC (Java Database Connectivity) #Data Processing #Python
Role description
The ideal candidate will be responsible for developing high-quality applications and for designing and implementing testable, scalable code.

Responsibilities
• Designing the environment, defining implementation steps, and driving the transition from Hadoop to a new open-source solution.

Qualifications
• Bachelor's degree or equivalent experience in Computer Science or a related field
• Open-source development with modern data platforms
• Data formats: Apache Iceberg, Parquet, ORC
• Catalogs & metadata management: JDBC, Nessie, Polaris
• Programming in Python, Scala, and Java
• Data processing with Spark, Trino, and Flink
• SQL & NoSQL databases (Cassandra, DynamoDB)
• Workflow orchestration & streaming: Apache Airflow, Apache Kafka
• Driving modernization initiatives: transitioning from Hadoop to next-gen open-source solutions