

Data Engineer - Apache Iceberg SME
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer - Apache Iceberg SME with an unspecified contract length and pay rate. Key skills include Apache Iceberg, Python, Scala, Spark, and experience transitioning from Hadoop to open-source solutions. A Bachelor's degree in Computer Science or a related field is required.
Country
United States
Currency
$ USD
Day rate
Unknown
Date discovered
August 28, 2025
Project duration
Unknown
Location type
Unknown
Contract type
Unknown
Security clearance
Unknown
Location detailed
United States
Skills detailed
#Scala #Java #Trino #NoSQL #Apache Iceberg #Airflow #SQL (Structured Query Language) #DynamoDB #Data Management #Spark (Apache Spark) #Programming #Kafka (Apache Kafka) #Data Engineering #Computer Science #Metadata #Hadoop #Apache Kafka #Databases #Apache Airflow #JDBC (Java Database Connectivity) #Data Processing #Python
Role description
The ideal candidate will develop high-quality applications and design and implement testable, scalable code.
Responsibilities
β’ Designing the environment, defining implementation steps, and driving the transition from Hadoop to a new open-source solution.
Qualifications
β’ Bachelor's degree or equivalent experience in Computer Science or related field
β’ Open-source development with modern data platforms
β’ Data formats: Apache Iceberg, Parquet, ORC
β’ Catalogs & metadata management: JDBC, Nessie, Polaris
β’ Programming in Python, Scala, and Java
β’ Data processing with Spark, Trino, and Flink
β’ SQL & NoSQL databases (Cassandra, DynamoDB)
β’ Workflow orchestration & streaming: Apache Airflow, Apache Kafka
β’ Driving modernization initiatives: transitioning from Hadoop to next-gen open-source solutions
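Since the role centers on Apache Iceberg, a brief sketch of the idea behind its table format may help frame the qualifications above: every write produces a new immutable snapshot of the table's file list, which is what enables time travel and safe concurrent reads. The toy `Snapshot`/`TableMetadata` classes below are purely illustrative assumptions for this sketch, not the real Iceberg or pyiceberg API.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# Toy model of Iceberg's snapshot-based metadata (illustrative only).
# Real Iceberg tracks manifests, schemas, and partition specs as well;
# here a snapshot is just an immutable tuple of data-file paths.

@dataclass(frozen=True)
class Snapshot:
    snapshot_id: int
    data_files: Tuple[str, ...]

class TableMetadata:
    def __init__(self) -> None:
        self.snapshots: List[Snapshot] = []

    def append(self, new_files: List[str]) -> int:
        # A new snapshot extends the previous file list; earlier
        # snapshots are never mutated, so old readers stay consistent.
        prev = self.snapshots[-1].data_files if self.snapshots else ()
        snap = Snapshot(len(self.snapshots) + 1, prev + tuple(new_files))
        self.snapshots.append(snap)
        return snap.snapshot_id

    def scan(self, snapshot_id: Optional[int] = None) -> Tuple[str, ...]:
        # Readers pin a snapshot; by default they see the current one.
        if not self.snapshots:
            return ()
        if snapshot_id is None:
            return self.snapshots[-1].data_files
        return next(s.data_files for s in self.snapshots
                    if s.snapshot_id == snapshot_id)

table = TableMetadata()
s1 = table.append(["data/file-a.parquet"])
s2 = table.append(["data/file-b.parquet"])
print(table.scan())    # current snapshot sees both files
print(table.scan(s1))  # time travel: only the first file
```

This append-only snapshot history, rather than in-place mutation of directory listings, is the core difference from Hadoop-era Hive tables and the reason Iceberg migrations of the kind described above are feasible without rewriting readers.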