Jobs via Dice

Sr Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr Data Engineer on a contract basis, potentially leading to permanent employment. Key skills include Apache Iceberg, Trino, Airflow, and Kubernetes. Requires hands-on experience in a HIPAA-regulated environment. Remote work location.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 10, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#"ACID (Atomicity #Consistency #Isolation #Durability)" #C# #BI (Business Intelligence) #Data Governance #Storage #Data Lakehouse #Python #SQL (Structured Query Language) #Scala #Data Lake #Data Pipeline #Trino #Visualization #OpenStack #AI (Artificial Intelligence) #Apache Iceberg #Data Engineering #Infrastructure as Code (IaC) #Airflow #Deployment #Data Ingestion #Security #Kubernetes #Apache Airflow
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, XTGlobal, is seeking the following. Apply via Dice today!

CLIENT IS NOT SPONSORING VISAS AT THIS TIME

Sr Data Engineer
• Remote role
• Contract role, but may go perm down the line
• Candidates should have hands-on experience with at least 2-3 of the technologies below, used together.

The client is building out a data lakehouse. We're standing up a modern, open-source Data Lakehouse in our datacenter, engineered for petabyte-scale analytics and real-time workloads. The core stack includes:
• Apache Iceberg (table format, ACID, time travel, schema evolution)
• Project Nessie (as the Iceberg catalog/metastore, replacing Hive Metastore)
• Apache Ranger (security and audit layer for Trino, Iceberg, and the lakehouse)
• Trino (distributed SQL engine)
• Ray (scalable Python/AI compute)
• Airbyte (data ingestion)
• Apache Airflow (workflow orchestration)
• Apache Pinot (real-time OLAP)
• Apache Superset (BI/visualization)
• Amundsen & OpenLineage (data governance, discovery, lineage)
• Pure Storage FlashBlade/FlashArray (object/block storage)
• OpenStack (IaaS, K8s orchestration)

What the client needs:
• Data engineers and architects with hands-on experience in Iceberg, Nessie, Ranger, Trino, and Airflow at our scale. Having read about it or seen it once is not going to cut it.
• Folks who know their way around Kubernetes and Infrastructure as Code (IaC).
• Python and JVM skills are a must; bonus points for Kotlin, C#, and experience with Airflow DAGs and Airbyte connectors (a minimal DAG sketch follows this description).
• Real-world experience with data governance (Amundsen, OpenLineage), security frameworks, and high-availability deployments. This is a highly regulated HIPAA environment.
• Ability to work as part of a cross-functional team and to help design and implement scalable, secure, and automated data pipelines.
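To give a concrete sense of how two of the required pieces fit together, here is a minimal, illustrative sketch of an Airflow DAG that runs a validation query against an Iceberg table through Trino. The hostname, connection settings, catalog, schema, and table name are hypothetical placeholders and are not taken from the posting; this is a sketch of the pattern, not the client's implementation.

```python
# Illustrative only: an Airflow DAG that queries an Iceberg table via Trino.
# All hosts, catalog/schema/table names, and credentials below are hypothetical.
from datetime import datetime

import trino
from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2026, 1, 1), catchup=False, tags=["lakehouse"])
def iceberg_row_count_check():
    @task
    def count_rows() -> int:
        # Connect to the Trino coordinator; in a stack like the one described,
        # the "iceberg" catalog would be backed by the Nessie metastore.
        conn = trino.dbapi.connect(
            host="trino.example.internal",  # hypothetical coordinator host
            port=8080,
            user="airflow",
            catalog="iceberg",
            schema="analytics",
        )
        cur = conn.cursor()
        cur.execute("SELECT count(*) FROM events")  # hypothetical table
        (rows,) = cur.fetchone()
        return rows

    count_rows()


iceberg_row_count_check()
```

In this sketch the DAG only needs network access to Trino; the Iceberg/Nessie wiring lives in Trino's catalog configuration, which is one reason the posting treats those technologies as a set rather than as isolated skills.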