

Chelsoft Solutions Co.
Senior Data Engineer (W2 only)
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer on a W2 contract, requiring 10+ years of experience. Key skills include expert Python, Apache Iceberg, ETL design, and Azure Data Lake Storage. Familiarity with Docker and data governance is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 8, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Omaha, NE
-
🧠 - Skills detailed
#Data Management #Spark (Apache Spark) #Data Governance #Azure #Apache Iceberg #Docker #FastAPI #Data Lake #Pandas #Metadata #Python #Batch #Storage #Flask #Cloud #Azure ADLS (Azure Data Lake Storage) #Programming #Snowflake #Data Engineering #ETL (Extract, Transform, Load)
Role description
Senior Data Engineer (W2 only), 10+ years experience
Required Skills & Experience
• Expert-level Python programming with a strong data engineering background.
• Hands-on experience with Apache Iceberg (PyIceberg/Polaris).
• Proficiency with Pandas, PyArrow, DuckDB, and Parquet.
• Deep understanding of data lake architectures and columnar storage.
• Strong experience with ETL design, batch processing, and schema evolution.
• Cloud experience with Azure Data Lake Storage (Gen2) or similar systems.
• Familiarity with containerized environments (Docker, VS Code/Cursor, Azure pipelines).
Preferred Qualifications
• Experience integrating data workflows with FastAPI or Flask.
• Background in data governance, metadata management, and schema evolution.
• Exposure to Snowflake, Sigma Computing, and distributed processing frameworks (Spark, Flink).