

Lead Data Engineer - Snowflake, Iceberg, and DBT
Featured Role | Apply directly with Data Freelance Hub
This role is for a Lead Data Engineer on a 6-month contract; the pay rate and work location are not specified in the listing. Key skills include Snowflake, Apache Iceberg, DBT, SQL, and cloud platforms. Requires 10+ years of data engineering experience.
Country: United States
Currency: $ USD
Day rate: Not specified
Date discovered: June 21, 2025
Project duration: Unknown
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: United States
Skills detailed: #Documentation #Data Processing #SQL (Structured Query Language) #Data Governance #Data Engineering #Data Pipeline #Data Architecture #Git #Batch #Deployment #S3 (Amazon Simple Storage Service) #Snowpipe #Data Lakehouse #Monitoring #Snowflake #Data Analysis #Macros #Version Control #Programming #AWS (Amazon Web Services) #Python #dbt (data build tool) #BI (Business Intelligence) #Scala #Airflow #Apache Iceberg #Lambda (AWS Lambda) #Data Science #Data Ingestion #GCP (Google Cloud Platform) #Azure #Cloud #Data Lake #ETL (Extract, Transform, Load) #Delta Lake #Observability #DevOps
Role description
Sr. Data Engineer / Tech Data Leader
We are looking for a highly skilled Senior Data Engineer with deep hands-on experience in modern data architecture and tools, especially Snowflake, Apache Iceberg, and DBT. As a senior member of our data engineering team, you will play a critical role in designing, building, and maintaining our data infrastructure, pipelines, and modeling layer to support business intelligence, advanced analytics, and data science initiatives.
Key Responsibilities
• Design, implement, and optimize scalable and reliable data pipelines using Snowflake and DBT (see the model sketch after this list).
• Lead the adoption and integration of Apache Iceberg for data lakehouse architecture and large-scale data processing.
• Collaborate with data analysts, data scientists, and stakeholders to define data requirements and deliver high-quality data models and transformations.
• Build and maintain batch and streaming data ingestion workflows using best practices for performance, reliability, and scalability.
• Define and enforce data governance, lineage, quality, and documentation standards.
• Mentor junior data engineers and contribute to setting engineering best practices.
• Troubleshoot data pipeline issues and provide support for data-related operational challenges.
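To make the pipeline responsibility above concrete, here is a minimal sketch of an incremental dbt model running on Snowflake. The model, source, and column names (stg_events, raw.events, event_id, event_ts) are hypothetical illustrations, not details from this posting:

```sql
-- models/staging/stg_events.sql
-- Minimal incremental dbt model sketch for Snowflake.
-- All object and column names here are hypothetical.
{{
    config(
        materialized='incremental',
        unique_key='event_id'
    )
}}

select
    event_id,
    event_ts,
    payload
from {{ source('raw', 'events') }}

{% if is_incremental() %}
  -- On incremental runs, process only rows newer than those already loaded.
  where event_ts > (select max(event_ts) from {{ this }})
{% endif %}
```

Paired with dbt tests, snapshots, and documentation, a model like this touches most of the modeling and governance duties listed above.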
Required Qualifications
• 10+ years of professional experience in data engineering or a related role.
• Strong expertise with Snowflake (warehousing, performance tuning, Snowpipe, data sharing, etc.); see the SQL sketch after this list.
• Proven experience working with Apache Iceberg or similar table formats (e.g., Delta Lake, Hudi).
• Deep understanding and practical use of the DBT stack (models, tests, snapshots, macros).
• Strong proficiency in SQL and at least one programming language (e.g., Python or Scala).
• Solid understanding of data architecture principles, dimensional modeling, and ELT best practices.
• Experience with cloud platforms (e.g., AWS, GCP, Azure) and services like S3, EMR, Lambda, etc.
• Familiarity with CI/CD practices and tools for data pipeline deployments.
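As a hedged illustration of the Snowflake and Iceberg expectations above, the following Snowflake SQL sketches a Snowpipe ingestion path and a Snowflake-managed Apache Iceberg table. The bucket, external volume, database, and table names are assumptions made up for this example:

```sql
-- Minimal sketch; all bucket, volume, and object names are hypothetical.

-- External stage over an S3 prefix where raw JSON files land.
-- (A storage integration / credentials clause is omitted for brevity.)
create or replace stage raw_stage
  url = 's3://example-bucket/events/'
  file_format = (type = 'json');

-- Landing table holding one VARIANT column per raw record.
create or replace table raw_db.public.events_raw (v variant);

-- Snowpipe that auto-ingests new files as they arrive on the stage.
create or replace pipe raw_db.public.raw_events_pipe
  auto_ingest = true
as
  copy into raw_db.public.events_raw
  from @raw_stage;

-- Snowflake-managed Iceberg table for the curated lakehouse layer.
create or replace iceberg table lake_db.public.events (
  event_id number,
  event_ts timestamp_ntz,
  payload  string
)
  catalog = 'SNOWFLAKE'
  external_volume = 'example_ext_vol'
  base_location = 'events/';
```

Note that auto_ingest = true relies on S3 event notifications being wired to the pipe's notification channel, which is configured outside this snippet.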
Preferred Qualifications
• Experience with orchestration tools (e.g., Airflow, Prefect).
• Knowledge of data lake and lakehouse architecture patterns.
• Familiarity with version control systems (e.g., Git) and DevOps principles.
• Exposure to data observability and monitoring tools (e.g., Monte Carlo, Datafold).
• Prior experience in a regulated or data-sensitive industry (e.g., insurance, finance, healthcare) is a plus.