Curate Partners

Data Engineer/Modeler

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer/Modeler; the contract length and pay rate are not specified. The role is remote within the United States and requires 4–7+ years of data engineering experience, strong Snowflake expertise, and proficiency in SQL, Python, and dbt.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
December 30, 2025
🕒 - Duration
Unknown
🏝️ - Location
Unknown
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Documentation #Python #Data Quality #Observability #Regression #Snowflake #dbt (data build tool) #Data Engineering #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Data Lineage #Security #Monitoring
Role description
Our client is looking for a Data Engineer / Modeler to own the end-to-end flow of data, from source systems into a semantic model in Snowflake. This role is ideal for someone who loves solving complex data lineage challenges, building reliable ELT/ETL pipelines, and improving observability across production environments.

What You’ll Do
As the Data Engineer / Modeler, you will:
• Map and document current-state data lineage from source through staging, transformation, and semantic modeling
• Design and automate robust ELT/ETL pipelines using tools like SQL, Python, dbt, and Snowflake Tasks/Streams
• Implement data quality and observability (freshness checks, schema drift detection, thresholds, alerting, and runbooks)
• Build monitoring dashboards and support incident response through structured on-call and root-cause analysis
• Optimize Snowflake environments for performance, cost efficiency, and security (RBAC, masking, PII handling)

What We’re Looking For
• 4–7+ years of data engineering experience with strong Snowflake expertise
• Expert-level SQL and strong Python
• Experience with dbt (or similar transformation frameworks)
• Strong background in semantic modeling (star/snowflake schemas, canonical layers, data products)
• Proven ownership of production pipelines: SLAs, incident response, postmortems, stakeholder communication

How Success Will Be Measured
• Pipeline SLA adherence and load freshness
• Data quality pass rate and reduced failures
• Cost/performance improvements (query latency, $/TB)
• Increased lineage and documentation coverage
• Faster incident resolution and fewer regressions
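To give a flavor of the pipeline work described above, here is a minimal sketch of the Snowflake Streams/Tasks pattern the posting mentions. The table, schema, and warehouse names (raw.orders, stg.orders, transform_wh) are hypothetical placeholders, not details from the role.

```sql
-- Capture inserts/updates on a hypothetical raw source table
CREATE OR REPLACE STREAM raw_orders_stream ON TABLE raw.orders;

-- Scheduled task that merges new changes into staging,
-- but only when the stream actually has data
CREATE OR REPLACE TASK load_stg_orders
  WAREHOUSE = transform_wh
  SCHEDULE = '15 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('raw_orders_stream')
AS
  MERGE INTO stg.orders AS tgt
  USING raw_orders_stream AS src
    ON tgt.order_id = src.order_id
  WHEN MATCHED THEN UPDATE SET
    tgt.status = src.status,
    tgt.updated_at = src.updated_at
  WHEN NOT MATCHED THEN INSERT (order_id, status, updated_at)
    VALUES (src.order_id, src.status, src.updated_at);

-- Tasks are created suspended; resume to start the schedule
ALTER TASK load_stg_orders RESUME;
```

The WHEN clause keeps the task from spending warehouse credits when no new data has arrived, which ties directly to the cost-efficiency goals listed for the role.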
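On the data quality and observability side, a freshness check can be as simple as comparing the newest load timestamp against a staleness threshold and alerting on the result. The table, column, and 60-minute threshold below are illustrative assumptions only.

```sql
-- Flag the staging table as stale if no rows have landed in the last hour
SELECT
  MAX(updated_at)                                        AS last_loaded_at,
  DATEDIFF('minute', MAX(updated_at), CURRENT_TIMESTAMP) AS minutes_since_load,
  IFF(DATEDIFF('minute', MAX(updated_at), CURRENT_TIMESTAMP) > 60,
      'STALE', 'FRESH')                                  AS freshness_status
FROM stg.orders;
```

In practice a check like this would feed the monitoring dashboards and alerting/runbook workflow the role describes, whether it is wired up through dbt source freshness, a scheduled task, or an external observability tool.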
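For the security responsibilities (RBAC, masking, PII handling), Snowflake's column-level dynamic data masking is one common building block. A minimal sketch, assuming a hypothetical stg.customers.email column and a PII_ADMIN role:

```sql
-- Masking policy that hides email addresses from all but privileged roles
CREATE OR REPLACE MASKING POLICY mask_email AS (val STRING) RETURNS STRING ->
  CASE WHEN CURRENT_ROLE() IN ('PII_ADMIN') THEN val ELSE '***MASKED***' END;

-- Attach the policy to the PII column
ALTER TABLE stg.customers MODIFY COLUMN email SET MASKING POLICY mask_email;
```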