

Ampstek
Data Engineer || NYC or Chicago, IL (Hybrid Onsite)
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer on a contract basis, located in NYC or Chicago, IL (Hybrid). The position requires 4-6 years of data engineering experience, proficiency in SQL and Python, and familiarity with cloud data warehouses like Snowflake or Redshift.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 30, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York City Metropolitan Area
-
🧠 - Skills detailed
#BI (Business Intelligence) #Python #Data Pipeline #Consulting #DevOps #Databases #Snowflake #Data Modeling #Tableau #AI (Artificial Intelligence) #Fivetran #AWS (Amazon Web Services) #Data Engineering #Redshift #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Migration #Amplitude #S3 (Amazon Simple Storage Service) #Alteryx #Lambda (AWS Lambda) #Automation #Data Warehouse #Schema Design #Cloud #BigQuery #Looker
Role description
Job Title :: Data Engineer
Location :: NYC or Chicago, IL (Hybrid onsite)
Job Type :: Contract
## What you'll do
Improve and scale existing client data pipelines. One early project involves a Tableau dashboard that's grown well beyond its original scope, with accumulated data and aggregation logic that needs to be rebuilt for performance. Expect similar work across other clients: improving data systems while keeping client deliverables running smoothly.
Support a migration off Snowflake. We're moving our data warehouse from Snowflake to a new AWS-based stack (see the target environment below), with our DevOps team handling infrastructure provisioning. You'd be the internal technical counterpart: documenting current schemas, validating that the new architecture preserves existing functionality, testing the migration with real client data, and making sure the D&I team can continue working with minimal disruption.
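To make the documentation step concrete, here is a minimal sketch, assuming the snowflake-connector-python package; the account, credentials, and database names are placeholders, and the query reads Snowflake's standard INFORMATION_SCHEMA views:

```python
# Sketch: inventory current Snowflake schemas as a migration reference.
# Assumes snowflake-connector-python; connection values are placeholders.
import csv
import snowflake.connector

conn = snowflake.connector.connect(
    account="YOUR_ACCOUNT",   # placeholder
    user="YOUR_USER",         # placeholder
    password="...",           # pull from a secrets manager in practice
    database="CLIENT_DB",     # placeholder
)

cur = conn.cursor()
cur.execute(
    """
    SELECT table_schema, table_name, column_name, data_type, is_nullable
    FROM information_schema.columns
    ORDER BY table_schema, table_name, ordinal_position
    """
)

with open("schema_inventory.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["schema", "table", "column", "type", "nullable"])
    writer.writerows(cur)  # Snowflake cursors are iterable row by row

cur.close()
```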
Build and maintain data pipelines. Configure and monitor Fivetran connectors for paid media data. Build pipelines for data sources that Fivetran doesn't cover, including automating ingestion from partner agencies and DSP partners. Write validation and transformation steps. Load clean data into the database.
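A hedged sketch of that validate-transform-load pattern for a source Fivetran doesn't cover; the partner API endpoint, field names, and staging table are hypothetical:

```python
# Sketch: ingest partner-agency spend data, validate it, and stage it.
# The endpoint, fields, and target table are hypothetical.
import requests
import pandas as pd

REQUIRED = ["date", "campaign_id", "impressions", "spend"]

resp = requests.get(
    "https://api.partner-agency.example/v1/spend",  # hypothetical endpoint
    params={"start": "2026-04-01", "end": "2026-04-30"},
    timeout=30,
)
resp.raise_for_status()
df = pd.DataFrame(resp.json()["rows"])

# Validate: required columns present, no negative spend, no duplicate keys.
missing = [c for c in REQUIRED if c not in df.columns]
if missing:
    raise ValueError(f"source payload missing columns: {missing}")
if (df["spend"] < 0).any():
    raise ValueError("negative spend values in source payload")
df = df.drop_duplicates(subset=["date", "campaign_id"])

# Transform: normalize types before loading.
df["date"] = pd.to_datetime(df["date"]).dt.date
df["spend"] = df["spend"].astype(float)

# Load: write_pandas ships with the Snowflake connector; this reuses `conn`
# from the schema-inventory sketch above and assumes the staging table exists.
from snowflake.connector.pandas_tools import write_pandas
write_pandas(conn, df, "STG_PARTNER_SPEND")
```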
Connect data to BI tools. Tableau is the primary output, but some clients use PowerBI or Looker. You'll build views and clean tables that analysts can connect to directly without needing to write complex SQL.
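In practice that often means publishing a warehouse view analysts can point Tableau at directly; a sketch with hypothetical schema, table, and column names, reusing the connection from the earlier sketches:

```python
# Sketch: publish an analyst-facing view so Tableau (or Looker/PowerBI via a
# live connection) can query it without hand-written joins. Names are hypothetical.
cur = conn.cursor()  # conn: the Snowflake connection from the earlier sketch
cur.execute(
    """
    CREATE OR REPLACE VIEW analytics.vw_paid_media_daily AS
    SELECT
        s.report_date,
        s.platform,                       -- e.g. Meta, Google, TikTok
        c.client_name,
        SUM(s.impressions) AS impressions,
        SUM(s.clicks)      AS clicks,
        SUM(s.spend)       AS spend
    FROM raw.paid_media_spend s
    JOIN raw.campaigns c ON c.campaign_id = s.campaign_id
    GROUP BY s.report_date, s.platform, c.client_name
    """
)
cur.close()
```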
Support the D&I team's client work on an ongoing basis. Some weeks that's adjusting tables because a client asked for a new data cut. Some weeks it's setting up a new client's data pipeline. You're part of our infrastructure team, supporting client-facing analysts with the data layer behind their deliverables. You need to be comfortable toggling between deep engineering work and lighter support requests.
Contribute to AI and data product development. Zeno is building proprietary AI tools and LLM integrations for data validation, insight generation, and workflow automation. You'd help build the data layer underneath these products, including ingestion pipelines, APIs, and programmatic access layers that connect data sources to LLM interfaces.
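One shape that programmatic access layer can take, sketched under assumptions: the server name, tool, and query below are illustrative, and FastMCP comes from the official MCP Python SDK (MCP appears in the target stack below).

```python
# Sketch: expose a warehouse query as a tool an LLM client can call over MCP.
# FastMCP is from the official MCP Python SDK; the tool and query are illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("zeno-data")  # hypothetical server name

@mcp.tool()
def campaign_spend(client_name: str, start_date: str, end_date: str) -> list[dict]:
    """Daily paid-media spend for a client over a date range."""
    cur = conn.cursor()  # conn: a Snowflake connection as in the earlier sketches
    cur.execute(
        """
        SELECT report_date, platform, spend
        FROM analytics.vw_paid_media_daily
        WHERE client_name = %s AND report_date BETWEEN %s AND %s
        """,
        (client_name, start_date, end_date),
    )
    cols = [c[0] for c in cur.description]
    return [dict(zip(cols, row)) for row in cur.fetchall()]

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```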
## Who you are
4-6 years of hands-on data engineering experience. You've built pipelines, debugged data issues, and written clean SQL in a production environment. You've worked with at least one cloud data warehouse (Redshift, Snowflake, BigQuery) and understand ETL/ELT patterns. You're comfortable with Python for data work. AWS experience is preferred.
You don't need to be a senior architect, but you do need to hold your own in architecture conversations. We have a DevOps team that handles infrastructure provisioning, but they're not data engineers. You'd be in the room for decisions about schema design, warehouse selection, and data modeling, making sure what gets built actually serves the team's needs.
You work well with a team that is analytically strong and ready for better tooling. Our analysts have deep client and data experience from running operations at previous agencies. You need to listen to what they need, understand the client context, and build things the team can maintain and use alongside you.
Familiarity with LLM tooling, vector databases, or document processing pipelines is more than a nice-to-have. Part of this role is helping identify where LLMs can plug into our data workflows, so you should be curious about that intersection and ready to explore it.
Consulting or agency experience is a plus.
## Technical environment
- Current: Snowflake, Fivetran, Tableau, Alteryx, Excel
- Target: AWS (Redshift, S3, Lambda), Fivetran, Tray.ai, Tableau/PowerBI/Looker
- Adjacent: Claude/LLM integrations via MCP, custom dashboard development
- Data types: paid media performance (Meta, Google, TikTok), earned media (Muckrack, Cision), employee engagement event data (Amplitude), creator/influencer campaign data






