

Data Engineer (Associate) (Remote)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer (Associate) (Remote); the contract length and pay rate are not listed. It requires 3–5+ years in data engineering, mastery of SQL and dbt, and experience with cloud data warehouses.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
July 28, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Data Engineering #Cloud #Data Quality #Documentation #BigQuery #Scripting #Data Ingestion #SQL (Structured Query Language) #Data Warehouse #Slowly Changing Dimensions #AI (Artificial Intelligence) #Snowflake #Python #Observability #Data Modeling #Fivetran #Code Reviews #Redshift #dbt (data build tool)
Role description
Verified Job On Employer Career Site
Job Summary:
Honeycomb.io is a rapidly growing company focused on defining observability and enhancing developer tools, and it recently closed its Series D funding round. It is looking for a Data Engineer to expand its data modeling layer and ensure reliable data foundations for teams across the company.
Responsibilities:
• Own and expand our dbt modeling layer, ensuring it is structured, documented, and reliable (a sketch of this kind of model appears after this list)
• Ingest and normalize data from diverse systems (e.g., Stripe, Salesforce, HubSpot, NetSuite) via Fivetran and custom pipelines
• Collaborate with stakeholders across the business to understand how data is used and what it needs to represent
• Build analysis-ready marts that power dashboards, experimentation, and planning
• Partner with our Staff Data Engineer on ingestion architecture and orchestration, while taking the lead on modeling and semantic logic
• Ensure data quality through testing and validation frameworks, and proactively identify upstream issues
• Create leverage through code reusability and adoption of AI-assisted tooling to boost development velocity, reduce manual overhead, and strengthen documentation and testing
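For illustration, here is a minimal sketch of the kind of dbt mart and data quality test these responsibilities describe. All model, source, and column names (stg_stripe__invoices, dim_customers, fct_invoices) are hypothetical, not taken from the posting, and Snowflake is assumed as the warehouse.

```sql
-- models/marts/finance/fct_invoices.sql
-- Hypothetical analysis-ready mart: one row per invoice, enriched with
-- customer attributes so it can power dashboards and planning directly.
with invoices as (
    select
        invoice_id,
        customer_id,
        amount_due_cents / 100.0 as amount_due,
        invoice_status,
        created_at
    from {{ ref('stg_stripe__invoices') }}
),

customers as (
    select customer_id, account_name, segment
    from {{ ref('dim_customers') }}
)

select
    invoices.invoice_id,
    invoices.customer_id,
    customers.account_name,
    customers.segment,
    invoices.amount_due,
    invoices.invoice_status,
    invoices.created_at
from invoices
left join customers using (customer_id)
```

```sql
-- tests/assert_invoice_amounts_non_negative.sql
-- dbt singular test: it passes only when this query returns no rows,
-- so any negative invoice amount fails the build.
select invoice_id, amount_due
from {{ ref('fct_invoices') }}
where amount_due < 0
```

In a real project the mart would also carry column descriptions and generic schema tests (unique, not_null) in YAML; the singular test is shown in SQL to keep the sketch in one language.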
Qualifications:
Required:
• 3–5+ years of experience in data engineering or analytics engineering
• Mastery of SQL and dbt, with an eye for structure, testing, and maintainability
• Working experience with Python, especially for data ingestion, orchestration, and basic scripting
• Comfort working with cloud data warehouses like Snowflake (or Redshift, BigQuery)
• Comfort working through ambiguity: our data team is small and young, so some role shaping and a willingness to work across the data stack (e.g., occasional dashboard building and analysis) are critical to early success in this role
• Experience implementing structured data models, architectures, and marts
• Strong intuition around data contracts, late-arriving data, slowly changing dimensions, and business logic edge cases (see the sketch after this list)
• Strong intuition around data quality and governance, including experience developing tests and building out documentation
• Experience working across functions (e.g., Finance, Product, CS) to map business needs into durable models
• Hands-on usage of AI tooling in your data engineering workflow (e.g., Copilot for dbt model stubs, AI-assisted test generation, documentation, SQL generation/refactoring, or code reviews); you don't just use it, you use it thoughtfully to create leverage
• A mindset of ownership, iteration, and clarity: you care about the business logic behind the data and about building systems others can trust and build on
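To make the slowly-changing-dimension and late-arriving-data points concrete, here is a hedged dbt sketch: a type-2 snapshot plus an incremental model with a lookback window. The source names (salesforce.accounts, product.events), the updated_at column, and the 3-day window are all assumptions for illustration, and Snowflake syntax (dateadd) is assumed.

```sql
-- snapshots/customers_snapshot.sql
-- Type-2 slowly changing dimension: dbt records dbt_valid_from/dbt_valid_to
-- ranges as the (hypothetical) Salesforce account rows change over time.
{% snapshot customers_snapshot %}

{{
    config(
        target_schema='snapshots',
        unique_key='customer_id',
        strategy='timestamp',
        updated_at='updated_at'
    )
}}

select customer_id, account_name, segment, updated_at
from {{ source('salesforce', 'accounts') }}

{% endsnapshot %}
```

```sql
-- models/staging/stg_product__events.sql
-- Incremental model with a 3-day lookback so late-arriving events are
-- reprocessed instead of silently dropped; unique_key deduplicates them.
{{ config(materialized='incremental', unique_key='event_id') }}

select event_id, account_id, event_type, occurred_at
from {{ source('product', 'events') }}

{% if is_incremental() %}
  -- Reprocess a trailing window to catch rows that arrived late upstream.
  where occurred_at >= (select dateadd('day', -3, max(occurred_at)) from {{ this }})
{% endif %}
```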
Company:
Honeycomb is the observability platform that enables engineering teams to find and solve problems they couldn't before. Founded in 2016, the company is headquartered in San Francisco, California, USA, with a team of 201–500 employees. The company is currently late stage.