AfterQuery Experts
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a remote Data Engineer position requiring 1–3 years of experience in data engineering or ETL development. Pay ranges from $60 to $100/hour, with a commitment of 10–20 hours per week. Proficiency in one or more tools such as dbt, Kafka, and Spark is essential.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
800
🗓️ - Date
February 5, 2026
🕒 - Duration
Unknown
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#dbt (data build tool) #Redshift #Datasets #AI (Artificial Intelligence) #Snowflake #Data Engineering #ETL (Extract, Transform, Load) #Kafka (Apache Kafka) #Spark (Apache Spark) #Data Pipeline #Data Architecture #Data Integration #Computer Science #Cloud
Role description
Job Description:
• This is a remote, project-based role for data engineering professionals with 1–3 years of work experience. You will complete tasks similar to those performed in data engineering, data infrastructure, ETL development, or analytics engineering roles. Work is asynchronous and assigned on a project-by-project basis, with an expected commitment of 10–20 hours per week for the projects you accept. The position offers highly competitive pay, exposure to real-world data pipeline and infrastructure challenges, and an excellent opportunity to strengthen your resume.
Why Apply:
• Fantastic Pay & Growth Opportunities – Project-based pay starts at an estimated $60/hour and can go up to $100/hour
• Flexible Time Commitment – Work on your schedule while gaining valuable experience
• Startup Exposure – Work directly with an early-stage Y Combinator-backed company, gaining hands-on experience that sets you apart
• Portfolio Building – Gain experience with modern data architectures and tools
Responsibilities:
• Document technical solutions, data flows, and pipeline architectures
• Collaborate asynchronously on data integration and warehousing projects
• Build and deploy data infrastructure using cloud services and modern tools (see the sketch after this list)
• Troubleshoot and resolve data pipeline failures and performance issues
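To make the pipeline work concrete, here is a minimal sketch, in PySpark, of the kind of build-and-deploy task described above. It is illustrative only: the bucket paths, column names, and schema are hypothetical, not taken from the role.

```python
# Minimal illustrative ETL sketch (PySpark). All paths and column names
# below are hypothetical; a real project would supply its own schema.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw CSV events from object storage.
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/events/")

# Transform: drop rows missing required fields and derive a partition date.
clean = (
    raw.dropna(subset=["event_id", "event_ts"])
       .withColumn("event_date", F.to_date("event_ts"))
)

# Load: write a curated Parquet table, partitioned by day.
clean.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/events/"
)
```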
Required Qualifications:
• 1–3 years of professional experience in data engineering, ETL development, or related fields
• Strong proficiency in one or more of the following: dbt, Kafka, Spark, Snowflake, Redshift (see the streaming sketch after this list)
• Bachelor's degree in Computer Science, Engineering, Information Systems, or related technical field
• Strong problem-solving skills and ability to work independently on technical tasks
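As a companion to the tools listed above, here is a minimal sketch of consuming a Kafka topic in Python, assuming the kafka-python client. The topic name, broker address, and the print placeholder (standing in for a warehouse load into Snowflake or Redshift) are all hypothetical.

```python
# Minimal illustrative Kafka consumer (kafka-python). Topic and broker are
# hypothetical; print() stands in for a real warehouse load step.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "events",                              # hypothetical topic
    bootstrap_servers=["localhost:9092"],  # hypothetical broker
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    # A real pipeline would batch these records and load them into a
    # warehouse such as Snowflake or Redshift.
    print(message.value)
```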
Company Description:
• AfterQuery is a research lab investigating the boundaries of artificial intelligence through novel datasets and experimentation. We believe great AI comes from exceptional, human-generated data.
• We're backed by top investors, including Y Combinator and Box Group, and support all leading AI labs.
• Apply today to contribute to pushing the frontier of AI.