

Brooksource
Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer focused on API ingestion and data lake initiatives. It offers a 6-month contract at $50-$60/hour, remote work, and requires 2+ years of experience in API pipelines, SQL, and AWS proficiency.
Country: United States
Currency: $ USD
Day rate: 480
Date: April 21, 2026
Duration: More than 6 months
Location: Remote
Contract: 1099 Contractor
Security: Unknown
Location detailed: United States
Skills detailed: #Documentation #RDS (Amazon Relational Database Service) #Data Pipeline #Security #Data Modeling #AWS S3 (Amazon Simple Storage Service) #ETL (Extract, Transform, Load) #Lambda (AWS Lambda) #Data Engineering #EC2 #Big Data #Datasets #ML (Machine Learning) #AI (Artificial Intelligence) #AWS (Amazon Web Services) #API (Application Programming Interface) #SQL Server #S3 (Amazon Simple Storage Service) #Strategy #SQL (Structured Query Language) #Data Lake #Kafka (Apache Kafka)
Role description
Data Engineer (API Ingestion & Data Lake)
Pay Range: $50-$60/hour DOE
We are seeking a hands-on, mid-level Data Engineer to support a security-focused data lake initiative. This is a contract, project-based role focused on building reliable API ingestion pipelines, creating structured SQL datasets, and preparing data for downstream analytics and future AI use cases. This role is execution-heavy and ideal for a builder who enjoys owning data pipelines end-to-end.
What Success Looks Like:
You bring 2+ years of experience building API-based ingestion pipelines at scale, managing data across multiple sources (14 total) and supporting large, complex datasets with thousands of fields.
Role Details
- Contract Length: 6 months (likely extension, no guaranteed conversion)
- Location: Remote (U.S.-based)
- Work Type: Individual contributor, hands-on technical role
- Start: ASAP
- Interview: Wed. Feb 4th and Thurs. Feb 5th
Core Requirement (Non-Negotiable)
What You Will Do
- Build and own API-based ingestion pipelines pulling data from third-party systems
- Handle authentication, pagination, retries, failures, and schema changes
- Ingest, transform, and structure raw data into SQL-based datasets
- Validate incoming data for accuracy, completeness, and consistency
- Create and maintain clear documentation for datasets, fields, and ingestion logic
- Manage and support datasets across multiple data sources (approx. 14 total)
- Prepare clean, well-structured data for downstream analytics and AI consumption
- Operate independently and take ownership of deliverables with minimal hand-holding
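As an illustration of the pagination, retry, and failure handling described above, here is a minimal sketch. The `fetch_page` callable, cursor scheme, and backoff parameters are illustrative assumptions, not part of the role description:

```python
import time

def fetch_all_pages(fetch_page, max_retries=3, backoff=0.1):
    """Pull every record from a cursor-paginated API, retrying transient failures.

    fetch_page(cursor) is a hypothetical callable returning (items, next_cursor),
    where next_cursor is None on the final page.
    """
    cursor = None
    while True:
        for attempt in range(max_retries):
            try:
                items, cursor = fetch_page(cursor)
                break
            except ConnectionError:
                # Give up only after exhausting retries; otherwise back off exponentially.
                if attempt == max_retries - 1:
                    raise
                time.sleep(backoff * 2 ** attempt)
        yield from items
        if cursor is None:
            return
```

In practice each page would also be landed as an immutable raw payload before transformation, so failed runs can be replayed without re-calling the upstream API.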
Required Skills & Experience
- 2+ years of hands-on experience writing and maintaining API ingestion pipelines in a big data environment, managing data across 14 known sources and supporting datasets with 10–15K+ fields
- 3–6 years of hands-on experience as a Data Engineer or similar role
- Strong experience building API-based ingestion pipelines
- Proficiency with SQL / SQL Server and relational data modeling
- Experience working in AWS (S3, Lambda, EC2, RDS, or similar)
- Ability to validate and QA large, complex datasets
- Comfortable working with large field counts and evolving schemas
- Strong documentation and communication skills
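As a sketch of the dataset validation and QA the requirements above call for, here is a minimal completeness check. Field names and the notion of "empty" used here are hypothetical assumptions:

```python
def validate_rows(rows, required_fields):
    """Flag rows that fail a basic completeness check.

    Returns a list of (row_index, missing_fields) pairs for rows where any
    required field is absent, None, or an empty string.
    """
    failures = []
    for i, row in enumerate(rows):
        missing = [f for f in required_fields if row.get(f) in (None, "")]
        if missing:
            failures.append((i, missing))
    return failures
```

At the 10–15K-field scale mentioned in this posting, checks like this would typically be driven by a schema definition rather than a hand-maintained field list.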
Nice to Have
- Exposure to security, device, endpoint, or asset data
- Experience with tools such as Qualys, CrowdStrike, Intune, JAMF, Wiz, or SCCM
- Familiarity with data lake concepts (raw vs curated layers)
- Experience supporting analytics or AI teams (data prep only, not model building)
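The raw-vs-curated layering mentioned above is often expressed through object-store key conventions: raw keeps immutable, as-received payloads, while curated holds cleaned, analytics-ready tables. A hypothetical sketch (paths, partition scheme, and source names are illustrative assumptions):

```python
from datetime import date

def raw_key(source, run_date, page):
    # Raw layer: immutable payloads exactly as received, partitioned
    # by source system and ingestion date for easy replay.
    return f"raw/{source}/dt={run_date.isoformat()}/page-{page:04d}.json"

def curated_key(table, run_date):
    # Curated layer: validated, typed, deduplicated datasets ready
    # for downstream analytics and AI consumption.
    return f"curated/{table}/dt={run_date.isoformat()}/part-0000.parquet"
```

For example, `raw_key("qualys", date(2026, 4, 21), 3)` yields `raw/qualys/dt=2026-04-21/page-0003.json`.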
What This Role Is NOT
- Not an architecture-only or strategy-only role
- Not focused on AI/ML model development
- Not a streaming/Kafka-heavy platform engineering role
- Not a people management position
Ideal Candidate Profile
The ideal candidate is a practical builder who enjoys solving messy data problems, can clearly explain how they've built ingestion pipelines in the past, and is comfortable working in a fast-moving, project-based environment.
EEO Statement:
Brooksource is an equal opportunity employer that does not discriminate on the basis of actual or perceived race, color, creed, religion, national origin, ancestry, citizenship status, age, sex or gender (including pregnancy, childbirth, lactation and related medical conditions), gender identity or gender expression, sexual orientation, marital status, military service and veteran status, physical or mental disability, protected medical condition as defined by applicable state or local law, genetic information, or any other characteristic protected by applicable federal, state, or local laws and ordinances.
Pay Disclaimer:
The pay range for this job level is a general guideline only and not a guarantee of compensation or salary. Additional factors considered in extending an offer include (but are not limited to) responsibilities of the job, education, experience, knowledge, skills, and abilities, as well as internal equity, alignment with market data, applicable bargaining agreement (if any), or other law.
Benefits & Perks:
Brooksource offers competitive medical, dental, vision, Health Savings Account, Dependent Care FSA, and supplemental coverage with plans that can fit each employee's needs. We offer a 401k plan that includes a company match and is fully vested after you become eligible, paid time off, and sick time. Paid company holidays are subject to hour requirements and completion of the new hire period (13 weeks). We also offer an Employee Assistance Program (EAP) that provides services like virtual counseling, financial services, legal services, life coaching, etc.