

Lawrence Harvey
Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer on a long-term contract-to-hire basis, offering $800/day in a hybrid setting (4 days remote, 1 day onsite in Long Beach, CA). Key skills include Python, SQL, AWS, and ETL/ELT pipeline experience.
Country: United States
Currency: $ USD
Day rate: $800
Date: February 4, 2026
Duration: Unknown
Location: Hybrid
Contract: Unknown
Security: Unknown
Location detailed: Long Beach, CA
Skills detailed: #GIT #Databases #Python #Data Quality #AWS (Amazon Web Services) #Monitoring #Data Pipeline #Data Engineering #BI (Business Intelligence) #Data Processing #GitHub #SQL (Structured Query Language) #DataOps #Data Lake #Airflow #ETL (Extract, Transform, Load) #dbt (data build tool) #Scala #Batch #Jenkins #Cloud #Data Warehouse #Data Architecture
Role description
Long-term contract-to-hire | Hybrid: 4 days remote, 1 day onsite (Long Beach, CA)
We're hiring a hands-on Data Engineer to support and evolve a modern data platform within an enterprise environment that's investing heavily in its data foundations. This is a long-term contract-to-hire role, offering stability, meaningful ownership, and a clear path into a permanent position.
This role sits within an established technology organisation and is focused on building, running, and improving production-grade data pipelines that support both operational and analytical use cases. You'll work closely with a Data Architect and BI team, turning architectural direction into reliable, scalable data systems.
What you'll be doing
• Design, build, and maintain Python and SQL-based ETL / ELT pipelines (a minimal sketch follows this list)
• Support both batch and near real-time data processing use cases
• Integrate data from core operational systems, including enterprise databases such as DB2
• Build and maintain cloud-based data platforms (AWS preferred), including data warehouses and data lakes
• Implement orchestration, monitoring, and alerting to ensure pipeline reliability and performance
• Apply CI/CD and DataOps practices to data workflows
• Troubleshoot pipeline failures, data quality issues, and performance bottlenecks
• Collaborate closely with data architecture, BI, and IT teams to deliver well-governed data solutions
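To give a flavour of the day-to-day work, here is a minimal sketch of the kind of Python/SQL ETL step described above: extract from a DB2 table, apply a simple data-quality gate, and land the result in an S3-based data lake. It is illustrative only; the table, bucket, and connection details are hypothetical, and the choice of ibm_db_dbi, pandas, and boto3 is an assumption rather than the team's actual stack.

```python
import io

import boto3         # AWS SDK for Python
import ibm_db_dbi    # DB-API wrapper for IBM Db2 (part of the ibm_db package)
import pandas as pd

# Hypothetical connection and bucket details -- a real pipeline would
# read these from configuration or a secrets manager.
DB2_DSN = (
    "DATABASE=SAMPLE;HOSTNAME=db2.internal;PORT=50000;"
    "PROTOCOL=TCPIP;UID=etl_user;PWD=********;"
)
S3_BUCKET = "example-data-lake"


def extract_orders(conn) -> pd.DataFrame:
    """Pull the last day's orders from a (hypothetical) operational table."""
    query = """
        SELECT order_id, customer_id, order_ts, amount
        FROM orders
        WHERE order_ts >= CURRENT TIMESTAMP - 1 DAY
    """
    return pd.read_sql(query, conn)


def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Basic cleanup plus a simple data-quality gate."""
    df = df.dropna(subset=["order_id", "amount"])
    df["amount"] = df["amount"].astype(float)
    if df.empty:
        raise ValueError("no rows survived cleaning; failing the run")
    return df


def load_to_s3(df: pd.DataFrame, key: str) -> None:
    """Land the cleaned frame in the data lake as CSV."""
    buf = io.StringIO()
    df.to_csv(buf, index=False)
    boto3.client("s3").put_object(
        Bucket=S3_BUCKET, Key=key, Body=buf.getvalue().encode("utf-8")
    )


if __name__ == "__main__":
    conn = ibm_db_dbi.connect(DB2_DSN)
    try:
        frame = transform(extract_orders(conn))
    finally:
        conn.close()
    load_to_s3(frame, "raw/orders/orders_daily.csv")
```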
Tech environment
• Python and SQL as core languages
• Cloud data platforms (AWS preferred)
• ETL / ELT orchestration and workflow tooling (e.g. Airflow, dbt, or similar; see the sketch after this list)
• Streaming or event-driven data technologies
• CI/CD tooling (Git, Jenkins, GitHub Actions)
• Enterprise data systems, including DB2
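To make the orchestration tooling concrete, here is a minimal Airflow DAG sketch wiring an extract-transform-load sequence with retries and failure alerting. It assumes Airflow 2.x (on older versions the schedule argument is called schedule_interval); the DAG id, schedule, and task bodies are hypothetical placeholders, not details from the posting.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


# Placeholder task bodies -- in practice these would call into the
# pipeline code (extract from DB2, transform, load to the warehouse).
def extract():
    print("extracting from source systems")


def transform():
    print("applying transformations and data-quality checks")


def load():
    print("loading into the warehouse / data lake")


with DAG(
    dag_id="orders_daily",          # hypothetical DAG name
    start_date=datetime(2026, 1, 1),
    schedule="@daily",              # daily batch cadence
    catchup=False,
    default_args={
        "retries": 2,                         # rerun transient failures
        "retry_delay": timedelta(minutes=5),
        "email_on_failure": True,             # simple alerting hook
    },
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task  # linear dependency chain
```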
What they're looking for
• Strong hands-on experience as a Data Engineer
• Excellent Python and SQL skills
• Experience building and supporting production data pipelines
• Familiarity with cloud-based data platforms and modern data architectures
• Comfort working in enterprise environments with a mix of modern and legacy systems
• A pragmatic, engineering-led mindset focused on reliability and ownership
This is not a BI or reporting role. It's for someone who enjoys owning pipelines end to end and working close to the systems the business relies on.
Please apply to be considered.





