Lawrence Harvey

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a long-term contract-to-hire basis, offering $800/day in a hybrid setting (4 days remote, 1 day onsite in Long Beach, CA). Key requirements include Python, SQL, AWS, and ETL/ELT pipeline experience.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
800
🗓️ - Date
February 4, 2026
🕒 - Duration
Unknown
🏝️ - Location
Hybrid
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Long Beach, CA
🧠 - Skills detailed
#GIT #Databases #Python #Data Quality #AWS (Amazon Web Services) #Monitoring #Data Pipeline #Data Engineering #BI (Business Intelligence) #Data Processing #GitHub #SQL (Structured Query Language) #DataOps #Data Lake #Airflow #ETL (Extract, Transform, Load) #dbt (data build tool) #Scala #Batch #Jenkins #Cloud #Data Warehouse #Data Architecture
Role description
Long-term contract-to-hire | Hybrid: 4 days remote, 1 day onsite (Long Beach, CA)

We’re hiring a hands-on Data Engineer to support and evolve a modern data platform within an enterprise environment that’s investing heavily in its data foundations. This is a long-term contract-to-hire role, offering stability, meaningful ownership, and a clear path to a permanent position.

The role sits within an established technology organisation and is focused on building, running, and improving production-grade data pipelines that support both operational and analytical use cases. You’ll work closely with a Data Architect and the BI team, turning architectural direction into reliable, scalable data systems.

What you’ll be doing
• Design, build, and maintain Python- and SQL-based ETL/ELT pipelines
• Support both batch and near real-time data processing use cases
• Integrate data from core operational systems, including enterprise databases such as DB2
• Build and maintain cloud-based data platforms (AWS preferred), including data warehouses and data lakes
• Implement orchestration, monitoring, and alerting to ensure pipeline reliability and performance
• Apply CI/CD and DataOps practices to data workflows
• Troubleshoot pipeline failures, data quality issues, and performance bottlenecks
• Collaborate closely with data architecture, BI, and IT teams to deliver well-governed data solutions

Tech environment
• Python and SQL as core languages
• Cloud data platforms (AWS preferred)
• ETL/ELT orchestration and workflow tooling (e.g. Airflow, dbt, or similar)
• Streaming or event-driven data technologies
• CI/CD tooling (Git, Jenkins, GitHub Actions)
• Enterprise data systems, including DB2

What they’re looking for
• Strong hands-on experience as a Data Engineer
• Excellent Python and SQL skills
• Experience building and supporting production data pipelines
• Familiarity with cloud-based data platforms and modern data architectures
• Comfort working in enterprise environments with a mix of modern and legacy systems
• A pragmatic, engineering-led mindset focused on reliability and ownership

This is not a BI or reporting role. It’s for someone who enjoys owning pipelines end to end and working close to the systems the business relies on.

Please apply to be considered.