Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer contract position in Irvine, CA, offering a day rate of $560. It requires 10–15 years of software engineering experience, proficiency in Python, and expertise in ETL/ELT pipelines and cloud data platforms (AWS, Azure, GCP).
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
560
🗓️ - Date discovered
September 19, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
On-site
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Irvine, CA
🧠 - Skills detailed
#Automation #AWS (Amazon Web Services) #Python #Observability #Java #Strategy #Scala #GCP (Google Cloud Platform) #Data Processing #Data Integration #Cloud #Data Science #Batch #Data Framework #ETL (Extract, Transform, Load) #Monitoring #Deployment #Distributed Computing #Azure #Programming #Data Engineering #Data Quality #ML (Machine Learning)
Role description
Job Title: Data Engineer
Location: Irvine, CA (Onsite)
Employment: Contract

Job Description:
As a Senior Staff Software Engineer, you will collaborate closely with the data engineering team to develop and maintain distributed data infrastructure, ETL/ELT pipelines, data applications, and integration solutions. You will work with brilliant minds in the industry, take on complex use cases with a steep learning curve, and build products from scratch while modernizing existing applications.

Qualifications:
• 10–15 years of proven software engineering experience, with a focus on data infrastructure and engineering.
• Expertise in object-oriented programming, design patterns, algorithm optimization, and first-principles problem-solving.
• Strong experience with parsing unstructured log data, real-time data processing, distributed computing frameworks, and streaming data frameworks.
• Proficiency in Python; experience with Java or Scala is a plus.
• Deep expertise in data engineering technologies, including ETL/ELT pipelines, data integration, and operational monitoring.
• Experience drafting proofs of concept (POCs) and collaborating cross-functionally to develop prototypes and production-ready solutions.
• Proficiency with cloud-based data platforms (AWS, Azure, or GCP).

Key Responsibilities:
• Design, build, and maintain event-driven distributed data infrastructure and data applications.
• Develop and optimize robust ETL/ELT pipelines to support batch and real-time data workflows (an illustrative sketch follows this list).
• Integrate and normalize diverse data sources, ensuring high standards of data quality, accuracy, and consistency.
• Lead the design and implementation of real-time data processing systems for analytics, operational intelligence, and reporting use cases.
• Collaborate cross-functionally with data scientists, ML engineers, and product teams to design end-to-end data solutions.
• Establish best practices for data engineering, including testing, monitoring, deployment automation, and observability.
• Mentor engineers and contribute to architectural decisions, technical strategy, and roadmap planning for the data platform.
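To make the day-to-day work concrete, here is a minimal Python sketch of the kind of batch ETL flow the posting describes: parsing unstructured log lines into typed records and normalizing them for a downstream store. The log format, field names, and helpers (LOG_PATTERN, LogEvent, extract, transform, load) are hypothetical illustrations chosen for this sketch, not part of the role's actual stack.

```python
import re
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Iterable, Iterator

# Hypothetical log format, assumed for illustration only; a real
# engagement would define sources and schemas up front.
LOG_PATTERN = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})\s+"
    r"(?P<level>[A-Z]+)\s+"
    r"(?P<service>[\w-]+)\s+"
    r"(?P<message>.*)"
)

@dataclass
class LogEvent:
    ts: datetime
    level: str
    service: str
    message: str

def extract(lines: Iterable[str]) -> Iterator[str]:
    """Extract: yield raw, non-empty log lines from any line source."""
    for line in lines:
        line = line.strip()
        if line:
            yield line

def transform(raw_lines: Iterable[str]) -> Iterator[LogEvent]:
    """Transform: parse unstructured lines into typed records,
    dropping lines that do not match the expected format."""
    for line in raw_lines:
        match = LOG_PATTERN.match(line)
        if match is None:
            continue  # in production, route to a dead-letter sink instead
        yield LogEvent(
            ts=datetime.fromisoformat(match["ts"]).replace(tzinfo=timezone.utc),
            level=match["level"],
            service=match["service"],
            message=match["message"],
        )

def load(events: Iterable[LogEvent]) -> list[dict]:
    """Load: normalize records for a downstream sink. This sketch just
    materializes dicts; a real pipeline would write to object storage,
    a warehouse, or a message queue."""
    return [
        {"ts": e.ts.isoformat(), "level": e.level,
         "service": e.service, "message": e.message}
        for e in events
    ]

if __name__ == "__main__":
    sample = [
        "2025-09-19T08:15:02 INFO auth-service user login succeeded",
        "malformed line that the transform step will skip",
        "2025-09-19T08:15:03 ERROR billing-service timeout calling ledger",
    ]
    for row in load(transform(extract(sample))):
        print(row)
```

In a production setting, each stage would typically run on a distributed or streaming framework (for example, Spark or a Kafka consumer on AWS, Azure, or GCP), with the monitoring, deployment automation, and data-quality checks the responsibilities above call for.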