

Cloud Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Cloud Data Engineer on a short-term contract (less than a month) with a pay rate of $40-60/hr. Key skills include Python, PySpark, and Airflow, along with GCP and/or AWS experience, ideally with healthcare data.
Country: United States
Currency: $ USD
Day rate: $480
Date discovered: May 31, 2025
Project duration: Less than a month
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: Los Angeles, CA
Skills detailed: #Airflow #Spark (Apache Spark) #Cloud #BigQuery #Data Pipeline #Data Warehouse #GCP (Google Cloud Platform) #Kubernetes #ETL (Extract, Transform, Load) #Data Engineering #Redshift #Big Data #Storage #AWS (Amazon Web Services) #Python #PySpark
Role description
Contract Data Engineer (Short-Term, US Hours)
Company: Klariva (Healthcare)
Job Overview
We are seeking a skilled and motivated Data Engineer to join our stealth startup for a short-term contract (with possible extension). The ideal candidate will be responsible for designing, developing, and maintaining robust data pipelines and architectures to support our data-driven initiatives. This role requires a strong understanding of big data technologies and cloud services.
Responsibilities:
• Design, build, and maintain ETL pipelines using PySpark
• Orchestrate workflows using Google Cloud Composer or Airflow on GKE (illustrative sketches of both follow this list)
• Monitor and optimize job performance, data reliability, and resource usage
• Design and implement data models for efficient consumption and storage
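For context on what these responsibilities typically look like in practice, here is a rough Python sketch of a minimal PySpark ETL job of the kind described: it reads raw newline-delimited JSON from Cloud Storage, applies basic cleansing, aggregates, and loads the result into BigQuery via the Spark-BigQuery connector. All bucket, table, and column names (claims, member_id, analytics.daily_claims, and so on) are hypothetical placeholders, not details from this posting.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

def run_etl(input_path: str, output_table: str, temp_bucket: str) -> None:
    spark = SparkSession.builder.appName("claims_etl").getOrCreate()

    # Extract: raw newline-delimited JSON landed in a Cloud Storage prefix
    raw = spark.read.json(input_path)

    # Transform: de-duplicate, type the date column, drop obviously bad rows
    cleaned = (
        raw.dropDuplicates(["claim_id"])
           .withColumn("service_date", F.to_date("service_date"))
           .filter(F.col("amount") > 0)
    )
    daily = cleaned.groupBy("member_id", "service_date").agg(
        F.sum("amount").alias("total_amount"),
        F.count("claim_id").alias("claim_count"),
    )

    # Load: write the aggregate to BigQuery through the Spark-BigQuery connector
    (daily.write.format("bigquery")
          .option("table", output_table)
          .option("temporaryGcsBucket", temp_bucket)  # staging bucket, placeholder
          .mode("overwrite")
          .save())

if __name__ == "__main__":
    run_etl("gs://raw-bucket/claims/*.json", "analytics.daily_claims", "tmp-bucket")

On Cloud Composer or Airflow on GKE, a job like this would normally be wrapped in a DAG. A minimal sketch, assuming the code above lives in a hypothetical importable module named etl_job (in production, submitting via a Dataproc or KubernetesPodOperator is more common than running Spark in-process on the worker):

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

from etl_job import run_etl  # hypothetical module containing the PySpark job above

with DAG(
    dag_id="daily_claims_etl",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Single task: run the ETL once per day over the latest raw files
    PythonOperator(
        task_id="run_claims_etl",
        python_callable=run_etl,
        op_kwargs={
            "input_path": "gs://raw-bucket/claims/*.json",
            "output_table": "analytics.daily_claims",
            "temp_bucket": "tmp-bucket",
        },
    )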
Skills and Qualifications:
• A minimum of 4-5 years of professional experience with the following:
  • Python
  • PySpark
  • Airflow
  • GCP and/or AWS
• Strong experience designing and building data pipelines that scale
• Hands-on experience configuring and managing Kubernetes clusters
• Deep understanding of BigQuery/Redshift data warehouses
Nice-to-haves:
• Prior experience with healthcare data
Pay: $40-60/hr DOE