

Clara IT Systems
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a contract basis, offering a competitive pay rate. Key requirements include strong programming skills in Python, Java, or Scala; advanced SQL; and experience with AWS, Azure, or GCP. Familiarity with Apache Spark and data pipeline tools is essential.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
April 17, 2026
🕒 - Duration
Unknown
🏝️ - Location
Unknown
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Docker #Kubernetes #Datasets #ETL (Extract, Transform, Load) #Programming #Snowflake #BigQuery #Data Engineering #Cloud #Scala #Apache Spark #GCP (Google Cloud Platform) #Spark (Apache Spark) #Data Science #Data Pipeline #Java #SQL (Structured Query Language) #Redshift #PySpark #AWS (Amazon Web Services) #Batch #Data Quality #Python #Data Modeling #Azure #Kafka (Apache Kafka) #Airflow
Role description
Key Responsibilities:
• Design, build, and optimize scalable ETL/ELT pipelines (a PySpark sketch follows this list)
• Develop data pipelines for batch and real-time processing
• Work with large-scale structured & unstructured datasets
• Implement solutions across AWS / Azure / GCP environments
• Ensure data quality, governance, and performance optimization
• Collaborate with cross-functional teams including data scientists and analysts
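To make the pipeline responsibilities above concrete, here is a minimal PySpark batch ETL sketch. It is not taken from the role itself: the S3 paths, column names, and aggregation logic are invented placeholders.

```python
# A minimal PySpark batch ETL sketch. Paths and column names are
# hypothetical placeholders, not details from this posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw JSON events (schema inference kept simple for the sketch)
raw = spark.read.json("s3://example-bucket/raw/orders/")  # hypothetical path

# Transform: basic cleansing plus a daily aggregate
orders = (
    raw.filter(F.col("order_id").isNotNull())            # drop malformed rows
       .withColumn("order_date", F.to_date("created_at"))
)
daily = orders.groupBy("order_date").agg(
    F.count("order_id").alias("order_count"),
    F.sum("amount").alias("total_amount"),
)

# Load: write partitioned Parquet for downstream consumers
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_orders/"          # hypothetical path
)
```

The same DataFrame logic covers the real-time side of the role by swapping `spark.read` for `spark.readStream` with Structured Streaming.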
Required Skills:
• Strong programming in Python / Java / Scala
• Advanced SQL skills
• Experience with Apache Spark / PySpark
• Hands-on with at least one cloud platform (AWS / Azure / GCP)
• Experience with Airflow / Kafka or similar tools (see the Airflow sketch after this list)
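For the orchestration requirement, a minimal Airflow DAG sketch is shown below. The DAG id, schedule, and task body are illustrative assumptions, using Airflow 2.4+ syntax for the `schedule` argument.

```python
# A minimal Airflow DAG sketch for scheduling a daily pipeline run.
# DAG id, schedule, and the task callable are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def run_etl(**context):
    # Placeholder: in practice this would trigger the Spark job above,
    # e.g. via a SparkSubmitOperator or a cloud-native equivalent.
    print(f"Running ETL for {context['ds']}")

with DAG(
    dag_id="daily_orders_etl",       # hypothetical DAG id
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    etl = PythonOperator(task_id="run_etl", python_callable=run_etl)
```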
Good to Have:
• Snowflake / Redshift / BigQuery (a Snowflake load sketch follows this list)
• Docker / Kubernetes
• CI/CD pipelines
• Data modeling & warehousing concepts
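For the warehousing experience listed above, here is a hedged sketch of loading the curated Parquet into Snowflake with snowflake-connector-python; the account, credentials, stage, and table names are all invented.

```python
# A sketch of loading curated Parquet into Snowflake. All connection
# details, the stage, and the table name are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",   # hypothetical
    user="etl_user",             # hypothetical
    password="...",              # supply via a secrets manager in practice
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    # COPY INTO pulls the Parquet files staged in cloud storage
    cur.execute("""
        COPY INTO daily_orders
        FROM @curated_stage/daily_orders/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
finally:
    conn.close()
```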