

Creamos Solutions Inc
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer; the contract length and pay rate are not specified. Key skills required include strong SQL expertise, GCP experience, proficiency in Python, and familiarity with Databricks and Apache Spark.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
December 21, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Chicago, IL
-
🧠 - Skills detailed
#BigQuery #Apache Spark #Data Science #Airflow #Data Analysis #Data Engineering #Data Modeling #Cloud #Data Quality #Databricks #SQL Queries #DevOps #Python #GIT #SQL (Structured Query Language) #Dataflow #Kafka (Apache Kafka) #Data Pipeline #Spark (Apache Spark) #ETL (Extract, Transform, Load) #Scala #GCP (Google Cloud Platform)
Role description
• Design, develop, and maintain robust ETL/ELT data pipelines (a minimal sketch follows this list)
• Write and optimize complex SQL queries for analytics use cases
• Work with GCP services such as BigQuery, GCS, and Dataflow
• Build and manage scalable data solutions using Databricks and Apache Spark
• Ensure data quality, reliability, and performance
• Collaborate with data analysts, data scientists, and business stakeholders
• Troubleshoot and resolve data pipeline and performance issues
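To give a flavor of the pipeline work described above, here is a minimal sketch of a batch ETL job: read raw events from GCS with Spark, clean them, and load them into BigQuery. The bucket, dataset, and column names are placeholders, and the write step assumes the spark-bigquery-connector is available on the cluster.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-etl").getOrCreate()

# Extract: raw CSV events landed in a GCS bucket (hypothetical path).
raw = spark.read.csv("gs://example-raw-bucket/events/*.csv", header=True)

# Transform: drop malformed rows, normalize timestamps, dedupe by id.
clean = (
    raw.dropna(subset=["event_id", "event_ts"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .dropDuplicates(["event_id"])
)

# Load: write to BigQuery via a temporary GCS staging bucket
# (requires the spark-bigquery-connector on the cluster).
(clean.write.format("bigquery")
      .option("table", "example_dataset.events_clean")
      .option("temporaryGcsBucket", "example-staging-bucket")
      .mode("append")
      .save())
```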
Required Skills
• Strong expertise in SQL (see the query example after this list)
• Hands-on experience with Google Cloud Platform (GCP)
• Experience with Databricks / Apache Spark
• Proficiency in Python
• Experience with data warehousing and data modeling concepts
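To illustrate the kind of SQL work the role calls for, below is a hedged example of a windowed deduplication query run through the BigQuery Python client. The project, dataset, table, and column names are illustrative, not taken from the posting.

```python
from google.cloud import bigquery

# Assumes application-default credentials and an existing table;
# all names below are hypothetical.
client = bigquery.Client()

# Keep only the latest record per event_id using a window function.
query = """
SELECT * EXCEPT(rn)
FROM (
  SELECT *,
         ROW_NUMBER() OVER (PARTITION BY event_id
                            ORDER BY event_ts DESC) AS rn
  FROM `example_project.example_dataset.events_clean`
)
WHERE rn = 1
"""

# Run the query and stream the deduplicated rows back.
for row in client.query(query).result():
    print(row.event_id, row.event_ts)
```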
Nice to Have
• Experience with Airflow or other orchestration tools (see the DAG sketch after this list)
• Exposure to streaming technologies (Kafka, Pub/Sub)
• Knowledge of CI/CD, Git, and DevOps practices
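As a sketch of the orchestration experience listed as nice-to-have, here is a minimal daily Airflow DAG chaining a hypothetical extract step and load step. The DAG id and task bodies are placeholders, and the syntax assumes Airflow 2.4+.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder body: pull raw events from the source system.
    print("extracting raw events")


def load():
    # Placeholder body: load cleaned events into the warehouse.
    print("loading events into the warehouse")


with DAG(
    dag_id="events_daily",          # hypothetical DAG id
    start_date=datetime(2025, 1, 1),
    schedule="@daily",              # Airflow 2.4+ name for schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```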