

Call Quest Solution
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with a 6-month contract, offering a pay rate of “$X/hour.” Required skills include 5+ years of experience, strong programming in Python/Scala/Java, advanced SQL, and familiarity with big data technologies and cloud platforms.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 25, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
California, United States
-
🧠 - Skills detailed
#Data Architecture #Hadoop #Scala #Security #ML (Machine Learning) #Snowflake #Java #Airflow #Data Accuracy #Data Lake #Big Data #Data Engineering #Compliance #Python #Computer Science #AWS (Amazon Web Services) #GCP (Google Cloud Platform) #BigQuery #Programming #Data Processing #Azure #Data Science #SQL (Structured Query Language) #Data Governance #Data Modeling #Redshift #Kafka (Apache Kafka) #Cloud #Data Pipeline #Spark (Apache Spark) #Apache Spark #Apache Airflow #Databases #ETL (Extract, Transform, Load) #Luigi
Role description
Job Overview
We are looking for a skilled Data Engineer to design, build, and maintain robust data pipelines and infrastructure. This role requires a strong foundation in data architecture, data modeling, and large-scale data processing. The ideal candidate will work closely with data scientists, analysts, and business stakeholders to ensure data is accessible, reliable, and optimized for performance.
Key Responsibilities
• Design, develop, and maintain scalable ETL/ELT pipelines
• Build and optimize data architectures and data models
• Integrate data from multiple sources into centralized data platforms
• Ensure data accuracy, consistency, and integrity
• Collaborate with cross-functional teams to understand data requirements
• Optimize data processing and query performance
• Implement data governance, security, and compliance standards
• Monitor and troubleshoot data pipeline issues
• Document data flows, processes, and architecture
Required Skills & Qualifications
• Bachelor’s degree in Computer Science, Data Engineering, or a related field
• 5+ years of experience in data engineering
• Strong programming skills in Python, Scala, or Java
• Advanced SQL skills and experience with relational databases
• Experience with big data technologies such as Apache Spark and Hadoop
• Familiarity with data pipeline tools like Apache Airflow or Luigi
• Experience with cloud platforms (AWS, Azure, or GCP)
• Knowledge of data warehousing solutions (Redshift, Snowflake, BigQuery)
Preferred Qualifications
• Experience with real-time data streaming tools like Kafka
• Knowledge of data lake architectures
• Familiarity with machine learning workflows
• Experience with containerization and orchestration tools