

Data Engineer- W2 Position
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer in Sunnyvale, CA, on a contract basis. Required skills include 5–8+ years in Data Engineering, proficiency in Big Data frameworks, ETL tools, cloud platforms (GCP preferred), and SQL. Experience in retail or e-commerce is a plus.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
🗓️ - Date discovered
August 28, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
On-site
-
📄 - Contract type
W2 Contractor
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Sunnyvale, CA
-
🧠 - Skills detailed
#Redshift #Presto #Schema Design #Scala #Data Warehouse #Compliance #Version Control #Java #Teradata #Data Quality #Azure #ML (Machine Learning) #Informatica #SQL (Structured Query Language) #Talend #ETL (Extract, Transform, Load) #Spark (Apache Spark) #Agile #Data Modeling #Kafka (Apache Kafka) #Big Data #Cloud #Data Engineering #Snowflake #Data Framework #GCP (Google Cloud Platform) #Data Science #AWS (Amazon Web Services) #Data Security #Hadoop #GIT #BigQuery #Data Pipeline #Security #Python #Datasets
Role description
Job Title: Data Engineer
Location: Sunnyvale, CA (Onsite)
Employment Type: Contract
Position Overview:
We are seeking a skilled Data Engineer to join our client's technology team in Sunnyvale, CA. The ideal candidate will have hands-on experience designing, building, and maintaining scalable data pipelines and platforms that support business analytics, reporting, and advanced data-driven solutions at enterprise scale.
Responsibilities:
• Design, build, and optimize data pipelines for ingestion, transformation, and processing of large-scale datasets.
• Develop ETL/ELT frameworks to integrate structured and unstructured data from multiple sources.
• Collaborate with data scientists, analysts, and product teams to deliver high-quality, reliable datasets.
• Implement data models, partitioning strategies, and performance optimizations for analytical workloads.
• Ensure data quality, governance, and security across all data systems.
• Troubleshoot data-related issues and optimize existing solutions for scalability and performance.
• Contribute to architectural decisions around data platforms and cloud adoption.
Required Skills & Experience:
• 5–8+ years of experience in Data Engineering.
• Strong hands-on experience with:
• Big Data frameworks (Hadoop, Spark, Hive, Presto)
• ETL tools (Informatica, Talend, or custom ETL in Python/Scala/Java)
• Cloud platforms (GCP, Azure, or AWS; GCP preferred for the Walmart tech stack)
• Strong proficiency in SQL (analytical queries, performance tuning).
• Experience with data warehouse technologies (BigQuery, Snowflake, Teradata, Redshift).
• Solid knowledge of data modeling, schema design, and best practices.
• Experience with version control (Git), CI/CD pipelines, and Agile development practices.
• Familiarity with data security, governance, and compliance frameworks.
Nice-to-Have:
• Experience working in retail, e-commerce, or supply chain domains.
• Familiarity with real-time streaming technologies (Kafka, Flink, Pub/Sub).
• Exposure to machine learning data pipelines or MLOps environments.