CoreTek Labs

Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This is a long-term Senior Data Engineer contract position based in Cupertino, CA (Hybrid), Austin, TX, or Seattle, WA, paying $65/hr. It requires 12+ years of experience, advanced Python and SQL skills, and familiarity with ETL processes and orchestration tools.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
520
-
🗓️ - Date
November 11, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Austin, TX
-
🧠 - Skills detailed
#Security #ML (Machine Learning) #Spark (Apache Spark) #Databricks #Kubernetes #Computer Science #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Data Quality #Data Engineering #PySpark #Python #GIT #Microservices #Pandas #Scala #Snowflake #Data Pipeline #Airflow #Data Science #Complex Queries #Docker #Compliance #Data Modeling #Version Control #Schema Design #Observability
Role description
Job Title: Data Engineering & Analytics
Location: Cupertino, CA (Hybrid) / Austin, TX / Seattle, WA
Type: Long-term contract
Experience: 12+ years
Rate: $65/hr
Only H1B & H4 EAD visa holders; PP number is mandatory.

Minimum Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
• 3–7 years of experience in software or data engineering.
• Advanced proficiency in Python (Pandas, PySpark, or similar frameworks).
• Strong SQL expertise: the ability to write and optimize complex queries and stored procedures.
• Proven experience with data modeling, schema design, and performance tuning.
• Experience building or orchestrating workflows using Airflow, Dagster, or similar tools.
• Solid understanding of APIs, CI/CD pipelines, Git, and containerization (Docker/Kubernetes).

Key Responsibilities:
• Design, build, and optimize ETL/ELT data pipelines using Python, SQL, and modern orchestration tools (a brief illustrative sketch follows this description).
• Develop and maintain data models, APIs, and microservices that enable analytical and operational use cases.
• Work closely with cross-functional partners (Data Science, Product, Finance, and Operations) to translate business needs into engineering solutions.
• Apply software engineering best practices (version control, CI/CD, testing, observability) to data workflows.
• Optimize data quality, scalability, and latency across distributed systems (Snowflake, Spark, Databricks, etc.).
• Participate in architecture discussions on data warehousing, event streaming, and ML data pipelines.
• Ensure compliance with Apple's privacy, security, and governance standards in all data operations.
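For candidates gauging fit, here is a minimal sketch of the kind of ETL orchestration work the responsibilities above describe, assuming Apache Airflow 2.4+ (TaskFlow API) and Pandas. It is illustrative only: the DAG id, source data, quality rules, and load target are hypothetical placeholders, not part of any actual CoreTek Labs codebase.

```python
# A minimal sketch, assuming Apache Airflow 2.4+ (TaskFlow API) and Pandas.
# The DAG id, source data, and load target are hypothetical placeholders.
from datetime import datetime

import pandas as pd
from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
def daily_orders_etl():
    """Hypothetical daily pipeline: extract raw orders, clean, load a summary."""

    @task
    def extract():
        # Stand-in for a real source (API call, S3 object, database query).
        return [
            {"order_id": 1, "amount": 120.0, "region": "west"},
            {"order_id": 2, "amount": -5.0, "region": "east"},  # bad row
        ]

    @task
    def transform(rows):
        df = pd.DataFrame(rows)
        # Basic data-quality rules: drop missing or non-positive amounts.
        df = df.dropna(subset=["amount"])
        df = df[df["amount"] > 0]
        # Aggregate to a per-region summary for downstream analytics.
        summary = df.groupby("region", as_index=False)["amount"].sum()
        return summary.to_dict(orient="records")

    @task
    def load(summary):
        # Stand-in for a warehouse write (e.g., a Snowflake or Databricks hook).
        print(f"Loading {len(summary)} summary rows")

    # TaskFlow infers the extract -> transform -> load dependency chain.
    load(transform(extract()))


daily_orders_etl()
```

In a production version of this role's work, the extract and load steps would use the connectors for the systems named above (Snowflake, Spark, Databricks), and the data-quality, testing, and observability practices listed in the responsibilities would apply throughout the pipeline.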