

CoreTek Labs
Senior Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This is a long-term Senior Data Engineer contract position based in Cupertino, CA (hybrid), Austin, TX, or Seattle, WA, paying $65/hr. It requires 12+ years of experience, advanced Python and SQL skills, and familiarity with ETL processes and orchestration tools.
Country: United States
Currency: $ USD
Day rate: 520
Date: November 11, 2025
Duration: Unknown
Location: Hybrid
Contract: Unknown
Security: Unknown
Location detailed: Austin, TX
Skills detailed: #Security #ML (Machine Learning) #Spark (Apache Spark) #Databricks #Kubernetes #Computer Science #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Data Quality #Data Engineering #PySpark #Python #GIT #Microservices #Pandas #Scala #Snowflake #Data Pipeline #Airflow #Data Science #Complex Queries #Docker #Compliance #Data Modeling #Version Control #Schema Design #Observability
Role description
Job Title: Data Engineering & Analytics
Location: Cupertino, CA (Hybrid) / Austin, TX / Seattle, WA
Type: Long term contract
Exp: 12+ Years
Rate: $65/hr
Only H1B and H4 EAD visa holders; passport (PP) number is mandatory.
Minimum Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
• 3–7 years of experience in software or data engineering.
• Advanced proficiency in Python (Pandas, PySpark, or similar frameworks).
• Strong SQL expertise: ability to write and optimize complex queries and stored procedures.
• Proven experience with data modeling, schema design, and performance tuning.
• Experience building or orchestrating workflows using Airflow, Dagster, or similar tools.
• Solid understanding of APIs, CI/CD pipelines, Git, and containerization (Docker/Kubernetes).
Key Responsibilities:
• Design, build, and optimize ETL/ELT data pipelines using Python, SQL, and modern orchestration tools.
• Develop and maintain data models, APIs, and microservices that enable analytical and operational use cases.
• Work closely with cross-functional partners (Data Science, Product, Finance, and Operations) to translate business needs into engineering solutions.
• Apply software engineering best practices (version control, CI/CD, testing, observability) to data workflows.
• Optimize data quality, scalability, and latency across distributed systems (Snowflake, Spark, Databricks, etc.).
• Participate in architecture discussions on data warehousing, event streaming, and ML data pipelines.
• Ensure compliance with Apple's privacy, security, and governance standards in all data operations.






