Intellectt Inc

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineering & Analytics Consultant in Cupertino, CA, Austin, TX, or Seattle, WA, requiring 4-5 years of experience, advanced Python and SQL skills, and prior employment at Apple. Contract length and pay rate are unspecified.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
December 4, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Austin, TX
-
🧠 - Skills detailed
#Python #Version Control #Data Quality #Microservices #Spark (Apache Spark) #Schema Design #Data Science #SQL (Structured Query Language) #Kubernetes #Complex Queries #Scala #Data Pipeline #Pandas #Compliance #Airflow #ETL (Extract, Transform, Load) #GIT #Snowflake #PySpark #Observability #Security #Docker #Data Modeling #Computer Science #Data Engineering #Databricks #ML (Machine Learning)
Role description
Hello,

Role: Data Engineering & Analytics Consultant
Location: Cupertino, CA / Austin, TX / Seattle, WA

Note: I am looking for ex-Apple candidates with data engineering experience and advanced proficiency in Python and SQL (4–5 years of experience).

Job Overview:
We are seeking a Software Engineer with strong SQL and Python skills to develop reliable data pipelines, optimize complex workflows, and deliver scalable data products that empower decision-making across Apple's ecosystem.

Key Responsibilities:
• Design, build, and optimize ETL/ELT data pipelines using Python, SQL, and modern orchestration tools.
• Develop and maintain data models, APIs, and microservices that enable analytical and operational use cases.
• Work closely with cross-functional partners (Data Science, Product, Finance, and Operations) to translate business needs into engineering solutions.
• Apply software engineering best practices (version control, CI/CD, testing, observability) to data workflows.
• Optimize data quality, scalability, and latency across distributed systems (Snowflake, Spark, Databricks, etc.).
• Participate in architecture discussions on data warehousing, event streaming, and ML data pipelines.
• Ensure compliance with Apple's privacy, security, and governance standards in all data operations.

Minimum Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
• 3–7 years of experience in software or data engineering.
• Advanced proficiency in Python (Pandas, PySpark, or similar frameworks).
• Strong SQL expertise — ability to write and optimize complex queries and stored procedures.
• Proven experience with data modeling, schema design, and performance tuning.
• Experience building or orchestrating workflows using Airflow, Dagster, or similar tools.
• Solid understanding of APIs, CI/CD pipelines, Git, and containerization (Docker/Kubernetes).
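The core of the role is ETL/ELT work in Python and SQL with attention to data quality. As a minimal sketch of that pattern (the record names, sample data, and SQLite target are illustrative assumptions, not anything specified by the posting):

```python
import sqlite3

# Hypothetical raw records standing in for the extract step from an upstream source.
RAW_ORDERS = [
    {"order_id": 1, "amount": "19.99", "region": "US "},
    {"order_id": 2, "amount": "5.00", "region": "eu"},
    {"order_id": 3, "amount": "n/a", "region": "US"},  # malformed row, should be dropped
]

def transform(rows):
    """Clean types and normalize fields, dropping rows that fail validation."""
    cleaned = []
    for row in rows:
        try:
            amount = float(row["amount"])
        except ValueError:
            continue  # basic data-quality gate: skip unparseable amounts
        cleaned.append((row["order_id"], amount, row["region"].strip().upper()))
    return cleaned

def load(conn, rows):
    """Load cleaned rows into the target table; INSERT OR REPLACE keeps reruns idempotent."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(order_id INTEGER PRIMARY KEY, amount REAL, region TEXT)"
    )
    conn.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    load(conn, transform(RAW_ORDERS))
    print(conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone())
```

In production this kind of extract/transform/load step would typically run as a task inside an orchestrator such as Airflow or Dagster, with the SQLite target swapped for a warehouse like Snowflake.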