Staff Technical Data Analyst

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Staff Technical Data Analyst in San Diego, CA, on a W2 contract, requiring 8+ years of ETL/data pipeline experience. Key skills include Python, SQL, AWS, and big data expertise. The hybrid arrangement requires onsite presence 3 days/week.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
816
-
🗓️ - Date discovered
May 29, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Hybrid
-
📄 - Contract type
W2 Contractor
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
San Diego Metropolitan Area
-
🧠 - Skills detailed
#Data Pipeline #AWS (Amazon Web Services) #Security #Redshift #SQL (Structured Query Language) #Data Science #Data Lake #Data Analysis #Batch #Athena #Anomaly Detection #PySpark #Spark (Apache Spark) #Python #Big Data #Datasets #Automation #ETL (Extract, Transform, Load) #Data Engineering
Role description
Job Title: Staff Technical Data Analyst – Identity Analytics
Location: San Diego, CA (Hybrid – Onsite 3 days/week)
Employment Type: W2 Contract Only (No C2C)

Overview:
We are seeking a hands-on Staff Technical Data Analyst to join the Identity Analytics team. This is a high-impact role for someone who thrives in a collaborative, data-driven environment and enjoys solving complex problems at scale. You will partner closely with data engineers, analysts, data scientists, and product leaders to build and enhance data pipelines and analytics platforms supporting advanced identity and security initiatives. This role is critical in helping the team build cutting-edge capabilities, including real-time anomaly detection, Bayesian experimentation, and self-serve analytics tooling.

Responsibilities:
• Design, build, and maintain ETL pipelines using PySpark and SQL across high-volume batch and streaming datasets.
• Translate clickstream and transactional data into actionable, trusted datasets for analysts and stakeholders.
• Collaborate cross-functionally with engineering, data science, and product teams to gather requirements and deliver robust solutions.
• Write and implement technical specifications based on business needs.
• Act as a subject matter expert for data pipeline architecture and support technical troubleshooting across the team.

Required Skills & Experience:
• 8+ years of experience working with ETL/data pipelines to support analytics use cases.
• Advanced proficiency in Python and SQL.
• Expertise working with big data, data lakes, and processing large datasets.
• Strong familiarity with AWS services such as Redshift, Athena, and core AWS concepts.
• Experience working with clickstream data and related transformation logic.

Preferred Qualifications:
• Exceptional problem-solving skills and a systems-thinking mindset.
• Excellent communication skills and the ability to influence technical and business partners.
• Proactive learner with a passion for process improvement and automation.
• Highly organized and capable of managing multiple projects simultaneously.

Interview Process:
• One 60-minute round including a mix of behavioral and technical questions.

Additional Details:
• W2 only – No C2C or third-party arrangements.
• Hybrid role – Must be available to work onsite in San Diego 3 days per week.
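Purely as an illustration of the kind of hands-on PySpark ETL work the responsibilities above describe (turning raw clickstream events into trusted, analyst-facing datasets in the data lake), here is a minimal batch rollup sketch. The bucket paths, column names, and aggregation logic are hypothetical assumptions for the example, not details taken from the role.

```python
# Minimal PySpark batch-ETL sketch (illustrative only; paths and columns are hypothetical).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("clickstream_daily_rollup").getOrCreate()

# Read one day of raw clickstream events from a data lake location (hypothetical path).
events = spark.read.json("s3://example-data-lake/clickstream/ingest_date=2025-05-29/")

# Basic cleanup: keep well-formed events and normalize the event timestamp.
clean = (
    events
    .where(F.col("user_id").isNotNull() & F.col("event_type").isNotNull())
    .withColumn("event_ts", F.to_timestamp("event_time"))
    .withColumn("event_date", F.to_date("event_ts"))
)

# Roll raw events up into a trusted, analyst-facing daily dataset:
# one row per user, event type, and day, with event counts.
daily_rollup = (
    clean
    .groupBy("event_date", "user_id", "event_type")
    .agg(F.count("*").alias("event_count"))
)

# Write the curated dataset back to the lake, partitioned by date so
# downstream Athena / Redshift Spectrum queries can prune partitions.
(
    daily_rollup
    .write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-curated-zone/clickstream_daily_rollup/")
)
```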