Diligente Technologies

Technical Data Analyst

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Technical Data Analyst in San Francisco, CA, on a contract basis. Requires 8-10 years of experience, strong SQL and Python skills, and hands-on expertise with Databricks, ETL tools, and data visualization platforms like Tableau.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
March 27, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
San Francisco Bay Area
-
🧠 - Skills detailed
#Data Pipeline #Scripting #Data Accuracy #Datalakes #NumPy #Data Analysis #Azure #Data Quality #Migration #Data Governance #Visualization #Deployment #SQL Queries #Datasets #Kafka (Apache Kafka) #AWS (Amazon Web Services) #Cloud #Batch #Pandas #Spark (Apache Spark) #GCP (Google Cloud Platform) #ThoughtSpot #BI (Business Intelligence) #Data Exploration #Data Processing #GIT #Azure Databricks #DataStage #Automation #Data Modeling #Tableau #Databricks #Version Control #Data Engineering #SSAS (SQL Server Analysis Services) #ADF (Azure Data Factory) #Python #ETL (Extract, Transform, Load) #SQL (Structured Query Language)
Role description
Position: Technical Data Analyst
Location: San Francisco, CA (local only)
Type: Contract

Tech Stack
• Databricks / Azure Databricks
• Data lakes
• DataStage
• Hands-on SQL
• The team is running a migration project, and this person will be heavily involved in it

Job Summary
We are seeking a Technical Data Analyst with 8-10 years of experience, strong expertise in SQL and Python, hands-on experience with modern data platforms, and the ability to translate complex data into trusted, actionable insights. This role works closely with engineering, product, and business stakeholders to support analytics, reporting, and data-driven decision-making at scale.

Key Responsibilities
• Write complex, high-performance SQL queries to analyze large structured and semi-structured datasets
• 8-10 years of software development and deployment experience, with at least 5 years of hands-on experience with SQL, Databricks, ADF, DataStage (or another ETL tool), SSAS cubes, Cognos, Tableau, ThoughtSpot, and other BI tools
• Write SQL for processing raw data, Kafka ingestions, ADF pipelines, data validation, and QA
• Knowledge of working with APIs to collect or ingest data
• Use Python for data analysis, automation, validation, and lightweight data engineering tasks
• Build, enhance, and maintain dashboards and reports using Tableau and ThoughtSpot
• Partner with data engineers to design, validate, and optimize ETL/ELT pipelines
• Work extensively with Databricks (Spark, notebooks, Delta tables) for data exploration and analytics
• Perform data quality checks, reconciliations, and root-cause analysis to ensure data accuracy and consistency
• Translate business requirements into technical data solutions and semantic layers
• Support self-service analytics by documenting datasets, metrics, and business definitions
• Collaborate across teams to troubleshoot data issues and improve reporting performance

Required Qualifications
• Strong proficiency in SQL, including complex joins, window functions, CTEs, and performance optimization
• Strong Python skills for data analysis and scripting (e.g., pandas, NumPy)
• Hands-on experience with Databricks and distributed data processing concepts
• Hands-on experience with ETL tools and data pipelines (batch and/or streaming)
• Proficiency in reporting and visualization tools such as Tableau, ThoughtSpot, Cognos, and SSAS cubes
• Solid understanding of data warehousing concepts, data modeling, and analytics best practices
• Ability to analyze large datasets and communicate insights clearly to both technical and non-technical audiences

Preferred Qualifications
• Experience with cloud data platforms (AWS, Azure, or GCP)
• Familiarity with version control tools (Git) and CI/CD concepts for analytics workflows
• Exposure to data governance, metric standardization, and semantic layers
• Prior experience in enterprise-scale data platforms or COE environments

What Success Looks Like
• Trusted, accurate dashboards and datasets used across multiple business teams
• Efficient, well-documented SQL and Python code that scales with data growth
• Strong partnership with engineering and business stakeholders to deliver timely insights
• Proactive identification and resolution of data quality and performance issues
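To give candidates a concrete sense of the "data quality checks and reconciliations" responsibility above, here is a minimal pandas sketch of a source-to-target reconciliation of the kind common in migration projects. The table and column names (`order_id`, `amount`) are purely illustrative assumptions, not from this posting.

```python
import pandas as pd

# Hypothetical extracts from a migration: a source system and its target.
# Column names here are illustrative assumptions, not from the job posting.
source = pd.DataFrame({"order_id": [1, 2, 3], "amount": [10.0, 20.0, 30.0]})
target = pd.DataFrame({"order_id": [1, 2, 3], "amount": [10.0, 20.0, 25.0]})

# Row-count reconciliation: both sides should carry the same number of records.
assert len(source) == len(target), "row counts differ between source and target"

# Value-level reconciliation: outer-join on the key and flag any row that is
# missing on one side or whose amounts disagree.
merged = source.merge(
    target, on="order_id", how="outer", suffixes=("_src", "_tgt"), indicator=True
)
mismatches = merged[
    (merged["_merge"] != "both") | (merged["amount_src"] != merged["amount_tgt"])
]
print(mismatches[["order_id", "amount_src", "amount_tgt"]])
```

In practice the same pattern scales to Databricks by swapping pandas for Spark DataFrames, with the mismatch report feeding the root-cause analysis the role describes.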