

Technical Data Analyst
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Technical Data Analyst in San Diego, CA (Hybrid – 3 days onsite) on a long-term contract, offered on W2 or 1099 tax terms. It requires 8+ years of ETL/data pipeline development experience and proficiency in Python, SQL, AWS, and big data platforms.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
August 21, 2025
🕒 - Project duration
Long-term (no fixed end date stated)
🏝️ - Location type
Hybrid
📄 - Contract type
W2 / 1099 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
San Diego, CA
🧠 - Skills detailed
#AWS (Amazon Web Services) #Datasets #Python #Big Data #PySpark #ETL (Extract, Transform, Load) #Scala #Data Lake #Data Analysis #Redshift #Athena #Data Pipeline #Databricks #Spark (Apache Spark) #Batch #SQL (Structured Query Language)
Role description
Role/Title: Staff Technical Data Analyst
Location: San Diego, CA (Hybrid – 3 days onsite)
Duration: Long-term Contract
Tax Terms: W2/1099
About Us:
ConglomerateIT is a certified pioneer in providing premium, end-to-end Global Workforce Solutions and IT Services to diverse clients across various domains. Visit us at http://www.conglomerateit.com
Our mission is to establish global, cross-cultural human connections that further the careers of our employees and strengthen the businesses of our clients. We are driven to use the power of our global network to connect businesses with the right people, without bias. We deliver Global Workforce Solutions with affability.
Job Summary:
We are looking for an experienced and hands-on Staff Technical Data Analyst who can design, build, and support scalable data solutions. The role requires strong expertise in ETL pipelines, data lakes, and analytics platforms, along with proficiency in Python and SQL. You will work closely with cross-functional teams to deliver reliable data workflows from large-scale transactional and clickstream sources, enabling data-driven insights and decision-making.
Key Responsibilities:
• Develop and maintain ETL pipelines using PySpark and SQL across diverse datasets (a minimal batch sketch follows this list).
• Work with batch and streaming data to deliver scalable and high-performing solutions.
• Collaborate with engineers, analysts, and product teams to resolve data challenges.
• Translate business needs into technical specifications for new pipelines and workflows.
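For a concrete flavor of the first responsibility, here is a minimal PySpark batch ETL sketch. The bucket paths, field names (event_id, user_id, event_ts), and filtering rules are illustrative assumptions, not details of the actual engagement.

```python
# Minimal PySpark batch ETL sketch -- paths and column names are
# hypothetical, and the Spark session is assumed to have S3 access
# (e.g. running on EMR or Databricks).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("clickstream-batch-etl").getOrCreate()

# Extract: raw clickstream events landed as JSON in a data lake bucket.
raw = spark.read.json("s3://example-raw-bucket/clickstream/2025/08/")

# Transform: keep well-formed events, dedupe, and derive a partition date.
events = (
    raw.filter(F.col("user_id").isNotNull())
       .dropDuplicates(["event_id"])
       .withColumn("event_date", F.to_date("event_ts"))
)

# Load: write partitioned Parquet that downstream SQL engines
# (e.g. Athena or Redshift Spectrum) can query directly.
(events.write
       .mode("overwrite")
       .partitionBy("event_date")
       .parquet("s3://example-curated-bucket/clickstream/"))
```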
Required Skills:
• 8+ years of experience with ETL/data pipeline development for analytics.
• Advanced proficiency in Python and SQL.
• Strong background with big data platforms and data lakes.
• Hands-on experience with AWS (Redshift, Athena, core services) and Databricks.
• Familiarity with clickstream data and real-time analytics use cases (illustrated in the streaming sketch below).
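Since the role calls out streaming alongside batch, here is a hedged Spark Structured Streaming sketch of a real-time clickstream aggregation. The Kafka broker address, topic name, event schema, and output paths are all assumptions for illustration, and running it requires the spark-sql-kafka connector on the classpath.

```python
# Hedged Structured Streaming sketch for real-time clickstream counts.
# Broker, topic, schema, and paths are assumptions, not client details.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("clickstream-streaming").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("page", StringType()),
    StructField("event_ts", TimestampType()),
])

# Read raw events from a hypothetical Kafka topic and parse the JSON payload.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "clickstream-events")
         .load()
         .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
         .select("e.*")
)

# One-minute tumbling page-view counts; the watermark bounds late data
# so finalized windows can be emitted in append mode.
counts = (
    events.withWatermark("event_ts", "5 minutes")
          .groupBy(F.window("event_ts", "1 minute"), F.col("page"))
          .count()
)

# Sink to the data lake with a checkpoint so the job can recover on restart.
query = (
    counts.writeStream
          .outputMode("append")
          .format("parquet")
          .option("path", "s3://example-curated-bucket/pageview-counts/")
          .option("checkpointLocation", "s3://example-checkpoints/pageview-counts/")
          .start()
)
query.awaitTermination()
```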
Additional Skills:
• Strong problem-solving and systems-thinking mindset.
• Excellent communication skills with the ability to collaborate effectively.
• Proactive learner with the ability to drive improvements and efficiency.
• Strong organizational skills and ability to manage multiple projects simultaneously.