Big Data Lead Developer (Scala)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Big Data Lead Developer (Scala) in Mountain View, CA; the contract length and pay rate are not specified. Key skills include AWS, Databricks, SQL, and data modeling, with 5+ years of data engineering experience required. Fintech experience is a bonus.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
-
πŸ—“οΈ - Date discovered
August 27, 2025
πŸ•’ - Project duration
Unknown
-
🏝️ - Location type
On-site
-
πŸ“„ - Contract type
Unknown
-
πŸ”’ - Security clearance
Unknown
-
πŸ“ - Location detailed
Mountain View, CA
-
🧠 - Skills detailed
#Big Data #Data Mart #Cloud #Spark (Apache Spark) #Databricks #Agile #AWS (Amazon Web Services) #S3 (Amazon Simple Storage Service) #SQL (Structured Query Language) #Data Quality #Data Engineering #Data Modeling #PySpark #Data Integration #Scala #Delta Lake #ETL (Extract, Transform, Load) #Data Lake #Data Pipeline #Lambda (AWS Lambda) #Data Governance #Redshift
Role description
Role: Big Data Lead Developer (Scala)
Location: Mountain View, CA (100% onsite)

Must have: Strong Cloud & Big Data Expertise:
o Proficiency with the AWS data ecosystem (S3, Glue, Lambda, Redshift, etc.).
o Hands-on experience with Databricks, including Delta Lake and Spark (PySpark or Scala).

What You'll Do
• Design & Build Data Pipelines: Architect, build, and maintain efficient and reliable data pipelines to ingest, process, and transform data within our AWS-based data lake and Databricks platform (a Scala sketch of this kind of pipeline follows the description).
• Develop Data Marts & a Metrics Store: Create and own curated, analysis-ready data marts that support critical product initiatives, including multi-product customer onboarding funnels and a centralized metrics store.
• Drive Data Integration: Lead complex data integration projects, unifying data from across the Intuit ecosystem, including sources like TurboTax, Credit Karma, and QuickBooks, to create a holistic view of our customers.
• Empower Data-Driven Decisions: Collaborate closely with the data analytics team to understand their needs, providing them with high-quality, trusted data sets that enable them to deliver key insights and reporting.
• Champion Data Quality & Governance: Implement best practices for data modeling, data quality, and data governance to ensure the consistency and reliability of our data assets.
• Thrive in Ambiguity: Work in a fast-paced, agile environment, demonstrating a strong ability to navigate unclear requirements and proactively drive projects to completion.

What We're Looking For
• Proven Data Engineering Experience: 5+ years of hands-on experience in a data engineering or analytics engineering role, with a track record of building and managing complex data pipelines.
• Strong Cloud & Big Data Expertise:
o Proficiency with the AWS data ecosystem (S3, Glue, Lambda, Redshift, etc.).
o Hands-on experience with Databricks, including Delta Lake and Spark (PySpark or Scala).
• Expert-Level SQL & Data Modeling: Exceptional SQL skills and a deep understanding of data modeling concepts (e.g., dimensional modeling, star schemas) and data warehousing principles (see the star-schema sketch below).
• Exceptional Communicator: The ability to clearly and effectively communicate technical concepts to both technical and non-technical audiences. You are comfortable leading discussions and building consensus.
• Collaborative & Product-Focused: A team player who is passionate about understanding the "why" behind the data and is dedicated to building solutions that drive business and product success.
• Problem-Solving Mindset: You are intellectually curious, enjoy tackling complex challenges, and are comfortable with ambiguity.
• Fintech Experience (Bonus): Previous experience working in the fintech or financial services industry is a strong plus.
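For candidates gauging fit, here is a minimal Scala sketch of the kind of pipeline the role describes: ingest raw events from an S3-based data lake, apply light transformations, and publish a curated Delta table on Databricks. All bucket paths, column names, and schemas are hypothetical, not taken from the posting.

```scala
import org.apache.spark.sql.{SparkSession, functions => F}

object OnboardingEventsPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("onboarding-events-pipeline")
      .getOrCreate()

    // Ingest raw JSON events from the data lake (bucket and path are hypothetical).
    val rawEvents = spark.read
      .format("json")
      .load("s3://example-data-lake/raw/onboarding_events/")

    // Light cleanup: parse timestamps, derive a partition column, drop malformed rows.
    val cleaned = rawEvents
      .withColumn("event_ts", F.to_timestamp(F.col("event_time")))
      .withColumn("event_date", F.to_date(F.col("event_ts")))
      .filter(F.col("user_id").isNotNull)

    // Publish a curated Delta table for downstream data marts and analysts.
    cleaned.write
      .format("delta")
      .mode("overwrite")
      .partitionBy("event_date")
      .save("s3://example-data-lake/curated/onboarding_events/")

    spark.stop()
  }
}
```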
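And a sketch of the dimensional modeling the posting calls for: a small star schema (one fact table, one dimension) defined as Delta tables via Spark SQL, plus the kind of funnel metric a centralized metrics store would serve. Table and column names are illustrative assumptions, not from the posting.

```scala
import org.apache.spark.sql.SparkSession

object MetricsStoreExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("metrics-store-example")
      .getOrCreate()

    // Dimension table: one row per customer attribute set (names are illustrative).
    spark.sql("""
      CREATE TABLE IF NOT EXISTS dim_customer (
        customer_key BIGINT,
        customer_id  STRING,
        segment      STRING
      ) USING DELTA
    """)

    // Fact table: one row per completed onboarding step, keyed to the dimension.
    spark.sql("""
      CREATE TABLE IF NOT EXISTS fact_onboarding (
        customer_key BIGINT,
        product_key  BIGINT,
        step         STRING,
        completed_at TIMESTAMP
      ) USING DELTA
    """)

    // A funnel metric computed from the star schema: step completions per
    // customer segment, the sort of query a centralized metrics store serves.
    val funnel = spark.sql("""
      SELECT d.segment, f.step, COUNT(*) AS completions
      FROM fact_onboarding f
      JOIN dim_customer d ON f.customer_key = d.customer_key
      GROUP BY d.segment, f.step
    """)
    funnel.show()
  }
}
```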