HYR Global Source Inc

Lead Data Engineer (Databricks) - Remote (Need Local to PST/MST)

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Lead Data Engineer (Databricks) with a contract length of over 6 months, offering a pay rate of "TBD". It is remote but requires candidates to be local to PST/MST. Key skills include Databricks, SQL, Azure Data Factory, and Python.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
February 20, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Remote
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Data Transformations #Spark (Apache Spark) #Azure #ADF (Azure Data Factory) #Data Quality #Data Analysis #Data Architecture #ETL (Extract, Transform, Load) #Security #Data Processing #Azure Data Factory #Scala #Data Lake #Migration #SQL (Structured Query Language) #Leadership #Python #Databricks #Data Science #Data Engineering #PySpark #Data Modeling #Cloud #DataStage
Role description
Job Title: Lead Data Engineer (Need locals to PST/MST)
Location: San Francisco, CA (Remote)
Job Type: Full-time / W2

Job Overview
We are seeking an experienced Lead Data Engineer to drive large-scale data platform modernization initiatives, including Databricks migration and enterprise data architecture design. This role requires deep hands-on expertise along with strong leadership capabilities to guide engineering teams and collaborate with cross-functional stakeholders. The ideal candidate will have extensive experience designing scalable data solutions, leading cloud data transformations, and implementing modern data engineering best practices.

Key Responsibilities
• Lead end-to-end Databricks migration initiatives from legacy platforms (e.g., DataStage) to modern cloud-based architectures
• Design and implement scalable data architecture frameworks in Azure environments
• Build and optimize ETL/ELT pipelines using Azure Data Factory (ADF), Databricks, PySpark, and SQL
• Architect robust data lake and lakehouse solutions
• Develop and maintain high-performance data workflows using Python and PySpark
• Collaborate with Data Analysts, Data Scientists, and business stakeholders to define data requirements
• Ensure data quality, governance, security, and performance optimization
• Mentor and lead junior and mid-level data engineers
• Participate in architectural discussions and provide technical leadership

Required Skills & Experience
• 10+ years of experience in Data Engineering
• Strong hands-on experience with:
• Databricks (implementation and migration projects)
• SQL (advanced query optimization and performance tuning)
• Azure Data Factory (ADF)
• DataStage
• Python and PySpark
• Experience designing enterprise-level data architecture solutions
• Strong knowledge of cloud data platforms (preferably Azure)
• Experience with data modeling and large-scale data processing

Follow us on LinkedIn for more job notifications: https://www.linkedin.com/company/hyr-global-source-inc