Vedan Technologies

Senior/Lead Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior/Lead Data Engineer on a contract basis lasting more than 6 months. The pay rate is unspecified, and the position is 100% remote (PST hours) with occasional travel to Fremont, CA. Key skills include SQL, PySpark, Azure, Microsoft Fabric, ADF, and Databricks, with a minimum of 12 years of IT experience and 8 years in data engineering.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 10, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Fremont, CA
-
🧠 - Skills detailed
#Spark (Apache Spark) #Databricks #PySpark #ADF (Azure Data Factory) #Data Pipeline #ETL (Extract, Transform, Load) #Spark SQL #Data Engineering #Scala #Azure #SQL (Structured Query Language) #SAP
Role description
Title: Senior/Lead Data Engineer
Location: 100% Remote – PST Hours (8 AM – 5 PM PST); travel to Fremont, CA once every 2 months for 3 days (expenses paid)
Job Type: Contract
Key Skills: SQL, PySpark, Azure, Microsoft Fabric, ADF, and Databricks

About the Role:
We are seeking a highly skilled and experienced Senior/Lead Data Engineer to join our dynamic team at KPI, working on challenging, multi-year data transformation projects. This is an excellent opportunity for a talented data engineer to play a key role in building innovative data solutions using Azure native services and related technologies. If you are passionate about working with large-scale data systems and enjoy solving complex engineering problems, this role is for you.

Key Responsibilities:
• Data Engineering: Design, develop, and implement data pipelines and solutions using PySpark, SQL, and related technologies.
• Collaboration: Work closely with cross-functional teams to understand business requirements and translate them into robust data solutions.
• Data Warehousing: Design and implement data warehousing solutions, ensuring scalability, performance, and reliability.
• Continuous Learning: Stay up to date with modern technologies and trends in data engineering and apply them to improve our data platform.
• Mentorship: Provide guidance and mentorship to junior data engineers, ensuring best practices in coding, design, and development.

Must-Have Skills & Qualifications:
• 12+ years of overall experience in the IT industry.
• 8+ years of experience in data engineering, with a strong background in building large-scale data solutions.
• 4+ years of hands-on experience developing and implementing data pipelines on the Azure stack (ADF, Databricks, Functions).
• 1–2 years of experience with Microsoft Fabric.
• Some exposure to or experience with SAP constructs.