

Senior Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer in San Jose, CA, offering a long-term contract at an unspecified pay rate. It requires 12-15 years of experience and expertise in Azure-based ETL solutions, Databricks, and building ETL pipelines with Azure Data Factory.
Country: United States
Currency: $ USD
Day rate: Unknown
Date discovered: September 24, 2025
Project duration: Unknown
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: San Jose, CA
Skills detailed:
#Code Reviews #Fivetran #ETL (Extract, Transform, Load) #Scrum #Data Ingestion #DevOps #Jira #SAP Hana #ADF (Azure Data Factory) #Data Pipeline #Python #Azure DevOps #Compliance #ML (Machine Learning) #VPN (Virtual Private Network) #Computer Science #Azure Data Factory #SQL (Structured Query Language) #Data Engineering #Azure #Agile #Data Governance #Security #SAP #Databricks #Scala #Migration #Data Integrity
Role description
Hello,
Greetings of the day!
We have a job opportunity for the position of Lead Data Engineer with one of our clients.
If you are interested, please send your latest resume to Ramesh.s@saxonglobal.com.
Referrals of anyone currently available in the job market are highly appreciated.
Job Title: Lead Data Engineer (12-15 years of experience)
Location: San Jose, CA (onsite; local candidates only)
Duration: Long-term contract
Job Summary:
Educational Qualification
• Bachelor's degree in Computer Science, Information Systems, or a related field.
Experience Range: 12-15 years
Primary (Must-Have) Skills
• To be screened by the TA team.
• 5+ years of experience in data engineering and data platform development.
• 5+ years of experience contributing to enterprise data projects involving Azure-based ETL solutions.
• 5+ years of experience with Databricks and SQL.
• 5+ years of experience building ETL pipelines using Azure Data Factory (see the sketch after this list).
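Since the role centers on Azure Data Factory pipelines, here is a minimal sketch of triggering and polling an ADF pipeline run from Python with the azure-mgmt-datafactory SDK. The subscription, resource group, factory, and pipeline names are hypothetical placeholders, not details from this posting.

```python
# Minimal sketch: trigger an ADF pipeline run and poll it to completion.
# All resource names below are hypothetical placeholders.
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "<subscription-id>"      # placeholder
RESOURCE_GROUP = "rg-data-platform"        # hypothetical
FACTORY_NAME = "adf-enterprise-etl"        # hypothetical
PIPELINE_NAME = "pl_copy_sap_to_bronze"    # hypothetical

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Kick off a run, optionally passing pipeline parameters.
run = client.pipelines.create_run(
    RESOURCE_GROUP, FACTORY_NAME, PIPELINE_NAME,
    parameters={"load_date": "2025-09-24"},
)

# Poll until the run reaches a terminal state (Succeeded, Failed, Cancelled).
while True:
    status = client.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id)
    if status.status not in ("Queued", "InProgress"):
        break
    time.sleep(30)

print(f"Pipeline {PIPELINE_NAME} finished with status: {status.status}")
```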
Job Description of Role
• Roles and responsibilities (RNR) to be evaluated by the technical panel (to be defined for more clarity).
Key Responsibilities:
• Lead the design and implementation of ETL solutions using SAP Data Services and Azure Data Factory.
• Leverage Fivetran for automated data ingestion from SAP S/4HANA source systems into the Bronze layer.
• Analyze and migrate stored procedures from SAP HANA (SQL / PL/SQL) to Databricks-based logic (see the sketch after this list).
• Guide and mentor team members on data engineering best practices.
• Develop and maintain complex ETL pipelines using Python.
• Identify and resolve performance bottlenecks and network-related issues.
• Ensure adherence to data governance and compliance standards across all data flows.
• Participate in performance tuning, issue resolution, and data validation tasks.
• Document data flows, pipeline logic, and lineage as part of project delivery.
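To illustrate the stored-procedure migration work described above, here is a minimal sketch of how a HANA-style aggregate-and-upsert procedure might be re-expressed as Databricks/PySpark code over medallion-layer Delta tables. All table and column names are hypothetical; the actual procedures and schemas would come from the client's SAP HANA environment.

```python
# Minimal sketch: re-express stored-procedure logic (aggregate + upsert)
# as PySpark plus a Delta MERGE. Table and column names are hypothetical.
from pyspark.sql import functions as F

# `spark` is the SparkSession the Databricks runtime provides.
orders = spark.table("bronze.sap_orders")  # hypothetical Fivetran-landed table

# Equivalent of the procedure's SELECT ... GROUP BY body.
daily_totals = (
    orders
    .filter(F.col("order_status") == "COMPLETED")
    .groupBy("customer_id", F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("net_amount").alias("daily_total"))
)

# Equivalent of the procedure's UPSERT: a Delta MERGE into the Silver layer.
daily_totals.createOrReplaceTempView("daily_totals_stg")
spark.sql("""
    MERGE INTO silver.customer_daily_totals AS t
    USING daily_totals_stg AS s
    ON  t.customer_id = s.customer_id
    AND t.order_date  = s.order_date
    WHEN MATCHED THEN UPDATE SET t.daily_total = s.daily_total
    WHEN NOT MATCHED THEN INSERT *
""")
```

In practice, replacing row-by-row procedural loops with set-based DataFrame logic and Delta MERGE is usually the main rewrite effort in this kind of migration.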
Experience Required:
• Minimum of 8 years of experience in data engineering and architecture roles.
• Must have led 3 or more end-to-end enterprise data projects using Databricks and Azure technologies.
Soft Skills / Other Skills - To be evaluated by the hiring manager (evaluation method to be defined)
Communication Skills:
• Communicate effectively with internal and customer stakeholders.
• Communication channels: verbal, email, and instant messages.
Interpersonal Skills:
• Strong interpersonal skills to build and maintain productive relationships with team members.
• Provide constructive feedback during code reviews and be open to receiving feedback on your own code.
Problem-Solving and Analytical Thinking:
• Capability to troubleshoot and resolve issues efficiently.
• Analytical mindset.
Task/Work Updates:
• Prior experience working on Agile/Scrum projects, with exposure to tools like Jira and Azure DevOps.
• Provides regular updates; proactive and diligent in carrying out responsibilities.
Expected Outcome
• Successfully lead the design, development, and execution of scalable data engineering solutions and migration strategies. Ensure seamless data movement from legacy systems to modern platforms with minimal downtime while preserving data integrity. Deliver optimized data pipelines, enforce data governance standards, and enable analytics readiness. Drive technical excellence, mentor engineering teams, and collaborate with stakeholders to align data solutions with business goals.
Secondary Skills to Be Planned Post-Hiring (Training Plan)
• Experience using Fivetran to automate data pipeline builds (see the sketch after this list).
• Understanding of Databricks ML and analytics tools.
• Experience resolving networking/VPN issues related to data flow.
• Familiarity with data governance, security, and compliance frameworks.
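As a pointer for the Fivetran training item above, here is a minimal sketch of triggering a connector sync programmatically, assuming Fivetran's REST API v1 sync endpoint with HTTP basic auth over an API key/secret pair. The connector ID and credentials are hypothetical placeholders; consult Fivetran's API documentation for the exact contract.

```python
# Minimal sketch: trigger a Fivetran connector sync via the v1 REST API.
# Connector ID and credentials below are hypothetical placeholders.
import requests

API_KEY = "<fivetran-api-key>"        # placeholder
API_SECRET = "<fivetran-api-secret>"  # placeholder
CONNECTOR_ID = "sap_s4_orders"        # hypothetical connector ID

resp = requests.post(
    f"https://api.fivetran.com/v1/connectors/{CONNECTOR_ID}/sync",
    auth=(API_KEY, API_SECRET),  # Fivetran uses basic auth with key/secret
    json={"force": False},       # don't interrupt a sync already in progress
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```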