

Senior Data Engineer with Strong Python – W2 Contract Role
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a Senior Data Engineer contract position in San Jose, California, requiring 8–11 years of experience, strong Python skills, and proficiency in Spark, SQL, Databricks, and Azure. Pay is $65–$70 per hour for 40 hours weekly.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
560
-
🗓️ - Date discovered
August 27, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
On-site
-
📄 - Contract type
W2 Contractor
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
San Jose, CA 95112
-
🧠 - Skills detailed
#Data Manipulation #Cloud #Spark (Apache Spark) #Databricks #Azure cloud #Code Reviews #Data Orchestration #Automation #Airflow #AI (Artificial Intelligence) #SQL (Structured Query Language) #Data Quality #Data Engineering #Apache Airflow #PySpark #Azure #Python #Scala #Data Processing #Data Pipeline #Programming
Role description
Job Title: Senior Data Engineer (Python Focus)
Location: San Jose, California (Onsite)
Job Type: Contract (W2)
Interview Mode: Phone / Skype
Position Overview
We are urgently seeking a highly skilled and experienced Data Engineer with 8–11 years of hands-on expertise in designing, building, and optimizing data pipelines and architectures. This role is critical to a high-impact project in collaboration with one of our trusted vendor partners. The ideal candidate will bring deep technical proficiency, a collaborative mindset, and a passion for working with modern data technologies.
Key Responsibilities
• Design, develop, and maintain scalable data pipelines using Python, Spark/PySpark, and SQL
• Implement and optimize workflows in Apache Airflow for efficient data orchestration
• Work within Databricks and Azure environments to manage large-scale data processing and analytics
• Collaborate with cross-functional teams to integrate LLM (Large Language Model) capabilities into data workflows
• Ensure data quality, reliability, and performance across all stages of the pipeline
• Participate in code reviews, architecture discussions, and performance tuning
Required Skills & Experience
• 11–13 years of professional experience in data engineering or related roles
• Strong programming skills in Python, with experience building production-grade data solutions
• Proficiency in Spark or PySpark for distributed data processing
• Advanced SQL skills for data manipulation and querying
• Hands-on experience with Databricks and Azure cloud services
• Familiarity with Apache Airflow for workflow automation
• Exposure to LLM implementation or generative AI integration is a strong plus
• Excellent communication and problem-solving abilities
Additional Information
• Candidate will work closely with a vendor partner, so strong collaboration and adaptability are essential
• Immediate availability is preferred due to project urgency
Job Type: Contract
Pay: $65.00 - $70.00 per hour
Expected hours: 40 per week
Experience:
Data Engineer: 10 years (Preferred)
LLM: 5 years (Preferred)
Python: 8 years (Preferred)
Spark: 7 years (Preferred)
PySpark: 5 years (Preferred)
Databricks: 5 years (Preferred)
Azure: 5 years (Preferred)
Work Location: In person