

Vedic Staffing Inc.
Databrick Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Databricks Engineer (Python/PySpark) with 10+ years of experience, offering a contract at $63/hr in New York, NY. Key skills include Databricks, Delta Lake, and cloud platforms, with strong Python and SQL proficiency required.
Country
United States
Currency
$ USD
Day rate
$504
Date
October 28, 2025
Duration
Unknown
Location
Hybrid
Contract
Unknown
Security
Unknown
Location detailed
Albany, New York Metropolitan Area
Skills detailed
#ADF (Azure Data Factory) #BI (Business Intelligence) #Data Manipulation #ETL (Extract, Transform, Load) #Cloud #Data Transformations #Data Lakehouse #Jenkins #Data Lake #SQL (Structured Query Language) #DevOps #Data Orchestration #Data Security #Data Governance #PySpark #Compliance #Azure DevOps #Security #Scala #GCP (Google Cloud Platform) #Spark (Apache Spark) #Data Architecture #Data Pipeline #Azure #GitHub #Azure Databricks #Tableau #GIT #Computer Science #Data Science #Microsoft Power BI #Data Engineering #Data Quality #Python #ML (Machine Learning) #Airflow #AWS (Amazon Web Services) #Kafka (Apache Kafka) #Azure Data Factory #Delta Lake #Databricks
Role description
Job Title: Senior Databricks Engineer (Python/PySpark)
Location: New York, NY (Hybrid: 3 days onsite, 2 days remote)
Experience Level: 10+ years
Employment Type: Contract
Rate: $63/hr
Department: Data Engineering / Analytics
About the Role
We are seeking a highly skilled Senior Databricks Engineer with deep expertise in Python, PySpark, and cloud-based data engineering. The ideal candidate will design, build, and optimize large-scale data pipelines and analytics solutions using Databricks on a modern data lakehouse architecture. This role requires a hands-on technical leader who can work cross-functionally with data architects, analysts, and business stakeholders to deliver high-quality data solutions in a hybrid working environment.
Key Responsibilities
• Design, build, and maintain ETL/ELT data pipelines using Databricks, Python, and PySpark.
• Develop and manage data lakehouse architecture using Delta Lake.
• Integrate structured and unstructured data from multiple on-prem and cloud sources.
• Optimize Spark jobs for performance, scalability, and cost efficiency.
• Collaborate with data scientists and analysts to support advanced analytics and machine learning workloads.
• Implement data quality, lineage, and governance frameworks within Databricks.
• Automate workflows and orchestration using Databricks Workflows, Airflow, or Azure Data Factory.
• Manage CI/CD pipelines for data projects using Git-based workflows.
• Ensure data security, access controls, and compliance with organizational standards.
• Mentor junior engineers and contribute to best practices in data engineering.
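As a sketch of the pipeline and data-quality work described above — the paths, table name, and quality rules here are illustrative assumptions, not details from this posting:

```python
# Illustrative daily batch: raw JSON events -> cleaned Delta table.
# Source path, target table, and quality rules are assumptions for the sketch.

QUALITY_RULES = {
    "event_id_not_null": "event_id IS NOT NULL",
    "amount_non_negative": "amount >= 0",
}

def failed_rule_filter(rules):
    """Combine rule expressions into one filter matching rows that fail any rule."""
    return " OR ".join(f"NOT ({expr})" for expr in rules.values())

def run_pipeline(spark, source_path, target_table):
    # Imported here so the module also loads without a Spark runtime present.
    from pyspark.sql import functions as F

    raw = spark.read.json(source_path)                       # ingest raw events
    cleaned = (
        raw.dropDuplicates(["event_id"])                     # de-dupe on business key
           .withColumn("ingest_date", F.current_date())      # partition column
    )
    bad = cleaned.filter(failed_rule_filter(QUALITY_RULES))  # rows failing any rule
    good = cleaned.subtract(bad)                             # rows passing all rules
    (good.write.format("delta")
         .mode("append")
         .partitionBy("ingest_date")
         .saveAsTable(target_table))
```

On Databricks this would run inside a job cluster; the quarantined `bad` frame would typically be written to an errors table to feed the quality and lineage reporting mentioned above.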
Required Skills & Qualifications
• Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
• 10+ years of overall experience in data engineering with 3+ years hands-on in Databricks.
• Strong proficiency in Python and PySpark for large-scale data transformations.
• In-depth understanding of Spark architecture, optimization, and cluster management.
• Experience building and managing Delta Lake solutions and data lakehouse architectures.
• Expertise with SQL for data manipulation and performance tuning.
• Strong knowledge of cloud platforms, preferably Azure Databricks (AWS/GCP also acceptable).
• Familiarity with data orchestration tools such as Airflow, ADF, or Databricks Jobs.
• Experience with CI/CD pipelines and tools (Azure DevOps, GitHub Actions, Jenkins, etc.).
• Hands-on experience implementing data governance, cataloging, and security (e.g., Unity Catalog).
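For the Delta Lake experience listed above, an idempotent upsert via `MERGE INTO` is the canonical pattern; the table and key names below are hypothetical:

```python
def merge_sql(target, source_view, key):
    """Build a Delta MERGE statement for an idempotent upsert keyed on `key`."""
    return (
        f"MERGE INTO {target} AS t "
        f"USING {source_view} AS s "
        f"ON t.{key} = s.{key} "
        "WHEN MATCHED THEN UPDATE SET * "
        "WHEN NOT MATCHED THEN INSERT *"
    )

def upsert(spark, updates_df, target, key):
    # Expose the batch to SQL, then let Delta resolve insert-vs-update per row.
    updates_df.createOrReplaceTempView("updates")
    spark.sql(merge_sql(target, "updates", key))
```

Because `MERGE` matches on the key, re-running the same batch does not create duplicates, which matters for retry-safe orchestration.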
Preferred Skills
• Exposure to streaming data (Kafka, Event Hubs, Kinesis).
• Experience with MLOps and integrating machine learning workflows in Databricks.
• Knowledge of data warehousing concepts and BI tools (Power BI, Tableau).
• Certifications such as Databricks Certified Data Engineer Professional or Azure Data Engineer Associate.
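A minimal sketch of the streaming exposure mentioned above: a Kafka-to-Delta structured streaming job. The broker address, topic, and table name are purely illustrative assumptions:

```python
# Sketch of a Kafka -> Delta structured streaming job (Spark 3.1+ for toTable).
KAFKA_OPTIONS = {
    "kafka.bootstrap.servers": "broker:9092",  # assumed broker address
    "subscribe": "events",                     # assumed topic name
    "startingOffsets": "latest",
}

def start_stream(spark, checkpoint, target_table):
    # Imported here so the module also loads without a Spark runtime present.
    from pyspark.sql import functions as F

    raw = spark.readStream.format("kafka").options(**KAFKA_OPTIONS).load()
    parsed = raw.select(F.col("value").cast("string").alias("payload"))  # bytes -> string
    return (parsed.writeStream.format("delta")
            .option("checkpointLocation", checkpoint)  # enables exactly-once recovery
            .toTable(target_table))
```

The checkpoint location is what makes the stream restartable; losing it forces a reprocess from `startingOffsets`.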
Soft Skills
• Strong analytical and problem-solving mindset.
• Excellent communication and stakeholder management skills.
• Ability to lead data engineering initiatives end-to-end.
• Team player with mentorship and knowledge-sharing capabilities.





