

Azure Databricks Data Engineer (SAP HANA)
Featured Role | Apply direct with Data Freelance Hub
This role is for an Azure Databricks Data Engineer (SAP HANA) on a contract basis, requiring expertise in Databricks, Azure Data Factory, Python/PySpark, and SAP HANA. Contract length and pay rate are unspecified. Strong performance optimization skills are essential.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: June 28, 2025
Project duration: Unknown
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: Santa Clara County, CA
Skills detailed: #Data Migration #ADF (Azure Data Factory) #Data Pipeline #Azure Databricks #Azure #ETL (Extract, Transform, Load) #Datasets #SAP #Delta Lake #Data Processing #SAP HANA #Python #Monitoring #Network Security #IAM (Identity and Access Management) #SQL (Structured Query Language) #VPN (Virtual Private Network) #Data Engineering #Security #Spark SQL #Data Profiling #Databricks #Storage #Scala #Data Modeling #Data Quality #Data Lineage #Data Extraction #Spark (Apache Spark) #SAP BODS (BusinessObjects Data Services) #Azure Cloud #Data Integration #Azure Data Factory #PySpark #Cloud #Migration
Role description
We are seeking a skilled and detail-oriented Azure Databricks Data Engineer with hands-on experience in building and optimizing large-scale data pipelines. The ideal candidate will be proficient in Databricks, Azure Data Factory, Python/PySpark, and SAP HANA integrations, with a strong focus on performance optimization and scalable architecture.
Mandatory Skills & Responsibilities
🔹 Databricks & Delta Lake
• Develop scalable data pipelines using Databricks with PySpark or Spark SQL (see the sketch below)
• Implement Delta Lake efficiently for large-scale data processing
• Manage data lineage, versioning, and schema evolution within Databricks
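Illustrative only (not part of the posting's requirements): a minimal PySpark sketch of the kind of Delta Lake pipeline this block describes, with schema evolution enabled on write. The storage path, table, and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_delta_pipeline").getOrCreate()

# Hypothetical landing zone; substitute the real ADLS/Blob location.
raw = (spark.read.format("json")
       .load("abfss://landing@examplestorage.dfs.core.windows.net/orders/"))

curated = (raw
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("ingest_date", F.current_date())
           .dropDuplicates(["order_id"]))

# Append to a Delta table with schema evolution so new upstream columns
# do not break the job; Delta's transaction log keeps the version history.
(curated.write.format("delta")
 .mode("append")
 .option("mergeSchema", "true")
 .partitionBy("ingest_date")
 .saveAsTable("curated.orders"))
```

Versioning and lineage questions then reduce to the Delta log, e.g. `DESCRIBE HISTORY curated.orders` for an audit of writes and schema changes.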
🔹 Azure Data Factory (ADF)
• Design and manage ETL pipelines using ADF
• Orchestrate data flows between Databricks, Blob Storage, and SAP sources
• Implement monitoring, error handling, and pipeline performance tuning (see the sketch below)
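ADF pipelines themselves are typically authored in ADF Studio or ARM/JSON; the sketch below only shows one way to trigger and monitor a run programmatically with the `azure-mgmt-datafactory` SDK, relevant to the monitoring and error-handling bullet. Subscription, resource group, factory, pipeline, and parameter names are placeholders.

```python
import time
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# All identifiers below are placeholders for illustration.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "rg-data-platform"
FACTORY_NAME = "adf-example"
PIPELINE_NAME = "pl_sap_to_databricks"

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Kick off a run, passing any parameters defined on the pipeline.
run = client.pipelines.create_run(
    RESOURCE_GROUP, FACTORY_NAME, PIPELINE_NAME,
    parameters={"load_date": "2025-06-28"},
)

# Poll until the run finishes, then surface failures for alerting.
while True:
    status = client.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id)
    if status.status not in ("Queued", "InProgress"):
        break
    time.sleep(30)

if status.status != "Succeeded":
    raise RuntimeError(f"ADF pipeline ended with status {status.status}: {status.message}")
```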
🔹 Performance Optimization
• Identify and resolve performance bottlenecks in Databricks jobs
• Tune Spark configurations and optimize resource usage (see the example settings below)
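A few representative knobs for this kind of tuning, shown as a sketch; the values are workload-dependent and purely illustrative, and `curated.orders` is a hypothetical table.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Session-level settings; the numbers are examples, not recommendations.
spark.conf.set("spark.sql.adaptive.enabled", "true")            # let AQE coalesce and split skewed partitions
spark.conf.set("spark.sql.adaptive.skewJoin.enabled", "true")   # mitigate skewed join keys
spark.conf.set("spark.sql.shuffle.partitions", "400")           # size shuffles to the data volume
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", str(64 * 1024 * 1024))  # broadcast small dimensions

# Inspect the physical plan to confirm partition pruning and the join strategy.
df = spark.table("curated.orders").filter("ingest_date = current_date()")
df.explain(mode="formatted")
```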
🔹 Python & PySpark
• Write robust, reusable, and efficient data transformation scripts (pattern sketched below)
• Implement custom logic using Python and Spark for data integration
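A sketch of the reusable-transformation pattern implied here: small pure functions over DataFrames, chained with `DataFrame.transform` so each step can be unit-tested in isolation. Table and column names are hypothetical.

```python
from pyspark.sql import DataFrame, SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

def standardize_customer_keys(df: DataFrame) -> DataFrame:
    """Trim and upper-case the join key so SAP and non-SAP sources line up."""
    return df.withColumn("customer_id", F.upper(F.trim("customer_id")))

def add_load_metadata(df: DataFrame, source: str) -> DataFrame:
    """Stamp each row with its source system and load timestamp."""
    return (df.withColumn("source_system", F.lit(source))
              .withColumn("loaded_at", F.current_timestamp()))

# Chaining keeps the pipeline declarative and each step independently testable.
orders = (spark.table("raw.sap_orders")
          .transform(standardize_customer_keys)
          .transform(lambda d: add_load_metadata(d, "SAP_HANA")))
```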
🔹 SAP HANA Expertise
• In-depth understanding of SAP HANA architecture and data modeling
• Perform large-scale data extraction, transformation, and migration (see the sketch below)
• Integrate SAP data with Azure services and other modern platforms
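One common integration path, sketched under assumptions rather than as the team's actual setup: reading from SAP HANA into Databricks over JDBC with SAP's ngdbc driver (which must be installed on the cluster). Host, port, schema, table, and secret names are placeholders, and `dbutils` is the utility object available only in Databricks notebooks and jobs.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Placeholder connection details; the password comes from a Databricks secret scope.
hana_url = "jdbc:sap://hana.example.internal:30015"
hana_user = "EXTRACT_USER"
hana_password = dbutils.secrets.get(scope="sap", key="hana-password")  # Databricks-only helper

# Push the date filter down to HANA instead of pulling the whole table.
sales = (spark.read.format("jdbc")
         .option("driver", "com.sap.db.jdbc.Driver")
         .option("url", hana_url)
         .option("user", hana_user)
         .option("password", hana_password)
         .option("query", "SELECT * FROM SAPABAP1.VBAK WHERE ERDAT >= '20250101'")
         .option("fetchsize", 10000)
         .load())

# Land the extract as Delta for downstream transformation and reconciliation.
sales.write.format("delta").mode("overwrite").saveAsTable("raw.sap_vbak")
```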
Good To Have Skills
🔸 ETL Tools – SAP Data Services (BODS)
• Create and optimize data jobs in SAP BODS
• Handle complex SAP-specific mappings and CDC scenarios
🔸 Data Profiling & Validation
• Execute data profiling and validation tasks
• Reconcile large datasets during data migrations (see the sketch below)
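A minimal reconciliation sketch for this kind of migration validation: row counts, key profiles, and an anti-join between the staged HANA extract and the migrated Delta table. All table and column names are hypothetical.

```python
from pyspark.sql import DataFrame, SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

source = spark.table("raw.sap_vbak")           # extract staged from SAP HANA
target = spark.table("curated.sales_orders")   # migrated Delta table

# Row-count check: the cheapest signal that a load dropped or duplicated records.
src_count, tgt_count = source.count(), target.count()
print(f"rows: source={src_count}, target={tgt_count}, diff={src_count - tgt_count}")

# Key profile: total rows, distinct keys, and null keys on each side.
def profile_keys(df: DataFrame, key_col: str) -> DataFrame:
    return df.agg(
        F.count(F.lit(1)).alias("rows"),
        F.countDistinct(key_col).alias("distinct_keys"),
        F.sum(F.col(key_col).isNull().cast("int")).alias("null_keys"),
    )

profile_keys(source, "VBELN").show()
profile_keys(target, "order_id").show()

# Key-level reconciliation: records present in the source but missing from the target.
missing = source.join(target, source["VBELN"] == target["order_id"], "left_anti")
print(f"missing in target: {missing.count()}")
```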
🔸 Azure Cloud & Networking
• Hands-on with Azure compute, storage, and network security services
• Troubleshoot VNet, VPN, firewall, and credential issues
• Familiarity with IAM, RBAC, and secure secret management (see the sketch below)
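On the secure-secret-management point, one illustrative pattern (an assumption, not something the posting prescribes) is resolving credentials at runtime from Azure Key Vault via `DefaultAzureCredential`, so nothing is hard-coded in notebooks or pipeline definitions. The vault URL and secret name are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Placeholder vault; DefaultAzureCredential resolves a managed identity,
# environment credentials, or an az CLI login, whichever is available.
vault_url = "https://kv-data-platform.vault.azure.net"
client = SecretClient(vault_url=vault_url, credential=DefaultAzureCredential())

# Hypothetical secret holding the SAP HANA service-account password.
hana_password = client.get_secret("sap-hana-extract-password").value
```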
Key Attributes
• Strong analytical and problem-solving skills
• Excellent communication and collaboration across global teams
• Detail-oriented with a focus on data quality and efficiency
Preferred Certifications (optional)
• Microsoft Certified: Azure Data Engineer Associate
• Databricks Certified Developer – Data Engineering
Skills: data, databricks, data profiling, hana, data validation, pyspark, etl tools, networking, pipelines, sap hana, spark sql, sap, spark, azure cloud, azure, sql, python, azure data factory, delta lake, architecture