

Compunnel Inc.
Oracle Fusion HCM Data Engineer (C2H)
Featured Role | Apply direct with Data Freelance Hub
This role is for an "Oracle Fusion HCM Data Engineer (C2H)" with a contract length of "unknown" and a pay rate of "unknown." Candidates should have 10+ years of experience in Oracle Fusion HCM, strong Databricks proficiency, and expertise in SQL.
Country
United States
Currency
$ USD
Day rate
Unknown
Date
November 27, 2025
Duration
Unknown
Location
Unknown
Contract
Unknown
Security
Unknown
Location detailed
Princeton, NJ
Skills detailed
#Delta Lake #Data Quality #GCP (Google Cloud Platform) #Spark SQL #Scala #Automation #Deployment #Business Analysis #ETL (Extract, Transform, Load) #Data Modeling #Data Governance #Azure #Databricks #Data Engineering #Data Pipeline #Spark (Apache Spark) #BI (Business Intelligence) #Data Processing #PySpark #Data Science #Data Extraction #AWS (Amazon Web Services) #Security #Data Integrity #Data Architecture #Cloud #Oracle #SQL (Structured Query Language) #Oracle Cloud
Role description
Mandatory Skills: Oracle Cloud HCM BI Publisher, Oracle Cloud Human Capital Management, Oracle Fast Formula
Experience Required: 10+ years in Oracle Fusion HCM
Job description
An Oracle Data Engineer with Oracle Fusion and Databricks experience is responsible for designing, developing, and maintaining scalable, efficient data solutions that integrate data from various sources, including Oracle Fusion applications, and process it within the Databricks environment.
Key Responsibilities
• Data Pipeline Development
• Design, build, and optimize robust ELT pipelines to ingest, transform, and load data from Oracle Fusion applications and other sources into the Databricks Lakehouse, using PySpark, SQL, and Databricks notebooks (a minimal sketch follows this list).
• Databricks Platform Expertise
• Leverage Databricks capabilities such as Delta Lake, Unity Catalog, and Spark optimization techniques to ensure data quality, performance, and governance.
• Oracle Fusion Integration
• Develop connectors and integration strategies to extract data from Oracle Fusion modules (e.g., Financials, HCM, SCM) using APIs, SQL, or other appropriate methods.
• Data Modeling and Warehousing
• Design and implement data models within Databricks, potentially following a medallion architecture, to support analytical and reporting requirements.
• Performance Optimization
• Tune Spark jobs and optimize data processing within Databricks for efficiency and cost-effectiveness.
• Data Quality and Governance
• Implement data quality checks, error handling, and data validation frameworks to ensure data integrity. Adhere to data governance policies and security best practices.
• Collaboration
• Work closely with data architects, data scientists, business analysts, and other stakeholders to understand data requirements and deliver solutions that meet business needs.
• Automation and CI/CD
• Develop automation scripts and implement CI/CD pipelines for Databricks workflows and deployments.
• Troubleshooting and Support
• Provide operational support, troubleshoot data-related issues, and perform root cause analysis.
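To make the pipeline and data-modeling responsibilities above concrete, here is a minimal PySpark sketch of the kind of Databricks work described: landing an Oracle Fusion HCM extract as a bronze Delta table and promoting a cleansed silver table in a medallion layout. The storage path, schema and table names, and the worker-extract columns are illustrative assumptions, not details taken from this posting.

```python
# Minimal sketch (not the client's actual pipeline): land an Oracle Fusion HCM
# extract as a bronze Delta table, then promote a cleansed silver table.
# Paths, schemas, table names, and columns below are illustrative assumptions,
# and the hr_bronze / hr_silver schemas are assumed to already exist.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("fusion_hcm_bronze_to_silver").getOrCreate()

# Bronze: ingest the raw HCM extract (e.g., a CSV produced by HCM Extracts/BICC)
# as-is, adding load metadata for traceability.
raw = (
    spark.read.option("header", True).csv("/mnt/landing/fusion_hcm/workers/")
    .withColumn("_ingested_at", F.current_timestamp())
    .withColumn("_source_file", F.input_file_name())
)
raw.write.format("delta").mode("append").saveAsTable("hr_bronze.workers_raw")

# Silver: basic cleansing and data-quality gating before analytics use.
bronze = spark.table("hr_bronze.workers_raw")
silver = (
    bronze
    .filter(F.col("person_id").isNotNull())                 # reject rows missing the key
    .dropDuplicates(["person_id", "effective_start_date"])  # one row per key/effective date
    .withColumn("effective_start_date", F.to_date("effective_start_date"))
)
silver.write.format("delta").mode("overwrite").saveAsTable("hr_silver.workers")
```

In practice this would typically run as a scheduled Databricks job or workflow, with the same notebook promoted through environments via the CI/CD pipelines mentioned above.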
Required Skills and Qualifications
• Strong Proficiency in Databricks
• Including PySpark, Scala, Delta Lake, Unity Catalog, and Databricks notebooks.
• Experience with Oracle Fusion
• Knowledge of Oracle Fusion data structures, APIs, and data extraction methods.
• Expertise in SQL
• For querying, manipulating, and optimizing data in both Oracle and Databricks (see the query sketch after this list).
• Cloud Platform Experience
• Familiarity with a major cloud provider (e.g., AWS, Azure, GCP) where Databricks is deployed.
• Data Warehousing and ETL/ELT Concepts
• Solid understanding of data warehousing principles and experience in building and optimizing data pipelines.
• Problem-Solving and Analytical Skills
• Ability to analyze complex data issues and propose effective solutions.
• Communication and Collaboration
• Strong interpersonal skills to work effectively within cross-functional teams.
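As a small illustration of the SQL expertise called for above, the sketch below runs a Spark SQL duplicate-key check in a Databricks notebook against the hypothetical silver table from the earlier example; the table and column names are assumptions, not details from this posting.

```python
# Minimal sketch of the SQL side of the role: a Spark SQL data-quality check
# against the (hypothetical) hr_silver.workers table from the medallion example.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("fusion_hcm_dq_check").getOrCreate()

dup_keys = spark.sql("""
    SELECT person_id,
           effective_start_date,
           COUNT(*) AS row_count
    FROM   hr_silver.workers
    GROUP  BY person_id, effective_start_date
    HAVING COUNT(*) > 1
""")

# Surface duplicates to the operator; a real pipeline might fail the job or
# route these rows to a quarantine table instead.
dup_keys.show(truncate=False)
```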






