

IPrime Info Solutions Inc.
Senior Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer; the contract length and pay rate are not specified. Key skills include Azure Databricks, PySpark, SQL, and Oracle database integration. A minimum of 10 years of experience is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 25, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Louisville, KY
-
🧠 - Skills detailed
#Azure Data Factory #Scala #Azure Databricks #Security #Synapse #Oracle #Data Lake #Storage #Deployment #Infrastructure as Code (IaC) #ADF (Azure Data Factory) #Data Integration #Migration #Cloud #Code Reviews #Azure DevOps #Azure Synapse Analytics #Data Pipeline #SQL Queries #Delta Lake #Data Quality #Spark SQL #Spark (Apache Spark) #GitHub #Data Engineering #ADLS (Azure Data Lake Storage) #DevOps #Data Ingestion #Databricks #SQL (Structured Query Language) #BI (Business Intelligence) #ETL (Extract, Transform, Load) #Azure #Azure ADLS (Azure Data Lake Storage) #Automated Testing #Data Architecture #Microsoft Azure #Data Modeling #Databases #PySpark #Data Governance #Data Science
Role description
Job Title: Senior Azure Data Engineer
Job Summary
Experience: 10+ years
We are seeking an experienced Senior Azure Data Engineer to join our data and analytics team. In this role, you will be a technical leader responsible for designing, developing, and maintaining our enterprise-scale data platform on Microsoft Azure. You will leverage your deep expertise in Azure Databricks, the modern data stack, and data warehousing principles to build robust, scalable, and high-performance data pipelines.
A key part of this role will be managing data integration and migration projects, specifically involving Oracle databases. The ideal candidate is a hands-on engineer with a strong architectural mindset, proficient in PySpark and SQL, and passionate about building a best-in-class Lakehouse platform.
Key Responsibilities
Data Architecture & Design: Architect and implement end-to-end data solutions in Azure, including data ingestion, transformation, and storage.
Pipeline Development: Design, build, and optimize scalable ETL/ELT data pipelines using Azure Databricks (PySpark, Spark SQL) and Azure Data Factory (ADF).
Lakehouse Implementation: Lead the development and governance of our Delta Lakehouse, implementing a Medallion Architecture (Bronze, Silver, Gold layers) to ensure data quality and reliability.
Oracle Integration: Develop and maintain data ingestion pipelines to extract data efficiently and reliably from on-premises or cloud-based Oracle databases (using PL/SQL, GoldenGate, ADF connectors, or other methods); a PySpark sketch of one such extract follows this list.
Data Modeling: Create and manage data models within the Lakehouse and/or Azure Synapse Analytics to support BI, reporting, and advanced analytics use cases.
Performance Tuning: Monitor, troubleshoot, and optimize Databricks clusters, Spark jobs, and SQL queries for performance and cost-efficiency.
Governance & Security: Implement data governance, data quality, and security best practices within the Azure platform, using tools such as Unity Catalog for fine-grained access control and lineage (see the grants sketch after this list).
DevOps & CI/CD: Implement and manage CI/CD pipelines using Azure DevOps (or GitHub Actions) for automated testing and deployment of data pipelines and infrastructure as code (IaC); a sample automated test follows this list.
Mentorship: Provide technical guidance and mentorship to junior data engineers, conduct code reviews, and establish team-wide best practices.
Stakeholder Collaboration: Work closely with data scientists, BI developers, and business stakeholders to understand requirements and deliver data solutions that meet their needs.
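
As a hedged illustration of the pipeline, Lakehouse, and Oracle integration responsibilities above, the following minimal PySpark sketch lands an Oracle extract in a bronze Delta table and refines it into silver. The JDBC connection string, table names, and columns (ORDER_ID, ORDER_DATE) are hypothetical placeholders, not details from this posting, and the cluster is assumed to have the Oracle JDBC driver (e.g. ojdbc8) installed.

```python
# Minimal sketch, assuming a Databricks cluster with the Oracle JDBC
# driver installed. All connection details, tables, and columns are
# hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("oracle-medallion-sketch").getOrCreate()

# Bronze: raw, append-only extract from Oracle over JDBC.
bronze_df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//oracle-host:1521/ORCLPDB1")  # placeholder host/service
    .option("dbtable", "SALES.ORDERS")                               # placeholder source table
    .option("user", "etl_user")                                      # placeholder; keep real
    .option("password", "<from-a-secret-scope>")                     # credentials in a secret scope
    .option("fetchsize", "10000")                                    # rows per network round trip
    .load()
    .withColumn("_ingested_at", F.current_timestamp())               # ingestion audit column
)
bronze_df.write.format("delta").mode("append").saveAsTable("bronze.orders")

# Silver: deduplicated, typed, and filtered for downstream modeling.
silver_df = (
    spark.table("bronze.orders")
    .dropDuplicates(["ORDER_ID"])
    .withColumn("ORDER_DATE", F.to_date("ORDER_DATE"))
    .filter(F.col("ORDER_ID").isNotNull())
)
silver_df.write.format("delta").mode("overwrite").saveAsTable("silver.orders")
```

In practice the raw extract could equally be orchestrated by an ADF copy activity landing files in ADLS Gen2, with Databricks handling only the bronze-to-silver refinement.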
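
For the governance responsibility, here is a minimal sketch of Unity Catalog access control, assuming a Unity Catalog-enabled workspace; the lakehouse catalog, gold schema, and data_analysts group are hypothetical names.

```python
# Assumes an active `spark` session (as in the sketch above) and
# sufficient privileges on the hypothetical `lakehouse` catalog.
spark.sql("GRANT USE CATALOG ON CATALOG lakehouse TO `data_analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA lakehouse.gold TO `data_analysts`")
spark.sql("GRANT SELECT ON SCHEMA lakehouse.gold TO `data_analysts`")
# Lineage for Unity Catalog-managed tables is captured automatically
# and can be inspected in Catalog Explorer.
```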
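
For the CI/CD responsibility, here is a sketch of the kind of unit test an Azure DevOps (or GitHub Actions) pipeline could run on every commit before deployment; standardize_orders is a hypothetical transformation under test, not a function named in this posting.

```python
# Runnable locally with `pytest`; uses a local Spark session rather
# than a Databricks cluster so it can execute inside a CI agent.
from pyspark.sql import SparkSession, functions as F


def standardize_orders(df):
    """Hypothetical transform: drop rows missing a key, normalize status."""
    return (
        df.filter(F.col("order_id").isNotNull())
          .withColumn("status", F.upper(F.col("status")))
    )


def test_standardize_orders():
    spark = SparkSession.builder.master("local[1]").appName("ci-test").getOrCreate()
    raw = spark.createDataFrame(
        [(1, "shipped"), (None, "pending")], ["order_id", "status"]
    )
    rows = standardize_orders(raw).collect()
    assert len(rows) == 1                 # null-key row was dropped
    assert rows[0]["status"] == "SHIPPED"  # status was upper-cased
```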
Required Qualifications & Skills
Azure Core Skills:
Azure Databricks: Expert-level proficiency in developing with notebooks, jobs, and clusters.
PySpark & SQL: Advanced skills in both for large-scale data transformation.
Azure Data Factory (ADF): Strong experience building and orchestrating complex pipelines.
Azure Storage: Deep knowledge of Azure Data Lake Storage (ADLS Gen2) and Delta Lake.