

Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with 3–6 years of experience in data engineering, proficiency in SQL and Python, and familiarity with cloud platforms such as Oracle and Azure. Contract length and pay rate are unspecified.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
608
-
🗓️ - Date discovered
September 30, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Unknown
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
New Orleans, LA
-
🧠 - Skills detailed
#ADF (Azure Data Factory) #Datasets #Monitoring #Data Quality #Computer Science #Data Architecture #Storage #BI (Business Intelligence) #Deployment #Scala #Synapse #Azure Data Factory #Databricks #Data Engineering #PySpark #Python #SQL (Structured Query Language) #ETL (Extract, Transform, Load) #Logging #Spark (Apache Spark) #Documentation #Version Control #Azure #Oracle #Data Modeling #Cloud
Role description
Required Skills & Experience
• Bachelor’s degree in Computer Science, Information Systems, Data Engineering, or related field.
• 3–6 years of professional experience in data engineering, ETL development, or similar role.
• Demonstrated experience with cloud-based data platforms (Oracle, Fabric, Databricks, Synapse, or equivalent).
• Strong proficiency in SQL and Python (PySpark or Scala a plus).
• Solid understanding of data modeling principles and performance optimization.
• Familiarity with orchestration tools (Azure Data Factory, Fabric Pipelines, or equivalent) and with large-scale data handling.
Job Description
The client is building out internal data capabilities to support a cloud-based solution developed during a recent transformation initiative. The Data Engineer will be responsible for ingesting, transforming, and mapping data across multiple systems, ensuring seamless integration between backend Oracle environments and frontend reporting platforms.
• Design, develop, and maintain ETL/ELT pipelines for internal and external data sources using Azure and related platforms.
• Implement orchestration, scheduling, and monitoring for high-availability data flows.
• Ensure data quality through validation, testing, logging, and automated monitoring (see the sketch after this list).
• Optimize storage, partitioning, and compute performance in Fabric Lakehouse and associated environments.
• Collaborate with the Data Architect to align implementation with enterprise data models, governance, and standards.
• Partner with Analysts to deliver curated, trusted, and performant datasets for business intelligence and advanced analytics.
• Apply CI/CD practices for data workflows, including version control and automated deployment.
• Maintain documentation of data flows, schemas, and processes, and support team knowledge sharing.
• Perform other job-related duties as assigned, within the scope of responsibilities.
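For illustration only (not part of the original posting): a minimal PySpark sketch of the kind of validation-and-load step described in the responsibilities above. It assumes a locally available Spark session; the paths and the order_id column are hypothetical placeholders, not details from the client's environment.

```python
# Illustrative data-quality check before loading to a curated zone.
# Paths, dataset, and column names are hypothetical examples.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-validation-sketch").getOrCreate()

# Hypothetical landing location; in practice this might be a Lakehouse table
# or a file landed by an Azure Data Factory / Fabric pipeline.
orders = spark.read.parquet("/landing/orders")

# Basic validation: key column must be non-null and the dataset non-empty.
null_keys = orders.filter(F.col("order_id").isNull()).count()
row_count = orders.count()

if null_keys > 0 or row_count < 1:
    # In a production pipeline this would log and alert via automated monitoring.
    raise ValueError(f"Data quality check failed: {null_keys} null keys, {row_count} rows")

# Only load to the curated zone once validation passes.
orders.write.mode("overwrite").parquet("/curated/orders")
```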