Avance Consulting

Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer with 7-8 years of DWH/Big Data experience, focusing on Azure Databricks, Informatica, Snowflake, and SQL. Contract length is unspecified, with a competitive pay rate. Remote work is available.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
October 25, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Plano, TX
-
🧠 - Skills detailed
#Azure Data Factory #Scala #Apache Spark #Synapse #Oracle #Dimensional Data Models #Data Transformations #Big Data #ADF (Azure Data Factory) #Snowflake #Cloud #Data Manipulation #Informatica #Datasets #Version Control #Microsoft Power BI #Azure DevOps #Azure Synapse Analytics #Data Pipeline #SQL Queries #Scripting #Data Bricks #Spark (Apache Spark) #Data Engineering #Data Processing #DevOps #Data Ingestion #Databricks #SQL (Structured Query Language) #BI (Business Intelligence) #ETL (Extract, Transform, Load) #Azure #Batch #Data Modeling #PySpark #Indexing #Python
Role description
We are hiring a Senior Data Engineer with deep expertise in Azure Databricks (must have), Informatica, Snowflake, and SQL to join our high-performing team. The ideal candidate will have a proven track record in designing, building, and optimizing big data pipelines and architecture while leveraging their technical proficiency in cloud-based data engineering.

Key Responsibilities:

1. Data Engineering & Architecture:
• Design, develop, and maintain high-performance data pipelines for structured and unstructured data using Azure Databricks and Apache Spark.
• Build and manage scalable data ingestion frameworks for batch and real-time data processing.
• Develop and optimize data models and queries in Azure Synapse Analytics to power BI and analytics use cases.

2. ETL/ELT Development:
• Derive insights by analyzing existing Informatica mappings.
• Develop robust ETL/ELT pipelines using Databricks notebooks, PySpark, and Python (see the PySpark sketch after this description).
• Perform data transformations, cleansing, and validation to prepare datasets for analysis.
• Manage and monitor job orchestration, ensuring pipelines run efficiently and reliably.

3. Performance Optimization:
• Optimize SQL queries for large-scale data processing.
• Implement partitioning, caching, and indexing strategies to improve the performance and scalability of big data workloads (see the optimization sketch after this description).
• Conduct capacity planning and recommend infrastructure optimizations for cost-effectiveness.

Required Qualifications:

Experience:
• 7-8 years' experience in a DWH/Big Data background (preferably in the Telco domain).
• Proven expertise with Azure Databricks and a basic understanding of Informatica mappings.

Technical Skills:
• Advanced proficiency in Python and SQL for data manipulation and pipeline development.
• Deep understanding of data modeling for OLAP, OLTP, and dimensional data models.
• Experience with ETL/ELT tools such as Azure Data Factory or Informatica.
• Familiarity with Azure DevOps for CI/CD pipelines and version control.
• 3-4 years of hands-on experience with Databricks on Azure (must have).
• 3-4 years of Oracle PL/SQL scripting experience (must have).
• Strong communication skills for client handling and coordination with onsite and offshore teams.
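For candidates gauging the level of PySpark fluency the ETL/ELT responsibilities imply, the sketch below shows a minimal batch ingestion, cleansing, and transformation job on Databricks. It is illustrative only: the paths, table layout, and column names are hypothetical and are not part of the role description.

```python
# Minimal PySpark batch ETL sketch (illustrative; paths and schema are hypothetical).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

# Ingest: read a raw batch drop from a hypothetical landing zone.
raw = spark.read.option("header", True).csv("/mnt/landing/orders/2025-10-25/")

# Cleanse and validate: drop rows missing keys, normalize types, deduplicate.
cleaned = (
    raw.dropna(subset=["order_id", "customer_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])
)

# Transform: derive a date column and aggregate for downstream analytics.
daily_revenue = (
    cleaned.withColumn("order_date", F.to_date("order_ts"))
           .groupBy("order_date", "customer_id")
           .agg(F.sum("amount").alias("daily_amount"),
                F.count("order_id").alias("order_count"))
)

# Load: write partitioned Delta output for BI and analytics consumers.
(daily_revenue.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("/mnt/curated/daily_revenue"))
```

In practice a job like this would typically be wrapped in a Databricks notebook or workflow task so orchestration and monitoring can be handled by the platform, as the responsibilities describe.

Likewise, the performance-optimization bullet maps to familiar Spark techniques: partition pruning, caching of reused intermediates, and Delta table maintenance in place of traditional indexing. The snippet below is a hedged sketch using the same hypothetical output table, not a prescription from the role itself.

```python
# Illustrative Spark optimization patterns (table and paths hypothetical).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_revenue_optimization").getOrCreate()

# Partition pruning: filtering on the partition column limits the files scanned.
revenue = spark.read.format("delta").load("/mnt/curated/daily_revenue")
recent = revenue.filter(F.col("order_date") >= "2025-10-01")

# Cache an intermediate that several downstream queries reuse.
recent.cache()
top_customers = (recent.groupBy("customer_id")
                       .agg(F.sum("daily_amount").alias("month_to_date"))
                       .orderBy(F.desc("month_to_date"))
                       .limit(100))
order_volume = recent.agg(F.sum("order_count").alias("total_orders"))

top_customers.show()
order_volume.show()
recent.unpersist()

# On Databricks, Delta maintenance (OPTIMIZE with ZORDER) plays a role similar
# to indexing by co-locating data for frequently filtered columns.
spark.sql("OPTIMIZE delta.`/mnt/curated/daily_revenue` ZORDER BY (customer_id)")
```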
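These sketches assume a Databricks runtime with Delta Lake available; on other Spark environments the read/write format and the maintenance command would differ.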