

Data Engineer | 12+ Years Experience | Hybrid
Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr Data Engineer in Phoenix, AZ (Hybrid), requiring 12+ years of experience. Key skills include ETL processes, Azure Data Factory, SQL Server, Power BI, and cloud infrastructure. A Bachelor's in computer science is preferred.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: May 22, 2025
Project duration: Unknown
Location type: Hybrid
Contract type: Unknown
Security clearance: Unknown
Location detailed: Phoenix, AZ
Skills detailed
#Azure Data Factory #Azure DevOps #ADLS (Azure Data Lake Storage) #Database Design #ADF (Azure Data Factory) #Databricks #C++ #Data Cleansing #Azure ADLS (Azure Data Lake Storage) #API (Application Programming Interface) #Azure SQL #Data Lake #Data Quality #Programming #Scripting #Batch #Python #Visualization #Data Transformations #Computer Science #Data Storage #SQL (Structured Query Language) #JavaScript #Cloud #Shell Scripting #Azure SQL Database #GCP (Google Cloud Platform) #Spark (Apache Spark) #JSON (JavaScript Object Notation) #Data Engineering #Jira #Data Ingestion #Microsoft Power BI #Data Analysis #Documentation #Agile #SSIS (SQL Server Integration Services) #Data Modeling #Databases #ETL (Extract, Transform, Load) #Scala #Storage #Data Lineage #Data Integration #Data Pipeline #Unix #BI (Business Intelligence) #Database Performance #Collibra #Bash #SQL Server #Data Warehouse #Power Automate #Docker #Kubernetes #REST (Representational State Transfer) #DevOps #GitHub #Jenkins #Data Profiling #Java #AWS (Amazon Web Services) #Azure
Role description
Role: Sr Data Engineer (Exp Level: 12+)
Location: Phoenix, AZ (Hybrid)
Experience: 12+ Years
Key Responsibilities
• Solid experience with database design practices and data warehousing concepts.
• Design, develop, and maintain data pipelines and ETL processes to extract, transform, and load data from various sources into our data warehouse.
• Hands-on experience delivering data reporting requirements on on-prem SQL Servers using SSIS and/or other integration processes.
• Expertise in coding and implementing batch and streaming ETL data pipelines in cloud-based data infrastructure using Azure Data Factory, Spark, Python, and Scala on Databricks.
• Experience working with Azure Data Lake Storage and Azure SQL Database.
• Optimize data ingestion and transformation processes to ensure efficient and scalable data flows; monitor and optimize database performance, including query tuning and index optimization.
• Develop and maintain data models, schemas, and data dictionaries for efficient data storage and retrieval.
• Perform data analysis, data profiling, and data cleansing to ensure data quality and accuracy; perform data validation, quality checks, and troubleshooting to identify and resolve data-related issues.
• Develop Power BI dashboards, reports, and visualizations to effectively communicate data insights. Implement data transformations, data modeling, and data integration processes in Power BI.
• Create and maintain technical documentation related to database structures, schemas, and reporting solutions.
• Experience with Microsoft Office tools such as Excel, Word, and PowerPoint.
• Experience with build and deployment tools such as Azure DevOps, GitHub, and Jenkins.
• Experience working on an Agile development team and delivering features incrementally using Jira and Confluence.
• Stay up to date with industry trends and best practices in data engineering and Power BI, and proactively recommend and implement improvements to our data infrastructure.
• Ability to multi-task, be adaptable, and be nimble within a team environment.
• Strong communication, interpersonal, analytical, and problem-solving skills.
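To illustrate the scope of the batch ETL work described above, here is a minimal, hypothetical sketch of an extract-transform-load cycle using only Python's standard library. An in-memory SQLite database stands in for the data warehouse, and all table, column, and function names are invented for this example; production pipelines at this level would instead use Azure Data Factory, Spark on Databricks, or SSIS as listed in the requirements.

```python
import sqlite3

def run_etl(raw_rows):
    """Load raw (id, name, state) tuples into a hypothetical dim_customer table."""
    conn = sqlite3.connect(":memory:")  # stands in for the real warehouse
    conn.execute(
        "CREATE TABLE dim_customer (id INTEGER PRIMARY KEY, name TEXT, state TEXT)"
    )

    # Transform: cleanse whitespace and normalize casing before loading,
    # and apply a simple data-quality check (drop rows with no name).
    cleaned = [
        (rid, name.strip().title(), state.strip().upper())
        for rid, name, state in raw_rows
        if name and name.strip()
    ]

    # Load: bulk insert into the target table.
    conn.executemany("INSERT INTO dim_customer VALUES (?, ?, ?)", cleaned)
    conn.commit()
    return conn

conn = run_etl([
    (1, "  ada lovelace ", "az"),
    (2, "", "ca"),              # fails the quality check, not loaded
    (3, "grace hopper", " ny "),
])
rows = conn.execute(
    "SELECT id, name, state FROM dim_customer ORDER BY id"
).fetchall()
# rows -> [(1, 'Ada Lovelace', 'AZ'), (3, 'Grace Hopper', 'NY')]
```

The same extract/cleanse/load structure scales up directly: the transform step becomes Spark DataFrame operations and the load step writes to the warehouse instead of SQLite.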
Preferred
• Experience with REST APIs and JSON.
• Experience in banking and financial domains.
• Experience in cloud environments such as GCP, Azure, and AWS.
• Familiarity with a variety of programming languages, such as Java, JavaScript, and C/C++.
• Familiarity with containerization and orchestration technologies (Docker, Kubernetes); experience with shell scripting in Bash, Unix, or Windows shells is preferable.
• Experience with Power Automate and Power Apps.
• Experience with Collibra Data Lineage and Data Quality modules.
Education: Bachelor's degree in computer science or information systems, along with work experience in a related field.