

Sharp Decisions
Sr Data Engineer (967)
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr Data Engineer in Phoenix, AZ (Hybrid); the contract length and pay rate are unspecified. It requires 10+ years of experience, expertise in Azure Data Factory, ETL processes, and data warehousing, and a Bachelor’s degree in Computer Science or a related field.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 11, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Scottsdale, AZ
-
🧠 - Skills detailed
#Database Performance #Visualization #Data Cleansing #DevOps #Azure #JSON (JavaScript Object Notation) #ADLS (Azure Data Lake Storage) #Bash #Data Quality #Scripting #Cloud #Data Storage #Batch #Power Automate #AWS (Amazon Web Services) #Data Analysis #Azure SQL Database #Microsoft Power BI #Collibra #Data Lineage #ADF (Azure Data Factory) #Data Profiling #Documentation #JavaScript #SQL (Structured Query Language) #SQL Server #Scala #Data Modeling #Python #Agile #Delta Lake #C++ #Data Integration #Programming #GCP (Google Cloud Platform) #ETL (Extract, Transform, Load) #Shell Scripting #SSIS (SQL Server Integration Services) #Data Ingestion #Storage #Data Lake #Docker #GitHub #Data Warehouse #Data Engineering #Spark (Apache Spark) #Database Design #Azure Data Factory #Azure DevOps #Data Pipeline #Jenkins #BI (Business Intelligence) #Databases #Jira #Kubernetes #Unix #Data Transformations #Azure ADLS (Azure Data Lake Storage) #Databricks #REST (Representational State Transfer) #API (Application Programming Interface) #Java #Azure SQL #Computer Science
Role description
Role: Sr Data Engineer
Location: Phoenix, AZ (Hybrid)
Experience: 10+ Years
Key Responsibilities:
• Solid experience with database design practices and data warehousing concepts.
• Design, develop, and maintain data pipelines and ETL processes to extract, transform, and load data from various sources into our data warehouse.
• Hands-on experience delivering data reporting requirements on on-prem SQL Server using SSIS and/or other integration processes.
• Expertise in coding and implementing batch and streaming ETL pipelines on cloud data infrastructure, using Azure Data Factory and Spark (Python or Scala) on Databricks.
• Experience working with Azure Data Lake Storage and Azure SQL Database.
• Experience working with Databricks Delta Lake, the medallion architecture, and Unity Catalog.
• Optimize data ingestion and transformation processes to ensure efficient and scalable data flows; monitor and optimize database performance, including query tuning and index optimization.
• Develop and maintain data models, schemas, and data dictionaries for efficient data storage and retrieval.
• Perform data analysis, data profiling, and data cleansing to ensure data quality and accuracy; carry out data validation, quality checks, and troubleshooting to identify and resolve data-related issues.
• Develop Power BI dashboards, reports, and visualizations to effectively communicate data insights. Implement data transformations, data modeling, and data integration processes in Power BI.
• Create and maintain technical documentation related to database structures, schemas, and reporting solutions.
• Experience with MS Office tools such as Excel, Word, and PowerPoint.
• Experience with build and deployment tools such as Azure DevOps, GitHub, and Jenkins.
• Experience working on an Agile Development team and delivering features incrementally using Jira and Confluence.
• Stay up to date with industry trends and best practices in data engineering and Power BI, and proactively recommend and implement improvements to our data infrastructure.
• Ability to multi-task, be adaptable, and nimble within a team environment.
• Strong communication, interpersonal, analytical, and problem-solving skills.
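The extract-transform-load flow the responsibilities describe can be sketched in plain Python. This is a minimal illustration only, not a production pipeline: the source payload, field names, and quality rule below are assumptions invented for the example, and a real implementation would run on Azure Data Factory or Databricks rather than in-process lists.

```python
import json
from datetime import date

# Illustrative raw source payload (assumption; stands in for an upstream system).
RAW = '[{"id": 1, "amount": " 42.50 "}, {"id": 2, "amount": null}]'

def extract(payload: str) -> list[dict]:
    """Extract: parse the source JSON into Python records."""
    return json.loads(payload)

def transform(records: list[dict]) -> list[dict]:
    """Transform: cleanse values and drop rows that fail a quality check."""
    cleaned = []
    for rec in records:
        if rec.get("amount") is None:  # data-quality rule: amount is required
            continue
        cleaned.append({
            "id": rec["id"],
            "amount": float(str(rec["amount"]).strip()),  # cleansing: trim + cast
            "load_date": date.today().isoformat(),        # audit column
        })
    return cleaned

def load(records: list[dict], target: list) -> int:
    """Load: append to a stand-in warehouse table; return rows loaded."""
    target.extend(records)
    return len(records)

warehouse: list[dict] = []
rows = load(transform(extract(RAW)), warehouse)
```

Here the record with a null `amount` is rejected by the quality check, so only one row reaches the stand-in warehouse; in a real pipeline the rejected rows would typically be routed to a quarantine table rather than silently dropped.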
Preferred:
• Experience with REST APIs and JSON.
• Experience in the banking and financial domains.
• Industry certification (Azure, GCP, or AWS).
• Familiarity with a variety of programming languages, such as Java, JavaScript, and C/C++.
• Familiarity with containerization and orchestration technologies (Docker, Kubernetes), and experience with shell scripting (Bash, Unix, or Windows shells) is preferable.
• Experience with Power Automate and Power Apps.
• Experience with Collibra Data Lineage and Data Quality Modules.
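The preferred REST/JSON skill pairs naturally with the validation duties above. A minimal sketch of consuming a REST-style JSON payload with the Python standard library follows; the response body, `status` convention, and required fields are illustrative assumptions, not the contract of any real API.

```python
import json

# Illustrative REST response body (assumption; not from any real service).
RESPONSE = '{"status": "ok", "data": [{"account": "A-100", "balance": 250.0}]}'

# Assumed per-record contract for validation purposes.
REQUIRED_FIELDS = {"account", "balance"}

def parse_response(body: str) -> list[dict]:
    """Parse a JSON payload and validate each record against the contract."""
    doc = json.loads(body)
    if doc.get("status") != "ok":
        raise ValueError(f"API error: {doc.get('status')}")
    records = doc.get("data", [])
    for rec in records:
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record missing fields: {sorted(missing)}")
    return records

accounts = parse_response(RESPONSE)
```

Validating the payload at the boundary, before it enters a pipeline, keeps malformed upstream data from propagating into downstream transformations.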
Education:
Bachelor’s degree in Computer Science or Information Systems, along with work experience in a related field.






