Azure Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an "Azure Data Engineer" in Glendale, AZ, offering a 12-month contract at $44.05 - $57.17/hr. Requires 4+ years of data engineering experience, expertise in Azure services and SQL, and experience with ETL processes and data governance. U.S. citizenship required.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
456
-
πŸ—“οΈ - Date discovered
August 22, 2025
πŸ•’ - Project duration
More than 6 months
-
🏝️ - Location type
On-site
-
πŸ“„ - Contract type
Unknown
-
πŸ”’ - Security clearance
Unknown
-
πŸ“ - Location detailed
Glendale, AZ
-
🧠 - Skills detailed
#Azure Synapse Analytics #SAP #Spark (Apache Spark) #Data Science #AWS (Amazon Web Services) #SSIS (SQL Server Integration Services) #Azure Blob Storage #Microsoft Power BI #Data Pipeline #Azure SQL Database #Data Encryption #Business Analysis #SQL (Structured Query Language) #Computer Science #Data Framework #Security #Data Engineering #ADF (Azure Data Factory) #SSRS (SQL Server Reporting Services) #Scala #ETL (Extract, Transform, Load) #ML (Machine Learning) #BI (Business Intelligence) #Data Processing #Documentation #Data Quality #Azure cloud #Data Modeling #Automation #Synapse #Big Data #Azure #Datasets #DevOps #Business Objects #Storage #Data Orchestration #Azure SQL #Azure Data Factory #Data Accuracy #Compliance #Data Lake #Data Storage #Azure Security #BO (Business Objects) #Databases #Databricks #GCP (Google Cloud Platform) #Data Governance #Cloud
Role description
Job Title: Data Engineer
Location: Glendale, AZ
Zip Code: 85301
Contract: 12 Months
Pay Rate: $44.05 - $57.17/hr
Tags: #SoftwareEngineerJobs; #GlendaleJobs

Job Summary:
We are seeking a highly skilled and motivated Software Engineer specializing in Data Engineering to join our growing team. This critical role will focus on designing, developing, and optimizing our data infrastructure within the Azure cloud environment. The ideal candidate possesses a deep understanding of data engineering principles and extensive hands-on experience with Azure Data Lake, Azure Data Factory, Databricks, and SAP Business Objects. You will play a key role in building and maintaining robust data pipelines, ensuring data quality, and enabling data-driven insights for the business. This position requires U.S. citizenship due to project requirements.

Major Responsibilities:
• Design, develop, and optimize scalable and efficient data processing pipelines and architectures within Azure Data Lake and Databricks, leveraging best practices for performance and maintainability.
• Implement and manage complex ETL (Extract, Transform, Load) processes to seamlessly integrate data from diverse sources (e.g., databases, APIs, streaming platforms) into Azure Data Lake, ensuring data quality and consistency.
• Develop and maintain interactive dashboards and reports using SAP Business Objects and Power BI, translating complex data into actionable business insights, with a focus on performance optimization and data accuracy.
• Leverage Azure Data Factory for data orchestration, workflow automation, and scheduling, ensuring reliable and timely data delivery.
• Implement and maintain Azure Security & Governance policies, including access control, data encryption, and compliance frameworks, to ensure data protection and adherence to industry best practices.
• Optimize data storage and retrieval mechanisms within Azure, including performance tuning of Databricks clusters and Azure SQL databases, to improve query performance and scalability.
• Collaborate effectively with cross-functional teams (e.g., business analysts, data scientists, product managers) to understand business requirements, translate them into technical solutions, and communicate technical concepts clearly.
• Implement data quality checks and validation rules throughout the data pipeline to ensure data accuracy, completeness, and consistency.
• Monitor, troubleshoot, and enhance existing data solutions, proactively identifying and resolving performance bottlenecks and data quality issues.
• Create and maintain comprehensive technical documentation, including design specifications, data flow diagrams, and operational procedures, to facilitate knowledge sharing and team collaboration.

Education and Experience Requirements:
• Bachelor's degree in Computer Science, Engineering, or a related field and 2-5 years of experience, or 5+ years of relevant work experience.
• Must be a U.S. citizen.

Required Knowledge, Skills, and Abilities:
• 4+ years of hands-on experience in data engineering, data warehousing, and cloud-based data platforms.
• Deep expertise in Azure Data Lake, Azure Data Factory, Azure Security & Governance, Databricks, and SAP Business Objects.
• Strong proficiency in SQL, including complex query writing, query optimization, and performance tuning.
• Proven experience in developing and maintaining Power BI dashboards and reports.
• Hands-on experience with Azure services such as Azure Synapse Analytics, Azure SQL Database, and Azure Blob Storage.
• Solid understanding of data modeling concepts, ETL processes, and big data frameworks (e.g., Spark).
• Experience optimizing and managing large-scale datasets in cloud environments.
• Experience developing and maintaining ETL packages using SSIS and reports using SSRS.
• Strong analytical and problem-solving skills with a keen attention to detail.
• Excellent communication and collaboration skills.

Preferred Qualifications:
• Master's degree in a relevant field.
• Familiarity with machine learning models and data science concepts.
• Understanding of DevOps practices and CI/CD pipelines for data applications.
• Experience with data governance tools and frameworks.
• Experience with other cloud platforms (e.g., AWS, GCP).