HatchPros

Azure Cloud Engineer--W2 Only--Locals Only

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an Azure Cloud Engineer in Buffalo, NY, with a contract length of unspecified duration. Pay rate is W2 only. Requires 10+ years of experience, 6+ in Azure Data Services, proficiency in SQL, Python, and relevant certifications.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
480
-
🗓️ - Date
October 30, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Buffalo, NY
-
🧠 - Skills detailed
#Computer Science #Cloud #Azure DevOps #Migration #Python #GitHub #Scala #ETL (Extract, Transform, Load) #Azure Databricks #Security #Storage #DevOps #Data Governance #Data Pipeline #Synapse #Data Engineering #Databricks #Azure #Logging #Data Ingestion #Compliance #PySpark #Data Integration #Data Quality #Azure Cloud #BI (Business Intelligence) #Delta Lake #Data Architecture #Data Processing #Spark (Apache Spark) #SQL (Structured Query Language) #Microsoft Power BI
Role description
Work authorization: USC, GC, and GC-EAD. Candidates must provide a local driver's license, a copy of their visa, the last four digits of their SSN, and a LinkedIn profile.
Location: Buffalo, NY. Onsite (the role is specified as Hybrid, but no hybrid/onsite schedule details are available).
A valid driver's license or state-issued ID is required.

Cloud Engineer
• 10+ years of total experience and 6+ years of experience in Azure Data Services, Data Architecture, and Cloud Infrastructure.
• Hands-on experience with Microsoft Fabric services (OneLake, Data Factory, Synapse Data Engineering, Power BI, Real-Time Analytics).
• Experience in performance tuning on Microsoft Fabric or Azure.
• Data Governance: Establish data governance frameworks incorporating Microsoft Purview for data quality, lineage, and compliance.
• Proficiency in SQL, Python, PySpark, and Power BI for data engineering and analytics.
• Experience in DevOps for Data (CI/CD, Azure DevOps, GitHub Actions, ARM templates).
• Strong problem-solving and troubleshooting skills in Azure/Fabric and Data Services.
• Data Flows: Design and implement data flows within the Microsoft Fabric environment.
• Storage Strategies: Implement OneLake storage strategies.
• Analytics Configuration: Configure Synapse Analytics workspaces.
• Migration: Experience with potential migration from existing data platforms, such as Databricks/Spark or other source systems, to Microsoft Fabric.
• Integration Patterns: Establish Power BI integration patterns.
• Data Integration: Architect data integration patterns between systems using Azure Databricks/Spark and Microsoft Fabric.
• Delta Lake Architecture: Design Delta Lake architecture and implement the medallion architecture (Bronze/Silver/Gold layers).
• Real-Time Data Ingestion: Create real-time data ingestion patterns and establish data quality frameworks.
• Security: Implement row-level security, data masking, and audit logging mechanisms.
• Pipeline Development: Design and implement scalable data pipelines using Notebooks/Spark for ETL/ELT processes and real-time data integration.
• Performance Optimization: Implement performance tuning strategies for large-scale data processing and analytics workloads.
• Analytical Skills: Strong analytical and problem-solving skills.
• Communication: Excellent communication and teamwork skills.
• Certifications: Relevant certifications in Microsoft data platforms are a plus.
• Qualifications: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.