Nasscomm

Data Specialist (Azure Data Platform)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Specialist (Azure Data Platform) with a contract length of "unknown," offering a pay rate of "unknown" and requiring remote work. Key skills include Azure Data Factory, PostgreSQL, SQL development, and Power BI.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
May 3, 2026
🕒 - Duration
Unknown
🏝️ - Location
Unknown
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Data Analysis #Scala #Data Framework #BI (Business Intelligence) #Microsoft Power BI #Debugging #Datasets #Azure Data Factory #Indexing #Data Architecture #PostgreSQL #Complex Queries #Data Ingestion #Visualization #Data Engineering #ADF (Azure Data Factory) #SQL (Structured Query Language) #Azure #Database Performance #Data Modeling #SQL Server #Data Pipeline #Databases #ETL (Extract, Transform, Load) #MySQL #SQL Queries
Role description
Role Overview

We are looking for a highly hands-on Data Specialist who combines skills across Data Engineering, Data Analysis, and Data Modeling. This role focuses on building end-to-end data pipelines, integrating external data sources, transforming large datasets, and generating actionable insights. This is an individual contributor role (initially a one-person setup), requiring someone who is self-driven, technically strong, and capable of working independently across data ingestion, transformation, and visualization.

Key Responsibilities
• Design and develop end-to-end data pipelines using Azure tools
• Integrate data from external systems (cross-network/firewall environments) into Azure
• Build and manage data pipelines using Azure Data Factory (ADF)
• Extract, transform, and load (ETL/ELT) data into internal environments
• Implement Medallion Architecture (Raw, Staging, Serving layers)
• Work with large datasets (1TB+, millions of rows of telemetry data)
• Optimize and tune SQL queries, stored procedures, and database performance
• Create materialized views, indexing strategies, and performance improvements
• Develop Power BI dashboards and reports with complex filters and dynamic views
• Analyze data and generate meaningful business insights
• Understand and interpret existing data models and database structures
• Collaborate with external data providers (Europe-based stakeholders)
• Troubleshoot data connectivity, pipeline, and infrastructure issues independently

Mandatory Requirements (Must-Have Skills)
• Strong hands-on experience with:
  • Azure Data Factory (ADF): pipeline creation, debugging, and optimization
  • PostgreSQL (primary database)
  • SQL development, including:
    • Complex queries
    • Stored procedures
    • Query performance tuning
• Experience working with large-scale datasets (millions of rows, TB-level data)
• Strong experience in Power BI:
  • Advanced filters
  • Dynamic dashboards (date ranges, business units, sectors, etc.)
• Experience in data pipeline design and ETL/ELT processes
• Solid understanding of relational databases (MySQL and SQL Server as secondary)
• Ability to connect and integrate external data sources across networks
• Strong troubleshooting skills across data, pipeline, and connectivity issues
• Excellent communication skills (working with global stakeholders)

Good-to-Have Skills
• Experience in data modeling and data architecture design
• Exposure to columnar databases
• Understanding of enterprise data platforms
• Ability to derive business insights from raw data
• Prior experience building scalable data frameworks

Technical Environment
• Azure Data Factory (ADF)
• PostgreSQL (primary)
• SQL Server / MySQL (secondary)
• Power BI
• Azure ecosystem

Data & System Context
• Data Source: external vendors (telemetry data)
• Volume: 1TB+ datasets, millions of rows
• Tables: ~20–30 tables (high data density)
• Architecture: Medallion Architecture (Raw → Staging → Serving)
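The Medallion flow the role describes (Raw → Staging → Serving) can be sketched in miniature. This is a minimal illustration only, not the employer's actual pipeline: the table and column names are hypothetical, and SQLite (Python stdlib) stands in for PostgreSQL so the sketch is self-contained — a real implementation would use ADF pipelines feeding PostgreSQL, with materialized views in place of the plain serving table.

```python
import sqlite3

# Hypothetical Medallion-style flow (Raw -> Staging -> Serving).
# SQLite stands in for PostgreSQL; all names are illustrative.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Raw layer: land vendor telemetry as-is (everything kept as text).
cur.execute("CREATE TABLE raw_telemetry (device_id TEXT, reading TEXT, ts TEXT)")
cur.executemany(
    "INSERT INTO raw_telemetry VALUES (?, ?, ?)",
    [("dev-1", "20.5", "2026-05-01"),
     ("dev-1", "bad", "2026-05-01"),   # malformed row, dropped in staging
     ("dev-2", "18.0", "2026-05-02")],
)

# Staging layer: cast types and filter out malformed rows.
cur.execute("""
    CREATE TABLE stg_telemetry AS
    SELECT device_id, CAST(reading AS REAL) AS reading, ts
    FROM raw_telemetry
    WHERE reading GLOB '[0-9]*'
""")

# Index the staging table to speed per-device lookups
# (in PostgreSQL this could be a materialized view plus an index).
cur.execute("CREATE INDEX idx_stg_device ON stg_telemetry (device_id)")

# Serving layer: aggregate per device and day for BI dashboards.
cur.execute("""
    CREATE TABLE srv_device_daily AS
    SELECT device_id, ts, AVG(reading) AS avg_reading, COUNT(*) AS n
    FROM stg_telemetry
    GROUP BY device_id, ts
""")

rows = cur.execute(
    "SELECT device_id, avg_reading FROM srv_device_daily ORDER BY device_id"
).fetchall()
print(rows)  # per-device daily averages from the serving layer
```

The layering mirrors the posting's architecture: raw data is preserved untouched for auditability, staging applies typing and cleansing, and the serving layer holds aggregates shaped for Power BI consumption.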