

Realign LLC
Data Engineer (MSFT Project)-5
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer (MSFT Project) on a contract basis, remote (PST hours). Key skills include Azure technologies, PowerShell scripting, SQL database management, Apache Spark, and Scala programming. Previous Microsoft experience is preferred. Pay rate is unspecified.
Country: United States
Currency: Unknown
Day rate: Unknown
Date: November 6, 2025
Duration: Unknown
Location: Remote
Contract: Unknown
Security: Unknown
Location detailed: California
Skills detailed:
#Presto #Scala #SQL Server #Big Data #Azure ADLS (Azure Data Lake Storage) #Delta Lake #SQL (Structured Query Language) #Azure #Distributed Computing #ADLS (Azure Data Lake Storage) #Database Management #Apache Spark #Data Engineering #ETL (Extract, Transform, Load) #Data Pipeline #Azure SQL #Data Storage #Scripting #Spark (Apache Spark) #Data Lake #Storage #Datasets #DevOps #Data Processing #Automation #Synapse #Programming #Azure Synapse Analytics
Role description
Job Type: Contract
Job Category: IT
Role: Data Engineer (MSFT Project)
Location: Remote (must work in PST hours)
Client Environment:
Microsoft ecosystem (Redmond, WA). The role is aligned with Azure data engineering, automation, and distributed data processing workloads.
Core Responsibilities & Skill Insights
Azure Technologies (Synapse, IaaS VM, SQL Server, ADLS Gen2)
You'll manage and integrate Azure Synapse Analytics for data warehousing, SQL Server (both Azure and on-prem), and Azure Data Lake Storage Gen2 for large-scale storage.
IaaS VM refers to setting up or maintaining virtual machines hosted in Azure.
Azure Automation with PowerShell
Building and maintaining runbooks: automated scripts used for system and data tasks (such as database backups, maintenance, or pipeline restarts).
Heavy scripting using PowerShell for Azure administration.
SQL Backup/Restore Operations (Azure & On-Prem)
Hands-on with SQL database management, especially performing backups and restores across Azure Storage Accounts and on-prem environments.
Distributed Computing / Spark Processing
Must know how to process large datasets using Apache Spark, ensuring scalability and performance optimization in data pipelines.
Fabric Reporting (Preferred)
Microsoft Fabric is the new end-to-end analytics platform; prior exposure is a plus but not mandatory.
Microsoft Experience (Preferred)
If you've worked at Microsoft or on Microsoft projects before, it's a strong advantage (preferred for vendor alignment and process familiarity).
Scala Programming & OOP Concepts
Development using Scala for Spark-based ETL pipelines; solid grasp of object-oriented principles.
Data Storage Formats (Parquet, Delta Lake)
Knowledge of modern data storage formats used in big data processing for performance and versioning.
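The last three items above fit together in practice: Scala classes structure a Spark job that reads Parquet and writes Delta Lake. A minimal sketch, assuming Spark 3.x with the Delta Lake library on the classpath; the ADLS paths, column names, and stage classes are illustrative, not taken from the posting.

```scala
// Minimal sketch of a Spark ETL job in object-oriented Scala.
// Assumes Spark 3.x and delta-spark on the classpath; all paths
// and column names below are hypothetical examples.
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

// Each pipeline stage is a class implementing a single transform
// method, which is the kind of OOP structure the role asks for.
trait Stage {
  def transform(df: DataFrame): DataFrame
}

class DeduplicateByKey(key: String) extends Stage {
  override def transform(df: DataFrame): DataFrame = df.dropDuplicates(key)
}

class AddLoadDate extends Stage {
  override def transform(df: DataFrame): DataFrame =
    df.withColumn("load_date", current_date())
}

object EtlJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("parquet-to-delta-etl")
      .getOrCreate()

    // Extract: read raw Parquet from ADLS Gen2 (hypothetical path).
    val raw = spark.read.parquet("abfss://raw@account.dfs.core.windows.net/events")

    // Transform: fold the input through each stage in order.
    val stages = Seq(new DeduplicateByKey("event_id"), new AddLoadDate)
    val cleaned = stages.foldLeft(raw)((df, s) => s.transform(df))

    // Load: write as Delta Lake for versioning and ACID guarantees.
    cleaned.write.format("delta").mode("overwrite")
      .save("abfss://curated@account.dfs.core.windows.net/events")

    spark.stop()
  }
}
```

Keeping each transformation behind a small trait makes stages unit-testable on local DataFrames without a cluster, which is a common pattern for Spark ETL code at this scale.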
Core Tech Stack Summary
Azure Synapse | Azure SQL | ADLS Gen2 | PowerShell | Spark | Scala | Parquet | Delta Lake | Azure Automation
Required Skills
DEVOPS ENGINEER





