PDSSOFT INC.

Data Engineer (Ex-Microsoft)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with a contract length of "N/A" and a pay rate of "N/A," located in Redmond, WA (remote work allowed in PST). Requires strong skills in Apache Spark, Scala, SQL, and Power BI; financial domain experience preferred.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
November 18, 2025
🕒 - Duration
Unknown
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Redmond, WA
🧠 - Skills detailed
#Azure #Hadoop #Oracle ERP #Security #Apache Spark #SQL (Structured Query Language) #GCP (Google Cloud Platform) #SQL Queries #AWS (Amazon Web Services) #Data Lake #Data Processing #ETL (Extract, Transform, Load) #Cloud #Microsoft Power BI #Compliance #Databricks #Big Data #Azure Databricks #Scala #Synapse #Oracle #Spark (Apache Spark) #Data Quality #Data Engineering #BI (Business Intelligence) #Distributed Computing #Data Manipulation #Azure cloud #Data Pipeline #SAP #Datasets #Visualization
Role description
Data Engineer
Location: Redmond, WA (remote work allowed; must work in the PST time zone)

Key Responsibilities
• Design, develop, and maintain scalable big data solutions using Apache Spark and Scala (an illustrative sketch follows this description).
• Implement complex SQL queries for data transformation and analytics.
• Develop and optimize Power BI dashboards for business reporting and visualization.
• Collaborate with cross-functional teams to integrate data pipelines and reporting solutions.
• Ensure data quality, security, and compliance across all systems.

Required Skills & Experience
• Strong proficiency in Apache Spark with hands-on experience in Scala.
• Solid understanding of SQL for data manipulation and analysis.
• Experience in Power BI for creating interactive reports and dashboards.
• Familiarity with distributed computing concepts and big data ecosystems (Hadoop, Hive, etc.).
• Ability to work with large datasets and optimize data workflows.

Highly Desirable
• Spark performance tuning expertise: proven ability to optimize Spark jobs for efficiency and scalability.
• Knowledge of cluster resource management and troubleshooting performance bottlenecks.
• Experience with Azure cloud services for big data solutions (e.g., Azure Data Lake, Azure Databricks, Synapse Analytics).
• Exposure to other cloud platforms (AWS or GCP) is a plus.
• Experience working in the financial domain or with ERP systems (e.g., SAP, Oracle ERP).
• Understanding of compliance and regulatory requirements in financial data processing.
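For illustration only (not part of the original posting): a minimal Spark/Scala sketch of the transformation-and-aggregation work described under Key Responsibilities. The dataset, file paths, and column names (transactions, customer_id, amount, transaction_date) are hypothetical placeholders.

// Minimal sketch: read a hypothetical transactions extract, aggregate spend per
// customer and month, and write partitioned Parquet that a Power BI model could consume.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object TransactionSummary {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("transaction-summary")
      .getOrCreate()

    // Hypothetical source: daily CSV extracts landed in a data lake folder.
    val transactions = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("/data/landing/transactions/*.csv")

    // Aggregate total amount and transaction count per customer and month.
    val summary = transactions
      .withColumn("month", date_trunc("month", col("transaction_date")))
      .groupBy(col("customer_id"), col("month"))
      .agg(sum(col("amount")).as("total_amount"), count(lit(1)).as("txn_count"))

    // Partitioned Parquet output for downstream reporting.
    summary.write
      .mode("overwrite")
      .partitionBy("month")
      .parquet("/data/curated/transaction_summary")

    spark.stop()
  }
}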