Yochana
Contract Role: Data Engineer with Vector and Cribl Exp at Bellevue, WA (Remote)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Contract Data Engineer position in Bellevue, WA (Remote) for 6+ months, requiring 8+ years of experience with Vector and Cribl, Azure Data Factory, Databricks (PySpark), SQL, and Python for ETL/ELT pipeline development and data integration.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
November 18, 2025
🕒 - Duration
More than 6 months
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Bellevue, WA
🧠 - Skills detailed
#PySpark #Azure #Security #ADF (Azure Data Factory) #SQL (Structured Query Language) #Azure SQL Database #SQL Queries #Data Lake #Data Processing #Python #Azure SQL #ETL (Extract, Transform, Load) #Cloud #Compliance #Databricks #API (Application Programming Interface) #Data Security #Automation #Spark (Apache Spark) #Data Engineering #Azure Data Factory #Azure cloud #Unit Testing
Role description
Data Engineer
Bellevue, WA (Remote)
6 months+
Mandatory Skills: Vector and Cribl
• 8+ years of experience.
• Design and develop ETL/ELT pipelines using Azure Data Factory (ADF) and Databricks (PySpark); see the PySpark sketch after this list.
• Build and manage API integrations for data exchange between systems; see the API call sketch after this list.
• Write and optimize SQL queries and stored procedures for data transformation and analysis.
• Implement Azure cloud services, including Azure Data Lake and Azure SQL Database.
• Develop Python-based data processing solutions for automation and orchestration.
• Monitor, troubleshoot, and optimize data workflows and pipelines.
• Ensure data security, governance, and compliance with industry standards.
• Undertake unit testing, making full use of available automation tools and working with the system test team to ensure tests are integrated into the overall test plan.
• Good multitasking skills and the flexibility to stretch when the situation demands.
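
A minimal sketch of the kind of ADF-triggered Databricks (PySpark) transform the first bullet describes. The storage path, column names, and target table are hypothetical placeholders, not details from this posting:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # On Databricks a session already exists; creating one keeps the sketch self-contained
    spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

    # Read raw JSON landed in the data lake (hypothetical abfss path)
    raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/events/")

    # Basic cleanup: deduplicate, parse timestamps, drop records missing the key
    clean = (
        raw.dropDuplicates(["event_id"])
           .withColumn("event_ts", F.to_timestamp("event_ts"))
           .filter(F.col("event_id").isNotNull())
    )

    # Persist as a Delta table for downstream SQL queries and analysis
    clean.write.format("delta").mode("overwrite").saveAsTable("curated.events")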
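
And a hedged sketch of the Python API integration work in the second bullet; the endpoint, token handling, and parameters are assumptions for illustration only:

    import requests

    # Hypothetical endpoint and auth; real integrations would load these from config/secrets
    API_URL = "https://api.example.com/v1/records"
    HEADERS = {"Authorization": "Bearer <token>"}

    # Pull a batch of records and fail loudly on HTTP errors
    resp = requests.get(API_URL, headers=HEADERS, params={"since": "2025-11-01"}, timeout=30)
    resp.raise_for_status()
    records = resp.json()

    # Hand the payload to the pipeline, e.g. land it in the data lake for the PySpark job above
    print(f"fetched {len(records)} records")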