VySystems

Data Engineer with Risk & Fraud (Remote)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with Risk & Fraud (Remote) for a contract length of "unknown" at a pay rate of "unknown." Key skills include PySpark, Azure Data Factory, Python, and SQL. Experience in risk analysis and fraud detection is mandatory.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
December 16, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Redmond, WA
-
🧠 - Skills detailed
#Data Quality #Data Engineering #Monitoring #Risk Analysis #Data Processing #PySpark #Compliance #Security #Data Pipeline #Data Enrichment #Databricks #Synapse #Azure Data Factory #API (Application Programming Interface) #SQL (Structured Query Language) #Vault #ADF (Azure Data Factory) #Datasets #Python #ADLS (Azure Data Lake Storage) #Spark (Apache Spark) #Automation #Azure #Batch #ETL (Extract, Transform, Load)
Role description
Job Description:
• Experience with risk analysis and fraud detection is mandatory.
• Experience in data engineering, with at least 3 years of hands-on work with PySpark, Azure Data Factory, and Python in production environments.
• Strong background in designing and implementing large-scale data pipelines, including batch and real-time ingestion for risk, fraud, or financial datasets.
• Deep experience with PySpark for distributed data processing, data quality validation, data enrichment, feature engineering, and fraud-signal extraction.
• Solid expertise in Azure Data Factory for orchestrating complex ETL/ELT workflows across multiple data sources.
• Proficiency in Python for data processing, automation, API integration, anomaly-detection scripts, and model-ready dataset preparation.
• Strong SQL skills, including query optimization, performance tuning, and working with both relational and non-relational stores such as Cosmos DB, Kusto, or ADLS.
• Good understanding of data warehousing, dimensional modeling, and data quality frameworks used in risk scoring and fraud detection systems.
• Exposure to the broader Azure ecosystem, including Synapse, Databricks, Event Hubs, Service Bus, Key Vault, Functions, Monitor, Log Analytics, and other platform components used in risk and fraud architectures.
• Familiarity with streaming architectures and patterns such as event-driven pipelines, near-real-time scoring, and anomaly monitoring.
• Experience working with high-volume, sensitive data while adhering to security, compliance, and privacy guidelines.
• Strong analytical and problem-solving abilities, including the ability to troubleshoot complex data pipeline issues in a risk or fraud context.
• Effective communication skills for working with engineering, analytics, and fraud operations teams.
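To give candidates a feel for the "anomaly-detection scripts" requirement above, here is a minimal illustrative sketch (not from the posting; the function name, data, and threshold are assumptions) of a univariate z-score check that flags outlier transaction amounts. A production fraud signal would combine many features and run at scale (e.g. in PySpark), but the core idea is the same:

```python
from statistics import mean, pstdev

def flag_anomalies(amounts, threshold=2.0):
    """Flag amounts whose z-score exceeds the threshold.

    Illustrative sketch only: real fraud-signal extraction uses many
    engineered features, not a single univariate z-score.
    """
    mu = mean(amounts)
    sigma = pstdev(amounts)
    if sigma == 0:
        # All values identical: nothing can be an outlier.
        return [False] * len(amounts)
    return [abs(x - mu) / sigma > threshold for x in amounts]

# One outsized amount among routine ones (hypothetical data)
amounts = [25.0, 30.0, 27.5, 29.0, 26.0, 5000.0]
print(flag_anomalies(amounts))
# → [False, False, False, False, False, True]
```

In a PySpark pipeline the same logic would typically be expressed as window aggregations over a DataFrame rather than a Python list, so the computation distributes across the cluster.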