

Dahl Consulting
Senior Engineer | Fraud and Abuse Data Engineering
Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Engineer | Fraud and Abuse Data Engineering, a 12-month contract based in Brooklyn Park, MN (Hybrid). Pay ranges from $67.67 to $112.78 per hour. Requires 4+ years of experience in data systems and proficiency in Python, SQL, and GCP.
Country: United States
Currency: $ USD
Day rate: 896
Date: February 26, 2026
Duration: More than 6 months
Location: Hybrid
Contract: Unknown
Security: Unknown
Location detailed: Brooklyn Park, MN
Skills detailed: #Data Engineering #Pig #Cloud #Batch #YARN (Yet Another Resource Negotiator) #Python #Data Pipeline #Compliance #Apache Spark #Data Processing #Strategy #Spark (Apache Spark) #Data Quality #AI (Artificial Intelligence) #GCP (Google Cloud Platform) #Observability #SQL (Structured Query Language) #Deployment #Dataflow #Kafka (Apache Kafka) #Consulting #Monitoring #Data Science #HDFS (Hadoop Distributed File System) #Scala #Agile #ML (Machine Learning) #Hadoop #ETL (Extract, Transform, Load) #Security #Sqoop (Apache Sqoop) #BigQuery #Datasets
Role description
Title: Senior Engineer | Fraud and Abuse Data Engineering
Location: Brooklyn Park, MN | Hybrid (2 days in office)
Job Type: Contract (12 months)
Compensation: $67.67-$112.78 per hour
Industry: Retail
About The Role
Our firm is partnering with a leading Fortune 50 retailer known for its large-scale digital footprint, advanced data infrastructure, and industry-leading fraud prevention capabilities. We are seeking a Senior Data Engineer to join their Fraud & Abuse Engineering organization, an area responsible for protecting millions of daily digital interactions, ensuring safe, trustworthy experiences for customers, and enabling secure growth across e-commerce and enterprise systems.
In this role, you will design and operate high-performance data platforms that power real-time detection, machine learning, and analytics solutions used across the enterprise. This is an opportunity to influence the technology strategy of a global retail leader while working with cutting-edge cloud and big-data technologies.
Job Description
In this role, you will contribute to the data engineering foundation that powers fraud detection, abuse prevention, and secure digital experiences for one of the nation's largest retail enterprises. You will architect and operate high-performance data systems across both on-prem Hadoop platforms and Google Cloud, enabling real-time decisioning, machine learning, and large-scale analytics. This position requires deep technical expertise, a strong understanding of distributed systems, and the ability to collaborate across engineering, analytics, and data science teams to deliver trusted, production-grade data solutions.
Key Responsibilities
• Design, build, and maintain scalable batch and real-time data pipelines using Kafka, Hadoop ecosystem tools, and GCP services.
• Develop and optimize ETL/ELT workflows for high-volume, high-velocity datasets.
• Implement monitoring, alerting, and observability solutions to ensure reliability and meet production SLAs.
• Build distributed data processing solutions using Python, Spark, Hive, and related frameworks with a focus on performance, throughput, and cost optimization.
• Deliver curated datasets with strong data quality, governance, and lineage for enterprise analytics and ML use cases.
• Partner with data scientists to operationalize machine learning models on Vertex AI and build end-to-end MLOps pipelines (training, deployment, CI/CD, monitoring, auto-retraining).
• Integrate on-prem Hadoop environments with GCP platforms to support hybrid cloud data flows.
• Collaborate with analysts, product teams, and engineers to ensure data is trusted, accessible, and actionable.
• Provide technical mentorship to junior engineers and influence best practices across the organization.
• Ensure compliance with security, privacy, and regulatory standards across all data engineering work.
• Evaluate new technologies and drive improvements in performance, scalability, and cost efficiency across data platforms.
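To give a flavor of the real-time decisioning these pipelines support, here is a toy sliding-window velocity check in plain Python. This is purely illustrative: the class name, window, and threshold are hypothetical, and a production system of the kind described above would run this logic over Kafka streams with Spark or Dataflow rather than in-process.

```python
# Illustrative sketch only: a toy per-account event-velocity check of the
# kind a fraud pipeline might apply. Real deployments would consume from
# Kafka and run windowed aggregations in Spark/Dataflow at scale.
from collections import defaultdict, deque


class VelocityMonitor:
    """Flags accounts whose event count in a sliding time window exceeds a limit."""

    def __init__(self, window_seconds: int, max_events: int):
        self.window = window_seconds
        self.max_events = max_events
        self.events = defaultdict(deque)  # account_id -> recent event timestamps

    def record(self, account_id: str, ts: float) -> bool:
        """Record one event; return True if the account is now over the limit."""
        q = self.events[account_id]
        q.append(ts)
        # Evict timestamps that have fallen out of the sliding window.
        while q and q[0] <= ts - self.window:
            q.popleft()
        return len(q) > self.max_events


monitor = VelocityMonitor(window_seconds=60, max_events=3)
flags = [monitor.record("acct-1", t) for t in (0, 10, 20, 30)]
# The fourth event inside the 60-second window exceeds the limit of 3.
```

The same windowing idea generalizes directly to the streaming frameworks named in this posting, where the window and eviction are handled by the engine instead of by hand.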
Qualifications
Required Qualifications:
• 4+ years building and operating large-scale, production data systems.
• Strong proficiency writing clean, maintainable code in Python and SQL.
• Demonstrated skill in building distributed data processing using frameworks such as Apache Spark and Hadoop ecosystem tools (HDFS, Hive, Pig, Oozie, Sqoop, YARN).
• Experience with real-time data streaming technologies (e.g., Kafka, Flink, Spark Streaming).
• Expertise with core GCP services: BigQuery, Dataflow, Dataproc, Pub/Sub, Vertex AI (AI Platform).
• Experience developing, deploying, and supporting MLOps pipelines for ML workloads.
• Strong troubleshooting skills with the ability to independently resolve both routine and complex issues.
• Ability to deliver high-throughput, low-latency, scalable, and secure data solutions.
• Effective communicator who works well in Agile environments and aligns work with stakeholder and business needs.
• Demonstrated ability to influence engineering standards and remain current with emerging technologies.
• BA/BS degree or equivalent experience.
Preferred Qualifications:
• Experience configuring and deploying custom data engineering packages or platform components.
• Prior work within large enterprise data environments or fraud/abuse detection domains.
• Experience contributing to platform architecture decisions and cross-team engineering strategy.
Technical Skills:
• Python & SQL: strong proficiency.
• Distributed Systems: Spark, Hadoop (HDFS, Hive, Pig, Oozie, Sqoop, YARN).
• Streaming: Kafka, Flink, Spark Streaming.
• Cloud (GCP): BigQuery, Dataflow, Dataproc, Pub/Sub, Vertex AI.
• MLOps: pipelines for ML deployment, CI/CD, monitoring, automated retraining.
Benefits
Dahl Consulting is proud to offer a comprehensive benefits package to eligible employees that will allow you to choose the best coverage to meet your family's needs. For details, please review the DAHL Benefits Summary: https://www.dahlconsulting.com/benefits-w2fta/.
How To Apply
Take the first step on your new career path! To submit yourself for consideration for this role, simply click the apply button and complete our mobile-friendly online application. Once we've reviewed your application details, a recruiter will reach out to you with next steps!
Equal Opportunity Statement
As an equal opportunity employer, Dahl Consulting welcomes candidates of all backgrounds and experiences to apply. If this position sounds like the right opportunity for you, we encourage you to take the next step and connect with us. We look forward to meeting you!