Quadrant IQ Solutions LLC

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer in Chicago, IL (Hybrid); the contract length and pay rate are not specified. It requires 10+ years of experience, prior work with Discover Financial Services, and expertise in Python, SQL, and AWS technologies.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
November 22, 2025
🕒 - Duration
Unknown
🏝️ - Location
Hybrid
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Chicago, IL
🧠 - Skills detailed
#Snowflake #Jenkins #Redshift #HDFS (Hadoop Distributed File System) #Data Processing #AWS S3 (Amazon Simple Storage Service) #S3 (Amazon Simple Storage Service) #Monitoring #Datadog #Microservices #Metadata #Aurora #Splunk #Compliance #Delta Lake #Lambda (AWS Lambda) #Scala #SNS (Simple Notification Service) #PostgreSQL #GIT #Programming #Data Engineering #Apache Spark #Security #Data Modeling #Agile #DevOps #Kafka (Apache Kafka) #Data Management #Big Data #Data Pipeline #AWS (Amazon Web Services) #Datasets #SQL (Structured Query Language) #Scrum #IAM (Identity and Access Management) #Athena #GitHub #PCI (Payment Card Industry) #NoSQL #Data Quality #PySpark #DynamoDB #SQS (Simple Queue Service) #Docker #Java #ML (Machine Learning) #Logging #Python #Storage #Cloud #Batch #Spark (Apache Spark) #Code Reviews #ETL (Extract, Transform, Load) #Terraform #Data Profiling #Jira #Databases
Role description
Role: Data Engineer
Location: Chicago, IL (Hybrid from Day 1)
Visas Accepted: USC/GC/H4-EAD/L2S/TN
Experience: 10+ years
Mandatory: Previous work experience with Discover Financial Services
Key Skills:
Programming & Data Processing: Python, Java/Scala (for distributed workloads), advanced SQL, PySpark, Spark Streaming, batch & real-time ETL development.
Big Data Ecosystem: Apache Spark, Hive, HDFS, Kafka, Delta Lake, distributed data processing frameworks.
Cloud Technologies (DFS Standard): AWS (S3, Glue, Athena, EMR, Lambda, Step Functions, SNS/SQS), cloud-native ETL orchestration, serverless data workflows.
Data Pipelines & ETL: Building scalable, fault-tolerant data pipelines, ingestion frameworks, CDC pipelines, data transformation and enrichment using Spark + Python.
Databases & Storage: Snowflake (preferred), Aurora, Redshift, PostgreSQL, NoSQL (Cassandra/DynamoDB), data modeling (3NF, star/snowflake schema).
CI/CD & DevOps for Data: Git/GitHub, Jenkins, GitHub Actions, Docker (nice-to-have), infrastructure-as-code (Terraform/CloudFormation).
Data Quality & Governance: Data profiling, data validation frameworks, metadata management, lineage, cataloging tools, regulatory compliance (PCI, SOX).
Analytics & Monitoring: Splunk, Datadog, CloudWatch, logging pipelines, pipeline performance tuning & job optimization.
Security & Compliance (Financial Industry): Secure data development, encryption, IAM, key management, role-based access control, sensitive data masking.
Architecture & Design: Distributed systems design, event-driven architecture, microservices-integrated pipelines, design patterns for data engineering.
SDLC & Collaboration: Agile/Scrum, Jira, Confluence, code reviews, working with product, analytics, ML, and platform engineering teams.
Domain Knowledge (Discover-specific): Credit card & payments data flows, fraud analytics datasets, customer lifecycle data, financial regulatory & audit requirements.
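
For illustration only, below is a minimal PySpark batch-ETL sketch of the kind of S3-to-curated-layer pipeline work this role describes (ingest raw data, apply basic data-quality filtering and enrichment, write partitioned output for downstream Athena/Snowflake loads). The bucket names, paths, and column names are hypothetical and not taken from the posting.

# Minimal PySpark batch-ETL sketch; all names below are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("transactions-daily-etl").getOrCreate()

# Ingest raw JSON landed in S3 (e.g., by an upstream ingestion job).
raw = spark.read.json("s3://example-raw-bucket/transactions/2025/11/22/")

# Basic data-quality filter and enrichment: drop records missing an ID,
# normalize the amount column, and derive a partition date column.
clean = (
    raw.filter(F.col("transaction_id").isNotNull())
       .withColumn("amount_usd", F.col("amount").cast("double"))
       .withColumn("txn_date", F.to_date(F.col("event_timestamp")))
)

# Write curated output as date-partitioned Parquet for downstream consumers.
(clean.write
      .mode("overwrite")
      .partitionBy("txn_date")
      .parquet("s3://example-curated-bucket/transactions/"))

spark.stop()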