

Servsys Corporation
Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with a contract length of "unknown" and a pay rate of "unknown," requiring prior experience at Capital One for over 2 years. Key skills include AWS, Python, Spark, SQL, and data warehousing with Snowflake.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
March 19, 2026
-
Duration
Unknown
-
Location
Unknown
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
McLean, VA
-
Skills detailed
#Snowflake #Compliance #Leadership #Migration #Agile #S3 (Amazon Simple Storage Service) #Lambda (AWS Lambda) #AWS S3 (Amazon Simple Storage Service) #Spark (Apache Spark) #Splunk #Data Migration #Scripting #Cloud #GitHub #NoSQL #Airflow #Data Governance #Jira #Kafka (Apache Kafka) #ETL (Extract, Transform, Load) #PySpark #Databricks #Bash #Databases #DynamoDB #AWS (Amazon Web Services) #Data Engineering #SQL (Structured Query Language) #Python #Data Pipeline #Jenkins
Role description
Only apply if you have worked at Capital One within the last 10 years, for more than 2 years.
Sr Data Engineer
Project Overview
Discover's historical credit card data will be integrated into the Capital One ecosystem within Capital One's data streaming pipeline and data lake.
The engineer will be responsible for:
• Extracting the data from OneLake
• Transforming the data
• Loading it into the target state, which could be Snowflake or AWS S3.
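The extract-transform-load flow above can be sketched in plain Python. All names here (the source rows, `extract`, `transform`, `load`) are illustrative stand-ins; in practice this work would run as PySpark on EMR or Glue, reading from OneLake and writing to Snowflake or S3.

```python
# Minimal sketch of the extract -> transform -> load flow described above.
# The function names and schema are illustrative, not the project's actual API.

def extract(source_records):
    """Pull raw rows from the source (stand-in for a OneLake read)."""
    return list(source_records)

def transform(rows):
    """Normalize field names and types for the target schema."""
    return [
        {"account_id": r["acct"], "balance_cents": int(round(r["bal"] * 100))}
        for r in rows
    ]

def load(rows, target):
    """Append transformed rows to the target store (stand-in for Snowflake/S3)."""
    target.extend(rows)
    return len(rows)

raw = [{"acct": "A1", "bal": 12.50}, {"acct": "A2", "bal": 0.99}]
warehouse = []
loaded = load(transform(extract(raw)), warehouse)
print(loaded)  # 2
```

Keeping the three stages as separate functions mirrors how a Spark job would be structured, with each stage independently testable before being wired into a pipeline.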
Tech Lead Expectations
• Accountability: Ensure team deliverables are completed on time.
• Leadership: Lead sprint ceremonies and manage team workflows.
• Problem Solving: Collaborate with senior managers and others to resolve issues.
• Adaptability: Pivot quickly based on changing customer/client requirements.
• Talent Level: Must be an exceptionally high-caliber professional.
• Communication: Strong written and verbal skills; confidently articulate points of view and express thinking clearly in writing.
• Performance: PySpark optimization and troubleshooting for performance tuning and data pipeline efficiency.
• Architectural Awareness: Understand the available tools and how they integrate, and engage effectively with architects (only nominal architecture experience required).
Tech Stack: Must Have
AWS (EMR/Glue, S3, Lambda, CloudWatch), Python, Spark, Databricks, SQL, Bash
Python use: scripting, building new data pipelines (not app dev)
Data warehousing experience (Snowflake)
Understanding of NoSQL databases (DynamoDB)
APIs
Agile engineering practices & JIRA usage
Nice to Have
- Kafka, Airflow, open table formats (Delta, Hudi, Iceberg), Splunk, New Relic, GitHub, Jenkins; prior Capital One experience highly preferred
Business Drivers/Customer Impact
DFS (Discover Financial Services) data migration effort - Capital One is undertaking a critical data migration initiative to ingest all historical Discover Card data into its internal systems.
While the data already resides within the Capital One ecosystem, the challenge lies in efficiently ingesting it into its target state, ensuring completeness, accuracy, and usability across platforms.
This migration is an enterprise imperative to enable downstream analytics, compliance, and operational continuity using DFS data. Success requires tight coordination across engineering, data governance, and platform teams to minimize disruption and maximize business value.
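The completeness and accuracy goals above imply source-vs-target reconciliation checks. The sketch below is one common shape for such a check, compare row counts and required-field nulls, using in-memory rows as stand-ins; the field names and `reconcile` helper are hypothetical, not part of any actual Capital One tooling.

```python
# Illustrative reconciliation between a source extract and the loaded target.
# Real pipelines would run this against the source lake and Snowflake/S3;
# these in-memory dicts are stand-ins for those row sets.

def reconcile(source_rows, target_rows, required_fields):
    """Compare counts and check required fields for nulls in the target."""
    report = {
        "source_count": len(source_rows),
        "target_count": len(target_rows),
        "missing_rows": len(source_rows) - len(target_rows),
        "null_violations": {},
    }
    for field in required_fields:
        nulls = sum(1 for r in target_rows if r.get(field) is None)
        if nulls:
            report["null_violations"][field] = nulls
    report["passed"] = report["missing_rows"] == 0 and not report["null_violations"]
    return report

source = [{"id": 1}, {"id": 2}, {"id": 3}]
target = [
    {"id": 1, "amount": 10},
    {"id": 2, "amount": None},
    {"id": 3, "amount": 5},
]
report = reconcile(source, target, required_fields=["id", "amount"])
print(report["passed"])  # False: one null 'amount' in the target
```

Emitting a structured report rather than a bare pass/fail flag makes it straightforward to log reconciliation results per load and surface them to data governance reviewers.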






