

Lorvenk Technologies
Data Engineer (Ex-Capital One)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This is a 12+ month, on-site W2 contract in Richmond, VA for a Data Engineer with 5+ years of experience. The role requires expertise in AWS services, Python, SQL, and ETL development, and specifically seeks candidates with prior Capital One experience.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 5, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Richmond, VA
-
🧠 - Skills detailed
#Data Lake #Athena #Scala #SQL (Structured Query Language) #IAM (Identity and Access Management) #ETL (Extract, Transform, Load) #AWS Lambda #Python #Redshift #Data Science #Data Quality #Git #Terraform #Lambda (AWS Lambda) #Data Pipeline #Security #Data Security #S3 (Amazon Simple Storage Service) #Infrastructure as Code (IaC) #AWS (Amazon Web Services) #AWS Glue #RDBMS (Relational Database Management System) #Data Engineering #Data Warehouse #Cloud #Data Modeling
Role description
Title: Data Engineer
Location: Richmond, VA
Experience: 5+ years
Duration: 12+ months
Contract: W2
Requirement: prior Capital One experience
Job Summary:
We are looking for a skilled Data Engineer with hands-on experience in AWS cloud services to design, build, and maintain scalable data pipelines and architectures. The ideal candidate will be responsible for ensuring reliable data flow, optimizing performance, and supporting data-driven decision-making across the organization.
Key Responsibilities:
Design, develop, and maintain data pipelines and ETL workflows using AWS services.
Build and optimize data lakes, data warehouses, and real-time streaming solutions.
Work closely with data scientists, analysts, and business stakeholders to deliver high-quality, structured data.
Implement and maintain data quality, security, and governance best practices.
Develop and deploy AWS Lambda, Glue, Step Functions, and Kinesis-based data workflows.
Integrate data from multiple sources (RDBMS, APIs, logs, etc.) into centralized platforms.
Monitor and troubleshoot data pipeline issues for performance and reliability.
Automate data workflows and optimize resource usage for cost efficiency.
Required Skills:
Strong proficiency in Python, SQL, and ETL development.
Hands-on experience with key AWS data services, including:
AWS Glue, Lambda, Athena, Redshift, Kinesis, S3, EMR, Step Functions.
Solid understanding of data modeling, data warehousing, and data lake architectures.
Experience with CI/CD, Git, and Infrastructure as Code tools such as Terraform or CloudFormation.
Strong problem-solving skills and attention to detail.
Excellent understanding of data security, encryption, and IAM configurations.
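To illustrate the kind of work the responsibilities above describe, here is a minimal, self-contained sketch of a Lambda-style ETL transform in Python. The event shape, field names, and handler are hypothetical examples, not part of this role's actual stack; a real pipeline would read from S3 or Kinesis via boto3 and emit to a warehouse such as Redshift.

```python
# Illustrative sketch only: a Lambda-style batch transform with simple
# data-quality handling. All record fields below are hypothetical.

def transform_record(record: dict) -> dict:
    """Clean one raw record: normalize keys, cast the amount, lowercase the channel."""
    return {
        "customer_id": str(record["customer_id"]).strip(),
        "amount_usd": round(float(record["amount"]), 2),
        "channel": record.get("channel", "unknown").lower(),
    }

def handler(event: dict, context=None) -> dict:
    """Lambda-style entry point: transform a batch and report rejected records."""
    good, bad = [], []
    for raw in event.get("records", []):
        try:
            good.append(transform_record(raw))
        except (KeyError, TypeError, ValueError):
            bad.append(raw)  # quarantine malformed records instead of failing the batch
    return {"transformed": good, "rejected": len(bad)}
```

Quarantining bad records rather than raising keeps the pipeline running and surfaces data-quality issues as a metric, a common pattern in Glue and Lambda ETL jobs.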