

Lorvenk Technologies
Data Engineer with Java (Ex-Capital One) - W2
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with Java in McLean, VA (Hybrid) on a W2 contract. It requires 8+ years of experience; expertise in Java, ETL pipelines, Apache Spark, and AWS Glue; and prior Capital One experience.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: November 22, 2025
Duration: Unknown
Location: Hybrid
Contract: W2 Contractor
Security: Unknown
Location detailed: McLean, VA
Skills detailed: #Jenkins #Data Processing #S3 (Amazon Simple Storage Service) #Scala #Lambda (AWS Lambda) #PostgreSQL #Documentation #Git #Data Engineering #Apache Spark #Agile #Data Transformations #Version Control #AWS (Amazon Web Services) #EC2 #Spring Boot #SQL (Structured Query Language) #IAM (Identity and Access Management) #NoSQL #PySpark #DynamoDB #Java #Data Catalog #Cloud #Batch #Spark (Apache Spark) #Code Reviews #GitLab #ETL (Extract, Transform, Load) #AWS Glue #MySQL #Databases
Role description
Position: Data Engineer with Java
Location: McLean, VA (Hybrid)
Experience: 8+ years
Employment Type: W2
Former Capital One candidates are required.
Overview
We are looking for a highly skilled Data Engineer with strong expertise in Java, ETL pipelines, Apache Spark, and AWS services, particularly AWS Glue. The ideal candidate will design and develop scalable back-end systems, data processing frameworks, and cloud-based integration solutions.
Key Responsibilities
Design, develop, and maintain back-end services using Java and associated frameworks.
Build, optimize, and manage ETL pipelines for large-scale data processing.
Develop distributed data processing jobs using Apache Spark, both batch and streaming (see the illustrative sketch after this list).
Implement data transformations and workflows using AWS Glue, Glue Jobs, Crawlers, and Data Catalog.
Work with AWS cloud services such as Lambda, S3, EMR, DynamoDB, IAM, CloudWatch, and Step Functions.
Ensure high performance, scalability, and reliability of back-end systems.
Collaborate with data engineers, architects, and cross-functional teams to integrate data flows and services.
Participate in code reviews, architecture discussions, and technical design sessions.
Troubleshoot production issues and optimize system performance.
Maintain strong documentation and follow best practices in software development.
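For illustration only, here is a minimal sketch of the kind of Spark batch ETL job described above, written against the Spark Java API. The S3 paths, column names, and application name are hypothetical placeholders, not details of the actual role.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.col;

public class OrdersEtlJob {
    public static void main(String[] args) {
        // Build (or reuse) the session; cluster settings come from spark-submit
        SparkSession spark = SparkSession.builder()
                .appName("orders-etl")
                .getOrCreate();

        // Extract: read raw JSON records from S3 (placeholder bucket/prefix)
        Dataset<Row> raw = spark.read().json("s3://example-bucket/raw/orders/");

        // Transform: keep completed orders and project only the columns
        // downstream consumers need
        Dataset<Row> curated = raw
                .filter(col("status").equalTo("COMPLETED"))
                .select(col("order_id"), col("customer_id"), col("total"));

        // Load: write partitioned Parquet back to S3
        curated.write()
                .mode("overwrite")
                .partitionBy("customer_id")
                .parquet("s3://example-bucket/curated/orders/");

        spark.stop();
    }
}

A streaming variant would typically replace the batch read with spark.readStream() against a source such as Kinesis, and similar transform logic can be packaged as an AWS Glue job running on Glue's Spark runtime with tables registered in the Data Catalog.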
Required Skills & Qualifications
Strong hands-on experience with Java (8/11/17) and back-end frameworks (Spring / Spring Boot); see the service sketch after this list.
Proven experience building and maintaining ETL pipelines and data workflows.
Practical expertise in Apache Spark (RDDs, DataFrames, Spark SQL); PySpark or Scala experience is a plus.
Solid experience with AWS Glue, Glue Jobs, Glue Studio, ETL scripts, and Data Catalog.
Hands-on exposure to AWS cloud services: S3, Lambda, EMR, Kinesis, EC2, IAM, CloudWatch.
Proficiency with SQL and NoSQL databases (e.g., DynamoDB, PostgreSQL, MySQL).
Experience with CI/CD tools (Jenkins, GitLab, CodePipeline) and Git version control.
Strong understanding of distributed systems, performance tuning, and data optimization.
Excellent problem-solving skills and the ability to work in an Agile environment.
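As a rough illustration of the Java/Spring Boot expectation above, the sketch below shows a minimal Spring Boot REST endpoint of the sort a data-platform back end might expose. The endpoint path, record type, and hard-coded response are hypothetical; a real service would query a store such as DynamoDB or PostgreSQL.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class PipelineStatusApp {

    // Simple response payload (Java 17 record)
    record PipelineStatus(String pipelineId, String state) {}

    // Hypothetical read endpoint reporting the state of a data pipeline
    @GetMapping("/pipelines/{id}/status")
    public PipelineStatus status(@PathVariable("id") String id) {
        // Placeholder: a real implementation would look this up in a database
        return new PipelineStatus(id, "RUNNING");
    }

    public static void main(String[] args) {
        SpringApplication.run(PipelineStatusApp.class, args);
    }
}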





