

Senior Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer in Ashburn, VA, for 12+ months, with a Public Trust clearance requirement. It calls for 7+ years of data engineering experience, with key skills in Java, advanced SQL, AWS, Apache Spark, and Kafka.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: July 26, 2025
Project duration: More than 6 months
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: Ashburn, VA
Skills detailed: #Data Engineering #Data Pipeline #Hadoop #AWS (Amazon Web Services) #Kafka (Apache Kafka) #S3 (Amazon Simple Storage Service) #SQL Queries #Bash #Java #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Computer Science #DynamoDB #Python #Spark (Apache Spark) #Scripting #Scala #Databricks #Redshift #Apache Spark
Role description
Senior Data Engineer
Client: Confidential (exclusive with DP)
Location: Ashburn, VA (on-site 2–3 days/week)
Duration: 12+ months
Clearance Requirement: Public Trust clearance
Overview:
We're hiring a Senior Data Engineer to join a long-term federal program supporting high-impact, mission-driven data initiatives. This is a hands-on, engineering-heavy role that requires deep technical expertise and a strong commitment to on-site work in Ashburn, VA. The project is now in year 2 of a 5-year engagement, with long-term conversion expected.
What You'll Do:
• Build and maintain high-volume data pipelines and ETL workflows
• Write and optimize complex SQL queries for large-scale data sets
• Design robust data models using star schemas, fact and dimension tables
• Work with Apache Spark and the Hadoop ecosystem to process large data sets
• Develop real-time streaming pipelines using Kafka (a brief illustrative sketch follows this list)
• Use AWS tools such as S3, EMR, Redshift, and DynamoDB
• Automate tasks with bash scripting
• Use Python for ad hoc data engineering support
• Collaborate with cross-functional teams to deliver scalable, secure solutions
• (Bonus) Leverage Databricks and Delta Tables for advanced analytics
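For context on the kind of hands-on work listed above, here is a minimal sketch (not part of the original posting) of a PySpark Structured Streaming job that reads JSON events from a Kafka topic and lands them as Parquet on S3. All broker, topic, bucket, and field names are hypothetical placeholders, and the job assumes the spark-sql-kafka connector is available on the Spark classpath.

```python
# Illustrative sketch only: Kafka -> Spark Structured Streaming -> Parquet on S3.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("events-ingest").getOrCreate()

# Hypothetical schema for the incoming JSON event payload
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("event_time", TimestampType()),
])

# Read the raw Kafka stream (placeholder broker and topic names)
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "events")
       .load())

# Parse the Kafka value bytes as JSON into typed columns
events = (raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
          .select("e.*"))

# Append the parsed events as Parquet to a placeholder S3 bucket, with checkpointing
query = (events.writeStream
         .format("parquet")
         .option("path", "s3a://example-bucket/events/")
         .option("checkpointLocation", "s3a://example-bucket/checkpoints/events/")
         .outputMode("append")
         .start())

query.awaitTermination()
```

A production pipeline would add schema validation, error handling, and monitoring; the sketch only illustrates the basic shape (Kafka source, transformation, checkpointed object-store sink) of the streaming work this role describes.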
Required Skills (No Exceptions):
• 7+ years of professional experience in data engineering
• Strong Java development experience
• Advanced SQL expertise with performance tuning skills
• Bash scripting proficiency
• Extensive AWS experience (S3, EMR, Redshift, DynamoDB)
• Hands-on with Apache Spark and Kafka
• Strong understanding of data warehousing (star schemas, fact/dimension modeling)
• Proven success building and supporting production-grade pipelines
• Experience with Python
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field
Nice to Have:
• Experience with Databricks and Delta Tables
• Background working on federal or public-sector programs
Deepak Thakur
Technical Recruiter
Oreva Technologies Inc
Email: Deepak.t@orevatech.com
LinkedIn: linkedin.com/in/deepak-thakur-72a23a195
p: 9729144537 ext. 468
a: 1320 Greenway Drive, Suite #460, Irving, TX
w: www.OrevaTech.com