

Data Engineer with Public Trust Clearance
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with Public Trust Clearance in Ashburn, VA (Hybrid, 2–3 days onsite). Contract duration is 6–12+ months. Requires 7+ years of data engineering experience, advanced SQL, AWS proficiency, and strong Apache Spark skills.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
July 17, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Hybrid
📄 - Contract type
Contract to Hire (C2H)
🔒 - Security clearance
Public Trust
📍 - Location detailed
Ashburn, VA
🧠 - Skills detailed
#SQL Queries #Python #Data Pipeline #Data Engineering #Bash #Computer Science #Scripting #Apache Spark #ETL (Extract, Transform, Load) #Databricks #Kafka (Apache Kafka) #SQL (Structured Query Language) #AWS (Amazon Web Services) #Storage #Data Architecture #Redshift #Datasets #Hadoop #Automation #DynamoDB #S3 (Amazon Simple Storage Service)
Role description
Position: Data Engineer
Location: Ashburn, VA (Hybrid-Onsite 2–3 days per week)
Duration: 6–12+ month Contract to Hire (C2H)
Note: Candidates must currently hold, or have previously held, at least a Public Trust clearance.
What You’ll Do
• Design, build, and maintain reliable, high-volume data pipelines and ETL processes.
• Develop and optimize complex SQL queries for analytics, reporting, and application support.
• Implement robust data architectures using star schemas, fact tables, and dimension tables.
• Process massive datasets efficiently using Apache Spark and the Hadoop ecosystem.
• Build and manage real-time streaming pipelines using Kafka.
• Leverage AWS services — S3, EMR, Redshift, DynamoDB — for storage and processing.
• Automate workflows with bash scripting.
• Use Python to support various data engineering tasks as needed.
• Collaborate with analysts, engineers, and stakeholders to deliver secure, reliable data solutions.
• (Nice to have) Work with Databricks and Delta tables for advanced analytics.
Must-Have Skills:
• Minimum of 7 years of professional data engineering experience.
• Advanced SQL skills with a track record of performance tuning on large datasets.
• Proficiency in bash scripting for automation.
• Extensive AWS experience — S3, EMR, Redshift, DynamoDB.
• Strong understanding of star schemas and data warehousing best practices.
• Proven experience with Apache Spark and the Hadoop ecosystem.
• Hands-on experience with Kafka for real-time streaming pipelines.
• Demonstrated success building and maintaining production-grade data pipelines.
• Proficiency with Python.
• B.S. or M.S. in Computer Science, Engineering, or related field.
• Hands-on experience with Databricks and Delta tables.
• Experience working on federal or mission-driven data programs.