Senior Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This is a 6-month Senior Data Engineer contract based in Ashburn, VA (hybrid, 2–3 days per week onsite). It requires 7+ years of experience, strong SQL and Java skills, AWS expertise, and proficiency in Apache Spark, Kafka, and Python.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
-
πŸ—“οΈ - Date discovered
August 19, 2025
πŸ•’ - Project duration
6 months
-
🏝️ - Location type
Hybrid
-
πŸ“„ - Contract type
Unknown
-
πŸ”’ - Security clearance
Active Public Trust preferred
-
πŸ“ - Location detailed
Ashburn, VA
-
🧠 - Skills detailed
#Data Pipeline #Spark (Apache Spark) #S3 (Amazon Simple Storage Service) #Data Engineering #Apache Spark #AWS (Amazon Web Services) #Redshift #Hadoop #DynamoDB #Scala #Databricks #SQL Queries #Scripting #Computer Science #Java #Python #Kafka (Apache Kafka) #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Bash
Role description
Job Summary (Senior Data Engineer – KNS, Ashburn, VA)

• Design, build, and maintain high-volume data pipelines and ETL workflows.
• Write and optimize complex SQL queries against large-scale data sets.
• Model robust data solutions using star schemas and fact/dimension tables.
• Process large data sets using Apache Spark and the Hadoop ecosystem.
• Develop real-time streaming pipelines with Kafka.
• Utilize AWS services (S3, EMR, Redshift, DynamoDB) for data engineering tasks.
• Automate data workflows and tasks using Bash scripting.
• Provide ad hoc data engineering support with Python.
• Collaborate with cross-functional teams to deliver scalable, secure data solutions.
• (Bonus) Implement advanced analytics using Databricks and Delta Tables.
• Ensure production-grade data pipelines remain reliable and performant.

Required Skills

• 7+ years of professional data engineering experience.
• Strong Java development and advanced SQL proficiency (including performance tuning).
• Hands-on experience with Bash scripting, Apache Spark, and Kafka.
• Deep knowledge of data warehousing concepts (star schema, fact/dimension modeling).
• Extensive AWS experience (S3, EMR, Redshift, DynamoDB).
• Proven track record of building and supporting production pipelines.
• Proficiency with Python for data engineering tasks.
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Nice to Have

• Experience with Databricks and Delta Tables.
• Background working on federal or public-sector programs.
• Active Public Trust clearance preferred.

Location/Other

• Onsite in Ashburn, VA (2–3 days per week, non-negotiable).
• 6-month contract, with long-term conversion expected.
• In-person final interview required.