Data Engineer with AWS

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with AWS based in Reston, VA, requiring a Bachelor's or Master's degree and 4+ years of experience. Key skills include AWS services, Databricks, dbt Core, SQL, Python/PySpark, and ETL/ELT pipeline management.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
πŸ—“οΈ - Date discovered
September 17, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
πŸ“ - Location detailed
Reston, VA
🧠 - Skills detailed
#Lambda (AWS Lambda) #dbt (data build tool) #GIT #Redshift #Automation #Datasets #Data Pipeline #Data Engineering #Scala #ETL (Extract, Transform, Load) #Version Control #AWS Kinesis #Kafka (Apache Kafka) #Agile #Apache Spark #SQL (Structured Query Language) #Spark (Apache Spark) #Databricks #S3 (Amazon Simple Storage Service) #PySpark #Computer Science #Complex Queries #Programming #Apache Kafka #Data Processing #Python #Deployment #AWS (Amazon Web Services)
Role description
AWS Data Engineer
Reston, VA

Skills & Qualifications:

Required:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
• 4+ years of experience as a Data Engineer or in a similar role.
• Extensive hands-on experience with AWS services (S3, Redshift, Glue, Lambda, Kinesis, etc.) for building scalable and reliable data solutions.
• Advanced expertise in Databricks, including the creation and optimization of data pipelines and notebooks, and integration with other AWS services.
• Strong experience with dbt Core for data transformation and modelling, including writing, testing, and maintaining dbt models.
• Proficiency in SQL, with experience designing and optimizing complex queries over large datasets.
• Strong programming skills in Python/PySpark, with the ability to develop custom data processing logic and automate tasks.
• Experience with data warehousing and knowledge of OLAP and OLTP concepts.
• Expertise in building and managing ETL/ELT pipelines, automating data workflows, and performing data validation (see the sketch after this list).
• Familiarity with CI/CD concepts, version control (e.g., Git), and deployment automation.
• Experience working in an Agile project environment.

Preferred:
• Experience with Apache Spark and distributed data processing in Databricks.
• Familiarity with streaming data solutions (e.g., AWS Kinesis, Apache Kafka).
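For illustration only, here is a minimal sketch of the kind of Python/PySpark ETL pipeline the role describes: extract raw CSV from S3, apply typed transformations with basic validation, and load partitioned Parquet back to S3 for downstream Redshift Spectrum or Databricks consumers. All bucket, dataset, and column names (example-raw-bucket, orders, order_total, etc.) are hypothetical placeholders, not taken from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: raw CSV landed in S3 (hypothetical bucket and prefix)
raw = (
    spark.read
    .option("header", "true")
    .csv("s3://example-raw-bucket/orders/")
)

# Transform: cast numeric columns, drop rows failing a basic
# validation rule, and stamp each record with a load timestamp
curated = (
    raw.withColumn("order_total", F.col("order_total").cast("double"))
       .filter(F.col("order_id").isNotNull())
       .withColumn("loaded_at", F.current_timestamp())
)

# Load: write partitioned Parquet for downstream consumers
# (hypothetical output location)
(
    curated.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/orders/")
)

spark.stop()
```

In a production setup along the lines the posting suggests, a job like this would typically run on Databricks or AWS Glue, be triggered by Lambda or a scheduler, and have its transformation layer expressed as tested dbt Core models, but those specifics are not stated in the listing.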