Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a 6-month contract, offering a pay rate of "X" per hour. Key requirements include 5+ years of Python and AWS experience, SQL proficiency, and expertise in building CI/CD pipelines.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
🗓️ - Date discovered
August 21, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Unknown
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Reston, VA
-
🧠 - Skills detailed
#GitLab #Datasets #Data Engineering #NoSQL #Lambda (AWS Lambda) #Debugging #AWS EMR (Amazon Elastic MapReduce) #SQL (Structured Query Language) #AWS (Amazon Web Services) #Python #Scrum #Databases #Data Lake #Programming #Cloud #S3 (Amazon Simple Storage Service) #SNS (Simple Notification Service) #Agile #SQS (Simple Queue Service) #Kanban #RDS (Amazon Relational Database Service) #Redshift
Role description
Job Description:
• At least 5 years of relevant data engineering and insights experience
• 5+ years of Python with very strong AWS experience delivering Python-based solutions
• Skilled in SQL, with experience analyzing data to identify trends or relationships and inform conclusions about the data
• 5+ years of recent experience building and deploying applications in AWS using services such as S3, Glue, Redshift, RDS, AWS EMR, CloudWatch, Lambda, State Machines, SNS, SQS, ECS Fargate, AppFlow, etc.
• At least 2 years of experience with APIs and RESTful services
• Skilled in cloud technologies and cloud computing
• Strong experience building CI/CD pipelines on AWS (CloudFormation and GitLab)
• Good communication skills and the ability to work in a team environment
• Ability to work independently as well as part of an agile team (Scrum/Kanban)
• Programming skills, including coding, debugging, and using relevant programming languages
• Focus on manipulating data in a software engineering capacity; some of that data may live in relational systems, but it is increasingly moving toward NoSQL systems and data lakes
• Normalize databases and ensure the structure of the data meets the requirements of the applications accessing the information
• Construct datasets that are easy to analyze and support company requirements
• Combine raw information from different sources into consistent, machine-readable formats
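The final duties in the description, normalizing records and combining raw data from different sources into a consistent, machine-readable format, are the kind of task a candidate might be asked about. A minimal, stdlib-only sketch of that idea could look like this (the feeds, field names, and data are hypothetical, not taken from the posting):

```python
import csv
import io
import json

# Two hypothetical upstream feeds describing the same entity with
# different field names and formats (one CSV, one JSON).
CSV_FEED = "id,name,signup_date\n1,Ada,2025-01-05\n2,Grace,2025-02-11\n"
JSON_FEED = '[{"customer_id": 3, "full_name": "Edsger", "joined": "2025-03-02"}]'

def normalize_csv(text):
    """Map the CSV feed's columns onto one canonical schema."""
    return [
        {"id": int(row["id"]), "name": row["name"], "signup_date": row["signup_date"]}
        for row in csv.DictReader(io.StringIO(text))
    ]

def normalize_json(text):
    """Map the JSON feed's differently named fields onto the same schema."""
    return [
        {"id": rec["customer_id"], "name": rec["full_name"], "signup_date": rec["joined"]}
        for rec in json.loads(text)
    ]

def combine(*sources):
    """Concatenate already-normalized records and sort for stable output."""
    combined = [rec for source in sources for rec in source]
    return sorted(combined, key=lambda r: r["id"])

records = combine(normalize_csv(CSV_FEED), normalize_json(JSON_FEED))
```

In a real AWS pipeline of the sort the posting describes, the feeds would more likely be S3 objects read by a Glue or Lambda job, but the normalize-then-combine shape is the same.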