AWS Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an AWS Data Engineer on a 6-month contract, mostly remote, offering £500-£550 per day. Key skills include Python, SQL, Kafka, and AWS (Redshift, S3). Experience in data architecture and migration is essential.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
£500-£550
-
🗓️ - Date discovered
September 30, 2025
🕒 - Project duration
6 months
-
🏝️ - Location type
Remote
-
📄 - Contract type
Outside IR35
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
London
-
🧠 - Skills detailed
#Datasets #Quality Assurance #Redshift #Data Quality #Data Integrity #AWS (Amazon Web Services) #Data Architecture #Data Pipeline #S3 (Amazon Simple Storage Service) #Scala #Data Migration #Kafka (Apache Kafka) #Data Engineering #Python #SQL (Structured Query Language) #Version Control #Migration #Agile #Data Science #GIT #Cloud
Role description
AWS Data Engineer

Contract Duration: 6 months
Location: Mostly remote (occasional office visits)
Rate: £500-£550 per day (Outside IR35)

I'm partnering with a global consultancy that is looking for a Data Engineer to enhance its evolving data infrastructure and drive operational excellence. This is a hands-on role where you'll play a key part in business-critical operations, tackling complex data challenges, supporting cross-functional initiatives, and improving core data systems.

As a central member of the data team, you'll be responsible for building and maintaining scalable data pipelines, resolving data quality issues, and collaborating closely with analysts, engineers, and stakeholders. If you thrive in a fast-paced environment and enjoy optimising data systems, this could be a great match.

Key Responsibilities:
• Design, develop, and maintain scalable data pipelines for both operational and analytical needs
• Ensure high data integrity through best practices in validation and quality assurance
• Investigate and resolve data issues, identifying root causes and implementing sustainable solutions
• Lead and support data migration projects, modernising legacy systems using cloud technologies
• Collaborate with cross-functional teams including analysts, data scientists, and engineers to deliver impactful data solutions
• Continuously improve internal tools, workflows, and processes to increase data efficiency and reliability

Your Experience & Skills:
• Proven experience as a Data Engineer or Software Engineer with a focus on data architecture
• Strong proficiency in Python and SQL, with the ability to manipulate and analyse large datasets
• Hands-on experience with Kafka for stream processing and AWS (particularly Redshift and S3)
• Skilled in implementing testing and validation strategies to ensure data quality
• Familiar with version control systems such as Git and working in Agile, collaborative environments
• Self-driven with strong multitasking skills and a collaborative mindset