Senior AWS Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior AWS Data Engineer on a 6-month contract, offering competitive pay. It requires 8+ years in data engineering, proficiency in Python, PySpark, SQL, and AWS services, plus experience in data governance and cloud platforms.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
Unknown
🗓️ - Date discovered
April 26, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
London Area, United Kingdom
🧠 - Skills detailed
#Spark (Apache Spark) #AWS Glue #Data Governance #Python #AWS (Amazon Web Services) #Docker #ETL (Extract, Transform, Load) #Kafka (Apache Kafka) #Metadata #Security #Airflow #Data Lake #SQL (Structured Query Language) #Data Integration #Data Science #Scala #DMS (Data Migration Service) #BI (Business Intelligence) #Lambda (AWS Lambda) #Data Quality #Apache Kafka #Apache Airflow #Automation #Kubernetes #Data Architecture #Apache Spark #Data Warehouse #Cloud #Terraform #Data Pipeline #Monitoring #PySpark #Computer Science #Data Engineering #Data Management #Redshift #AWS DMS (AWS Database Migration Service) #S3 (Amazon Simple Storage Service) #Data Mart
Role description
Duration: 6 Months

Join our dynamic team and contribute to impactful data engineering projects, including sophisticated data pipelines, robust data integration, and innovative solutions built with modern data technologies.

Role Overview
We are looking for a skilled Data Engineer with substantial experience to design, develop, and maintain scalable data pipelines and data solutions. You'll collaborate closely with cross-functional teams to ensure the data quality, availability, and efficiency needed to support analytics and business intelligence.

Key Responsibilities
• Design, build, and maintain efficient data pipelines using tools such as AWS Glue, Apache Airflow, PySpark, AWS DMS, CDC, or similar.
• Develop and optimize PySpark- and SQL-based ETL/ELT processes for data integration and transformation (a minimal sketch of this kind of job appears after this description).
• Manage cloud-based data platforms such as AWS using Terraform, and implement best practices for data governance, security, and reliability.
• Collaborate with stakeholders to define requirements and deliver scalable data solutions.
• Implement and manage real-time streaming processes using technologies such as Apache Kafka, Apache Spark, or similar tools.
• Support and enhance data architecture, including data lakes, data warehouses, and data marts.
• Drive improvements in data quality, data governance, and metadata management practices.
• Contribute to automation and monitoring processes for data pipelines to ensure operational excellence.

Qualifications
• Bachelor’s degree in Computer Science, Data Science, Engineering, or a related field.
• 8+ years of experience in data engineering, including designing and deploying production-quality data pipelines and ETL/ELT solutions.
• Proficiency in Python, PySpark, and SQL.
• Strong hands-on experience with cloud services such as AWS (Glue, Lambda, Redshift, S3).
• Experience with data governance and management platforms (e.g., AWS DataZone, AWS Lake Formation).
• Familiarity with containerization and orchestration technologies (Docker, Kubernetes) is a plus.
• Exceptional analytical and problem-solving skills.
• Strong communication and collaboration capabilities.
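As a rough illustration of the PySpark/SQL ETL work described above, here is a minimal sketch of a batch job that reads raw data from S3, applies basic quality rules, and writes curated Parquet back to a data lake. The bucket names, paths, and column names are hypothetical, not taken from this listing.

```python
# Minimal, illustrative PySpark ETL sketch (hypothetical paths and schema):
# extract raw CSV from S3, clean and type it, load partitioned Parquet back to S3.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-etl").getOrCreate()

# Extract: raw daily order exports (hypothetical bucket and prefix)
raw = spark.read.option("header", True).csv("s3://example-raw-bucket/orders/2025-04-26/")

# Transform: type casting, basic data-quality filtering, derived partition column
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("order_id").isNotNull() & (F.col("amount") > 0))
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write partitioned Parquet to the curated zone (hypothetical bucket)
(clean.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-curated-bucket/orders/"))

spark.stop()
```

In practice a job like this would typically be scheduled and monitored by an orchestrator such as Apache Airflow or AWS Glue workflows, with the infrastructure provisioned via Terraform, as the responsibilities above describe.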