

Tenth Revolution Group
AWS Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Data Engineer on a 14-week remote contract, paying £350 per day. Key skills include ETL/ELT pipeline design, advanced Python, AWS services, and data governance. Immediate interviews available, with a start date of January 12th.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
£350
-
🗓️ - Date
December 16, 2025
🕒 - Duration
3 to 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Outside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
London
-
🧠 - Skills detailed
#Data Quality #Data Engineering #Complex Queries #Lambda (AWS Lambda) #CockroachDB #Data Processing #Data Governance #Data Management #PySpark #Data Pipeline #Deployment #Metadata #REST (Representational State Transfer) #AWS (Amazon Web Services) #MDM (Master Data Management) #Databases #Data Integration #SQL (Structured Query Language) #REST API #Datasets #SAP #AWS Glue #Python #Pandas #Spark (Apache Spark) #Batch #Redshift #DynamoDB #Libraries #NumPy #Data Extraction #S3 (Amazon Simple Storage Service) #ETL (Extract, Transform, Load)
Role description
Data Engineer - 14-Week Contract (Outside IR35) - Likely to Extend
Start Date: 12th January
Rate: £350 per day
Location: Remote (UK-based)
Interview: Immediate - Offer before Christmas
We are seeking an experienced Data Engineer to join a 14-week project focused on building robust data pipelines and integrating complex data sources. This is an outside IR35 engagement, offering flexibility and autonomy.
Key Responsibilities
• Design and implement ETL/ELT pipelines with strong error handling and retry logic (a minimal retry sketch follows this list).
• Develop incremental data processing patterns for large-scale datasets.
• Work with AWS services including Glue, Step Functions, S3, DynamoDB, Redshift, Lambda, and EventBridge.
• Build and optimise vector database solutions and embedding generation pipelines for semantic search.
• Implement document processing workflows (PDF parsing, OCR, metadata extraction).
• Integrate data from REST APIs, PIM systems, and potentially SAP.
• Ensure data quality, governance, and lineage tracking throughout the project.
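As a rough illustration of the first responsibility above, here is a minimal Python sketch of retry logic with exponential backoff and jitter. The `load_batch` step, its parameters, and the retry settings are illustrative assumptions, not details from the role.
```python
import logging
import random
import time
from functools import wraps

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl")

def with_retries(max_attempts=4, base_delay=1.0, retry_on=(Exception,)):
    """Retry a pipeline step with exponential backoff and jitter."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except retry_on as exc:
                    if attempt == max_attempts:
                        log.error("%s failed after %d attempts", fn.__name__, attempt)
                        raise
                    # Exponential backoff with a little jitter to avoid thundering herds.
                    delay = base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.5)
                    log.warning("%s raised %s; retrying in %.1fs (attempt %d/%d)",
                                fn.__name__, exc, delay, attempt, max_attempts)
                    time.sleep(delay)
        return wrapper
    return decorator

@with_retries(max_attempts=3)
def load_batch(rows):
    # Hypothetical load step: swap in a real Redshift/DynamoDB write here.
    ...
```
Required Skills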
• ETL/ELT pipeline design and data validation frameworks.
• Advanced Python (pandas, numpy, boto3) and SQL (complex queries, optimisation).
• Experience with AWS Glue, Step Functions, and event-driven architectures.
• Knowledge of vector databases, embeddings, and semantic search strategies.
• Familiarity with document parsing libraries (PyPDF2, pdfplumber, Textract) and OCR tools (see the parsing sketch after this list).
• Understanding of data governance, schema validation, and master data management.
• Strong grasp of real-time vs batch processing trade-offs.
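As a hedged sketch of the document-parsing skill above: pdfplumber can pull page text and file metadata from born-digital PDFs; scanned documents would need an OCR fallback such as Textract, which is not shown. The function name and return shape here are assumptions for illustration.
```python
import pdfplumber  # third-party: pip install pdfplumber

def extract_document(path):
    """Pull page text and file metadata from a PDF for downstream indexing."""
    with pdfplumber.open(path) as pdf:
        # extract_text() returns None for image-only pages, hence the fallback.
        pages = [page.extract_text() or "" for page in pdf.pages]
        return {
            "metadata": pdf.metadata,   # author, title, creation date, etc.
            "page_count": len(pdf.pages),
            "text": "\n".join(pages),
        }
```
Beneficial Experience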
• CockroachDB deployment and management.
• PySpark or similar for large-scale processing (a short incremental-load sketch follows this list).
• SAP data structures and PIM systems.
• E-commerce and B2B data integration patterns.
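To illustrate the PySpark bullet above, a minimal incremental-load sketch that picks up only rows newer than the last processed watermark. The S3 paths, column name, and watermark value are hypothetical, not taken from the role.
```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("incremental-load").getOrCreate()

# Hypothetical watermark, normally read from a state store after each run.
last_watermark = "2025-12-01T00:00:00"

# Read only rows updated since the watermark (ISO-8601 strings compare in order).
incoming = (
    spark.read.parquet("s3://example-bucket/raw/orders/")
    .where(F.col("updated_at") > F.lit(last_watermark))
)

incoming.write.mode("append").parquet("s3://example-bucket/curated/orders/")
```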
Why Apply?
• Fully remote contract
• Outside IR35
• Competitive day rate
• Immediate interviews - secure your next role before Christmas