

Twine
Freelance Data Engineer (Remote)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This is a remote freelance Data Engineer role with a US-based organization, lasting 6 months at a competitive pay rate. Key skills include Python, SQL, AWS, and Azure. The role requires 6+ years of data engineering experience and strong communication skills.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
Unknown
🗓️ - Date
November 22, 2025
🕒 - Duration
Unknown
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
United Kingdom
🧠 - Skills detailed
#Snowflake #Data Lake #Airflow #Data Warehouse #Databricks #Azure Databricks #Azure #AI (Artificial Intelligence) #Scala #Data Architecture #Looker #Data Engineering #Security #Data Modeling #Kafka (Apache Kafka) #Automation #BI (Business Intelligence) #Data Pipeline #AWS (Amazon Web Services) #SQL (Structured Query Language) #Distributed Computing #Tableau #Data Quality #Data Manipulation #Python #Cloud #ETL (Extract, Transform, Load)
Role description
This is an opportunity for an accomplished data engineer to play a pivotal role in architecting and optimizing large-scale data infrastructure for a fast-paced, US-based organization. The position is fully remote and requires a proactive professional who can deliver robust solutions under tight timelines. You will be responsible for designing, building, and maintaining scalable data lakes, warehouses, and pipelines, ensuring seamless data flow and accessibility for analytics and business intelligence. Collaboration with cross-functional teams and stakeholders will be essential to deliver high-quality, production-ready data systems.
Deliverables
• Design, implement, and maintain scalable data pipelines and ETL processes using AWS, Azure, Databricks, or similar cloud platforms (a minimal orchestration sketch follows this list)
• Develop and optimize data lakes and data warehouses to support analytics and reporting needs
• Integrate data from multiple sources, ensuring data quality, consistency, and security
• Collaborate with analytics and engineering teams to deliver data solutions that meet business requirements
• Monitor, troubleshoot, and enhance data workflows for performance and reliability
• Document data architecture, processes, and best practices for ongoing maintenance and knowledge sharing
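For illustration, here is a minimal sketch of the kind of pipeline orchestration these deliverables describe, assuming Apache Airflow 2.x as the orchestrator; the DAG name, the sample rows, and the extract/load targets are hypothetical placeholders rather than details of this role.

from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
def freelance_etl_example():
    @task
    def extract() -> list[dict]:
        # Hypothetical source: in practice this would read from an API, S3, or Kafka.
        return [{"id": "1", "amount": "42.50"}, {"id": "2", "amount": "oops"}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Simple data-quality gate: cast types and drop malformed rows.
        clean = []
        for row in rows:
            try:
                clean.append({"id": int(row["id"]), "amount": float(row["amount"])})
            except (KeyError, ValueError):
                continue  # production code might route these to a dead-letter table
        return clean

    @task
    def load(rows: list[dict]) -> None:
        # Hypothetical sink: in practice this would write to a data lake or warehouse.
        print(f"loaded {len(rows)} clean rows")

    load(transform(extract()))


freelance_etl_example()

In a real engagement, the extract and load steps would connect to the cloud services named above, such as S3, Azure Databricks, or a warehouse like Snowflake.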
Requirements
• Minimum 6 years of experience in data engineering or a related field
• Advanced proficiency in Python and SQL for data manipulation and automation (illustrated in the sketch after this list)
• Strong experience with distributed computing, orchestration tools (e.g., Airflow), and data modeling
• Hands-on expertise with cloud data services such as AWS, Azure, Databricks, or similar platforms
• Familiarity with tools like Snowflake, Kafka, Power BI, Looker, and Tableau
• Proven ability to deliver solutions in a fast-paced, remote environment
• Excellent communication skills in English, with the ability to collaborate across time zones
• Self-motivated, detail-oriented, and able to work independently to meet urgent project deadlines
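As a concrete illustration of the Python-and-SQL data manipulation called for above, the following self-contained sketch uses the standard-library sqlite3 module as a stand-in for a real warehouse such as Snowflake; the table and column names are hypothetical.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE raw_orders (id INTEGER, customer TEXT, amount REAL);
    INSERT INTO raw_orders VALUES
        (1, 'acme', 42.50),
        (1, 'acme', 42.50),   -- exact duplicate to deduplicate
        (2, 'globex', NULL);  -- row failing the non-null quality rule
    """
)

# Deduplicate and enforce a simple consistency rule, the kind of
# data-quality step applied when integrating multiple sources.
clean_rows = conn.execute(
    """
    SELECT DISTINCT id, customer, amount
    FROM raw_orders
    WHERE amount IS NOT NULL
    """
).fetchall()

print(clean_rows)  # -> [(1, 'acme', 42.5)]
conn.close()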
About Twine
Twine is a leading freelance marketplace connecting top freelancers, consultants, and contractors with companies needing creative and tech expertise. Trusted by Fortune 500 companies and innovative startups alike, Twine enables companies to scale their teams globally.
Our Mission
Twine's mission is to empower creators and businesses to thrive in an AI-driven, freelance-first world.