

alphayotta
Solution Architect - Databricks
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Solution Architect - Databricks, remote in London, UK, with a contract of unspecified duration. The contract is outside IR35; the day rate is not stated. Requires 15+ years in data analytics and expertise in Databricks, AWS, PySpark, Python, and ETL tools.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 27, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Outside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
United Kingdom
-
🧠 - Skills detailed
#Data Architecture #Informatica #Data Analysis #Data Quality #Scala #Storage #Datasets #AWS (Amazon Web Services) #Cloud #Databases #PySpark #Pandas #Spark (Apache Spark) #Documentation #Automation #Databricks #Data Engineering #Data Cleaning #Data Integrity #Matplotlib #Python #ETL (Extract, Transform, Load) #SQL (Structured Query Language)
Role description
We are looking for a Databricks + Solution Architect with broader exposure to infrastructure setup, wider cloud ecosystem integration (beyond storage), and end-to-end solution design.
Role: Solution Architect
Location : London UK (Remote)
Role type : Contractors (Outside IR35) for the UK location
Please Note: No VISA Sponsorship
Job details:
Senior-level Data Architect with data analytics experience in Databricks, PySpark, Python, and ETL tools such as Informatica.
This is a key role that requires a senior/lead professional with strong communication skills who is proactive in risk and issue management.
15+ years of experience as a Data Analyst / Data Engineer, with Databricks-on-AWS expertise in designing and implementing scalable, secure, and cost-efficient data solutions on AWS.
Job Profile:
• Hands-on data analytics experience with Databricks on AWS, PySpark, and Python
• Must have prior experience migrating a data asset to the cloud using a GenAI automation option
• Experience migrating data from on-premises to AWS
• Expertise in developing data models and delivering data-driven insights for business solutions
• Experience in pretraining, fine-tuning, augmenting, and optimizing large language models (LLMs)
• Experience in designing and implementing database solutions and developing PySpark applications to extract, transform, and aggregate data, generating insights
• Data Collection & Integration: Identify, gather, and consolidate data from diverse sources, including internal databases and spreadsheets, ensuring data integrity and relevance.
• Data Cleaning & Transformation: Apply thorough data quality checks, cleaning processes, and transformations using Python (Pandas) and SQL to prepare datasets.
• Automation & Scalability: Develop and maintain scripts that automate repetitive data preparation tasks.
• Autonomy & Proactivity: Operate with minimal supervision, demonstrating initiative in problem-solving, prioritizing tasks, and continuously improving the quality and impact of your work.
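As a rough illustration of the data cleaning and transformation work described in the profile above, here is a minimal pandas sketch (the dataset, column names, and cleaning rules are hypothetical, not from the role):

```python
import pandas as pd

# Hypothetical raw data with common quality issues:
# inconsistent casing, missing values, and exact duplicates.
raw = pd.DataFrame({
    "region": ["North", "north", "South", "South", None],
    "amount": [100.0, 100.0, 250.0, None, 80.0],
})

# Quality checks and cleaning: normalise casing, drop rows
# missing key fields, then remove exact duplicates.
clean = (
    raw.assign(region=raw["region"].str.title())
       .dropna(subset=["region", "amount"])
       .drop_duplicates()
)

# Transform and aggregate: total amount per region.
summary = clean.groupby("region", as_index=False)["amount"].sum()
print(summary)
```

The same extract-transform-aggregate shape carries over to PySpark DataFrames at scale, with `dropDuplicates`, `dropna`, and `groupBy(...).agg(...)` playing the equivalent roles.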
Technical Skills:
• Strong experience as a Data Analyst, Data Engineer, or in a related role.
• Strong proficiency in Python (Pandas, Scikit-learn, Matplotlib) and SQL, with experience working across various data formats and sources.
• Proven ability to automate data workflows, implement code-based best practices, and maintain documentation to ensure reproducibility and scalability.
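The automation and reproducibility points above can be sketched as a small reusable pipeline function; the step names and the toy frame are hypothetical, chosen only to show the pattern of keeping repeatable preparation logic in one documented place:

```python
import pandas as pd

def prepare_dataset(df: pd.DataFrame) -> pd.DataFrame:
    """Repeatable data-preparation steps, kept in one function
    so every run of the workflow produces the same result."""
    return (
        df.rename(columns=str.lower)   # consistent column names
          .drop_duplicates()           # remove exact duplicate rows
          .dropna(how="all")           # drop fully-empty rows
          .reset_index(drop=True)
    )

# Example run on a toy frame with mixed-case columns,
# one duplicate row, and one blank row.
df = pd.DataFrame({"ID": [1, 2, 2, None], "Value": [10, 20, 20, None]})
out = prepare_dataset(df)
print(out)
```

Wrapping the steps in a single function (rather than ad-hoc notebook cells) is what makes the workflow automatable and its documentation verifiable.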
Interested applicants can share their CV or queries with anita.gokul@alphayotta.com



