Avance Consulting

Cloud Consultant

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Cloud Consultant; the contract length and pay rate are unspecified. Key skills include AWS, Python, SQL, and Databricks. Strong data engineering and UI development experience are required.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
Unknown
🗓️ - Date
November 19, 2025
🕒 - Duration
Unknown
🏝️ - Location
Unknown
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Glasgow, Scotland, United Kingdom
🧠 - Skills detailed
#Apache Spark #Athena #Data Warehouse #Scala #GitLab #Data Lake #MLflow #Programming #VPC (Virtual Private Cloud) #Delta Lake #AI (Artificial Intelligence) #Airflow #Data Engineering #IAM (Identity and Access Management) #Cloud #Unit Testing #S3 (Amazon Simple Storage Service) #Databricks #ML (Machine Learning) #SageMaker #SQL (Structured Query Language) #Security #Migration #Spark (Apache Spark) #Data Pipeline #AWS (Amazon Web Services) #Data Science #PySpark #Lambda (AWS Lambda) #Python
Role description
Role Description: Build and maintain the systems that collect, store, process, and analyse data, such as data pipelines, data warehouses, and data lakes, ensuring that all data is accurate, accessible, and secure. Candidates in this role will either contribute to strategic deliverables by building data engineering pipelines on the AWS technology stack, or support platform modernisation efforts by migrating existing AWS components to Databricks, depending on their assessed skill sets.

• UI development experience is a must, along with AWS CloudFormation.

AWS Data Engineering JD (should lead the project):
• Strong hands-on data engineering experience delivering production-grade solutions.
• Ability to design, implement, and optimise complex data pipelines independently.
• Strong SQL and programming skills (Python/PySpark preferred); capable of building reusable and scalable code.
• Experience with modern orchestration tools (e.g. Airflow, Step Functions).
• Hands-on experience on the AWS cloud platform with the skill sets listed below.
• Ability to proactively analyse existing processes, suggest improvements, and drive technical solutions end-to-end.
• Comfortable engaging with stakeholders, understanding business requirements, and translating them into technical design.
• Strong experience in Databricks development/migration (for the Databricks engineer role).
• Experience with Apache Spark, Databricks Delta Lake, Unity Catalog, and MLflow (for the Databricks engineer role).

Accountabilities
• Build and maintain data pipelines that enable the transfer and processing of durable, complete, and consistent data.
• Design and implement data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures.
• Develop processing and analysis algorithms fit for the intended data complexity and volumes.
• Collaborate with data scientists to build and deploy machine learning models.

Hands-on Skills:
• Data engineering coding background and hands-on experience with the AWS services below: S3, Lambda, Glue, Step Functions, Athena, SageMaker, VPC, ECS, IAM, KMS, etc.
• CloudFormation
• Python
• Unit testing
• GitLab
• PySpark
• AI/ML knowledge (good to have)
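To make the pipeline and unit-testing expectations above concrete, here is a minimal, hedged sketch of the kind of work described: a small PySpark transform that reads raw events from S3 and writes partitioned Parquet (suitable for querying with Athena), plus a pytest-style unit test. This is illustrative only and not part of the posting; all bucket names, paths, column names, and function names are hypothetical.

```python
# Illustrative sketch only; buckets, paths, and schema are hypothetical.
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F


def clean_events(raw: DataFrame) -> DataFrame:
    """Drop incomplete records and derive a partition date column."""
    return (
        raw.dropna(subset=["event_id", "event_ts"])
           .withColumn("event_date", F.to_date("event_ts"))
    )


def run_pipeline(spark: SparkSession) -> None:
    # Read raw events from a hypothetical S3 landing zone.
    raw = spark.read.json("s3://example-landing-bucket/events/")
    # Write curated, partitioned Parquet for downstream tools such as Athena.
    clean_events(raw).write.mode("overwrite").partitionBy("event_date").parquet(
        "s3://example-curated-bucket/events/"
    )


# A pytest-style unit test for the transform, runnable against a local session.
def test_clean_events() -> None:
    spark = SparkSession.builder.master("local[1]").appName("test").getOrCreate()
    rows = [("e1", "2025-11-19 10:00:00"), (None, "2025-11-19 11:00:00")]
    raw = spark.createDataFrame(rows, ["event_id", "event_ts"])
    out = clean_events(raw)
    assert out.count() == 1          # the record with a null event_id is dropped
    assert "event_date" in out.columns
```

Keeping the transform in a pure function, separate from the S3 I/O, is what makes it testable against a local Spark session, which reflects the posting's combined emphasis on PySpark and unit testing.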