W3Global

Data Modelling Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Modelling Engineer in Glasgow, UK, on a contract basis. It requires expertise in Python, SQL/NoSQL, and cloud technologies, with experience in data modelling and API development. Knowledge of AWS databases and Agile methodologies is essential.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
Unknown
🗓️ - Date
January 20, 2026
🕒 - Duration
Unknown
🏝️ - Location
On-site
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Glasgow, Scotland, United Kingdom
🧠 - Skills detailed
#Data Integration #S3 (Amazon Simple Storage Service) #API (Application Programming Interface) #Scala #Kubernetes #Cloud #RDF (Resource Description Framework) #SQL (Structured Query Language) #NoSQL #Swagger #Documentation #React #AWS S3 (Amazon Simple Storage Service) #ETL (Extract, Transform, Load) #Datasets #Microservices #Oracle #FactSet #Agile #Elasticsearch #Databricks #Deployment #AWS Databases #Python #AWS (Amazon Web Services) #NLP (Natural Language Processing) #Databases #Snowflake #GraphQL
Role description
Contract | Glasgow, UK | 5 days/week in office

Job Description
As an MDS Data Modelling Engineer Associate in the Market Data Services team, Commercial & Investment Bank (CIB), you will drive innovation in data structuring and integration for market data discovery and digital rights management. As a key contributor, you will leverage your expertise in Python, SQL/NoSQL, RDF/SPARQL, and cloud technologies to model, transform, and structure data from leading market data vendors. You will play a pivotal role in building scalable data solutions and APIs for the SCUDO platform, supporting business users with high-quality, actionable data.

As an Engineer, you will be instrumental in defining schemas for diverse datasets delivered via Snowflake, Databricks, or AWS S3, and integrating them into AWS Neptune or other internal databases. You will collaborate with cross-functional teams to deliver robust data models and API endpoints, ensuring alignment with business and regulatory requirements.

Job Responsibilities
• Model and structure currently available market data and define new schemas for integration into internal databases (a minimal sketch of this kind of modelling follows the listing)
• Design, implement, and maintain Kubernetes clusters for scalable data solutions
• Automate deployment, scaling, and management of containerized applications
• Design, develop, and maintain RESTful and/or GraphQL APIs using Python
• Collaborate with stakeholders to ensure data solutions meet business and regulatory requirements
• Work with datasets from major market data vendors (Bloomberg, LSEG, FactSet, S&P) and define schemas post-delivery
• Manage data integration and transformation using cloud platforms (AWS) and database technologies
• Maintain comprehensive technical documentation and API specifications
• Actively participate in Agile ceremonies and two-week sprint cycles

Required Qualifications, Capabilities, and Skills
• Proven experience in data modelling within professional services or contract environments
• Strong hands-on expertise with Python, SQL/NoSQL, and RDF/SPARQL
• Deep familiarity with API documentation tools (Swagger/OpenAPI)
• Experience in the design and development of APIs
• Exposure to microservices and event-driven architectures
• Experience with AWS databases and Oracle database products
• Proficiency in SQL for Oracle and in the NoSQL query DSL for Elasticsearch
• Experience working in Agile teams
• Deep familiarity with W3C standards for data modelling and interoperability
• Excellent communication and stakeholder management skills

Preferred Qualifications, Capabilities, and Skills
• Experience with DCAT/ODRL, NLP, Kubernetes, React, and AWS Neptune
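A minimal sketch of the kind of RDF/SPARQL modelling this role describes, in Python with rdflib and the W3C DCAT vocabulary: one vendor dataset is described as a dcat:Dataset with a delivery distribution, then discovered via a SPARQL query. The EX namespace, dataset identifiers, and S3 path are hypothetical placeholders, not the actual SCUDO schema.

# A minimal sketch of modelling a market-data dataset as RDF with DCAT,
# then querying it with SPARQL. All identifiers below are illustrative.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCAT, DCTERMS, RDF

# Hypothetical namespace for internal market-data resources.
EX = Namespace("http://example.org/marketdata/")

g = Graph()
g.bind("dcat", DCAT)
g.bind("dct", DCTERMS)

# Describe one hypothetical vendor dataset as a dcat:Dataset.
ds = EX["vendor-eod-prices"]
g.add((ds, RDF.type, DCAT.Dataset))
g.add((ds, DCTERMS.title, Literal("End-of-day prices")))
g.add((ds, DCTERMS.publisher, Literal("ExampleVendor")))

# Attach a distribution pointing at the delivery location (placeholder path).
dist = EX["vendor-eod-prices-s3"]
g.add((ds, DCAT.distribution, dist))
g.add((dist, RDF.type, DCAT.Distribution))
g.add((dist, DCAT.accessURL, URIRef("s3://example-bucket/eod-prices/")))

# SPARQL discovery query: list datasets with their titles and publishers.
query = """
    SELECT ?dataset ?title ?publisher WHERE {
        ?dataset a dcat:Dataset ;
                 dct:title ?title ;
                 dct:publisher ?publisher .
    }
"""
for row in g.query(query, initNs={"dcat": DCAT, "dct": DCTERMS}):
    print(row.dataset, row.title, row.publisher)

In a setup like the one described, a graph of this shape could be bulk-loaded into AWS Neptune, which exposes a SPARQL 1.1 endpoint, rather than queried in memory as above.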