CI&T

Databricks Specialist - Short Term Contract

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Databricks Specialist on a short-term contract with an immediate start. It requires expertise in Databricks, PySpark, SQL, and DataOps practices, plus experience with cloud platforms and large-scale data pipelines. Competitive pay rate.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 29, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
London, England, United Kingdom
-
🧠 - Skills detailed
#Databricks #Git #Programming #SQL (Structured Query Language) #Azure #DataOps #Automated Testing #Cloud #Data Engineering #PySpark #AI (Artificial Intelligence) #Python #Security #Data Pipeline #Data Processing #GCP (Google Cloud Platform) #Delta Lake #ETL (Extract, Transform, Load) #Databases #Scala #Spark (Apache Spark) #AWS (Amazon Web Services) #Version Control
Role description
We are tech transformation specialists, uniting human expertise with AI to create scalable tech solutions. With over 8,000 CI&Ters around the world, we’ve built partnerships with more than 1,000 clients during our 30 years of history. Artificial Intelligence is our reality.

Immediate start.

As a Senior Data Engineer, you will lead the design and development of robust data pipelines, integrating and transforming data from diverse sources such as APIs, relational databases, and files. Collaborating closely with business and analytics teams, you will ensure high-quality deliverables that meet the strategic needs of our organization. Your expertise will be pivotal in maintaining the quality, reliability, security, and governance of the ingested data, thereby driving our mission of Collaboration, Innovation & Transformation.

Key Responsibilities:
- Develop and maintain data pipelines.
- Integrate data from various sources (APIs, relational databases, files, etc.).
- Collaborate with business and analytics teams to understand data requirements.
- Ensure the quality, reliability, security, and governance of the ingested data.
- Follow modern DataOps practices such as code versioning, data tests, and CI/CD.
- Document processes and best practices in data engineering.

Required Skills and Qualifications:

Must-have Skills:
- Proven experience in building and managing large-scale data pipelines in Databricks (PySpark, Delta Lake, SQL); an illustrative sketch of this kind of pipeline appears after this listing.
- Strong programming skills in Python and SQL for data processing and transformation.
- Deep understanding of ETL/ELT frameworks, data warehousing, and distributed data processing.
- Hands-on experience with modern DataOps practices: version control (Git), CI/CD pipelines, automated testing, infrastructure-as-code.
- Familiarity with cloud platforms (AWS, Azure, or GCP) and related data services.
- Strong problem-solving skills with the ability to troubleshoot performance, scalability, and reliability issues.
- Proficiency in Git.

Collaboration is our superpower, diversity unites us, and excellence is our standard. We value diverse identities and life experiences, fostering a diverse, inclusive, and safe work environment. We encourage candidates from diverse and underrepresented groups to apply for our positions.
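For context on the pipeline work described above, here is a minimal sketch of a PySpark ingestion job writing to a Delta Lake table, of the kind this role involves. It is an illustration only, not part of any client codebase: the landing path, table name, and columns (orders, order_id, order_ts, amount) are all hypothetical.

```python
# Illustrative only: a minimal Databricks-style ingestion job.
# Reads raw CSV files, applies basic cleansing, and writes a Delta table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

# Read raw files landed from an upstream source (hypothetical path).
raw = (
    spark.read
    .option("header", True)
    .csv("/mnt/landing/orders/")
)

# Basic typing and de-duplication before exposing the data to analytics teams.
cleaned = (
    raw
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .dropDuplicates(["order_id"])
    .filter(F.col("order_id").isNotNull())
)

# Persist as a Delta table (the default table format on Databricks),
# giving downstream jobs ACID guarantees and time travel.
(
    cleaned.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("bronze.orders")
)
```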