

CI&T
DataBricks Specialist - Short Term Contract
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a DataBricks Specialist on a short-term contract with an immediate start. Key skills include Databricks, Python, SQL, and DataOps practices. Experience with ETL/ELT frameworks and cloud platforms (AWS, Azure, GCP) is required.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 2, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
London, England, United Kingdom
-
🧠 - Skills detailed
#Azure #AWS (Amazon Web Services) #Security #Python #Automated Testing #Delta Lake #Scala #Databricks #Data Processing #Data Engineering #Spark (Apache Spark) #Data Pipeline #Databases #Version Control #GCP (Google Cloud Platform) #PySpark #ETL (Extract, Transform, Load) #Cloud #Programming #GIT #DataOps #SQL (Structured Query Language)
Role description
Immediate Start
As a Senior Data Engineer, you will lead the design and development of robust data pipelines, integrating and transforming data from diverse sources such as APIs, relational databases, and files. Collaborating closely with business and analytics teams, you will ensure high-quality deliverables that meet the strategic needs of our organization. Your expertise will be pivotal in maintaining the quality, reliability, security, and governance of the ingested data, thereby driving our mission of Collaboration, Innovation, & Transformation.
Key Responsibilities:
Develop and maintain data pipelines.
Integrate data from various sources (APIs, relational databases, files, etc.); an illustrative ingestion sketch follows this list.
Collaborate with business and analytics teams to understand data requirements.
Ensure the quality, reliability, security, and governance of the ingested data.
Follow modern DataOps practices such as code versioning, data tests, and CI/CD.
Document processes and best practices in data engineering.
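To give a flavour of the pipeline work described above, here is a minimal PySpark/Delta Lake sketch that ingests a file source and a relational source and lands the result as a Delta table. It is purely indicative: the paths, JDBC connection details, table names, and columns are hypothetical and not taken from this role.

```python
# Minimal ingestion sketch (illustrative only; names and paths are hypothetical).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided as `spark` on Databricks

# Ingest a file-based source (e.g. daily CSV drops).
orders = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("/mnt/raw/orders/")
)

# Ingest a relational source over JDBC (connection details are placeholders).
customers = (
    spark.read
    .format("jdbc")
    .option("url", "jdbc:postgresql://example-host:5432/sales")
    .option("dbtable", "public.customers")
    .option("user", "svc_user")
    .option("password", "********")
    .load()
)

# Basic transformation: parse dates and join the two sources.
curated = (
    orders
    .withColumn("order_date", F.to_date("order_date"))
    .join(customers, on="customer_id", how="left")
)

# Land the result as a Delta table for downstream analytics.
(
    curated.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("analytics.curated_orders")
)
```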
Required Skills and Qualifications:
Must-have Skills:
Proven experience in building and managing large-scale data pipelines in Databricks (PySpark, Delta Lake, SQL).
Strong programming skills in Python and SQL for data processing and transformation.
Deep understanding of ETL/ELT frameworks, data warehousing, and distributed data processing.
Hands-on experience with modern DataOps practices: version control (Git), CI/CD pipelines, automated testing, infrastructure-as-code (see the test sketch after this list).
Familiarity with cloud platforms (AWS, Azure, or GCP) and related data services.
Strong problem-solving skills with the ability to troubleshoot performance, scalability, and reliability issues.
Proficiency in Git.
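As a hedged illustration of the DataOps practices listed above, automated testing in particular, the following sketch shows a pytest-style unit test for a small PySpark transformation. The function, columns, and values are hypothetical, assumed only for this example.

```python
# Illustrative DataOps-style unit test for a transformation (hypothetical names).
# Assumes pytest and a local SparkSession; a sketch, not this team's actual suite.
import pytest
from pyspark.sql import SparkSession, functions as F


def add_order_total(df):
    """Example transformation under test: quantity * unit_price."""
    return df.withColumn("order_total", F.col("quantity") * F.col("unit_price"))


@pytest.fixture(scope="module")
def spark():
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()


def test_add_order_total(spark):
    input_df = spark.createDataFrame(
        [(1, 2, 10.0), (2, 3, 5.0)],
        ["order_id", "quantity", "unit_price"],
    )
    result = add_order_total(input_df).collect()
    assert result[0]["order_total"] == 20.0
    assert result[1]["order_total"] == 15.0
```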






