Coforge

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a Data Engineer contract position in London, UK, requiring 3+ years of Databricks experience, strong SQL, Python, and AWS skills. Pay rate and contract length are unspecified. Hybrid work involves 2-3 days in the office.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
Unknown
🗓️ - Date
March 24, 2026
🕒 - Duration
Unknown
🏝️ - Location
Hybrid
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
London Area, United Kingdom
🧠 - Skills detailed
#GIT #EC2 #Automation #Delta Lake #SQL Queries #Redshift #S3 (Amazon Simple Storage Service) #Data Extraction #Complex Queries #Data Modeling #Python #Scala #Cloud #Agile #Data Pipeline #ETL (Extract, Transform, Load) #Documentation #AWS (Amazon Web Services) #IAM (Identity and Access Management) #Spark SQL #Indexing #Data Processing #Athena #Scripting #Pandas #Data Engineering #SQL (Structured Query Language) #Lambda (AWS Lambda) #Airflow #PySpark #Databricks #Spark (Apache Spark)
Role description
Job Title: Data Engineer
Location: London, UK
Job Type: Contract
Work Mode: Hybrid (2-3 days a week in the London office)
Key Skills: Databricks (Spark), SQL, Python & AWS services

We at Coforge are hiring a Data Engineer with the following skillset:

Key Responsibilities
• Develop, maintain, and optimize data pipelines and ETL workflows using Databricks (Spark).
• 3+ years of experience on Databricks projects.
• Strong problem-solving skills.
• Write efficient, optimized SQL queries for data extraction, transformation, and validation.
• Develop Python-based scripts for automation, data processing, and integration tasks.
• Work with AWS Cloud services (S3, Glue, EC2, Lambda, IAM, Athena, EMR) to build scalable data solutions.
• Collaborate with business and technical teams to understand data requirements and translate them into technical specifications.
• Perform data validation, quality checks, and root-cause analysis for data-related issues.
• Ensure performance tuning, reliability, and scalability of data pipelines.
• Contribute to design reviews, architecture discussions, and best-practice implementations.
• Prepare and maintain technical documentation.

Technical Skills
• Strong SQL: complex queries, performance tuning, stored procedures, indexing.
• Python: data processing (Pandas, PySpark), scripting, automation.
• Databricks: Spark/Delta Lake, notebooks, job orchestration, ETL/ELT pipeline development.
• AWS Cloud (working knowledge of at least 3 services): S3, Glue, Lambda, EC2, EMR, Athena, Redshift, IAM.

Additional Skills (Good to Have)
• Experience with Git and CI/CD pipelines.
• Knowledge of data modeling concepts.
• Exposure to Agile methodologies.
• Experience with job scheduling tools (Airflow, Cron, etc.)
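Illustrative example (not part of the listing): for candidates gauging fit, a minimal sketch of the kind of Databricks (PySpark) ETL work this role describes, reading raw data from S3 and writing a Delta table. Bucket paths, table names, and column names are hypothetical.

from pyspark.sql import SparkSession, functions as F

# Databricks notebooks provide a SparkSession automatically;
# building one here keeps the sketch self-contained.
spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw CSV files landed in a (hypothetical) S3 bucket.
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3://example-raw-bucket/orders/")
)

# Transform: deduplicate, validate, and normalize types.
clean = (
    raw.dropDuplicates(["order_id"])
    .filter(F.col("order_total").isNotNull())
    .withColumn("order_date", F.to_date("order_date"))
)

# Load: write a Delta table for downstream SQL consumers.
clean.write.format("delta").mode("overwrite").save("s3://example-curated-bucket/orders/")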