

Galaxi Consulting Group
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 8–10 years of experience in AWS data engineering, specifically in telecommunications. It is a 12-month fixed-term contract based in London, UK, offering a hybrid working model. Key skills include ETL development, AWS services, and Python.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 17, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Fixed Term
-
🔒 - Security
Unknown
-
📍 - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#IAM (Identity and Access Management) #Storage #Data Processing #Datasets #Data Lifecycle #ETL (Extract, Transform, Load) #Apache Iceberg #Metadata #Data Engineering #Strategy #Cloud #Scala #Amazon Redshift #Data Management #Athena #Data Strategy #DevOps #Lambda (AWS Lambda) #Security #Spark (Apache Spark) #S3 (Amazon Simple Storage Service) #ML (Machine Learning) #Data Pipeline #Data Catalog #SQL (Structured Query Language) #AWS S3 (Amazon Simple Storage Service) #Redshift #AWS (Amazon Web Services) #CRM (Customer Relationship Management) #Business Analysis #Data Architecture #ACID (Atomicity, Consistency, Isolation, Durability) #Batch #Data Lake #Data Quality #Deployment #GIT #Python #Monitoring #Data Governance #Kafka (Apache Kafka) #Debugging #Airflow #Data Ingestion #Terraform
Role description
AWS DATA ENGINEER:
Domain: Telecommunications
Location: London, UK
Duration: 12-Month Fixed-Term Contract
Hybrid working: 2–3 days from the office
Keywords:
End‑to‑End Data Platform Design, Data Strategy, AWS Cloud Architecture, High‑Availability Architecture, Scalable Data Pipelines, ETL Architecture, Metadata Management, Iceberg, Batch & Streaming Pipelines, Python, PySpark, Data Pipeline Design & Deployment
Role Overview
We are seeking an experienced AWS Data Engineer with strong expertise in ETL pipelines, Redshift, Iceberg, Athena, and S3 to support large-scale data processing and analytics initiatives in the telecom domain. The candidate will work closely with data architects, business analysts, and cross-functional teams to build scalable and efficient data solutions supporting network analytics, customer insights, billing systems, and telecom OSS/BSS workflows.
Key Responsibilities
1. Data Engineering & ETL Development
• Design, develop, and maintain ETL/ELT pipelines using AWS-native services (Glue, Lambda, EMR, Step Functions).
• Implement data ingestion from telecom systems such as OSS/BSS platforms, CDR feeds, mediation systems, CRM, billing, and network logs.
• Optimize ETL workflows for large-scale telecom datasets (high volume, high velocity).
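As a minimal sketch of the kind of ingestion step these pipelines involve, the snippet below normalizes a raw CDR record into typed fields. The field names and the record layout are hypothetical, not a real mediation-system schema.

```python
from datetime import datetime, timezone

# Hypothetical raw CDR record as it might arrive from a mediation feed.
RAW_CDR = {
    "msisdn": "447700900123",
    "call_start": "2026-04-17T09:15:02Z",
    "duration_s": "184",
    "cell_id": "310-410-12345",
}

def normalize_cdr(record):
    """Parse string fields from a raw CDR into typed values.

    Field names here are illustrative, not a real mediation schema.
    """
    return {
        "msisdn": record["msisdn"],
        "call_start": datetime.strptime(
            record["call_start"], "%Y-%m-%dT%H:%M:%SZ"
        ).replace(tzinfo=timezone.utc),
        "duration_s": int(record["duration_s"]),
        "cell_id": record["cell_id"],
    }

clean = normalize_cdr(RAW_CDR)
```

In a production Glue or EMR job, the same transform would typically run as a Spark UDF or DataFrame operation rather than record-by-record Python.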
2. Data Warehousing (Redshift)
• Build and manage scalable Amazon Redshift clusters for reporting and analytics.
• Create and optimize schemas, tables, distribution keys, sort keys, and workload management.
• Implement Redshift Spectrum to query data in S3 using external tables.
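To illustrate the Spectrum pattern above, this sketch generates a `CREATE EXTERNAL TABLE` statement pointing Redshift at Parquet files in S3. The schema, columns, and bucket path are placeholders, not a real deployment.

```python
def spectrum_ddl(schema, table, columns, s3_path):
    """Build a CREATE EXTERNAL TABLE statement for Redshift Spectrum.

    Column names/types and the S3 path are placeholders.
    """
    cols = ",\n    ".join(f"{name} {dtype}" for name, dtype in columns.items())
    return (
        f"CREATE EXTERNAL TABLE {schema}.{table} (\n    {cols}\n)\n"
        f"STORED AS PARQUET\nLOCATION '{s3_path}';"
    )

ddl = spectrum_ddl(
    "spectrum", "cdr_daily",
    {"msisdn": "VARCHAR(15)", "duration_s": "INT"},
    "s3://example-telecom-lake/silver/cdr/",  # hypothetical bucket
)
```

The external schema itself would first be registered against the Glue Data Catalog (or a Hive metastore) with `CREATE EXTERNAL SCHEMA`.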
3. Data Lake & Iceberg
• Implement and maintain Apache Iceberg tables on AWS for schema evolution and ACID operations.
• Build Iceberg-based ingestion and transformation pipelines using Glue, EMR, or Spark.
• Ensure high performance for petabyte-scale telecom datasets (CDRs, tower logs, subscriber activity).
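The schema-evolution point above can be made concrete with a toy checker that mimics Iceberg's rules: new columns are always safe, and type changes are allowed only when they widen (e.g. int to long). This is a simplified model for illustration, not the actual Iceberg API.

```python
# Safe type widenings, modelled loosely on Iceberg's evolution rules.
WIDENINGS = {("int", "long"), ("float", "double")}

def evolve_schema(current, proposed):
    """Accept new columns and safe type widenings; reject anything else.

    A toy model of Iceberg-style schema evolution, not the real API.
    """
    evolved = dict(current)
    for col, dtype in proposed.items():
        old = current.get(col)
        if old is None or old == dtype or (old, dtype) in WIDENINGS:
            evolved[col] = dtype
        else:
            raise ValueError(f"incompatible change for {col}: {old} -> {dtype}")
    return evolved

new_schema = evolve_schema(
    {"duration_s": "int"},
    {"duration_s": "long", "cell_id": "string"},  # widen + add column
)
```

In practice these guarantees come from Iceberg itself (e.g. `ALTER TABLE ... ADD COLUMN` via Spark SQL on Glue/EMR), with the table metadata enforcing compatibility.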
4. Querying & Analytics (Athena)
• Develop and optimize Athena queries for operational and analytical reporting.
• Integrate Athena with S3/Iceberg for low-cost, serverless analytics.
• Manage Glue Data Catalog integrations and table schema management.
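A sketch of how such an Athena query might be submitted: the helper below builds the parameter dict for boto3's `start_query_execution` call. The database name, results bucket, and table are hypothetical; the actual API call is shown commented out since it needs live AWS credentials.

```python
def athena_request(sql, database, output_s3):
    """Build the parameter dict for boto3 Athena start_query_execution."""
    return {
        "QueryString": sql,
        "QueryExecutionContext": {"Database": database},
        "ResultConfiguration": {"OutputLocation": output_s3},
    }

params = athena_request(
    "SELECT cell_id, count(*) AS calls "
    "FROM cdr WHERE dt = '2026-04-17' GROUP BY cell_id",
    "telecom_lake",                  # hypothetical Glue database
    "s3://example-athena-results/",  # hypothetical results bucket
)
# With live credentials:
# boto3.client("athena").start_query_execution(**params)
```

Filtering on the partition column (`dt` here) is what keeps Athena scans, and therefore cost, low.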
5. Storage (S3) & Data Lake Architecture
• Design secure, cost-efficient S3 data lake structures (bronze/silver/gold zones).
• Implement data lifecycle policies, versioning, and partitioning strategies.
• Ensure data governance, metadata quality, and security (IAM, Lake Formation).
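As an illustration of the lifecycle policies mentioned above, this sketch builds an S3 lifecycle configuration that tiers raw CDR data into cheaper storage classes before expiring it. The transition and expiry day counts are illustrative; real values depend on the operator's retention policy.

```python
def lifecycle_rules(prefix="bronze/"):
    """Lifecycle config moving raw data to cheaper tiers, then expiring it.

    Day thresholds are illustrative, not a recommended retention policy.
    """
    return {
        "Rules": [
            {
                "ID": "tier-raw-cdr",
                "Filter": {"Prefix": prefix},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 730},
            }
        ]
    }

cfg = lifecycle_rules()
# With live credentials:
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="example-telecom-lake", LifecycleConfiguration=cfg)
```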
6. Telecom Domain Expertise
• Understand telecom-specific datasets such as:
• CDR, xDR, subscriber data
• Network KPIs (4G/5G tower logs)
• Customer lifecycle & churn data
• Billing & revenue assurance
• Build models and pipelines to support network analytics, customer 360, churn prediction, fraud detection, etc.
7. Performance Optimization & Monitoring
• Tune Spark/Glue jobs for performance and cost.
• Monitor Redshift/Athena/S3 efficiency and implement best practices.
• Perform data quality checks and validation across pipelines.
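The data-quality step above can be sketched as a simple row-level validator; in a real pipeline this logic would run inside a Glue/Spark job or a framework such as Great Expectations. Field names and check rules are hypothetical.

```python
def quality_report(rows):
    """Summarise simple row-level checks; field names are illustrative."""
    def ok(r):
        # A row passes if it has a subscriber number and a positive duration.
        return bool(r.get("msisdn")) and r.get("duration_s", 0) > 0
    passed = sum(1 for r in rows if ok(r))
    return {"total": len(rows), "passed": passed, "failed": len(rows) - passed}

sample = [
    {"msisdn": "447700900123", "duration_s": 60},
    {"msisdn": "", "duration_s": 30},             # missing subscriber number
    {"msisdn": "447700900456", "duration_s": 0},  # zero-length call
]
report = quality_report(sample)
```

Emitting such a report per batch makes it easy to alert on failure-rate thresholds from CloudWatch or Airflow.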
8. DevOps & CI/CD (Preferred)
• Use Git, CodePipeline, Terraform/CloudFormation for infrastructure and deployments.
• Automate pipeline deployment and monitoring.
Required Skills
• 8–10 years’ experience in data engineering.
• Strong hands-on experience with:
• AWS S3, Athena, Glue, Redshift, EMR/Spark
• Apache Iceberg
• Python/SQL
• Experience in telecom data pipelines and handling large-scale structured/semi-structured data.
• Strong problem-solving, optimization, and debugging skills.
Good to Have Skills
• Knowledge of AWS Lake Formation, Kafka/Kinesis, Airflow, or Delta/Apache Hudi.
• Experience with ML workflows in telecom (churn, network prediction).
• Exposure to 5G network data models.
