Senior Data Engineer – Data Migration

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer – Data Migration in San Diego, CA; contract length and pay rate are unspecified. Requires 5+ years of Data Engineering experience, with expertise in RDBMS-to-PostgreSQL migration, Apache Spark, and AWS services.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
🗓️ - Date discovered
September 26, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
On-site
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
San Diego, CA
-
🧠 - Skills detailed
#Database Migration #Security #Observability #Data Quality #Spark (Apache Spark) #IAM (Identity and Access Management) #Debugging #S3 (Amazon Simple Storage Service) #DevOps #RDS (Amazon Relational Database Service) #AWS (Amazon Web Services) #Compliance #ETL (Extract, Transform, Load) #Data Migration #DataOps #Cloud #Monitoring #RDBMS (Relational Database Management System) #Automation #SQL (Structured Query Language) #Python #PySpark #Migration #Infrastructure as Code (IaC) #Data Pipeline #Scala #Terraform #Strategy #Programming #PostgreSQL #Data Engineering #Apache Spark
Role description
Job Title: Senior Data Engineer – Data Migration
Location: San Diego, CA (local candidates only)

About the Role
We are looking for an experienced Senior Data Engineer to play a key role in migrating our legacy RDBMS platforms to PostgreSQL for a mission-critical billing and invoicing system. This position requires strong hands-on skills in data migration, transformation, and validation, using Apache Spark as the preferred compute engine. The Senior Engineer will work closely with the Lead Data Engineer to implement the migration strategy, ensure performance and accuracy, and deliver a seamless transition while safeguarding customer billing continuity.

Key Responsibilities
• Data Migration Execution
- Build and optimize ETL/ELT pipelines for bulk data loads, transformations, and Change Data Capture (CDC).
- Assist in schema conversion, SQL optimization, and data validation processes.
- Implement Spark-based jobs for high-volume, high-performance migration workloads.
• Collaboration & Support
- Work under the guidance of the Lead Data Engineer to deliver migration components.
- Partner with cross-functional teams (application engineers, DBAs, QA) to ensure smooth integration with PostgreSQL.
- Provide input on tool selection, migration best practices, and automation opportunities.
• Data Quality & Reliability
- Develop data validation and reconciliation frameworks to ensure 100% accuracy.
- Monitor pipeline performance and troubleshoot issues proactively.
- Maintain high availability, compliance, and security of sensitive customer and financial data.

Required Qualifications
• 5+ years of experience in Data Engineering, with proven work in database migrations.
• Strong experience in RDBMS-to-PostgreSQL data migration and SQL performance tuning.
• Hands-on expertise in Apache Spark (PySpark/Scala) for ETL/ELT workloads.
• Familiarity with AWS data services (Glue, EMR, RDS, S3, IAM) or similar cloud platforms.
• Knowledge of data validation frameworks and best practices for reconciliation.
• Solid programming skills in Python.

Preferred Skills
• Background in financial/billing systems with mission-critical data flows.
• Exposure to Terraform or Infrastructure as Code (IaC).
• Familiarity with DevOps/DataOps practices (CI/CD for data pipelines, monitoring, observability).
• Strong problem-solving, debugging, and optimization skills.
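For candidates unfamiliar with the reconciliation work described above, here is a minimal sketch of the idea: fingerprint each row on both sides of a migration and diff the results to find missing, extra, or changed records. This uses plain Python for illustration only (the role itself calls for Spark at scale); all names here are hypothetical, not part of the actual codebase.

```python
import hashlib

def row_fingerprint(row: dict) -> str:
    """Stable checksum of a row; keys are sorted so column order doesn't matter."""
    canonical = "|".join(f"{k}={row[k]}" for k in sorted(row))
    return hashlib.sha256(canonical.encode()).hexdigest()

def reconcile(source_rows, target_rows, key="id"):
    """Compare source rows against migrated rows by primary key.

    Returns keys that are missing from the target, extra in the target,
    or present in both but with differing contents.
    """
    src = {r[key]: row_fingerprint(r) for r in source_rows}
    tgt = {r[key]: row_fingerprint(r) for r in target_rows}
    return {
        "missing": sorted(set(src) - set(tgt)),      # in source, not migrated
        "extra": sorted(set(tgt) - set(src)),        # in target, no source row
        "mismatched": sorted(k for k in src.keys() & tgt.keys()
                             if src[k] != tgt[k]),   # migrated but altered
    }

# Hypothetical billing rows: id 2 was corrupted during migration.
source = [{"id": 1, "amount": "10.00"}, {"id": 2, "amount": "5.50"}]
target = [{"id": 1, "amount": "10.00"}, {"id": 2, "amount": "5.51"}]
print(reconcile(source, target))
```

In a Spark-based pipeline the same pattern would typically be expressed as a hash column on each DataFrame followed by a full outer join on the key, but the diff logic is the same.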