Openkyber

Kafka Schema Registry Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Kafka Schema Registry Engineer, a remote 12-month contract position offering a competitive pay rate. It requires 8+ years in data engineering, strong SQL and MongoDB skills, experience with ETL tools, and cloud platform expertise.
🌎 - Country
United States
💱 - Currency
Unknown
💰 - Day rate
Unknown
🗓️ - Date
March 3, 2026
🕒 - Duration
Unknown
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Alaska
🧠 - Skills detailed
#"ETL (Extract #Transform #Load)" #Azure #Data Science #Data Processing #AWS (Amazon Web Services) #AWS Glue #Data Warehouse #Normalization #NoSQL #Schema Design #Metadata #Data Security #PySpark #Data Modeling #Storage #Snowflake #Scripting #Python #Databricks #Redshift #Indexing #Azure SQL #Leadership #Compliance #SSIS (SQL Server Integration Services) #Data Engineering #Informatica #Scala #RDS (Amazon Relational Database Service) #Data Architecture #SQL Queries #Airflow #Data Pipeline #Batch #S3 (Amazon Simple Storage Service) #Data Lake #Cloud #Data Quality #GCP (Google Cloud Platform) #Security #Computer Science #BigQuery #Spark (Apache Spark) #MongoDB #Data Manipulation #ADF (Azure Data Factory) #Kafka (Apache Kafka) #DataStage #Azure Data Factory #Data Management #SQL (Structured Query Language)
Role description
We are looking for a Data Engineer Lead - Remote / Telecommute for our client in Charlotte, NC.

Job Title: Data Engineer Lead - Remote / Telecommute
Job Location: Charlotte, NC
Job Type: Contract

Job Description:
The Data Engineer Lead is responsible for designing, developing, and maintaining scalable enterprise data platforms and pipelines. This role provides technical leadership across data engineering initiatives, ensuring high-performance data processing, optimized database solutions, and secure cloud-based architectures. The ideal candidate has deep expertise in SQL and MongoDB, strong hands-on ETL experience, and a proven record of leadership in building modern data platforms across cloud environments.

Data Pipeline Engineering:
• Design, develop, and maintain scalable, reliable, and optimized ETL/ELT pipelines supporting enterprise data flows.
• Build ingestion frameworks to extract structured and unstructured data into data lakes and data warehouses.
• Implement batch and real-time data processing using modern orchestration tools.
• Develop reusable and scalable data processing frameworks.

Database Engineering (SQL and MongoDB):
• Develop and optimize complex SQL queries, stored procedures, and indexing strategies for high-performance, high-volume systems.
• Architect and maintain MongoDB collections, schema designs, aggregation pipelines, and NoSQL data models.
• Conduct data validation and quality checks across SQL and NoSQL environments.
• Implement performance tuning and database optimization strategies.

Cloud and Data Platform Engineering:
• Build scalable, secure, and high-performance data platforms on AWS, Azure, or Google Cloud Platform.
• Leverage cloud-native ETL tools including Airflow, AWS Glue, DataStage, SSIS, Azure Data Factory, and PySpark.
• Design cloud storage and compute architectures using services such as S3, Redshift, BigQuery, RDS, Azure SQL, and data lake platforms.

Data Architecture and Modeling:
• Lead data modeling initiatives including dimensional modeling, normalization, and schema optimization.
• Design and optimize data warehouse architectures for analytics and reporting.
• Implement metadata management, governance frameworks, and data quality processes.

Performance, Optimization, and Security:
• Monitor and optimize pipeline performance, storage utilization, and cost efficiency in cloud environments.
• Ensure compliance with data security standards, regulatory requirements, and access control policies.

Leadership and Collaboration:
• Lead and mentor data engineering teams, guiding architecture, best practices, and tooling decisions.
• Collaborate with software engineers, DBAs, analysts, and data scientists to meet enterprise data needs.
• Participate in sprint planning, architectural reviews, and project governance processes.

Requirements/Must Have:
• 8+ years of hands-on experience in data engineering.
• Strong expertise in advanced SQL, including query optimization, indexing, and performance tuning.
• Strong experience with MongoDB, including schema design, aggregation, and performance optimization.
• Proficiency in Python or similar scripting languages for data manipulation.
• Hands-on experience with ETL tools such as SSIS, PySpark, Airflow, AWS Glue, DataStage, Informatica, or Azure Data Factory.
• Experience with cloud data platforms across AWS, Azure, or Google Cloud Platform.
• Strong understanding of data warehousing concepts and data modeling frameworks.
• Experience building scalable and secure data pipelines.
• Excellent communication and stakeholder management skills.
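To give a concrete, purely illustrative flavor of the pipeline work listed above, here is a minimal PySpark batch ETL sketch. PySpark and Python are among the tools named in the requirements, but the bucket, paths, columns, and aggregation below are hypothetical placeholders, not the client's actual codebase.

```python
# Illustrative sketch only: a minimal PySpark batch ETL job of the kind
# described above. All paths, column names, and identifiers are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

# Extract: read raw JSON order events from a (hypothetical) data-lake path.
raw = spark.read.json("s3a://example-data-lake/raw/orders/")

# Transform: keep completed orders and derive a date column for partitioning.
orders = (
    raw.filter(F.col("status") == "completed")
       .withColumn("order_date", F.to_date("created_at"))
)

# Aggregate: a simple daily revenue rollup per region.
daily = orders.groupBy("order_date", "region").agg(
    F.count("*").alias("order_count"),
    F.sum("amount").alias("revenue"),
)

# Load: write partitioned Parquet for downstream warehouse ingestion.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://example-data-lake/curated/orders_daily/"
)

spark.stop()
```

In practice, a job like this would typically be scheduled and monitored by an orchestrator such as Airflow, one of the tools named in the requirements.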
Should Have:
• Experience with Kafka, Spark, Snowflake, or Databricks.
• Knowledge of data security practices and compliance frameworks.
• Experience in healthcare or large enterprise environments preferred.

Qualifications and Education:
• Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field preferred.
• Equivalent professional experience in enterprise data engineering accepted.

For applications and inquiries, contact: hirings@openkyber.com