

New York Technology Partners
Data Architect (Low-Latency Systems)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Architect specializing in low-latency systems, offering a long-term contract in Atlanta, GA (hybrid). Key skills include expertise in Confluent Kafka, Flink, microservices, ETL/ELT design, and cloud platforms. Requires 10+ years in software engineering and strong leadership abilities.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
December 18, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Atlanta, GA
-
🧠 - Skills detailed
#Terraform #Jenkins #PCI (Payment Card Industry) #Data Architecture #GCP (Google Cloud Platform) #ML (Machine Learning) #Data Processing #Azure #Grafana #Kubernetes #REST (Representational State Transfer) #Observability #Leadership #Python #Data Quality #Automated Testing #Security #DevSecOps #Classification #API (Application Programming Interface) #GDPR (General Data Protection Regulation) #Kafka (Apache Kafka) #AWS (Amazon Web Services) #Deployment #Java #Compliance #Splunk #Data Governance #Containers #Migration #Data Lineage #Infrastructure as Code (IaC) #SQL (Structured Query Language) #Data Catalog #Cloud #Docker #Microservices #Metadata #Disaster Recovery #Redis #ETL (Extract, Transform, Load) #IAM (Identity and Access Management) #Prometheus
Role description
Role: Data Architect (Low-Latency Systems)
Duration: Long-Term Contract
Location: Atlanta, GA (Hybrid)
Role Summary:
We’re seeking a seasoned Lead Software Engineer to architect, build, and scale real-time data processing platforms that power event-driven applications and analytics. You’ll lead the design of streaming microservices, govern data quality and lineage, and mentor engineers while partnering with product, platform, and security stakeholders to deliver resilient, low-latency systems.
Responsibilities:
• Own the design and delivery of high-throughput, low-latency streaming solutions using technologies such as Confluent Kafka, Apache Flink, Hazelcast, Kafka Streams, Kafka Connect, and Schema Registry (see the sketch after this list).
• Design and implement microservices and event-driven systems with robust ETL/ELT pipelines for real-time ingestion, enrichment, and delivery.
• Establish distributed caching and in-memory data grid patterns (e.g., Redis, Hazelcast) to optimize read/write performance and session/state management.
• Define and operationalize event gateways/event grids for event routing, fan-out, and reliable delivery.
• Lead data governance initiatives: standards for metadata, lineage, classifications, retention, access controls, and compliance (PII/PCI/SOX/GDPR as applicable).
• Drive CI/CD best practices (pipelines, automated testing, progressive delivery) to enable safe, frequent releases; champion DevSecOps and “shift left” testing.
• Set SLOs/SLAs, instrument observability (tracing, metrics, logs), and optimize performance at scale (throughput, backpressure, state, checkpointing).
• Work with Security, Platform, and Cloud teams on networking, IAM, secrets, certificates, and cost optimization.
• Mentor engineers, conduct design reviews, and enforce coding standards and reliability patterns.
• Guide the platform and delivery roadmap.
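To ground the streaming responsibilities above, here is a minimal Kafka Streams sketch in Java. It is an illustrative example only, not code from this engagement: the application id, broker address, and topic names (orders-raw, orders-enriched) are hypothetical, and the enrichment step is a placeholder.

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import java.util.Properties;

public class OrderEnrichmentTopology {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-enrichment");  // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed local broker
        // Exactly-once (EOS v2) covers the whole consume-process-produce loop.
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> orders = builder.stream("orders-raw");   // hypothetical input topic
        orders.filter((key, value) -> value != null && !value.isBlank()) // drop malformed records
              .mapValues(String::toUpperCase)                            // placeholder enrichment
              .to("orders-enriched");                                    // hypothetical output topic

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}

The same filter-enrich-route shape extends naturally to Schema Registry (Avro serdes in place of strings) and to Kafka Connect for ingress and egress.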
Required Qualifications:
• 10+ years in software engineering; 5+ years designing large-scale real-time or event-driven platforms.
• Expert with Confluent Kafka (brokers, partitions, consumer groups, Schema Registry, Kafka Connect), Flink (DataStream/Table API, stateful ops, checkpointing), Hazelcast, and/or Kafka Streams (a Flink checkpointing sketch follows this list).
• Strong in ETL/ELT design, streaming joins/windows, exactly-once semantics, and idempotent processing.
• Experience with microservices (Java/Python), REST/gRPC, protobuf/Avro, and contract-first development.
• Hands-on with distributed caching and in-memory data grids; performance tuning and eviction strategies.
• Cloud experience on one or more platforms (Azure/AWS/GCP); containers, Docker, Kubernetes.
• Experience with production-grade CI/CD (Jenkins, Bamboo, Harness, or similar) and Infrastructure as Code (Terraform/Helm).
• Robust observability (Prometheus/Grafana/OpenTelemetry, Splunk/ELK, or similar) and resilience patterns (circuit breakers, retries, DLQs).
• Practical data governance: metadata catalogs, lineage, encryption, RBAC.
• Excellent communication; ability to lead design, influence stakeholders, and guide cross-functional delivery.
• Core competencies include Architectural Thinking, Systems Design, Operational Excellence, Security & Compliance, Team Leadership, and Stakeholder Management.
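As a concrete reference for the Flink qualification above, the following is a minimal DataStream sketch with checkpointing enabled, written against the classic Flink 1.x Java API. The input elements, job name, and 10-second checkpoint interval are assumptions for illustration; the point is the pattern of keyed state plus exactly-once checkpoints.

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointedWordCount {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Snapshot operator state every 10s; on failure the job restores the last
        // completed checkpoint, giving exactly-once state semantics.
        env.enableCheckpointing(10_000L, CheckpointingMode.EXACTLY_ONCE);

        env.fromElements("kafka", "flink", "kafka", "streams")  // stand-in for a Kafka source
           .map(word -> Tuple2.of(word, 1))
           .returns(Types.TUPLE(Types.STRING, Types.INT))       // declare types lost to erasure
           .keyBy(pair -> pair.f0)                              // per-key state lives in the keyed operator
           .sum(1)                                              // stateful running count per word
           .print();

        env.execute("checkpointed-word-count");                 // hypothetical job name
    }
}

In production the fromElements source would typically be replaced by a Kafka source whose offsets are stored in checkpoints, so state and consumption progress recover together.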
Nice to Have:
• Experience with CDC, Kafka Connect custom connectors, Flink SQL, and Beam (a Flink SQL sketch follows this list).
• Streaming ML or feature store integration (online/offline consistency).
• Multi-region/disaster recovery for streaming platforms.
• Experience with zero-downtime migrations, blue/green, and canary deployments.
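Since Flink SQL appears in the list above, here is a minimal Table API sketch that runs it, using the built-in datagen connector so nothing external is required. The table name, schema, and rate are invented for illustration; a real deployment would point the DDL at a Kafka or CDC connector instead.

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class FlinkSqlSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // The datagen connector synthesizes rows, standing in for Kafka/CDC input.
        tEnv.executeSql(
            "CREATE TABLE clicks (" +
            "  user_id STRING," +
            "  url     STRING," +
            "  ts      TIMESTAMP(3)" +
            ") WITH ('connector' = 'datagen', 'rows-per-second' = '5')");
        // Continuous aggregation: the running per-user count updates as rows arrive.
        tEnv.executeSql("SELECT user_id, COUNT(*) AS clicks FROM clicks GROUP BY user_id").print();
    }
}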