

ExecutivePlacements.com - The JOB Portal
Senior Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer; the contract length and pay rate are unspecified. It requires 5-10+ years of data engineering experience, expertise in cloud data platforms, and proficiency in Python, SQL, and orchestration frameworks.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: January 12, 2026
Duration: Unknown
Location: Unknown
Contract: Unknown
Security: Unknown
Location detailed: San Francisco, CA
Skills detailed: #AWS S3 (Amazon Simple Storage Service) #AutoScaling #Cloud #Data Science #SaaS (Software as a Service) #Redis #Data Engineering #ML (Machine Learning) #S3 (Amazon Simple Storage Service) #Data Lake #Infrastructure as Code (IaC) #Code Reviews #Dataflow #REST (Representational State Transfer) #Databricks #Terraform #Azure #IAM (Identity and Access Management) #Microservices #Python #BigQuery #Batch #Security #Compliance #NoSQL #MongoDB #Data Governance #Synapse #IoT (Internet of Things) #SQL (Structured Query Language) #GCP (Google Cloud Platform) #Observability #AWS (Amazon Web Services) #Airflow #DynamoDB #Spark (Apache Spark) #ETL (Extract, Transform, Load) #AI (Artificial Intelligence) #Docker #Computer Science #Redshift #Leadership #Kafka (Apache Kafka) #Scala #Java #Storage #Data Architecture #Kubernetes #Databases
Role description
Position Overview
We are seeking a Senior Data Engineer to design, build, and operate cloud-scale data platforms and pipelines that support analytics, machine learning, and data products across the organization. This role collaborates with Product, Architecture, and Data Science teams to deliver secure, reliable, and cost-efficient batch and streaming data services. The ideal candidate brings deep technical expertise in cloud data engineering, data platform architecture, and best practices for governance, reliability, and observability.
Key Responsibilities
• Lead the design, development, and operation of enterprise data platforms, including lakehouse, streaming, and centralized data hubs, while managing dependencies and integrations with enterprise systems.
• Provide technical leadership and guidance to engineering teams, including planning, scoping, risk management, and delivery accountability.
• Coach engineers on standards, observability, operational excellence, and best practices; conduct design and code reviews.
• Define and evolve cloud data architectures for ingestion, storage, compute, cataloging, and access across real-time and batch use cases.
• Build and maintain robust ELT/ETL and streaming pipelines using orchestration frameworks (Airflow, Dagster), event platforms (Kafka, Kinesis, Event Hub), and distributed processing frameworks (Spark, Databricks, Flink, Beam); a minimal illustrative sketch follows this list.
• Implement lakehouse/lake patterns with medallion layering, CDC, and schema evolution; ensure quality gates, testing, and lineage.
• Establish and monitor service-level objectives for data freshness, availability, and quality; implement observability and runbooks for operations.
• Optimize storage and compute costs and performance; manage capacity, autoscaling, and governance.
• Ensure privacy, security, and compliance (IAM, tokenization, row/column security, data retention); work closely with enterprise security teams.
• Translate product and stakeholder requirements into data contracts and SLAs; document architecture and operational procedures.
• Evaluate emerging data technologies and provide proofs of concept and reference implementations.
• Anticipate future platform needs, including multi-cloud, IoT/edge ingestion, and AI/ML workloads.
• Modernize existing data architectures using current best practices: lakehouse, CDC, medallion layering, data mesh principles, and streaming pipelines.
• Develop software engineering solutions in Python/SQL and one additional language (Scala/Java), with familiarity in REST/gRPC and microservices patterns.
• Utilize cloud data services (Azure Data Lake/Databricks/Synapse; AWS S3/EMR/Glue/Redshift; GCP BigQuery/Dataflow) and distributed stores (Delta/Iceberg/Hudi, Kafka, Redis).
• Work with SQL/NoSQL databases (Postgres, DynamoDB, MongoDB, Cassandra), CI/CD pipelines, containerization (Docker/Kubernetes), and infrastructure as code (Terraform).
• Ensure proper data governance and security across catalogs, lineage, RBAC/ABAC, encryption, and key management.
• Communicate effectively with stakeholders, translating business needs into scalable, reliable data solutions.
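For illustration only, the sketch below shows the kind of orchestrated, medallion-style pipeline the responsibilities above describe, assuming Airflow 2.x as the orchestration framework. The DAG name, task names, and object-storage paths are hypothetical, and the Python callables stand in for what would typically be Spark/Databricks jobs or CDC ingestion steps; this is a sketch, not a prescribed implementation for the role.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_bronze(**context):
    # Land raw CDC events unchanged in the bronze layer.
    # The bucket/path is a placeholder; a real task would pull from the
    # source system (Kafka topic, database CDC feed, files, etc.).
    print(f"Copy raw events for {context['ds']} to s3://example-lake/bronze/events/")


def refine_silver(**context):
    # Conform, deduplicate, and merge bronze records into the silver layer,
    # applying schema checks and quality gates. In practice this would
    # usually be a Spark/Databricks job rather than inline Python.
    print(f"Merge {context['ds']} bronze partition into s3://example-lake/silver/events/")


with DAG(
    dag_id="events_medallion_pipeline",  # hypothetical pipeline name
    start_date=datetime(2026, 1, 1),
    schedule="@daily",                   # Airflow 2.4+ argument name
    catchup=False,
) as dag:
    bronze = PythonOperator(task_id="ingest_bronze", python_callable=ingest_bronze)
    silver = PythonOperator(task_id="refine_silver", python_callable=refine_silver)

    bronze >> silver  # silver refinement waits for the bronze landing step
```

A production pipeline of the kind described above would replace the print stand-ins with real ingestion and transformation jobs and add data-quality tests, lineage capture, alerting against freshness and availability SLOs, and cost and access controls around the storage and compute it uses.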
Required Qualifications
• Bachelor's degree in Computer Science, Engineering, or related field; Master's preferred.
• 5-10+ years of experience in data engineering or platform roles delivering enterprise-grade systems (SaaS or large-scale internal platforms).
• 2+ years leading or architecting cloud data platforms at scale.
• Proven experience with batch and streaming pipelines, lakehouse implementations, and production ML data flows.
• Experience operating and supporting data platforms with SLAs/SLOs, cost governance, and on-call responsibilities.
• Hands-on experience with cloud-native data services, orchestration frameworks, and distributed processing.
Preferred / Additional Experience
• Experience with real-time data systems and IoT ingestion.
• Background in large-scale SaaS or multi-cloud platforms.
• Strong collaboration skills with cross-functional teams, including product, architecture, and analytics.