

Adroit Innovative Solutions Inc
Sr. Data/Cloud Engineer Seattle WA Hybrid Work 3 Days Locals
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr. Data/Cloud Engineer in Seattle, WA, offering a contract of 3 to 6 months at $55.00 - $70.00 per hour. Requires 10-14 years of experience, strong Python and SQL skills, and AWS data services expertise.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
560
-
🗓️ - Date
March 25, 2026
🕒 - Duration
3 to 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Seattle, WA 98103
-
🧠 - Skills detailed
#Kafka (Apache Kafka) #PySpark #Lambda (AWS Lambda) #S3 (Amazon Simple Storage Service) #dbt (data build tool) #Data Quality #Scrum #REST API #RDBMS (Relational Database Management System) #Scala #Data Catalog #AWS Glue #Agile #Data Extraction #Python #REST (Representational State Transfer) #Apache Kafka #AWS (Amazon Web Services) #Metadata #Cloud #Documentation #Athena #ETL (Extract, Transform, Load) #Data Engineering #Delta Lake #Batch #Redshift #Spark (Apache Spark) #SQL (Structured Query Language) #JDBC (Java Database Connectivity) #Jenkins #GitHub #Data Ingestion
Role description
Sr. Data/Cloud Engineer (3 Positions)
Experience Required: 10 to 14 years
Location: Seattle, WA (hybrid, 2-3 days per week on-site; local candidates only)
US citizens (USC) only
Role Summary:
The Data/Cloud Engineer is responsible for designing, building, testing, and deploying end-to-end data ingestion connectors and ETL/ELT pipelines on the Boeing-provided framework. Working in two-person pods, each pod will deliver one data source to production per month across a variety of ingestion patterns (batch, streaming, CDC). This role is the core delivery engine of the project.
Key Responsibilities:
- Design and build connectors for prioritized data sources including SFTP, REST APIs, RDBMS (CDC), Kafka, S3 file drops, and mainframe extracts.
- Define source-specific ingestion patterns (batch windows, CDC, streaming) and map data to canonical landing zones in the lakehouse architecture.
- Implement reusable ETL/ELT pipelines on the IT-provided framework (e.g., AWS Glue, Spark, dbt) across raw → curated → consumption layers (see the sketch after this list).
- Develop transformation logic, handle schema evolution, implement partitioning strategies, and capture metadata for lineage tracking.
- Embed data quality checks (completeness, schema conformance, record counts, freshness) with fail/alert behavior within pipelines.
- Write unit, integration, and end-to-end tests; validate pipelines in CI/CD and staging environments prior to production promotion.
- Produce connector runbooks, data contracts, transformation specs, and onboarding guides.
- Collaborate with source system owners to obtain access, sample data, and schema/contract details.
- Participate in 2-week Agile sprints under Boeing's sprint planning and task assignment process.
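For context, a minimal PySpark sketch of the raw → curated step with embedded quality checks and a partitioned write. All paths, column names, and thresholds are hypothetical; the Boeing-provided framework itself is not described in this posting.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical S3 paths and columns, for illustration only.
RAW_PATH = "s3://example-raw/orders/"
CURATED_PATH = "s3://example-curated/orders/"

spark = SparkSession.builder.appName("orders-raw-to-curated").getOrCreate()

raw = spark.read.parquet(RAW_PATH)

# Embedded data quality checks: record count and key completeness.
row_count = raw.count()
null_keys = raw.filter(F.col("order_id").isNull()).count()
if row_count == 0 or null_keys > 0:
    # Fail the job so the orchestrator can alert; a real pipeline would also
    # publish metrics for freshness and schema conformance.
    raise ValueError(f"DQ failure: rows={row_count}, null order_id={null_keys}")

# Minimal transformation plus a partitioning strategy on load date.
curated = (
    raw.withColumn("load_date", F.to_date("ingested_at"))
       .dropDuplicates(["order_id"])
)

curated.write.mode("overwrite").partitionBy("load_date").parquet(CURATED_PATH)
```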
Required Skills & Qualifications:
- 5–8 years of hands-on experience in data engineering, cloud data platforms, and ETL/ELT pipeline development.
- Strong proficiency in Python, SQL, and Spark (PySpark or Scala).
- Hands-on experience with AWS data services: Glue, S3, Kinesis, Lambda, Redshift, Athena, or equivalent.
- Experience building ingestion pipelines for diverse source types: SFTP, REST APIs, RDBMS (JDBC/CDC), Kafka/streaming, and flat file processing.
- Working knowledge of lakehouse architectures (Delta Lake, Iceberg, or Hudi); a brief sketch follows this list.
- Experience with dbt or similar transformation frameworks.
- Familiarity with CI/CD pipelines for data workloads (e.g., GitHub Actions, CodePipeline, Jenkins).
- Understanding of data quality frameworks and schema evolution handling.
- Strong documentation skills for runbooks, data contracts, and technical specifications.
- Experience working in Agile/Scrum delivery models.
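As an illustration of lakehouse-style schema evolution handling, a minimal Delta Lake sketch. Paths and the partition column are hypothetical, and it assumes the Delta Lake packages are available on the cluster.

```python
from pyspark.sql import SparkSession

# Assumes Delta Lake is on the classpath (e.g., io.delta:delta-spark package).
spark = (
    SparkSession.builder.appName("delta-schema-evolution")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Hypothetical source path; the incoming batch is expected to carry a load_date column.
incoming = spark.read.parquet("s3://example-raw/customers/")

# mergeSchema lets additive source-schema changes flow into the curated table
# instead of failing the write when the upstream schema evolves.
(incoming.write.format("delta")
    .mode("append")
    .option("mergeSchema", "true")
    .partitionBy("load_date")
    .save("s3://example-curated/customers_delta/"))
```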
Preferred Skills:
- Experience with mainframe data extraction and integration.
- Familiarity with Apache Kafka (producers, consumers, connect, schema registry); a minimal consumer sketch follows this list.
- Exposure to data cataloging and lineage tools (e.g., AWS Glue Catalog, Apache Atlas, DataHub).
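For the Kafka item above, a minimal consumer sketch using the confluent-kafka Python client. The broker address, topic, and group id are placeholders, not project values.

```python
from confluent_kafka import Consumer

# Hypothetical broker, topic, and consumer group, for illustration only.
consumer = Consumer({
    "bootstrap.servers": "broker:9092",
    "group.id": "ingestion-pod-1",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders-events"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            # Surface broker/partition errors to the pipeline's alerting hook.
            print(f"Consumer error: {msg.error()}")
            continue
        # Hand the payload to downstream transformation / landing-zone logic.
        print(msg.key(), msg.value())
finally:
    consumer.close()
```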
Pay: $55.00 - $70.00 per hour
Work Location: In person






