Lumicity

Data Architect

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Architect/Engineer with 10+ years of experience, focusing on AWS and Snowflake, on an ongoing contract in a hybrid setting in Pennsylvania. Key skills include Python, SQL, Spark, and data governance.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
600
-
🗓️ - Date
February 14, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Pennsylvania, United States
-
🧠 - Skills detailed
#Lambda (AWS Lambda) #Kafka (Apache Kafka) #Redshift #Data Pipeline #Batch #Data Warehouse #Leadership #Snowflake #Scala #Clustering #dbt (data build tool) #Knowledge Graph #Data Governance #Matillion #SQL (Structured Query Language) #Observability #Data Science #Schema Design #Infrastructure as Code (IaC) #Data Architecture #Code Reviews #AWS (Amazon Web Services) #Data Modeling #Spark (Apache Spark) #Data Engineering #Strategy #PySpark #Security #Terraform #Python #Cloud #Airflow #S3 (Amazon Simple Storage Service) #Data Quality
Role description
Senior Data Architect / Engineer (AWS + Snowflake)
Location: Hybrid / Onsite in Pennsylvania
Experience: 10+ Years
Type: Ongoing Contract

We are partnering with a high-growth enterprise organization seeking a Senior Data Architect / Engineer to help design, build, and scale a modern cloud data platform. This role is ideal for someone who can operate at both the architectural and hands-on engineering level: someone who not only builds production-grade pipelines but also mentors and elevates other engineers.

What You'll Do
• Architect and build scalable data platforms in AWS, leveraging services such as S3, EMR, Glue, Lambda, Kinesis/MSK, Redshift, Step Functions, and related components
• Design and optimize Snowflake data warehouse environments, including ELT workflows, performance tuning, clustering strategies, RBAC/security models, and cost optimization (a Snowflake sketch appears after this description)
• Develop and maintain batch and real-time data pipelines using Spark, Python, SQL, and streaming technologies (a minimal pipeline sketch follows below)
• Lead design decisions around data modeling (dimensional, normalized, lakehouse patterns) and governance
• Establish best practices around data quality, observability, CI/CD, and environment promotion
• Mentor junior and mid-level engineers through code reviews, architectural guidance, and technical leadership
• Collaborate with analytics, data science, and business stakeholders to translate requirements into scalable solutions
• Contribute to long-term platform strategy and modernization initiatives

What We're Looking For
• 10+ years of experience in data engineering and platform design
• Deep hands-on experience with AWS-based data architectures
• Strong expertise in Snowflake (schema design, clustering, performance optimization, warehouse management, security)
• Advanced proficiency in Python, SQL, and Spark (PySpark or Scala)
• Experience building both batch and streaming data pipelines
• Strong understanding of data governance, RBAC, encryption, and cloud security best practices
• Experience implementing CI/CD and Infrastructure as Code (Terraform or similar)
• Demonstrated ability to mentor engineers and influence technical direction
• Strong communication skills and the ability to work cross-functionally

Nice to Have
• Experience with real-time streaming frameworks (Kafka/MSK, Kinesis)
• Exposure to knowledge graphs or semantic modeling
• Experience in highly regulated or enterprise-scale environments
• Familiarity with orchestration tools (Airflow/MWAA, Matillion, dbt, etc.; an Airflow sketch appears below)
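For candidates gauging the hands-on level expected here, the following is a minimal sketch of the kind of PySpark batch job the responsibilities describe. The bucket names, paths, and schema (order_id, order_ts) are hypothetical illustrations, not the client's actual stack.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_orders_batch").getOrCreate()

# Read raw event files landed in S3 (requires the hadoop-aws connector;
# path and fields are placeholders).
raw = spark.read.json("s3://example-raw-bucket/orders/2026-02-14/")

# Basic data-quality gate: drop rows missing the key, deduplicate, and
# stamp the load. Assumes a raw order_ts timestamp field.
clean = (
    raw.filter(F.col("order_id").isNotNull())
       .dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("ingested_at", F.current_timestamp())
)

# Write curated, partitioned Parquet for downstream ELT into Snowflake.
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders/"
)
```

In production a job like this would typically run on EMR or Glue behind the CI/CD and IaC practices listed above.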
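The Snowflake duties (clustering strategies, RBAC) come down to statements like the ones below, shown here through snowflake-connector-python; the account, table, and role names are placeholders, not this employer's environment.

```python
import snowflake.connector

# Connection details are placeholders; prefer key-pair auth or a secrets
# manager over inline credentials in real deployments.
conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password="...",
    warehouse="TRANSFORM_WH",
)
cur = conn.cursor()

# Clustering strategy: cluster a large fact table on common filter columns
# so partition pruning keeps scan costs down.
cur.execute("ALTER TABLE analytics.orders CLUSTER BY (order_date, region)")

# RBAC model: grant read access to a role, never to individual users.
cur.execute("GRANT USAGE ON SCHEMA analytics TO ROLE analyst_role")
cur.execute("GRANT SELECT ON ALL TABLES IN SCHEMA analytics TO ROLE analyst_role")

cur.close()
conn.close()
```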
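For the orchestration nice-to-have, a DAG along these lines is representative; the dag_id, schedule, and stubbed task are assumptions for illustration, not a known part of this client's platform.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def run_spark_batch(**context):
    # In practice this would submit the PySpark job above to EMR or Glue;
    # stubbed with a print so the sketch stays self-contained.
    print(f"Submitting batch job for logical date {context['ds']}")

with DAG(
    dag_id="orders_daily_pipeline",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",  # Airflow 2.4+ keyword; older versions use schedule_interval
    catchup=False,
) as dag:
    PythonOperator(
        task_id="run_spark_batch",
        python_callable=run_spark_batch,
    )
```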