

Sr AWS Data Architect in NYC
Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr AWS Data Architect, offering a remote contract position with an unspecified pay rate. Candidates must have 5+ years of experience in data engineering, expertise in AWS services, and proficiency in Snowflake, Python, and SQL.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: July 11, 2025
Project duration: Unknown
Location type: Remote
Contract type: Unknown
Security clearance: Unknown
Location detailed: New York City Metropolitan Area
Skills detailed: #Snowflake #Compliance #Leadership #AWS S3 (Amazon Simple Storage Service) #Scala #Data Warehouse #Data Architecture #Redshift #Cloud #Data Pipeline #Monitoring #GitHub #AWS (Amazon Web Services) #Spark (Apache Spark) #ETL (Extract, Transform, Load) #Data Engineering #Data Governance #Security #Data Science #Lambda (AWS Lambda) #Storage #Data Lake #Python #Data Processing #Data Security #Metadata #SQL (Structured Query Language) #Data Modeling #BI (Business Intelligence) #S3 (Amazon Simple Storage Service) #Data Management
Role description
AWS Data Engineering Architect at Remote - NO H1's
We are seeking a skilled AWS Data Engineer with expertise in AWS S3, Lake Formation, Snowflake, AWS Step Functions, Glue, AWS Bedrock, and Amazon Kendra. The ideal candidate will design, implement, and maintain scalable, high-performance data architectures while optimizing workflows in cloud environments. You will work closely with cross-functional teams to build end-to-end data solutions, leveraging cutting-edge technologies to meet business needs.
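To make the orchestration side of this concrete, here is a minimal AWS CDK (Python) sketch of the kind of pipeline the role describes: a Step Functions state machine that runs a Glue ETL job and fails cleanly on error. The stack, job, and state names are hypothetical, not taken from this posting; a real deployment would also add IAM roles, retries, and alerting.

```python
from aws_cdk import Duration, Stack
from aws_cdk import aws_stepfunctions as sfn
from aws_cdk import aws_stepfunctions_tasks as tasks
from constructs import Construct


class EtlPipelineStack(Stack):
    """Step Functions state machine that orchestrates one Glue ETL job."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Start the (hypothetical, pre-existing) Glue job and wait for it.
        run_etl = tasks.GlueStartJobRun(
            self,
            "RunRawToCuratedJob",
            glue_job_name="raw-to-curated-etl",  # assumed job name
            integration_pattern=sfn.IntegrationPattern.RUN_JOB,  # wait for completion
        )

        # Route any job failure to an explicit Fail state so executions
        # surface errors instead of silently ending.
        run_etl.add_catch(sfn.Fail(self, "EtlFailed"), errors=["States.ALL"])

        sfn.StateMachine(
            self,
            "EtlStateMachine",
            definition_body=sfn.DefinitionBody.from_chainable(
                run_etl.next(sfn.Succeed(self, "EtlSucceeded"))
            ),
            timeout=Duration.hours(2),  # kill runs that hang
        )
```

The RUN_JOB integration pattern makes Step Functions wait for the Glue job to finish rather than returning immediately, which keeps downstream states from running against incomplete data.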
Key Responsibilities:
• Design and Architect: Lead the design and architecture of enterprise-grade data platforms using AWS services including S3, Lake Formation, Glue, Step Functions, and Lambda.
• Data Lake Management: Develop and manage data lakes with AWS Lake Formation, enforcing strong data governance, access policies, and secure storage practices (see the permissions sketch after this list).
• ETL & Orchestration: Build and optimize automated data pipelines using Step Functions, Glue, and Lambda to orchestrate complex data workflows efficiently.
• Snowflake Integration: Integrate and manage Snowflake as the core data warehouse; design performant schemas, optimize query performance, and manage costs.
• Cross-Team Collaboration: Partner with data science, BI, analytics, and application teams to ensure seamless data delivery and usability across the organization.
• Security & Compliance: Ensure end-to-end data security, governance, and compliance with organizational and regulatory standards.
• Monitoring & Optimization: Implement proactive monitoring, alerting, and performance tuning to ensure data reliability, uptime, and scalability.
• Technical Leadership: Mentor engineers, establish best practices, and drive adoption of modern data engineering techniques across teams.
• Innovation: Stay current with emerging technologies in the cloud and data engineering space and recommend strategic improvements to enhance the data ecosystem.
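For the data lake management bullet above, this is roughly what a table-level Lake Formation grant looks like via boto3. It is a minimal sketch: the region, account ID, role ARN, database, and table names are all placeholders, not values from this posting.

```python
import boto3

# Region, account ID, role ARN, database, and table are all hypothetical.
lf = boto3.client("lakeformation", region_name="us-east-1")

# Grant an analytics role SELECT on a single curated table, so consumers
# get table-scoped access instead of raw S3 bucket permissions.
lf.grant_permissions(
    Principal={
        "DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/analytics-readers"
    },
    Resource={
        "Table": {
            "DatabaseName": "curated",
            "Name": "fact_orders",
        }
    },
    Permissions=["SELECT"],
)
```

Centralizing grants like this in Lake Formation is what lets Glue, Athena, and Redshift Spectrum all enforce the same access policy over the lake.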
Required Skills and Qualifications:
• 5+ years of hands-on experience as a Data Engineer or Solutions Architect with a strong focus on cloud environments.
• Deep expertise in AWS services including S3, Lake Formation, Glue, EMR, Lambda, Step Functions, Bedrock, and Redshift.
• Strong proficiency in Snowflake, including data modeling, performance tuning, and query optimization (see the tuning sketch after this list).
• Proven experience building scalable, automated ETL pipelines using AWS-native tools.
• Experience with Amazon Kendra for enterprise search of unstructured content.
• Proficiency in Python, SQL, and Spark for data processing and transformation.
• Solid understanding of data architecture, governance frameworks, and metadata management.
• Experience with GitHub/CI-CD pipelines and the AWS CDK.
• Strong analytical, problem-solving, and system design skills.
• Excellent communication skills and a collaborative mindset, with the ability to lead technical discussions and drive consensus.
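On the Snowflake tuning point above, here is a short sketch using the snowflake-connector-python package of two routine levers: clustering a large table so Snowflake can prune micro-partitions, and tightening warehouse auto-suspend to control compute spend. Connection details and object names are placeholders, not details of this role.

```python
import snowflake.connector

# All connection parameters and object names below are hypothetical.
conn = snowflake.connector.connect(
    account="my_org-my_account",
    user="etl_user",
    password="...",            # prefer key-pair auth or a secrets manager
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="CURATED",
)
try:
    cur = conn.cursor()
    # Cluster the fact table on the columns most queries filter by, so
    # Snowflake can prune micro-partitions instead of scanning everything.
    cur.execute("ALTER TABLE fact_orders CLUSTER BY (order_date, region)")
    # Suspend the warehouse after 60 idle seconds to cap compute costs.
    cur.execute("ALTER WAREHOUSE ANALYTICS_WH SET AUTO_SUSPEND = 60")
finally:
    conn.close()
```

Clustering helps only when the chosen keys match common filter predicates; auto-suspend is the simpler, lower-risk cost lever of the two.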