

Aligned Automation
Data Architect
⭐ - Featured Role | Apply directly through Data Freelance Hub
This role is for a Data Architect on a 6-month contract (with probable extension), paying $90.00 - $105.00 per hour, on-site in Manhattan, NY. Required skills include data modeling, SQL, cloud platforms, ETL orchestration, and data governance, along with a Master's degree and 10-15 years of experience.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
840
🗓️ - Date
December 3, 2025
🕒 - Duration
More than 6 months
🏝️ - Location
On-site
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Manhattan, NY 10001
🧠 - Skills detailed
#BI (Business Intelligence) #Vault #ETL (Extract, Transform, Load) #PCI (Payment Card Industry) #Programming #Azure #MDM (Master Data Management) #Observability #GCP (Google Cloud Platform) #Kafka (Apache Kafka) #Java #Databricks #Apache Spark #Automation #Computer Science #Data Quality #Data Pipeline #Data Architecture #Python #Migration #ML (Machine Learning) #Spark (Apache Spark) #Metadata #Data Vault #BigQuery #Scala #Data Modeling #Airflow #AWS (Amazon Web Services) #Security #SQL (Structured Query Language) #Apache Iceberg #Terraform #Compliance #Scripting #Trino #AI (Artificial Intelligence) #Storage #Data Processing #Data Governance #Batch #Data Integration #dbt (data build tool) #MIS (Management Information Systems) #Cloud #Snowflake
Role description
Data Architect
The Data Architect designs, governs, and evolves enterprise data architectures that enable reliable analytics, AI, and operational reporting. The role defines standards for data modeling, integration, quality, security, and lifecycle management across cloud and on-prem platforms, ensuring data is trusted, performant, and cost-efficient.
This contract position is onsite.
Job Description:
Define end-to-end data architecture patterns (warehouse, lake/lakehouse, streaming, operational data stores) and reference designs aligned to business outcomes.
Own enterprise data models (conceptual, logical, physical), canonical data definitions, and metadata standards to drive consistency and reuse.
Architect data integration pipelines (batch and streaming) including ingestion, transformation, enrichment, and distribution with strong SLAs and observability (see the orchestration sketch after this list).
Establish data governance controls (cataloging, lineage, quality rules, MDM, access policies) in partnership with security, compliance, and business stakeholders.
Drive platform selection and design (e.g., cloud data services, analytics engines, storage tiers) balancing scalability, performance, resilience, and total cost.
Implement security and privacy by design (RBAC/ABAC, encryption, tokenization, masking, retention) and ensure regulatory compliance requirements are met.
Set standards and guardrails for SQL, schema evolution, event design, job orchestration, and CI/CD for data workloads; review solutions for architectural fit.
Partner with product, engineering, and analytics teams to translate business requirements into data structures, interfaces, and service contracts.
Lead migration and modernization initiatives (e.g., to cloud/lakehouse), including dependency mapping, cutover plans, and performance optimization.
Define SLOs/SLAs and capacity plans; monitor cost, reliability, and performance; drive continuous improvement via benchmarking and right-sizing.
Mentor engineers and analysts; contribute to architecture governance, patterns, and best practices; present roadmaps and decisions to senior stakeholders.
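To make the orchestration responsibility concrete, here is a minimal sketch of a batch ingestion DAG, assuming Airflow 2.4+ with the TaskFlow API. The dataset name, paths, and task bodies are hypothetical placeholders, not details from this posting.

```python
# Minimal batch ingestion sketch, assuming Airflow 2.4+ (TaskFlow API).
# Paths and the analytics.orders table are hypothetical placeholders.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
def orders_ingestion():
    @task
    def ingest() -> str:
        # Pull the daily extract from the source system (stubbed here).
        return "s3://landing/raw_orders/latest.parquet"  # hypothetical path

    @task
    def transform(path: str) -> str:
        # Apply conforming transformations and enrichment (stubbed here).
        return path.replace("landing", "curated")

    @task
    def publish(path: str) -> None:
        # Load curated data into the hypothetical warehouse table.
        print(f"loading {path} into analytics.orders")

    # Explicit task dependency chain: ingest -> transform -> publish.
    publish(transform(ingest()))


orders_ingestion()
```

In a real platform the stubbed bodies would call ingestion and warehouse services, and each task would carry the SLAs and observability hooks the responsibility above describes.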
Required Qualifications
Strong expertise in data modeling (3NF, dimensional, Data Vault), SQL, and distributed compute/storage paradigms.
Practical experience with major cloud platforms (AWS, Azure, GCP) and modern data ecosystems (e.g., Snowflake, BigQuery, Databricks, Starburst/Trino, Apache Spark).
Proficiency in ETL/ELT orchestration and workflow tools (e.g., Airflow, dbt, native cloud services) and event/streaming systems (e.g., Kafka).
Proven track record implementing data governance: catalog, lineage, quality frameworks, MDM, and access controls.
Solid understanding of security and compliance for data (PII/PHI/PCI), including policy enforcement, encryption, and auditability (a masking sketch follows this list).
Strong programming/scripting in Python (or Scala/Java) for data processing, automation, and tooling.
Excellent communication and stakeholder management; ability to translate complex technical concepts into clear business value.
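As one illustration of the masking/tokenization requirement, here is a minimal sketch of deterministic PII tokenization using a keyed hash (HMAC-SHA256). The field inventory and key handling are hypothetical; in practice the key would live in a secrets manager and the policy in the governance catalog.

```python
# Deterministic PII masking sketch using HMAC-SHA256 tokens.
# SECRET_KEY and PII_FIELDS are hypothetical illustrations.
import hashlib
import hmac

SECRET_KEY = b"rotate-me"                # hypothetical; store in a secrets manager
PII_FIELDS = {"email", "ssn"}            # hypothetical field inventory


def mask_record(record: dict) -> dict:
    """Replace PII values with stable tokens so joins on masked data still work."""
    masked = {}
    for name, value in record.items():
        if name in PII_FIELDS and value is not None:
            digest = hmac.new(SECRET_KEY, str(value).encode(), hashlib.sha256)
            masked[name] = digest.hexdigest()[:16]  # truncated, stable token
        else:
            masked[name] = value
    return masked


print(mask_record({"id": 42, "email": "jane@example.com", "ssn": "123-45-6789"}))
```

Because the same input always yields the same token, analysts can still join and deduplicate on masked columns without ever seeing raw values.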
Preferred Skills
Experience with lakehouse architectures, open table formats (e.g., Apache Iceberg/Delta), and data sharing patterns.
Familiarity with metadata-driven design, semantic layers, and BI acceleration techniques (see the metadata-driven sketch after this list).
Exposure to ML/AI data readiness practices (feature engineering, data labeling, model data pipelines).
Infrastructure-as-Code (e.g., Terraform) and CI/CD for data platform provisioning and jobs.
Cost optimization and FinOps practices for data services.
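To show what metadata-driven design can look like in practice, here is a minimal sketch in which each dataset is described by a config entry and a single generic builder expands it into pipeline steps. The dataset names, sources, and steps are hypothetical.

```python
# Metadata-driven pipeline sketch: datasets are declared as specs, and one
# generic builder interprets them. All names here are hypothetical.
from dataclasses import dataclass, field


@dataclass
class DatasetSpec:
    name: str
    source: str
    keys: list[str]
    quality_rules: list[str] = field(default_factory=list)


CATALOG = [
    DatasetSpec("orders", "s3://landing/orders/", ["order_id"],
                quality_rules=["not_null:order_id"]),
    DatasetSpec("customers", "s3://landing/customers/", ["customer_id"]),
]


def build_pipeline(spec: DatasetSpec) -> list[str]:
    """Expand a spec into ordered steps; a real system would emit DAG tasks."""
    steps = [f"ingest {spec.source}", f"dedupe on {','.join(spec.keys)}"]
    steps += [f"check {rule}" for rule in spec.quality_rules]
    steps.append(f"publish {spec.name}")
    return steps


for spec in CATALOG:
    print(spec.name, "->", build_pipeline(spec))
```

The design choice is that adding a dataset means adding one catalog entry rather than writing a new pipeline, which is what drives the reuse and consistency goals named elsewhere in this posting.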
Key Outcomes
Deliver a scalable, secure, and well-governed data platform that improves time-to-insight and reduces total cost of ownership.
Establish enterprise data standards that increase interoperability and reduce duplication.
Enable trusted analytics and AI by elevating data quality, lineage, and accessibility.
Required Qualifications
Master's degree in Management Information Systems or Computer Science.
10-15 years of experience
Duration of Contract: 6 months with probable extension
Job Type: Contract
Pay: $90.00 - $105.00 per hour
Education:
Master's (Required)
Experience:
Relevant work: 10 years (Required)
Ability to Commute:
Manhattan, NY 10001 (Required)
Willingness to travel:
25% (Required)
Work Location: In person