GeorgiaTEK Systems Inc.

Data Architect

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Architect in Seattle, WA, on a hybrid contract requiring 10+ years of experience. The pay rate is unspecified. Key skills include Databricks, Unity Catalog, ETL, data governance, and security. Experience with cloud platforms is a plus.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 12, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Seattle, WA
-
🧠 - Skills detailed
#Azure #Schema Design #Compliance #Data Governance #AWS (Amazon Web Services) #ETL (Extract, Transform, Load) #Security #Data Pipeline #Spark (Apache Spark) #Cloud #ML (Machine Learning) #Metadata #Kafka (Apache Kafka) #Data Lineage #Data Architecture #Data Modeling #Data Engineering #Delta Lake #Batch #Databricks #GCP (Google Cloud Platform) #Data Security #Strategy #Scala #Collibra #Data Quality #Data Management #Monitoring #Data Science
Role description
Job Title: Data Platform Engineer – Databricks & Unity Catalog
Location: Seattle, WA
Work Model: Hybrid – 4 days onsite (mandatory)
Experience: 10+ Years

Job Description
We are seeking a highly experienced Data Platform Engineer to design, build, and manage scalable data platforms leveraging Databricks and modern data governance tools. The ideal candidate will have deep expertise in data architecture, ETL pipelines, governance, and security, with hands-on experience operationalizing data and machine learning workflows in both batch and real-time environments. This role requires close collaboration with cross-functional teams to translate business needs into robust, compliant, and high-performing data solutions.

Key Responsibilities
• Design and implement enterprise-grade data architecture using Databricks, Unity Catalog, Privacera, and Collibra.
• Develop, optimize, and maintain scalable ETL/ELT pipelines in Databricks with a strong focus on data quality, reliability, and performance.
• Design and manage data models and schemas aligned with governance and metadata standards using Unity Catalog and Collibra.
• Implement and enforce data security, access control, and compliance requirements leveraging Databricks and Privacera capabilities.
• Define and establish a comprehensive data governance strategy, including metadata management, data lineage, quality standards, and auditing practices.
• Operationalize machine learning models within batch and real-time data pipelines, ensuring proper governance and monitoring.
• Collaborate with data scientists, data engineers, analysts, and business stakeholders to deliver scalable and reusable data solutions.
• Troubleshoot and optimize platform performance, ensuring high availability and operational excellence.

Required Skills & Qualifications
• 10+ years of experience in data engineering or data platform engineering roles.
• Strong hands-on experience with Databricks (Spark, Delta Lake, workflows).
• Proven experience with Unity Catalog for data governance and access control.
• Experience working with data governance and security tools such as Privacera and Collibra.
• Solid understanding of data architecture, data modeling, and schema design.
• Experience building and managing batch and real-time data pipelines.
• Strong knowledge of data security, compliance, and governance best practices.
• Excellent communication and collaboration skills.

Nice to Have
• Experience with cloud platforms (AWS, Azure, or Google Cloud Platform).
• Exposure to streaming technologies (Kafka, Structured Streaming).
• Prior experience supporting enterprise-scale data platforms.