Augusta Hitech

Data Architect

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Architect with an unspecified contract length and pay rate. Key requirements include 10+ years of experience and expertise in SQL, Python, Spark, and cloud data ecosystems; certification in Databricks or Azure is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 28, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Plano, TX
-
🧠 - Skills detailed
#ADF (Azure Data Factory) #BI (Business Intelligence) #ETL (Extract Transform Load) #Cloud #AI (Artificial Intelligence) #AWS Glue #Data Lake #SQL (Structured Query Language) #DevOps #Looker #Data Governance #Strategy #Data Mart #Compliance #Data Integration #Metadata #Scala #GCP (Google Cloud Platform) #Data Management #Data Processing #Spark (Apache Spark) #Data Architecture #Leadership #Data Pipeline #Azure #Visualization #Tableau #Informatica #Hadoop #Computer Science #BigQuery #Data Science #Microsoft Power BI #Data Engineering #Migration #Data Quality #Big Data #Data Modeling #ML (Machine Learning) #Python #AWS (Amazon Web Services) #NoSQL #Kafka (Apache Kafka) #Azure Data Factory #Databricks
Role description
Summary: We are seeking an experienced Data Architect to design and lead enterprise-level data solutions that support advanced analytics, business intelligence, and AI initiatives. The ideal candidate will bring deep expertise in data warehousing, big data engineering, and cloud architecture, ensuring the organization's data ecosystem is scalable, secure, and future-ready.

Key Responsibilities:
• Design and maintain the enterprise data architecture blueprint to support analytics, reporting, and data science initiatives.
• Lead the development of data models, pipelines, and integration frameworks across on-prem and cloud environments.
• Define and enforce data standards, governance, and metadata management policies.
• Design scalable ETL/ELT pipelines using tools such as Spark, Hive, SQL, and Databricks.
• Implement best practices in data warehousing, data lakes, and data marts for performance and reliability.
• Architect and manage data solutions in cloud environments (Azure, AWS, GCP).
• Lead data platform modernization and migration initiatives from legacy systems to cloud-native architectures.
• Partner with engineering teams to enable real-time data processing and streaming architectures (Kafka, Spark Streaming).
• Support the adoption of AI/ML capabilities through modernized and well-structured data pipelines.
• Work closely with business, engineering, and analytics teams to translate business needs into technical data solutions.
• Collaborate with enterprise architects to align data architecture with overall IT strategy.
• Provide technical leadership to data engineers, analysts, and developers.
• Ensure data solutions are designed for scalability, quality, and compliance.

Qualifications:

Required:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related discipline.
• 10+ years of experience in data architecture, data engineering, or enterprise data solutions.
• Hands-on experience with SQL, Python, Spark, Hive, Hadoop, and ETL frameworks.
• Strong understanding of data warehousing concepts and data modeling techniques (dimensional, relational, and NoSQL).
• Experience with cloud data ecosystems (Azure Data Factory, Databricks, AWS Glue, GCP BigQuery).
• Familiarity with data governance, data quality, and metadata tools such as Apache Atlas or Informatica.

Preferred:
• Certification in Databricks, Azure Data Engineer, or AWS Data Architect.
• Experience with AI/ML data pipelines and modern DevOps practices.
• Knowledge of data visualization and BI tools (Looker, Power BI, Tableau).
• Strong analytical, leadership, and communication skills.

Key Skills:
• Data Modeling • Data Warehousing • Big Data • Cloud Architecture • Databricks • Spark • Python • Hive • ETL/ELT • Data Governance • SQL • Data Integration • Metadata Management • Analytics Enablement