Augusta Hitech

Data Architect

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Architect position for a 6-month contract, offering a pay rate of "X" per hour. Key skills include data modeling, cloud architecture, and ETL/ELT expertise. A Bachelor’s or Master’s degree and 10+ years of relevant experience are required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 7, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Hadoop #Data Governance #Data Quality #AWS (Amazon Web Services) #Data Modeling #Visualization #SQL (Structured Query Language) #Big Data #DevOps #BI (Business Intelligence) #Data Pipeline #Metadata #Data Integration #Data Science #Data Lake #Cloud #ML (Machine Learning) #Tableau #Kafka (Apache Kafka) #Data Architecture #Compliance #AWS Glue #ETL (Extract, Transform, Load) #GCP (Google Cloud Platform) #Leadership #Strategy #AI (Artificial Intelligence) #Data Processing #Looker #Databricks #Spark (Apache Spark) #Azure #ADF (Azure Data Factory) #Microsoft Power BI #Data Engineering #Migration #NoSQL #Azure Data Factory #Informatica #Scala #Python #Computer Science #BigQuery #Data Management #Data Mart
Role description
Summary: We are seeking an experienced Data Architect to design and lead enterprise-level data solutions that support advanced analytics, business intelligence, and AI initiatives. The ideal candidate will bring deep expertise in data warehousing, big data engineering, and cloud architecture, ensuring the organization’s data ecosystem is scalable, secure, and future-ready.

Key Responsibilities:
• Design and maintain the enterprise data architecture blueprint to support analytics, reporting, and data science initiatives.
• Lead the development of data models, pipelines, and integration frameworks across on-prem and cloud environments.
• Define and enforce data standards, governance, and metadata management policies.
• Design scalable ETL/ELT pipelines using tools such as Spark, Hive, SQL, and Databricks.
• Implement best practices in data warehousing, data lakes, and data marts for performance and reliability.
• Architect and manage data solutions in cloud environments (Azure, AWS, GCP).
• Lead data platform modernization and migration initiatives from legacy systems to cloud-native architectures.
• Partner with engineering teams to enable real-time data processing and streaming architectures (Kafka, Spark Streaming).
• Support the adoption of AI/ML capabilities through modernized and well-structured data pipelines.
• Work closely with business, engineering, and analytics teams to translate business needs into technical data solutions.
• Collaborate with enterprise architects to align data architecture with overall IT strategy.
• Provide technical leadership to data engineers, analysts, and developers.
• Ensure data solutions are designed for scalability, quality, and compliance.

Qualifications:

Required:
• Bachelor’s or Master’s degree in Computer Science, Engineering, or a related discipline.
• 10+ years of experience in data architecture, data engineering, or enterprise data solutions.
• Hands-on experience with SQL, Python, Spark, Hive, Hadoop, and ETL frameworks.
• Strong understanding of data warehousing concepts and data modeling techniques (dimensional, relational, and NoSQL).
• Experience with cloud data ecosystems (Azure Data Factory, Databricks, AWS Glue, GCP BigQuery).
• Familiarity with data governance, data quality, and metadata tools such as Apache Atlas or Informatica.

Preferred:
• Certification in Databricks, Azure Data Engineer, or AWS Data Architect.
• Experience with AI/ML data pipelines and modern DevOps practices.
• Knowledge of data visualization and BI tools (Looker, Power BI, Tableau).
• Strong analytical, leadership, and communication skills.

Key Skills:
• Data Modeling • Data Warehousing • Big Data • Cloud Architecture • Databricks • Spark • Python • Hive • ETL/ELT • Data Governance • SQL • Data Integration • Metadata Management • Analytics Enablement