

Data Architect
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Architect, remote, with an unspecified contract length and pay rate. It requires 12+ years in data architecture, 7+ years with Databricks, and expertise in cloud platforms, big data, and data governance.
Country
United States
Currency
Unknown
Day rate
Unknown
Date discovered
August 2, 2025
Project duration
Unknown
Location type
Unknown
Contract type
Unknown
Security clearance
Unknown
Location detailed
United States
Skills detailed
#MLflow #SQL (Structured Query Language) #Data Modeling #Azure #Data Lake #Delta Lake #Data Warehouse #Data Ingestion #GCP (Google Cloud Platform) #Data Architecture #Apache Spark #Big Data #DevOps #Airflow #Data Processing #Kafka (Apache Kafka) #Python #Scala #Data Governance #ETL (Extract, Transform, Load) #Cloud #Data Engineering #Security #Spark SQL #AWS (Amazon Web Services) #Data Science #ML (Machine Learning) #Data Security #Spark (Apache Spark) #Databricks #Leadership #Compliance
Role description
Key Responsibilities:
- Design, architect, and implement scalable data platforms using Databricks, Apache Spark, and cloud-native services (preferably Azure or AWS).
- Collaborate with cross-functional teams to define data strategies, data models, and architecture patterns aligned with business goals.
- Develop and optimize ELT/ETL pipelines, data lakes, and data warehouse solutions.
- Define and enforce data governance, security, and best practices for data ingestion, transformation, and analytics.
- Lead the modernization of legacy data systems and migrate workloads to Databricks-based platforms.
- Ensure high performance, availability, and scalability of data processing systems.
- Provide technical leadership and mentoring to data engineers and analysts.
- Work with product and business teams to translate data needs into architectural solutions.
Required Skills & Experience:
- 12+ years of experience in Data Architecture, Data Engineering, or Analytics roles.
- Minimum 7 years of hands-on experience with Databricks, including Delta Lake, Spark SQL, MLflow, and Unity Catalog.
- Strong experience with big data technologies: Apache Spark, Kafka, Hive, Airflow, etc.
- Deep understanding of data modeling, dimensional modeling, and data warehousing concepts.
- Proficiency in Python, SQL, and Scala (preferred).
- Experience with cloud platforms such as Azure, AWS, or GCP (Azure preferred).
- Familiarity with CI/CD, DevOps for data, and infrastructure-as-code.
- Strong understanding of data security, governance, and compliance standards.
- Excellent problem-solving, communication, and stakeholder management skills.
Preferred Qualifications:
- Databricks Certification (e.g., Databricks Certified Data Engineer or Architect)
- Experience working in regulated industries (e.g., finance, healthcare)
- Background in data science or machine learning integration is a plus
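For context, a minimal sketch (not part of the posting) of the kind of Databricks and Delta Lake pipeline work the role describes: ingesting raw data into a Delta table and querying it with Spark SQL. It assumes a Databricks or local PySpark environment with the delta-spark package; all paths and table names are illustrative.

```python
# Illustrative only: a tiny ETL-to-Delta-Lake pipeline of the sort the
# posting describes. Paths and table names are hypothetical.
from pyspark.sql import SparkSession, functions as F

# On Databricks, `spark` is provided; locally, Delta needs these session
# settings plus the delta-spark package installed.
spark = (
    SparkSession.builder.appName("delta-etl-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Ingest: read raw JSON events from an assumed landing zone, dedupe,
# and stamp each row with its ingestion time.
raw = spark.read.json("/mnt/raw/events")
clean = (
    raw.dropDuplicates(["event_id"])
       .withColumn("ingested_at", F.current_timestamp())
)

# Load: append into a Delta table (ACID writes, time travel).
spark.sql("CREATE DATABASE IF NOT EXISTS analytics")
clean.write.format("delta").mode("append").saveAsTable("analytics.events")

# Analyze with Spark SQL, per the posting's skill list.
spark.sql("""
    SELECT date_trunc('day', ingested_at) AS day, COUNT(*) AS events
    FROM analytics.events
    GROUP BY 1
    ORDER BY 1
""").show()
```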