Jobs via Dice

Senior Data Architect

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Architect, hybrid in Dallas or New York, with a contract length of 3 months+. Key skills include advanced SQL, Azure Data Factory, and Databricks. Five years of experience preferred; strong performance tuning and data architecture knowledge required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 18, 2026
🕒 - Duration
3 to 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Dallas, TX
-
🧠 - Skills detailed
#Data Pipeline #ACID (Atomicity, Consistency, Isolation, Durability) #Azure Databricks #Data Modeling #Azure #BI (Business Intelligence) #Big Data #Compliance #Data Integrity #Datasets #SQL Queries #Microservices #Databricks #Batch #ADF (Azure Data Factory) #Scala #AI (Artificial Intelligence) #Database Architecture #Indexing #Monitoring #Cloud #Semantic Models #Data Engineering #DevOps #Security #ML (Machine Learning) #Normalization #Visualization #Microsoft Power BI #Apache Spark #SQL (Structured Query Language) #Storage #Data Processing #Leadership #Data Governance #Azure Data Factory #Delta Lake #Spark (Apache Spark) #Data Architecture
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Charter Global, Inc., is seeking the following. Apply via Dice today!

Job Title: Senior Data Architect
Location: Dallas, TX or New York City
Duration: 3 months+

Notes: Please see the details below regarding the Senior Data Architect position. The preferred location for this role is Dallas, but the team is open to candidates based in New York. The position is hybrid, though candidates who can come into the office five days a week will also be considered. The role requires alignment with CST work hours. When reviewing resumes, the team is especially interested in candidates who are motivated and eager to build a long-term career, not just seeking a job or short-term assignments. Strong fundamentals are a must. Ideally, candidates should have at least five years of experience, but there are no strict requirements regarding industry experience.
The top three technical or functional requirements for this role are strong SQL and optimization skills, experience with AI tools, and proficiency with Azure Fabric and/or Databricks.

Contract description:

Key Responsibilities:
• Database & System Architecture: Design and implement scalable, secure database architectures for transactional and analytical use cases. Define data models, partitioning strategies, indexing, and normalization standards. Ensure data integrity, reliability, and maintainability across enterprise systems. Support architectural decisions for high-availability and fault-tolerant systems.
• Performance Tuning & Optimization: Analyze and optimize complex SQL queries and execution plans. Identify and resolve performance bottlenecks in high-volume environments. Implement proactive monitoring and alerting strategies. Improve system reliability and response times for mission-critical workloads.
• Cloud & Distributed Systems: Deploy and manage microservices-based data solutions on Azure Service Fabric. Design stateful and stateless services for data-intensive and distributed workloads. Ensure scalability, fault tolerance, and resilience across cloud environments. Optimize service communication patterns and resource utilization.
• Modern Data Engineering & Analytics: Design and orchestrate batch and streaming pipelines using Azure Data Factory. Build and optimize big data processing workflows on Azure Databricks using Apache Spark. Implement Delta Lake and Delta Tables to support ACID transactions, data versioning, and schema evolution. Enable advanced analytics and machine learning workloads on large-scale data platforms.
• Reporting & Data Consumption: Partner with BI teams to support Power BI semantic models and reporting needs. Ensure data pipelines deliver clean, consistent, and trusted datasets. Optimize data structures for analytical and visualization performance. Support self-service analytics and enterprise reporting initiatives.
• Collaboration & Leadership: Collaborate with architects, DevOps, and business stakeholders to align solutions with strategic goals. Provide technical guidance and mentorship to junior engineers. Contribute to engineering best practices, standards, and design reviews. Act as a subject-matter expert for data platform decisions.

Qualifications:
• Advanced SQL expertise, including query optimization and transaction handling.
• Strong experience with performance tuning in complex, high-volume systems.
• Hands-on experience with Azure Service Fabric, Azure Data Factory, Azure Databricks, and Azure Storage.
• Deep knowledge of Apache Spark, Delta Lake, and distributed data processing.
• Familiarity with Power BI and data modeling best practices.

Preferred:
• Experience implementing CI/CD pipelines for data and analytics solutions.
• Knowledge of data governance, security, and compliance frameworks.
• Exposure to machine learning pipelines and real-time streaming architectures.
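To give a concrete sense of the indexing and query-plan work this role calls for, here is a minimal sketch in Python using SQLite (chosen purely for portability; the role's actual platforms such as Azure SQL and Databricks expose the same idea through their own EXPLAIN facilities, and the table and index names below are illustrative, not from the posting):

```python
import sqlite3

# Sketch: how adding an index changes the query plan from a full scan to a seek.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(10_000)],
)

query = "SELECT SUM(total) FROM orders WHERE customer_id = ?"

# Without an index on customer_id, the planner must scan the whole table.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchone()[-1]
print(plan_before)  # e.g. "SCAN orders"

# A covering index on (customer_id, total) lets the planner seek instead,
# answering the query from the index alone.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id, total)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchone()[-1]
print(plan_after)  # e.g. "SEARCH orders USING COVERING INDEX idx_orders_customer ..."
```

The exact plan wording varies by SQLite version, but the scan-to-seek shift is the tuning pattern the posting's "analyze and optimize complex SQL queries and execution plans" bullet describes.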