Openkyber

Cognos Optimization Specialist

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Cognos Optimization Specialist, a remote contract position in Austin, Texas. Key skills include AWS data architecture, Databricks, ETL/ELT pipelines, and data governance. A Bachelor's degree and experience in enterprise-scale data architectures are required.
🌎 - Country
United States
💱 - Currency
Unknown
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 26, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Texas
-
🧠 - Skills detailed
#Strategy #RDS (Amazon Relational Database Service) #Leadership #Schema Design #SQL (Structured Query Language) #S3 (Amazon Simple Storage Service) #Redshift #Data Ingestion #Data Quality #Data Engineering #Scala #Data Pipeline #Compliance #Computer Science #Cloud #Data Modeling #Informatica #Metadata #Storage #Jira #Agile #Tableau #Big Data #Data Architecture #ETL (Extract, Transform, Load) #Data Processing #Data Integration #Data Storage #NoSQL #Data Lifecycle #AWS Glue #Delta Lake #Batch #Databricks #ML (Machine Learning) #AWS (Amazon Web Services) #DevOps #Data Strategy #Data Governance
Role description
City: Austin | State: Texas

Neos is seeking a Data Architect for a contract role with our client in Austin, TX.

REMOTE - ONLY CANDIDATES CURRENTLY RESIDING IN AUSTIN, TEXAS WILL BE CONSIDERED

No calls, no emails, please respond directly to the "apply" link with your resume and contact details.

Data Architect - Enterprise Data Modernization (D2I Initiative)

Environment: Higher Education | Enterprise Data | Hybrid Mainframe + Cloud

Position Overview

The Data Architect will support UT Austin's enterprise Data to Insights (D2I) modernization initiative. This role will design and guide the transition from legacy mainframe-based data environments to a modern cloud-based data platform leveraging Databricks, while supporting downstream analytics tools including Tableau and Cognos. This is a high-visibility, enterprise-level architecture role focused on data integration, governance, scalability, and performance across academic and administrative domains.

Key Responsibilities

The Data Architect will play a key role in designing, implementing, and scaling cloud-based data architectures within our AWS environment. As we implement Databricks, this role will focus on modernizing our unified data platform to enable advanced analytics, machine learning, and real-time data processing. This position will collaborate closely with Data Engineering, DevOps, Data Modeling, Analytics, and Metadata teams to ensure scalable, efficient, and well-governed data solutions. Partnering with the Chief Enterprise Data Architect, you will drive data strategy, architecture, and implementation that align with our business.

Cloud Data Architecture

- Design and implement robust, scalable, and secure AWS-based data architectures with a focus on Databricks adoption.
- Partner with the Analytics and Data Modeling Group to ensure alignment on data modeling standards, schema design, and integration with data pipelines.
- Architect efficient ETL/ELT pipelines for data ingestion, transformation, and delivery, supporting operational and analytical workloads.
- Develop and maintain comprehensive data strategies that align with enterprise goals, enabling real-time and batch data processing.
- Create technical artifacts, standards, and architectural frameworks to address current and future business requirements.
- Ensure data quality, governance, and compliance throughout the data lifecycle.

Databricks Implementation and Leadership

- Lead the implementation and optimization of Databricks for advanced data engineering, analytics, and machine learning workloads.
- Drive the adoption of Delta Lake architectures for high-performance data pipelines.
- Develop and operationalize scalable architectures for collaborative notebooks, machine learning workflows, and real-time processing.

Collaboration and Innovation

- Collaborate with Data Engineering, DevOps, Data Modeling, Analytics, and Metadata teams to align on architectural decisions, standards, and processes.
- Serve as a technical advisor to stakeholders, effectively communicating complex data concepts to leadership and cross-functional teams.
- Foster a culture of collaboration, innovation, and continuous improvement within the team.

Required Qualifications

- Bachelor's degree in Computer Science, Information Systems, Engineering, or equivalent experience.
- Proven experience designing and implementing enterprise-scale data architectures in AWS environments.
- Strong expertise in data modeling, schema design, and database structures, with experience working closely with modeling teams.
- Hands-on experience with Databricks for big data processing, analytics, and machine learning.
- Proficiency in building ETL/ELT pipelines and working with data integration tools (e.g., AWS Glue, Informatica).
- Deep understanding of SQL, NoSQL, and data storage technologies (e.g., Redshift, RDS, S3).
- Experience ensuring data governance, quality, and compliance.
- Strong troubleshooting skills and the ability to optimize performance of cloud data solutions.
- Collaborative team player with excellent communication and leadership skills.
- Relevant education and experience may be substituted as appropriate.

Preferred Qualifications

- Master's degree in a relevant field.
- Certifications in AWS (e.g., AWS Solutions Architect) and Databricks (e.g., Databricks Certified Professional).
- Expertise in Delta Lake architecture design and implementation.
- Familiarity with Agile development methodologies and tools (e.g., JIRA, Confluence).
- Proven experience leading data modernization initiatives across cross-functional teams.

For applications and inquiries, contact: hirings@openkyber.com