KDB+/Q Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a KDB+/Q Engineer with a contract length of "X months" at a pay rate of "$X/hour". Required skills include KDB+, Q programming, and experience with monitoring tools. Location is "remote/on-site". Industry experience in quant/trading firms is essential.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
πŸ—“οΈ - Date discovered
August 15, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Unknown
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
πŸ“ - Location detailed
New York, United States
-
🧠 - Skills detailed
#Deployment #Programming #Data Aggregation #Databases #Time Series #Data Ingestion #Monitoring
Role description
Primary skill: KDB+ / Q. KDB+ is an ultra-low-latency, high-performance time-series database used by quant/trading firms that depend on millisecond efficiency; q is the programming language used to interact with KDB+ databases.

Scope: The client has a large installation of on-premises instances/servers (a KDB+ deployment). The vision is to develop monitoring/alerting capability that senses when bottlenecks might happen, proactively addresses those situations, and builds state-of-the-art telemetry across the following areas:

• Discovery
  • Current KDB+/Q server deployment, usage patterns, and SLAs
  • Capture an inventory of available metrics, logs, and traces
  • Capture data-ingestion and query pathways, including latency and throughput
• Assessment
  • Evaluate existing monitoring tools (e.g., Cerebro, Telegraf, custom tools) used to monitor and manage the KDB+ ecosystem and custom code
  • Identify gaps or inefficiencies in current monitoring coverage
  • Analyze the integration points and data-consumption patterns of applications interacting with KDB+
  • Understand key application dependencies and performance considerations
• Analytics/Telemetry Dashboard
  • Design and implement dashboards: align with the client on the level at which dashboards will be aggregated, then visualize Service Level Indicators (SLIs) such as disk-space utilization, query latency, system availability, CPU/memory usage, and error rates
  • Recommendations for SLI thresholds and alerting

Deliverables
• Overall optimization of telemetry:
  • Server environment profile report: generate an automated report based on programmatically collected information about server specs, KDB+ versions, process topology, and deployment architecture
  • Data aggregation (process & session): list of in-scope running processes, user sessions, and their configurations, with the data aggregated for analysis
  • Resource utilization and server log analysis: report and dashboard summarizing latency, CPU, memory, disk, and network usage patterns, with tuning and optimization recommendations for the client team to carry out
  • Query performance analysis: identification and profiling of slow or resource-intensive queries, with optimization suggestions
  • Performance dashboard: visual dashboard for real-time monitoring of defined metrics
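For a flavor of the telemetry work described above, here is a minimal q sketch that snapshots per-process memory statistics on a timer. The table and column names are illustrative assumptions, not the client's schema; `.Q.w[]` and `.z.ts` are standard kdb+ facilities.

```q
/ illustrative in-memory metrics table (names are assumptions, not the client's)
metrics:([] ts:`timestamp$(); used:`long$(); heap:`long$(); peak:`long$())

/ .Q.w[] returns a dictionary of memory stats for the current process
snap:{w:.Q.w[]; `metrics insert (.z.p; w`used; w`heap; w`peak)}

/ sample every 5 seconds via the timer callback
.z.ts:{[x] snap[]}
\t 5000
```

In a real deployment this data would typically be shipped off-process (e.g., to a central tickerplant or a Telegraf collector) rather than accumulated in the monitored instance itself.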