

KDB+ / Q
Featured Role | Apply direct with Data Freelance Hub
This role is for a KDB+ / Q freelancer, contracted for "X months" at a pay rate of "$X/hour." Location is "remote." Key skills include KDB+, Q, time series, data ingestion, and monitoring. Experience with quant/trading firms is required.
Country
United States
Currency
$ USD
-
Day rate
-
Date discovered
July 24, 2025
Project duration
Unknown
-
Location type
Unknown
-
Contract type
Unknown
-
Security clearance
Unknown
-
Location detailed
New York, United States
-
Skills detailed
#Monitoring #Databases #TimeSeries #DataAggregation #DataIngestion #Deployment #Programming
Role description
Primary skill: KDB+ / Q. KDB+ is an ultra-low-latency, high-performance time-series database used by quant/trading firms where millisecond efficiency matters; q is the programming language used to interact with KDB+ databases.
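To give a flavor of the time-series workloads KDB+ is built for, here is a minimal Python sketch of a time-bucketed aggregation over hypothetical trade ticks (data and names are illustrative, not from the listing); in q this kind of query is typically a one-liner along the lines of `select avg price by 5 xbar time.minute from trade`:

```python
from datetime import datetime

# Hypothetical trade ticks: (timestamp, price) -- illustrative only.
trades = [
    (datetime(2025, 7, 24, 9, 30, 12), 101.0),
    (datetime(2025, 7, 24, 9, 31, 45), 103.0),
    (datetime(2025, 7, 24, 9, 36, 5), 98.0),
]

def bucket_5min(ts: datetime) -> datetime:
    """Floor a timestamp to its 5-minute bucket (analogous to q's xbar)."""
    return ts.replace(minute=ts.minute - ts.minute % 5, second=0, microsecond=0)

# Running average price per 5-minute bucket.
avg_by_bucket: dict[datetime, float] = {}
counts: dict[datetime, int] = {}
for ts, price in trades:
    b = bucket_5min(ts)
    counts[b] = counts.get(b, 0) + 1
    avg_by_bucket[b] = avg_by_bucket.get(b, 0.0) + (price - avg_by_bucket.get(b, 0.0)) / counts[b]

for b in sorted(avg_by_bucket):
    print(b.strftime("%H:%M"), round(avg_by_bucket[b], 2))
# → 09:30 102.0
# → 09:35 98.0
```

The point of kdb+ is that it performs this kind of bucketing and aggregation natively, in memory, over billions of ticks, which is why it dominates in latency-sensitive trading environments.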
Scope:
The client has a large installation of on-premises instances/servers (a KDB+ deployment). The vision is to develop monitoring/alerting capability that senses when bottlenecks might occur, proactively addresses those situations, and builds state-of-the-art telemetry.
• Discovery
  • Current KDB+/Q server deployment, usage patterns, and SLAs
  • Capture an inventory of available metrics, logs, and traces
  • Capture data ingestion and query pathways, including latency and throughput
• Assessment
  • Evaluate existing monitoring tools (e.g., Cerebro, Telegraf, custom tools) used to monitor and manage the KDB+ ecosystem and custom code
  • Identify gaps or inefficiencies in current monitoring coverage
  • Analyze the integration points and data consumption patterns of applications interacting with KDB+
  • Understand key application dependencies and performance considerations
• Analytics/Telemetry Dashboard
  • Design and implement dashboards: align with the client on the level at which dashboards will be aggregated, then visualize Service Level Indicators (SLIs) such as disk-space utilization, query latency, system availability, CPU/memory usage, and error rates
  • Recommendations for SLI thresholds and alerting
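The SLI threshold and alerting recommendation could be prototyped along these lines; the metric names and threshold values below are assumptions for illustration, not taken from the listing:

```python
# Hypothetical SLI snapshot for one KDB+ host -- values are illustrative.
sli_snapshot = {
    "disk_utilization_pct": 91.0,
    "query_latency_p99_ms": 340.0,
    "availability_pct": 99.95,
    "cpu_pct": 62.0,
    "error_rate_pct": 0.4,
}

# (threshold, direction): "above" fires when the value exceeds the limit,
# "below" fires when it drops under it (e.g., availability).
thresholds = {
    "disk_utilization_pct": (85.0, "above"),
    "query_latency_p99_ms": (250.0, "above"),
    "availability_pct": (99.9, "below"),
    "cpu_pct": (80.0, "above"),
    "error_rate_pct": (1.0, "above"),
}

def evaluate(snapshot: dict, rules: dict) -> list[str]:
    """Return an alert message for every SLI that breaches its threshold."""
    alerts = []
    for name, value in snapshot.items():
        limit, direction = rules[name]
        breached = value > limit if direction == "above" else value < limit
        if breached:
            alerts.append(f"ALERT {name}={value} breaches {direction}-threshold {limit}")
    return alerts

for alert in evaluate(sli_snapshot, thresholds):
    print(alert)
```

With this snapshot, the disk and p99-latency SLIs fire while availability, CPU, and error rate stay within bounds; in practice the alert output would feed the existing tooling (e.g., Telegraf pipelines) rather than stdout.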
Deliverables
• Overall optimization of telemetry:
  • Server environment profile report: automated report based on programmatically collected information on server specs, KDB+ versions, process topology, and deployment architecture
  • Data aggregation (process & session): list of in-scope running processes, user sessions, and their configurations, aggregated for analysis
  • Resource utilization and server log analysis: report and dashboard summarizing latency, CPU, memory, disk, and network usage patterns, with tuning and optimization recommendations for the client team to implement
  • Query performance analysis: identification and profiling of slow or resource-intensive queries, with optimization suggestions
  • Performance dashboard: visual dashboard for real-time monitoring of defined metrics
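The query performance analysis deliverable might start from something as simple as ranking queries by observed latency. A minimal sketch over hypothetical log records (the field layout, threshold, and query texts are assumptions for illustration):

```python
# Hypothetical per-query log records: (query text, elapsed milliseconds).
query_log = [
    ("select from trade where date=.z.d", 12.0),
    ("select from quote where sym=`AAPL", 8.0),
    ("select from trade where date=.z.d", 950.0),
    ("select avg px by sym from trade", 1800.0),
    ("select from quote where sym=`AAPL", 9.0),
]

def profile_slow_queries(log, threshold_ms=500.0):
    """Group records by query text; report count, max, and mean latency for
    queries whose worst observed run exceeds threshold_ms."""
    runs_by_query: dict[str, list[float]] = {}
    for text, ms in log:
        runs_by_query.setdefault(text, []).append(ms)
    report = []
    for text, runs in runs_by_query.items():
        worst = max(runs)
        if worst > threshold_ms:
            report.append({
                "query": text,
                "count": len(runs),
                "max_ms": worst,
                "mean_ms": sum(runs) / len(runs),
            })
    # Worst offenders first.
    return sorted(report, key=lambda r: r["max_ms"], reverse=True)

for row in profile_slow_queries(query_log):
    print(row["query"], "max:", row["max_ms"], "mean:", round(row["mean_ms"], 1))
```

In a real engagement the records would come from kdb+'s own query logging or timer instrumentation rather than a hard-coded list, and the report would feed the optimization suggestions in the deliverable.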