AI Coding Agents
AI coding agents like Claude Code and OpenAI Codex can help you build applications that use QuestDB.
These agents work with QuestDB out of the box by reading the online documentation. For deeper integration, agent skills embed rich context directly into the agent so it can work faster and more accurately. The QuestDB agent skill covers SQL, ingestion, and Grafana dashboards, while the TSBS Benchmark skill automates end-to-end performance benchmarking.
Getting started
No setup required. Use the public QuestDB demo with Claude Code:
You: "Use QuestDB's REST API at https://demo.questdb.io/ to list all tables"
Claude Code: [Queries /exec endpoint and lists available tables including trades]
You: "Query the trades table and show me the last 10 trades. Data is time-ordered natively, no ORDER BY needed"
Claude Code: [Sends SQL via HTTP: SELECT * FROM trades LIMIT -10]
You: "What's the total volume traded per symbol, sampled by 1 hour? Use SAMPLE BY"
Claude Code: [Writes and executes SAMPLE BY 1h query grouped by symbol]
You: "Plot the price of BTC-USDT over the last 30 days"
Claude Code: [Queries data and generates a chart using matplotlib]
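Behind the scenes, these answers come from plain HTTP calls to the demo's /exec endpoint, so you can reproduce them by hand. A minimal sketch in Python using the `requests` library (column names follow the public demo's `trades` table; adjust for your own schema):

```python
import requests

# Hourly traded volume per symbol over the last day, written the way the
# agent would with SAMPLE BY (no ORDER BY needed: data is time-ordered).
QUERY = """
SELECT timestamp, symbol, sum(amount) AS volume
FROM trades
WHERE timestamp > dateadd('d', -1, now())
SAMPLE BY 1h
"""

resp = requests.get("https://demo.questdb.io/exec", params={"query": QUERY}, timeout=30)
resp.raise_for_status()
payload = resp.json()

# /exec returns column metadata plus a row-oriented dataset.
columns = [c["name"] for c in payload["columns"]]
for row in payload["dataset"][:5]:
    print(dict(zip(columns, row)))
```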
Connect to your own QuestDB
- Install Claude Code: https://claude.ai/code
- Start QuestDB (default port 9000)
- Ask Claude Code to connect and explore
You: "Connect to my QuestDB at localhost:9000 and show me what tables I have"
Claude Code: I'll query the QuestDB REST API to list your tables.
[Executes curl command and shows results]
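The curl call the agent runs is equivalent to hitting the same /exec endpoint on your own instance. A minimal sketch, assuming the default HTTP port 9000 and no authentication:

```python
import requests

# SHOW TABLES lists every table on the local instance.
resp = requests.get(
    "http://localhost:9000/exec",
    params={"query": "SHOW TABLES"},
    timeout=10,
)
resp.raise_for_status()

for row in resp.json()["dataset"]:
    print(row[0])  # first column is the table name
```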
QuestDB agent skill
The QuestDB agent skill is an experimental skill for Claude Code and Codex. It embeds QuestDB-specific knowledge directly into the agent's context - SQL syntax, common mistakes, ingestion patterns, Grafana templates, and financial indicator recipes - so the agent can build complete data pipelines without searching the docs for every step. For topics not covered by the skill, the agent falls back to the online documentation automatically.
Installation
Copy the questdb/ folder from the repository into your skills directory:
Claude Code:
- `~/.claude/skills/questdb/` - available in all projects
- `<project>/.claude/skills/questdb/` - available in a specific project
Codex:
- `~/.codex/skills/questdb/` - available in all projects
- `<project>/.codex/skills/questdb/` - available in a specific project
The folder must contain SKILL.md and the references/ directory.
What's included
- SQL reference - QuestDB-specific syntax including `SAMPLE BY`, `LATEST ON`, `ASOF JOIN`, window functions, and materialized views
- Common mistakes - 50+ patterns to avoid when coming from PostgreSQL (e.g., `time_bucket()` does not exist, use `SAMPLE BY` instead)
- Ingestion patterns - Python templates for ILP ingestion, including array support for order book data (see the sketch after this list)
- Grafana integration - Dashboard deployment via API, datasource configuration, and ready-to-use panel queries
- Financial indicators - 20+ indicator recipes ready for Grafana panels
- Enterprise authentication - REST and ILP token configuration, ACL setup
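As a rough illustration of the embedded ingestion patterns, here is a minimal ILP sketch using the official `questdb` Python client over HTTP; the table and column names are illustrative, and an Enterprise setup would add token settings to the configuration string:

```python
from questdb.ingress import Sender, TimestampNanos  # pip install questdb

# ILP over HTTP on the default REST port; Enterprise deployments append
# authentication settings to this configuration string.
CONF = "http::addr=localhost:9000;"

with Sender.from_conf(CONF) as sender:
    sender.row(
        "trades",
        symbols={"symbol": "BTC-USDT", "side": "buy"},
        columns={"price": 62450.5, "amount": 0.012},
        at=TimestampNanos.now(),
    )
    # Rows are buffered and auto-flushed; flush() forces delivery now.
    sender.flush()
```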
Example prompt
With QuestDB and Grafana running locally:
Build a real-time crypto market data pipeline using cryptofeed (OKX exchange)
ingesting trades and L2 order book data into QuestDB.
Symbols: BTC-USDT, ETH-USDT, SOL-USDT.
Then create a Grafana dashboard with OHLC candlesticks, VWAP,
Bollinger Bands, and RSI panels, with a symbol dropdown.
The agent will create the database schema, write the ingestion script, wait for data, and deploy a Grafana dashboard - all in a single pass.
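For a sense of what the schema step produces, the DDL is just SQL submitted through the REST API. The sketch below is an assumption about what the agent might generate for the trades feed, not the skill's fixed output:

```python
import requests

# Illustrative trades schema: designated timestamp, daily partitions, WAL enabled.
DDL = """
CREATE TABLE IF NOT EXISTS trades (
    timestamp TIMESTAMP,
    symbol SYMBOL,
    side SYMBOL,
    price DOUBLE,
    amount DOUBLE
) TIMESTAMP(timestamp) PARTITION BY DAY WAL;
"""

resp = requests.get("http://localhost:9000/exec", params={"query": DDL}, timeout=10)
resp.raise_for_status()
print(resp.json())  # DDL statements return a simple acknowledgement
```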
Built-in indicators
The following indicators are embedded in the skill and can be generated without online lookups:
Aggressor imbalance, ATR, Bid-ask spread, Bollinger Bands, Bollinger BandWidth, Compound interest, Cumulative product, Donchian Channels, Keltner Channels, Liquidity comparison, MACD, Maximum drawdown, OBV, OHLC bars, Rate of Change, Realized volatility, Rolling std dev, RSI, Stochastic Oscillator, TICK & TRIN, Volume profile, Volume spikes, VWAP.
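For example, the VWAP recipe reduces to a keyed SAMPLE BY aggregation. A sketch against the public demo's `trades` schema (`symbol`, `price`, `amount`, designated `timestamp`):

```python
import requests

# Volume-weighted average price per symbol in 15-minute buckets.
VWAP = """
SELECT
    timestamp,
    symbol,
    sum(price * amount) / sum(amount) AS vwap
FROM trades
WHERE timestamp > dateadd('d', -1, now())
SAMPLE BY 15m
"""

resp = requests.get("https://demo.questdb.io/exec", params={"query": VWAP}, timeout=30)
resp.raise_for_status()
for row in resp.json()["dataset"][:10]:
    print(row)
```

In a Grafana panel, the fixed WHERE clause would typically be replaced by Grafana's time-range macro so the query follows the dashboard's time picker.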
TSBS Benchmark skill
The TSBS Benchmark skill equips Claude Code and Codex to run end-to-end Time Series Benchmark Suite (TSBS) benchmarks against QuestDB. It installs prerequisites, spins up QuestDB in Docker, builds TSBS, generates data, loads it, runs all query benchmarks, and cleans up - all from a single prompt.
Installation
Copy claude/SKILL.md from the repository into your skills directory:
Claude Code:
- `~/.claude/skills/tsbs-benchmark/SKILL.md` - available in all projects
- `<project>/.claude/skills/tsbs-benchmark/SKILL.md` - available in a specific project
Codex:
- `~/.codex/skills/tsbs-benchmark/SKILL.md` - available in all projects
- `<project>/.codex/skills/tsbs-benchmark/SKILL.md` - available in a specific project
What it handles
The skill orchestrates eight sequential steps:
- Prerequisites - validates and installs Docker, Go 1.22.5, and build tools (make, gcc, gzip)
- QuestDB deployment - launches the latest QuestDB container with ports 9000, 9009, 8812, and 9003
- TSBS compilation - clones and builds four QuestDB-specific binaries from the TSBS repository
- Data generation - creates ~12 GB of uncompressed benchmark data (34.5M rows, 345.6M metrics)
- Data loading - ingests via ILP with worker threads matching CPU cores (capped at 32)
- Query generation - produces 1,000 queries for each of the 16 query types
- Benchmark execution - runs all queries in single-worker mode so QuestDB's internal parallelization is measured accurately
- Cleanup - removes the Docker container and temporary files
Benchmark parameters
| Parameter | Value |
|---|---|
| Use case | cpu-only |
| Scale | 4,000 hosts |
| Time window | 1 day (2016-01-01 to 2016-01-02) |
| Log interval | 10 seconds |
| Rows generated | 34.5M |
| Metrics generated | 345.6M |
| Query types | 16 (cpu-max variants, single/double-groupby, high-cpu, lastpoint, groupby-orderby-limit) |
| Queries per type | 1,000 |
Example prompt
Run the full TSBS benchmark against QuestDB with the default cpu-only dataset.
The agent will handle everything from installing prerequisites through reporting the final query-by-query results.
Tips
- Provide context - Tell the agent about your use case, data volume, and requirements
- Ask follow-up questions - Agents remember context within a session
- Request explanations - Ask "why?" to understand recommendations
- Iterate on code - Ask the agent to modify or improve generated code
Next steps
- REST API reference - API documentation
- SQL overview - QuestDB SQL syntax
- Client libraries - Official client libraries
- Sample datasets - Example data to try