Glama
133,443 tools. Last updated 2026-05-13 00:12

"postgres" matching MCP tools:

  • ⚠️ SQL MUST BE VALID IN EVERY DIALECT YOU TARGET — stick to ANSI-ish SELECT syntax when mixing pg/mysql/mssql. `SELECT TOP 10` (mssql) or `LIMIT` (others) will fail on the wrong side. Run the same query across 2-4 connections in parallel; returns per-connection rows + errors for diffing. Canonical use cases: regional compare (`['mssql-reporting-us', 'mssql-reporting-eu']`), cross-dialect sync check (`['prod-postgres-fleet', 'prod-mysql-app']`), 3-env drift, 4-region compare. Resolve every connection name via `list_connections` first; tool fails per-connection on unknown names. ARCHITECT-tier cap: 4 connections; https://www.thinair.co/ for unlimited. [ARCHITECT tier]
    Connector
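A portable shape for the cross-dialect tool above, as a minimal sketch: the table and columns are hypothetical, and row limiting is deliberately omitted because `LIMIT`/`TOP` diverge.

```sql
-- Runs unchanged on postgres, mysql, and mssql: ANSI aggregates and an
-- ISO date literal only; no LIMIT/TOP, no NOW()/GETDATE().
SELECT region, COUNT(*) AS order_count
FROM orders
WHERE created_at >= '2026-01-01'
GROUP BY region
ORDER BY region;
```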
  • Execute a read-only SQL query against the target connection. ONLY SELECT / WITH / EXPLAIN permitted. Write dialect-appropriate SQL for the connection's engine — use PostgreSQL syntax for postgres connections (`SELECT NOW()`, `LIMIT`, `ILIKE`), T-SQL for mssql (`SELECT GETDATE()`, `TOP N`, `LIKE`), MySQL for mysql (`SELECT NOW()`, `LIMIT`). Response meta includes `connection` + `dialect` so you know which syntax worked; reuse that dialect in follow-up calls. Default LIMIT 100 unless the user asks for all rows.
    Connector
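For a postgres connection, a dialect-correct call to the query tool above might look like this; the table and columns are illustrative, not part of the tool's contract.

```sql
-- PostgreSQL dialect: ILIKE and LIMIT are valid here but would fail on mssql.
SELECT id, email, created_at
FROM customers
WHERE email ILIKE '%@example.com'
ORDER BY created_at DESC
LIMIT 100;  -- the tool's default row cap unless the user asks for all rows
```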
  • Execute raw, client-provided SQL queries against an ephemeral database initialized with the provided schema. Returns query results in a simple JSON format with column headers and row data as a 2D array. The database type (SQLite or Postgres) is specified via the databaseType parameter:
    - SQLITE: in-memory, lightweight, uses standard SQLite syntax
    - POSTGRES: temporary isolated schema with dedicated user, uses PostgreSQL syntax and features
    WHEN TO USE: when you need to run your own hand-written SQL queries to test database behavior or compare the output with ExoQuery results from validateAndRunExoquery. This lets you verify that ExoQuery-generated SQL produces the same results as your expected SQL.
    INPUT REQUIREMENTS:
    - query: a valid SQL query (SELECT, INSERT, UPDATE, DELETE, etc.)
    - schema: SQL schema with CREATE TABLE and INSERT statements to initialize the test database
    - databaseType: either "SQLITE" or "POSTGRES" (defaults to SQLITE if not specified)
    OUTPUT FORMAT: on success, returns JSON with the SQL query and a 2D array of results:
    {"sql":"SELECT * FROM users ORDER BY id","output":[["id","name","age"],["1","Alice","30"],["2","Bob","25"],["3","Charlie","35"]]}
    - The first array element contains column headers
    - Subsequent array elements contain row data
    - All values are returned as strings
    On error, returns JSON with the error message and the attempted query (if available):
    {"error":"Query execution failed: no such table: USERS","sql":"SELECT * FROM USERS"}
    Or, if schema initialization fails:
    {"error":"Database initialization failed due to: near \"CREAT\": syntax error\\nWhen executing the following statement:\\n--------\\nCREAT TABLE users ...\\n--------","sql":"CREAT TABLE users ..."}
    EXAMPLE INPUT:
    Query: SELECT * FROM users ORDER BY id
    Schema:
    CREATE TABLE users ( id INTEGER PRIMARY KEY, name TEXT NOT NULL, age INTEGER );
    INSERT INTO users (id, name, age) VALUES (1, 'Alice', 30);
    INSERT INTO users (id, name, age) VALUES (2, 'Bob', 25);
    INSERT INTO users (id, name, age) VALUES (3, 'Charlie', 35);
    EXAMPLE SUCCESS OUTPUT:
    {"sql":"SELECT * FROM users ORDER BY id","output":[["id","name","age"],["1","Alice","30"],["2","Bob","25"],["3","Charlie","35"]]}
    EXAMPLE ERROR OUTPUT (bad table name):
    {"error":"Query execution failed: no such table: invalid_table","sql":"SELECT * FROM invalid_table"}
    EXAMPLE ERROR OUTPUT (bad schema):
    {"error":"Database initialization failed due to: near \"CREAT\": syntax error\\nWhen executing the following statement:\\n--------\\nCREAT TABLE users (id INTEGER)\\n--------\\nCheck that the initialization SQL is valid and compatible with SQLite.","sql":"CREAT TABLE users (id INTEGER)"}
    COMMON QUERY EXAMPLES:
    - Select all rows: SELECT * FROM users
    - Select specific columns with filtering: SELECT name, age FROM users WHERE age > 25
    - Aggregate functions: SELECT COUNT(*) as total FROM users
    - Join queries: SELECT u.name, o.total FROM users u JOIN orders o ON u.id = o.user_id
    - Insert data: INSERT INTO users (name, age) VALUES ('David', 40)
    - Update data: UPDATE users SET age = 31 WHERE name = 'Alice'
    - Delete data: DELETE FROM users WHERE age < 25
    - Count with grouping: SELECT age, COUNT(*) as count FROM users GROUP BY age
    SCHEMA RULES:
    - Use standard SQLite syntax
    - Table names are case-sensitive (use lowercase for simplicity, or quote names)
    - Include INSERT statements to populate test data for meaningful results
    - Supported data types: INTEGER, TEXT, REAL, BLOB, NULL
    - Use INTEGER PRIMARY KEY for auto-increment columns
    - Schema SQL is split on semicolons (;), so each statement after a ';' is executed separately
    - Avoid semicolons in comments, as they will cause statement-parsing issues
    COMPARISON WITH EXOQUERY: this tool is designed to work alongside validateAndRunExoquery for comparison purposes:
    1. Use validateAndRunExoquery to run ExoQuery Kotlin code and see the generated SQL + results.
    2. Use runRawSql with your own hand-written SQL to verify you get the same output.
    3. Compare the outputs to ensure ExoQuery generates the SQL you expect.
    4. Test edge cases with plain SQL before writing the equivalent ExoQuery code.
    Connector
  • Checks if a Cloud SQL for PostgreSQL instance is ready for a major version upgrade to the specified target version. The `target_database_version` MUST be provided in the request (e.g., `POSTGRES_15`). This tool helps identify potential issues *before* attempting the actual upgrade, reducing the risk of failure or downtime. This tool is only supported for PostgreSQL primary instances and does not run on read replicas. The precheck typically evaluates: - Database schema compatibility with the target version. - Cloud SQL limitations and unsupported features. - Instance resource constraints (e.g., number of relations). - Compatibility of current database settings and extensions. - Overall instance health and readiness. This tool returns a long-running operation. Use the `get_operation` tool with the operation name returned by this call to poll its status. IMPORTANT: Once the operation status is DONE, the detailed precheck results are available within the `Operation` resource. You will need to inspect the response from `get_operation`. The findings are located in the `pre_check_major_version_upgrade_context.pre_check_response` field. The findings are structured, indicating: - INFO: General information. - WARNING: Potential issues that don't block the upgrade but should be reviewed. - ERROR: Critical issues that MUST be resolved before attempting the upgrade. Each finding should include a message and any required actions. Addressing any reported issues is crucial before proceeding with the major version upgrade. If `pre_check_response` is empty or missing, it indicates that no issues were identified during the precheck. Running this precheck does not impact the instance's availability.
    Connector
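The precheck itself is a managed, long-running operation, but one thing it evaluates (extension compatibility) can be spot-checked by hand. A hedged sketch using a standard catalog view, not the tool's own query:

```sql
-- Inventory the installed extensions to review against the target
-- version's support matrix (pg_available_extensions is a built-in view).
SELECT name, installed_version, default_version
FROM pg_available_extensions
WHERE installed_version IS NOT NULL
ORDER BY name;
```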
  • List every database connection registered for your tenant: name, id, dbType (postgres / mysql / mssql), createdAt. Flags duplicate names — only the first-added connection of a duplicate name is reachable by name. Returns nothing sensitive (no DSN, no credentials).
    Connector
  • List or search charts in a Helm repository. Provide a repository_url, then optionally filter by keyword (e.g. keyword='postgres'). Note: OCI registries (oci://) do not support browsing — for OCI you must already know the chart name, then call get_versions or get_values directly with that name.
    Connector

Matching MCP Servers

  • Postgres Pro
    A license · B quality · D maintenance
    Postgres Pro is an open source Model Context Protocol (MCP) server built to support you and your AI agents throughout the entire development process—from initial coding, through testing and deployment, and to production tuning and maintenance.
    Last updated · 9 · 2,699 · MIT

Matching MCP Connectors

  • Regex content search across a project's files. Postgres-backed, scoped to one project, with glob filtering. Three output modes:
    - files_with_matches (default) — list paths containing a match
    - content — matching lines with optional context and line numbers
    - count — per-file match counts + total
    Default head_limit is 250 to prevent context blowups on broad patterns. Use glob to narrow by path (e.g. 'api/**/*.js', 'public/**/*.html'). Regex uses Postgres syntax (~ / ~*). Invalid or catastrophic patterns error out via a 2s statement timeout — simplify the pattern if that happens.
    Connector
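The connector above documents Postgres regex semantics (`~` / `~*`), so patterns inherit POSIX regex behavior. Roughly the operator usage involved, sketched against a hypothetical backing table rather than the connector's actual schema:

```sql
-- ~ is a case-sensitive POSIX regex match; ~* is case-insensitive.
SELECT path
FROM project_files            -- hypothetical backing table
WHERE path LIKE 'api/%'       -- narrowing by path, as the glob filter does
  AND content ~* 'todo|fixme' -- case-insensitive pattern
ORDER BY path;
```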
  • Create a new Hatchable project. This generates a URL slug, creates a dedicated PostgreSQL database, and returns the project ID and URLs. Call this first before writing files or creating tables.

## Project structure

```
public/              static files, served at their file path
api/                 backend functions — each file is one endpoint
  hello.js           → /api/hello
  users/list.js      → /api/users/list
  users/[id].js      → /api/users/:id (req.params.id — one segment)
  docs/[...path].js  → /api/docs/*path (req.params.path — string[], catches multi-segment)
  _lib/              shared code, not routed
migrations/*.sql     SQL files, run in filename order on every deploy
seed.sql             optional — runs on first deploy / fork, once per project
hatchable.toml       optional overrides (cron, auth, project name)
package.json         dependencies (no build scripts yet — build locally, commit public/)
```

### Routing precedence

Most-specific wins. For a request to `/api/users/42`:

1. `api/users/42.js` (static) — beats
2. `api/users/[id].js` (single-param, `params.id = "42"`) — beats
3. `api/users/[...rest].js` (catch-all, `params.rest = ["42"]`)

Catch-all params arrive as `string[]`, never slash-joined. Use `req.params.path` as an array: `const [first, ...rest] = req.params.path;`

### Static file resolution (public/)

A request to `/foo/bar/baz` tries, in order:

1. `public/foo/bar/baz` (exact file)
2. `public/foo/bar/baz.html`
3. `public/foo/bar/baz/index.html`
4. Ancestor `index.html` fallback — walks up: `public/foo/bar/index.html` → `public/foo/index.html` → `public/index.html`

Step 4 means each folder with an `index.html` acts as its own mini-site. You can ship an `/admin/*` React SPA alongside a static marketing page at `/` — unmatched paths under `/admin/` fall back to `public/admin/index.html`, not the root one.

## Handler contract

Every file under api/ exports a default async function:

```js
// api/users/list.js
import { db, auth } from "hatchable";

export default async function (req, res) {
  const user = auth.getUser(req);
  if (!user) return res.status(401).json({ error: "Not logged in" });
  const { rows } = await db.query(
    "SELECT id, name FROM users WHERE org_id = $1",
    [user.id]
  );
  res.json(rows);
}

// Optional: restrict methods
export const methods = ["GET"];

// Optional: register this endpoint as a recurring scheduled task.
// Minimum interval is hourly. See also: scheduler.at() in the SDK
// for imperative / one-shot / per-firing-payload scheduling.
// export const schedule = "0 */6 * * *";
```

### req (Express-shaped)

- method, url, path, headers, cookies, params, query
- body — parsed by Content-Type: JSON → object, urlencoded → object, multipart/form-data → object of non-file fields
- files — present for multipart uploads: [{ field, filename, contentType, buffer }]

### res (Express-shaped)

- res.json(data), res.status(code) (chainable), res.send(text|buffer)
- res.redirect(url), res.cookie(name, value, opts), res.setHeader(name, value)

## SDK — import from "hatchable"

Everything you need lives under one import. Do not reach for npm packages that duplicate these — the deploy linter rejects `puppeteer-core`, `@anthropic-ai/sdk`, `pg`, `nodemailer`, `bullmq`, `ioredis`, `@aws-sdk/client-s3`, `child_process`, etc. and points you here.

```
// project storage / SQL
db.query(sql, params) → { rows, rowCount }
db.transaction([{sql, params}, ...]) → { results: [...] }
storage.put(key, buffer, contentType) → url
storage.get(key) → { buffer, contentType }
storage.del(key)

// identity + comms
auth.getUser(req) → { id, email, name } | null
email.send({ to, subject, html })

// scheduling + background work
scheduler.at(when, route, opts?) → declared/armed cron
scheduler.cancel(taskId)

// browser, AI, knowledge — managed services, no npm install
browser.html(url) / browser.pdf(url) / browser.screenshot(url)
browser.session(async page => { ... }) → puppeteer-shaped
ai.generateText({ model: 'sonnet', prompt | messages, system?, tools?, maxSteps?, purpose? })
ai.streamText(opts) → AsyncIterator
ai.embed(input) → { embedding } | { embeddings }
knowledge.base(name, { dimensions }).add/search/searchByVector/remove/table
```

External HTTP goes through global `fetch` (routed through Hatchable's egress proxy automatically). Project secrets are declared in `hatchable.toml` under `[[secret]]`; humans paste values via the platform-rendered setup gate. `ai.generateText` reads keys server-side via the gateway — never via raw `process.env`.

### What you cannot do

- Spawn binaries (no `child_process`, no shell).
- Persist to the local filesystem between requests (use `storage` instead).
- Open a long-lived TCP/WebSocket server.
- Install npm packages with native bindings — Hatchable does not run `npm install` at deploy. The SDK above replaces every common reason to reach for one.

### Scheduling

Two ways to schedule a function — pick based on whether the "when" is known at deploy time or at runtime.

**Declared** (static, lives in source, reconciled on deploy):

```js
// api/nightly-report.js
export const schedule = "0 9 * * *"; // 5-field cron, minimum hourly
export default async function (req, res) { /* ... */ }
```

**Armed** (dynamic, from user code, preserved across deploys):

```js
import { scheduler } from "hatchable";

// recurring — first arg is a 5-field cron string
await scheduler.at("0 * * * *", "/api/ping");

// one-shot at a specific moment, with per-firing payload
await scheduler.at("2026-05-01T07:00:00Z", "/api/book", {
  payload: { missionId: 42 }
});

// idempotent named arm — repeated calls update the same task
await scheduler.at("0 9 * * *", "/api/digest", { name: "daily-digest" });

// cancel by id
await scheduler.cancel(taskId);
```

Each firing invokes `route` with `req.headers['x-hatchable-trigger'] === 'cron'` and `req.body === payload`. Use one-shot + payload instead of writing your own "pending jobs" table with a polling cron — that's the pattern the primitive replaces.

## Database

Postgres. Write schema in migrations/*.sql. Files run in filename order, tracked in __hatchable_migrations so each runs once.

Always use RETURNING to get inserted ids in the same round trip:

```sql
INSERT INTO users (email) VALUES ($1) RETURNING id
```

Never call lastval() or LAST_INSERT_ID() — each db.query is a fresh connection, so session-local state doesn't carry across calls.

## Available APIs

Functions run in V8 isolates. You get:

- The full Hatchable SDK (see above).
- Plain JS / TypeScript (no transpile step needed for modern syntax).
- `fetch` for external HTTP (routed through Hatchable's egress proxy for quota + accounting; it passes through transparently to the URL).
- Web Crypto and standard ECMAScript builtins.
- Pure-JS npm packages — anything that doesn't need native bindings, filesystem persistence, child processes, or raw sockets. Common ones used regularly: csv-parse, xlsx, bcrypt, jsonwebtoken, uuid, date-fns, lodash, marked, sanitize-html, cheerio, xml2js, qrcode, stripe.
- Declared secrets via `process.env.KEY` (only for `[[secret]]` entries in hatchable.toml that have `expose = true`; the project owner pastes the value through the setup gate). Most secrets are SDK-mediated and never reach process.env — see the secrets docs.

What's NOT available — and the SDK alternative:

| You wanted | Use this |
|---|---|
| `puppeteer-core` / chromium | `import { browser } from "hatchable"` |
| `pg` / `mysql2` / SQL drivers | `import { db } from "hatchable"` |
| `@anthropic-ai/sdk` / `openai` | `import { ai } from "hatchable"` (BYOK — set ANTHROPIC_API_KEY in project env) |
| `nodemailer` / `@sendgrid/mail` | `import { email } from "hatchable"` |
| `@aws-sdk/client-s3` | `import { storage } from "hatchable"` |
| `ioredis` / `@upstash/redis` | `db` — use a Postgres table for KV-shaped state (Redis clients aren't available) |
| `bullmq` / `bull` | `import { tasks } from "hatchable"` |
| `sharp` / `jimp` | URL-based storage transforms (planned); `browser.screenshot` for HTML→image |
| `fs.writeFileSync('/tmp/...')` | `storage.put(key, bytes)` |
| `child_process.spawn` | not available — use `browser` for chromium, file an issue otherwise |

The deploy linter rejects deploys that import the deny-listed packages and points you at the right SDK module by name. You'll see the redirect message before the deploy lands.

## Visibility

Three tiers — each one a step up in who the software is for:

- **personal** — free. You and anyone you invite. Login-gated via Hatchable accounts. Build anything including auth — test the full flow with your invitees before going live.
- **public** — $12/mo. On the open web. Custom domains. No branding. No app-level auth (use Hatchable identity only).
- **app** — $39/mo. On the open web + your app has its own users. Email/password signup, OAuth, password reset. If your project has [auth] enabled, this is the only live tier — you can't go Public with auth, you go straight to App.

## Calling the API from public/

At deploy time, Hatchable injects a tiny bootstrap into every HTML file:

```js
window.__HATCHABLE__ = { slug: "my-app", api: "/api" };
```

Use it as the base URL:

```js
const API = window.__HATCHABLE__.api;
fetch(API + "/users/list").then(r => r.json()).then(render);
```

## Auth (optional)

Enable auth in hatchable.toml to get a complete passwordless login flow with one config block. The platform auto-mounts /api/auth/* — do not write files under api/auth/ when auth is enabled.

```toml
[auth]
enabled = true
providers = ["email"]
```

The flow is email-only and passwordless: enter an email, receive a 6-digit code, optionally bind a passkey for one-tap returning logins. There are no passwords.

Frontend: every page on a project with [auth] enabled automatically gets window.hatchable.auth — the platform-managed client that wraps every endpoint plus the WebAuthn ceremony. Don't fetch /api/auth/* directly, and don't import a WebAuthn library:

```js
const r = await window.hatchable.auth.startLogin({ email });
// r.has_passkey tells the UI whether to offer the passkey button
await window.hatchable.auth.verifyCode({ email, code });  // → { user }
await window.hatchable.auth.signInWithPasskey({ email }); // → { user }
await window.hatchable.auth.registerPasskey();            // post-signin or settings
await window.hatchable.auth.passkeys.list();              // [{ id, name, ... }]
await window.hatchable.auth.passkeys.remove(id);
await window.hatchable.auth.signOut();
await window.hatchable.auth.getSession();                 // current session
window.hatchable.auth.supportsPasskeys();                 // gate passkey UI
```

Server side, use auth.requireUser / auth.getUser exactly as before. The platform-mounted endpoints (under /api/auth/*) are an implementation detail of window.hatchable.auth — you don't write fetch() calls to them, and you can't put your own files at api/auth/anything.js.

Users live in these tables inside your project's own database: users, sessions, verifications, passkeys. You can extend the users table with your own columns:

```sql
-- migrations/002_user_profile.sql
ALTER TABLE users ADD COLUMN phone text;
ALTER TABLE users ADD COLUMN tier text DEFAULT 'free';
```

You CANNOT drop or rename users/sessions/verifications/passkeys or create your own tables with those names — the deploy will fail with a clear error.

In your API functions, use auth.requireUser to gate routes:

```js
import { auth, db } from "hatchable";

export default async function (req, res) {
  const user = await auth.requireUser(req, res);
  if (!user) return; // requireUser already wrote the 401
  const { rows } = await db.query(
    "SELECT * FROM bookings WHERE user_id = $1",
    [user.id]
  );
  res.json(rows);
}
```

For the canonical login + passkey UI shapes, read the skills `auth/enable-app-auth` and `auth/register-a-passkey`.

## Deploy

After writing files, call the `deploy` tool. It runs migrations, seeds (first deploy only), copies public/ to the CDN, registers api/ routes, and — if [auth] is enabled — provisions the auth tables in your database.
    Connector
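A minimal first migration for the Hatchable layout above, as a sketch rather than a platform scaffold; the table is illustrative, and the Database section's RETURNING advice applies to inserts against it.

```sql
-- migrations/001_init.sql: runs once, tracked in __hatchable_migrations.
CREATE TABLE bookings (
  id         serial PRIMARY KEY,
  user_id    integer NOT NULL,
  starts_at  timestamptz NOT NULL,
  created_at timestamptz NOT NULL DEFAULT now()
);

-- From a handler, fetch the new id in the same round trip
-- (never lastval(); each db.query runs on a fresh connection):
-- INSERT INTO bookings (user_id, starts_at) VALUES ($1, $2) RETURNING id;
```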
  • List active sessions + blocking locks. Uses the dialect's own system view — `pg_stat_activity` on postgres, `information_schema.processlist` on mysql, `sys.dm_exec_requests` joined with `sys.dm_tran_locks` on mssql. No dialect arg needed — inferred from the connection. **Required privileges (per dialect):** postgres — `pg_read_all_stats` role membership (or be the role that owns the queries; otherwise you only see your own session); mysql — `PROCESS` privilege; mssql — `VIEW SERVER STATE`. If the role lacks the privilege the tool returns a clean `Query blocked by security policy` error rather than partial data — grant the role above and retry. RDS/Aurora/Azure managed PostgreSQL: `pg_read_all_stats` is grantable but not on by default. [BUILD tier]
    Connector
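On postgres, the session/lock tool above reads `pg_stat_activity`; a manual approximation of what it surfaces (a sketch, not the tool's exact query):

```sql
-- Sessions currently waiting on a lock, plus the pids blocking them
-- (pg_blocking_pids() has been available since PostgreSQL 9.6).
SELECT pid, state, wait_event_type,
       pg_blocking_pids(pid) AS blocked_by,
       query
FROM pg_stat_activity
WHERE wait_event_type = 'Lock';
```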
  • Scan a table for unusual patterns: volume drops/spikes, data gaps, value concentration, high null rates, stale data. Severity-ranked alerts. Tables > 100k rows use a sampled path (~5%) — when a finding has `sampled:true`, surface it to the user with a hedge like 'based on a ~5% sample' rather than presenting the number as exact. Dialect-aware: TABLESAMPLE SYSTEM on postgres, TABLESAMPLE PERCENT on mssql, WHERE RAND() on mysql.
    Connector
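On postgres, the sampled path described above corresponds to `TABLESAMPLE SYSTEM`. A hand-rolled equivalent of one check (null rate), with an illustrative table and column:

```sql
-- Roughly a 5% block-level sample; counts are approximate, so hedge them.
SELECT COUNT(*) AS sampled_rows,
       COUNT(*) FILTER (WHERE email IS NULL) AS null_emails
FROM customers TABLESAMPLE SYSTEM (5);
```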
  • Analyze a SQL query's execution plan and return plain-English performance recommendations. Runs EXPLAIN ANALYZE (Postgres) or EXPLAIN FORMAT=JSON (MySQL). [BUILD tier]
    Connector
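On a postgres connection the tool above wraps the statement in EXPLAIN ANALYZE, so the input is just the query itself. An illustrative example (schema hypothetical); note that EXPLAIN ANALYZE actually executes the query, another reason to keep it read-only:

```sql
EXPLAIN ANALYZE
SELECT o.id, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.created_at >= '2026-01-01';
```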
  • Retrieve detailed skills for TimescaleDB operations and best practices. Eight skills are available:
    - design-postgis-tables: comprehensive PostGIS spatial table design reference covering geometry types, coordinate systems, spatial indexing, and performance patterns for location-based applications.
    - design-postgres-tables: general PostgreSQL table design. Trigger when the user asks to design or modify PostgreSQL tables, schemas, or data models; choose data types, constraints, or indexes; create user tables, order tables, reference tables, or JSONB schemas; or design update-heavy, upsert-heavy, or OLTP-style tables. Covers data types, indexing strategies (B-tree, GIN), constraints, JSONB patterns, normalization, identity columns, partitioning, and row-level security.
    - find-hypertable-candidates: analyze an existing PostgreSQL database to identify which tables should be converted to Timescale/TimescaleDB hypertables. Provides SQL queries to analyze table statistics, index patterns, and query patterns, plus scoring criteria (8+ points = good candidate) and pattern recognition for IoT, events, transactions, and sequential data.
    - migrate-postgres-tables-to-hypertables: migrate identified PostgreSQL tables to TimescaleDB hypertables with optimal configuration and validation. Prerequisite: tables already identified as candidates (use find-hypertable-candidates first). Covers partition column selection, chunk interval calculation, PK/constraint handling, in-place vs blue-green migration execution, and performance validation queries.
    - pgvector-semantic-search: set up vector similarity search with pgvector for AI/ML embeddings, RAG applications, or semantic search. Covers halfvec storage, HNSW index configuration (m, ef_construction, ef_search), quantization strategies, filtered search, bulk loading, and performance tuning.
    - postgres: umbrella skill for any PostgreSQL database work, spanning table design, indexing, data types, constraints, extensions (pgvector, PostGIS, TimescaleDB), search (full-text, hybrid, BM25), and migrations.
    - postgres-hybrid-text-search: implement hybrid search combining BM25 keyword search with semantic vector search using Reciprocal Rank Fusion (RRF). Covers pg_textsearch BM25 index setup, parallel query patterns, client-side RRF fusion (Python/TypeScript), weighting strategies, and optional ML reranking.
    - setup-timescaledb-hypertables: create schemas or tables for Timescale, TimescaleDB, TigerData, or Tiger Cloud, especially for time-series, IoT, metrics, events, or log data, and improve the performance of any insert-heavy table. Step-by-step instructions for hypertable creation, column selection, compression policies, retention, continuous aggregates, and indexes. See the sketch after this entry.
    Connector
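As a taste of what the setup-timescaledb-hypertables skill walks through, a minimal sketch assuming the timescaledb extension is available; names are illustrative:

```sql
CREATE EXTENSION IF NOT EXISTS timescaledb;

CREATE TABLE sensor_readings (
  time        timestamptz NOT NULL,
  sensor_id   integer     NOT NULL,
  temperature double precision
);

-- Partition on the time column; TimescaleDB manages chunks from here.
SELECT create_hypertable('sensor_readings', 'time');
```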
  • 555_MEMORY: Live associative memory via MemoryEngine (Postgres + Qdrant dual-write, BGE-M3 embeddings). Modes:
    - recall — semantic search across stored memories
    - store — persist a new memory entry
    - get — exact retrieval by memory_id
    - list — list memories scoped to the current session
    - prune — soft-delete (sacred tier requires 888_HOLD)
    - search — alias for recall
    - context — session context window
    - dry_run — ephemeral write/recall/cleanup cycle
    Connector