
Server Configuration

Describes the environment variables required to run the server.

No arguments. This server requires no environment variables.

Capabilities

Features and capabilities supported by this server

tools
  { "listChanged": false }
resources
  { "subscribe": false, "listChanged": false }

Tools

Functions exposed to the LLM to take actions

create_project

Create a new Hatchable project. This generates a URL slug, creates a dedicated PostgreSQL database, and returns the project ID and URLs. Call this first before writing files or creating tables.

Project structure

public/              static files, served at their file path
api/                 backend functions — each file is one endpoint
  hello.js           → /api/hello
  users/list.js      → /api/users/list
  users/[id].js      → /api/users/:id         (req.params.id — one segment)
  docs/[...path].js  → /api/docs/*path        (req.params.path — string[], catches multi-segment)
  _lib/              shared code, not routed
migrations/*.sql     SQL files, run in filename order on every deploy
seed.sql             optional — runs on first deploy / fork, once per project
hatchable.toml       optional overrides (cron, auth, project name)
package.json         dependencies (no build scripts yet — build locally, commit public/)

Routing precedence

Most-specific wins. For a request to /api/users/42:

  1. api/users/42.js (static) — beats

  2. api/users/[id].js (single-param, params.id = "42") — beats

  3. api/users/[...rest].js (catch-all, params.rest = ["42"])

Catch-all params arrive as string[], never slash-joined. Use req.params.path as an array: const [first, ...rest] = req.params.path;
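For instance, a catch-all handler can validate and join the segments itself. A minimal sketch (the file name, helper name, and rejection rules below are illustrative, not platform behavior):

```javascript
// Hypothetical helper for api/docs/[...path].js — names are illustrative.
// Catch-all params arrive as string[]; join them yourself when you need a path.
function resolveDocKey(segments) {
  if (!Array.isArray(segments) || segments.length === 0) return null;
  // Reject traversal attempts before touching storage or the filesystem.
  if (segments.some((s) => s === ".." || s.includes("/"))) return null;
  return segments.join("/");
}

// In api/docs/[...path].js this would back the default export:
// export default async (req, res) => {
//   const key = resolveDocKey(req.params.path);
//   if (!key) return res.status(404).json({ error: "Not found" });
//   res.json({ key });
// };
```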

Static file resolution (public/)

A request to /foo/bar/baz tries, in order:

  1. public/foo/bar/baz (exact file)

  2. public/foo/bar/baz.html

  3. public/foo/bar/baz/index.html

  4. Ancestor index.html fallback — walks up: public/foo/bar/index.html → public/foo/index.html → public/index.html

Step 4 means each folder with an index.html acts as its own mini-site. You can ship an /admin/* React SPA alongside a static marketing page at / — unmatched paths under /admin/ fall back to public/admin/index.html, not the root one.

Handler contract

Every file under api/ exports a default async function:

// api/users/list.js
import { db, auth } from "hatchable";

export default async function (req, res) {
  const user = await auth.getUser(req);
  if (!user) return res.status(401).json({ error: "Not logged in" });

  const { rows } = await db.query(
    "SELECT id, name FROM users WHERE org_id = $1",
    [user.id]
  );
  res.json(rows);
}

// Optional: restrict methods
export const methods = ["GET"];

// Optional: register this endpoint as a cron job
// export const schedule = "0 */6 * * *";

req (Express-shaped)

  • method, url, path, headers, cookies, params, query

  • body — parsed by Content-Type: JSON → object, urlencoded → object, multipart/form-data → object of non-file fields

  • files — present for multipart uploads: [{ field, filename, contentType, buffer }]
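A sketch of a multipart handler that stores each upload via the SDK's storage.put (the key scheme and filename sanitization here are illustrative assumptions, not platform behavior):

```javascript
// Hypothetical api/upload.js logic — key scheme is illustrative.
// Strip any path components the client may have sent in the filename.
function keyFor(file, now = Date.now()) {
  const base = file.filename.split("/").pop().split("\\").pop();
  return `uploads/${now}-${base}`;
}

// Store every uploaded file; storage.put(key, buffer, contentType) → url.
async function saveUploads(files, storage) {
  const urls = [];
  for (const f of files) {
    urls.push(await storage.put(keyFor(f), f.buffer, f.contentType));
  }
  return urls;
}

// In api/upload.js:
// import { storage } from "hatchable";
// export default async (req, res) => {
//   if (!req.files?.length) return res.status(400).json({ error: "No file" });
//   res.json({ urls: await saveUploads(req.files, storage) });
// };
```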

res (Express-shaped)

  • res.json(data), res.status(code) (chainable), res.send(text|buffer)

  • res.redirect(url), res.cookie(name, value, opts), res.setHeader(name, value)

SDK — import from "hatchable"

db.query(sql, params) → { rows, rowCount }
db.transaction([{sql, params}, ...]) → { results: [{rows, rowCount}] }

auth.getUser(req) → { id, email, name } | null

email.send({ to, subject, html })

storage.put(key, buffer, contentType) → url
storage.get(key) → { buffer, contentType }
storage.del(key)

That's the entire SDK. Everything else uses standard Node: fetch for external HTTP, process.env.KEY for secrets (set with set_env), crypto/etc from node:*.
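As a sketch of the transaction shape, a balance transfer that commits fully or not at all (table and column names are illustrative):

```javascript
// Illustrative only — db.transaction takes an array of { sql, params } and
// runs them in one transaction; if any statement fails, none commit.
function transferStatements(fromId, toId, amount) {
  return [
    { sql: "UPDATE accounts SET balance = balance - $1 WHERE id = $2", params: [amount, fromId] },
    { sql: "UPDATE accounts SET balance = balance + $1 WHERE id = $2", params: [amount, toId] },
  ];
}

// In an api/ file:
// import { db } from "hatchable";
// const { results } = await db.transaction(transferStatements(1, 2, 50));
```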

Database

Postgres. Write schema in migrations/*.sql. Files run in filename order, tracked in __hatchable_migrations so each runs once.

Always use RETURNING to get inserted ids in the same round trip:

INSERT INTO users (email) VALUES ($1) RETURNING id

Never call lastval() or LAST_INSERT_ID() — each db.query runs on a fresh connection, so session-local state doesn't carry across calls.
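A minimal insert helper following this rule (the table name and handler wiring are illustrative):

```javascript
// Illustrative sketch — RETURNING fetches the new id in the same round trip,
// so there is no reliance on session state like lastval().
async function createUser(db, email) {
  const { rows } = await db.query(
    "INSERT INTO users (email) VALUES ($1) RETURNING id",
    [email]
  );
  return rows[0].id;
}

// In api/users/create.js:
// import { db } from "hatchable";
// export default async (req, res) => {
//   const id = await createUser(db, req.body.email);
//   res.status(201).json({ id });
// };
```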

Available Node.js APIs and packages

Functions run in Node.js 20. The full hatchable SDK is always available. In addition, these packages are pre-installed and ready to import:

sharp, puppeteer-core (with Chromium at /usr/bin/chromium), csv-parse, csv-stringify, xlsx, bcrypt, jsonwebtoken, uuid, date-fns, lodash, marked, sanitize-html, cheerio, xml2js, archiver, qrcode, stripe, openai.

Standard Node.js APIs are available: fs, child_process, net, http, Buffer, stream, path, os, crypto, etc. External HTTP via global fetch(). Secrets via process.env (set with the set_env tool).

Visibility

Three tiers — each one a step up in who the software is for:

  • personal — free. You and anyone you invite. Login-gated via Hatchable accounts. Build anything including auth — test the full flow with your invitees before going live.

  • public — $12/mo. On the open web. Custom domains. No branding. No app-level auth (use Hatchable identity only).

  • app — $39/mo. On the open web + your app has its own users. Email/password signup, OAuth, password reset. If your project has [auth] enabled, this is the only live tier — you can't go Public with auth, you go straight to App.

Calling the API from public/

At deploy time, Hatchable injects a tiny bootstrap into every HTML file:

window.__HATCHABLE__ = { slug: "my-app", api: "/api" };

Use it as the base URL:

const API = window.__HATCHABLE__.api;
fetch(API + "/users/list").then(r => r.json()).then(render);

Auth (optional)

Enable auth in hatchable.toml to get a complete signup/login/session system with one config block. The platform auto-mounts /api/auth/* — do not write files under api/auth/ when auth is enabled.

[auth]
enabled = true
providers = ["email"]               # or ["email", "google", "hatchable"]

Auto-mounted endpoints:

  • POST /api/auth/sign-up/email — create account with email + password

  • POST /api/auth/sign-in/email — log in

  • POST /api/auth/sign-out — clear session

  • GET /api/auth/get-session — current session + user

  • POST /api/auth/forget-password — send password-reset email

  • POST /api/auth/reset-password — complete password reset

  • GET /api/auth/sign-in/social/:provider — OAuth flow (google, github)

  • GET /api/auth/hatchable/sso — one-click Hatchable SSO (when enabled)
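A browser-side sketch of calling the email endpoints. ASSUMPTION: the request body field names ({ email, password }) are not confirmed by this page — verify them with search_documentation before relying on this:

```javascript
// Illustrative client helpers — body field names are an assumption.
// doFetch is injectable for testing; defaults to the global fetch.
async function signUp(email, password, doFetch = fetch) {
  const res = await doFetch("/api/auth/sign-up/email", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ email, password }),
  });
  if (!res.ok) throw new Error(`Sign-up failed: ${res.status}`);
  return res.json();
}

async function getSession(doFetch = fetch) {
  const res = await doFetch("/api/auth/get-session");
  return res.ok ? res.json() : null;
}
```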

Users live in these tables inside your project's own database: users, sessions, accounts, verifications

You can extend the users table with your own columns:

-- migrations/002_user_profile.sql
ALTER TABLE users ADD COLUMN phone text;
ALTER TABLE users ADD COLUMN tier text DEFAULT 'free';

You CANNOT drop or rename users/sessions/accounts/verifications or create your own tables with those names — the deploy will fail with a clear error.

In your API functions, auth.getUser works the same whether auth is enabled or not:

import { auth, db } from "hatchable";

export default async function (req, res) {
  const user = await auth.getUser(req);     // NOTE: await when auth is enabled
  if (!user) return res.status(401).json({ error: "Not logged in" });
  const { rows } = await db.query(
    "SELECT * FROM bookings WHERE user_id = $1",
    [user.id]
  );
  res.json(rows);
}

OAuth providers need credentials set via hatchable secret set: GOOGLE_CLIENT_ID, GOOGLE_CLIENT_SECRET, GITHUB_CLIENT_ID, GITHUB_CLIENT_SECRET

Deploy

After writing files, call the deploy tool. It runs migrations, seeds (first deploy only), copies public/ to the CDN, registers api/ routes, and — if [auth] enabled — provisions the auth tables in your database.

get_project

Get project details including slug, visibility, status, deployed functions, and the database schema (tables, columns, types).

list_projects

List all projects you own or collaborate on, with their visibility, tier, role, and current version.

deploy

Deploy the project. Runs migrations/*.sql (tracked so each runs once), runs seed.sql on first deploy, copies public/ files to the CDN, and registers api/ files as live endpoints. Increments the project version. Call this after writing all your files. To verify your functions work after deploying, use run_function — it calls the function directly through your authenticated session and works for all project visibilities. The url field is the public URL for end users — personal projects require visitors to sign up before they can view the site.

write_file

Write or overwrite a project file. Paths are relative to the project root.

Valid locations:

  • public/** — static files (HTML, CSS, JS, images, etc.)
  • api/**/*.js — backend functions (each file is one endpoint)
  • api/_lib/ — shared helpers imported by api/ files, not routed
  • migrations/*.sql — database migrations, run in filename order
  • seed.sql — optional seed data, runs once on fresh installs
  • hatchable.toml — optional config overrides
  • package.json — dependencies (no build script yet)

Files are stored but not live until you call deploy.

write_files

Write multiple project files in a single call. Same rules as write_file but batched — faster for scaffolding a new project or updating several files at once.

Each entry in the files array has a path and content. All files are written atomically — if any path is invalid, none are written.

read_file

Read the content of a project file.

Pass offset/limit to read a range of lines — useful for large files where the whole file would blow the context window. When either is set, the response includes cat -n style line-numbered content so subsequent patch_file calls can reference exact line numbers.

grep

Regex content search across a project's files. Postgres-backed, scoped to one project, with glob filtering.

Three output modes:

  • files_with_matches (default) — list paths containing a match

  • content — matching lines with optional context and line numbers

  • count — per-file match counts + total

Default head_limit is 250 to prevent context blowups on broad patterns. Use glob to narrow by path (e.g. 'api/**/*.js', 'public/**/*.html'). Regex uses Postgres syntax (~ / ~*). Invalid or catastrophic patterns error out via a 2s statement timeout — simplify the pattern if that happens.

list_files

List all files in a project with their paths, sizes, and hashes.

patch_file

Apply a targeted edit to an existing project file without rewriting the entire file. Finds the first occurrence of old_string and replaces it with new_string. Use this instead of write_file when modifying large files (e.g. HTML) — you only send the changed portion, not the whole file.

The old_string must match exactly (including whitespace). If it's not found, the tool returns an error. To insert at a specific position, use a nearby string as old_string and include it in new_string with your addition.
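The first-occurrence semantics match String.prototype.replace with a string (not regex) pattern. An illustrative model of the behavior, not the tool's implementation:

```javascript
// Model of patch_file semantics: the first exact occurrence of old_string
// is replaced; a miss is an error rather than a silent no-op.
function applyPatch(content, oldString, newString) {
  if (!content.includes(oldString)) {
    throw new Error("old_string not found");
  }
  return content.replace(oldString, newString);
}

// Inserting after an anchor: include the anchor in new_string.
// applyPatch(html, "</nav>", "</nav>\n<div id=\"banner\">Hi</div>")
```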

delete_file

Delete a project file. Takes effect after the next deploy.

execute_sql

Run SQL against the project's dedicated PostgreSQL database.

Supports: CREATE TABLE, ALTER TABLE, DROP TABLE, INSERT, SELECT, UPDATE, DELETE. Use parameterized queries for safety: pass values in the params array with $1, $2, etc. placeholders.

Return format:

  • SELECT: { rows: [...], count: N } — DECIMAL columns return as strings (e.g. "45.00")

  • INSERT/UPDATE/DELETE: { changes: N }

  • DDL: { changes: 0 }

get_schema

Return the database schema for the project's PostgreSQL database: tables, columns (with types), and indexes.

set_env

Set environment variables for a project. Available in functions via process.env.KEY. Keys containing SECRET, PASSWORD, TOKEN, API_KEY, or PRIVATE are automatically marked as secrets.

set_visibility

Change a project's visibility.

  • personal: you + invitees, login-gated, free

  • public: on the open web, requires Public plan ($12/mo). No app-level auth.

  • app: on the open web + user signups, requires App plan ($39/mo). Required if [auth] is enabled.

run_function

Execute a deployed function and return the real response. Use this to test your API endpoints.

Returns: { status, headers, body, logs, error, duration_ms }

Example: run_function({ project_id: 1, path: "/api/users", method: "GET" })
Example: run_function({ project_id: 1, path: "/api/users", method: "POST", body: { name: "Alice" } })

IMPORTANT: Always run_function on your API endpoints after writing them. Inspect the response body field names and types. Then write your frontend to match those exact names.

view_logs

View function execution logs with rich filtering. Each entry includes status_code, duration_ms, log_output (captured console.log), error (if any), and a derived level field (error/warning/info).

Filter by any combination of function_name, route, method, status_code (exact or 4xx/5xx wildcards), level, time range (since/until — ISO or relative like '1h'/'30m'/'7d'), full-text query across log_output and error, or specific request_id.

Use this to debug production issues: e.g. level='error' + since='1h' finds everything that blew up in the last hour.

list_deployments

List deployments for a project in reverse-chronological order. Each entry includes version, status, deployed_at, description, and summary counts (files, functions).

Use this to understand recent deploy history, identify a known-good version for rollback, or debug a regression by comparing two versions.

list_functions

List every deployed API function for a project: route, method, runtime tier, cron schedule (if any), and 24-hour invocation and error counts.

This is the 'what routes did I ship' introspection tool. Call it after a fork, after picking up an unfamiliar project, or to verify a deploy registered the endpoints you expected. Much cheaper than reading every api/ file with read_file.

get_deployment

Detail view of one deployment by version number — returns the full file manifest (paths, hashes, sizes) and function list captured when that version shipped. Use it with list_deployments to audit or compare what changed between versions.

list_cron_jobs

List every scheduled (cron) function in a project with its cron expression, 7-day run count, error count, and last_run_at timestamp. Use this to verify a cron job is actually firing without tailing logs manually.

list_env

List environment variable keys for a project. Only key names and an is_secret flag are returned — values are never exposed through this tool. Use process.env.KEY inside a deployed function to read the actual value.

delete_env

Delete one or more environment variables by key. Pass key for a single delete or keys for a batch. Missing keys are reported in skipped, not errored, so retries are idempotent. Takes effect on the next deploy.

update_project

Update project metadata: name, tagline, description, category. Only the fields you pass are touched. For visibility changes use set_visibility; slug and tier are immutable.

import_file_from_url

Fetch a remote URL and save the response body as a project file — server-side, so the bytes never pass through your context window. Useful for seed data, vendor libs, and asset migration.

Capped at 10 MB and 10s timeout. Private/loopback addresses are rejected. Path must live under public/, api/, or migrations/, or be one of seed.sql / hatchable.toml / package.json.

search_documentation

Search Hatchable's own documentation for platform behavior — routing, the SDK surface, deploy semantics, auth config, runtime limits. Call this instead of guessing when you're unsure how a Hatchable feature works.

Ranks results by term frequency across documentation sections. Returns the source file, section heading, and a snippet around the hit.

dry_run_deploy

Run every deploy-time validator against the project's current files without actually deploying. Returns errors (hard gates) and warnings (soft lints), plus a would_deploy summary of what would ship.

Errors catch: package.json build scripts, reserved table names in migrations, auth route collisions, usage cap breaches.

Warnings catch known runtime footguns that type-check but silently misbehave — most notably auth.getUser() / auth.getSession() / db.query() calls without await (returning a Promise is truthy, so if (!user) guards pass and downstream user.id is undefined). Safer than calling deploy blindly and finding out mid-flight.
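A minimal demonstration of that footgun: a pending Promise is truthy, so a missing await lets the guard pass even when nobody is logged in (getUserStub below is a stand-in for auth.getUser):

```javascript
// Stand-in for auth.getUser returning null ("not logged in").
async function getUserStub(req) {
  return null;
}

async function buggy(req) {
  const user = getUserStub(req);          // missing await → user is a Promise
  return !user ? "rejected" : "admitted"; // Promise is truthy → "admitted"
}

async function fixed(req) {
  const user = await getUserStub(req);    // await → user is null
  return !user ? "rejected" : "admitted"; // → "rejected"
}
```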

upload_file

Multipart file upload for content that exceeds a single model response's output token cap (big SPA bundles, large seed data, inline vendor libs).

Flow: first call with chunk_index=0 and NO upload_id — response returns an upload_id. Subsequent calls pass that upload_id with chunk_index=1, 2, 3…. Last call sets final=true to atomically concatenate and commit as one ProjectFile.

Chunks are staged in Redis with a 10-minute TTL. chunk_index overwrites (safe to retry). Max chunk size: 64 KB. Max assembled file: 20 MB.
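A client-side sketch of that flow. callTool is a hypothetical stand-in for however your MCP client invokes tools, and the chunk field name and base64 encoding are assumptions:

```javascript
// Chunk cap per the docs: 64 KB per upload_file call.
const CHUNK = 64 * 1024;

function chunkBuffer(buf) {
  const chunks = [];
  for (let i = 0; i < buf.length; i += CHUNK) {
    chunks.push(buf.subarray(i, i + CHUNK));
  }
  return chunks;
}

// Flow per the docs: first call has chunk_index=0 and no upload_id; the
// response returns upload_id; the last call sets final=true to commit.
async function uploadInChunks(callTool, projectId, path, buf) {
  const chunks = chunkBuffer(buf);
  let uploadId;
  for (let i = 0; i < chunks.length; i++) {
    const res = await callTool("upload_file", {
      project_id: projectId,
      path,
      chunk_index: i,
      data: chunks[i].toString("base64"), // ASSUMPTION: field name and encoding
      ...(uploadId ? { upload_id: uploadId } : {}),
      final: i === chunks.length - 1,
    });
    uploadId = uploadId ?? res.upload_id;
  }
  return uploadId;
}
```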

list_pending_uploads

Show multipart uploads currently staged for this project that haven't yet been committed. Use this to recover from a disconnect — find the upload_id and resume from the next chunk_index. Uploads expire 10 minutes after the last chunk was added.

run_code

Execute arbitrary JS in the project's isolate runtime with the same bindings a deployed function gets: db, auth, email, storage from "hatchable", plus process.env and global fetch. The return value of the snippet becomes the result field.

Use this as a REPL: probe the database, verify a computation, test an API shape before committing it to a file. Nothing is persisted — the snippet runs once and disappears.

Caps: 5s default timeout (max 30s), 256 KB max source length.

Example: run_code({ project_id, code: 'const { db } = await import("hatchable"); const { rows } = await db.query("SELECT count(*) FROM users"); return rows[0];' })

fork_project

Fork a public project into your account. Copies all code and database schema (no data). The fork starts as a personal project you can modify freely.

This is the recommended way to start from an existing app: fork it, then modify the code.

search_projects

Search the public Hatchable project directory — other people's projects that you can view or fork. Use this to find existing apps to fork-and-modify as a starting point.

Note: this searches the public marketplace. To search inside your own project's files, use the grep tool instead.

setup_account

Associate an email and handle with your account.

Step 1: Call with just email — sends a 6-digit verification code.
Step 2: Call with email + code + handle — verifies and completes setup.

This lets you log in to the console and sets your permanent @handle.

Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

deploy-card
