Glama

Server Configuration

Describes the environment variables required to run the server.

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| AGENT_TOOLBELT_KEY | Yes | Your API key for the Agent Toolbelt service (e.g., 'atb_your_key_here'). | |
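If you are wiring the server up yourself, a small stdlib check can fail fast when the key is absent. This is a sketch, not part of the server; the `atb_` prefix is taken from the example value in the table above.

```python
import os

def has_valid_key(env=os.environ):
    # 'atb_' prefix matches the example key format shown above;
    # anything else is treated as missing or malformed.
    key = env.get("AGENT_TOOLBELT_KEY", "")
    return key.startswith("atb_")
```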

Capabilities

Features and capabilities supported by this server

| Capability | Details |
| --- | --- |
| tools | `{ "listChanged": true }` |
| prompts | `{ "listChanged": true }` |
| resources | `{ "listChanged": true }` |

Tools

Functions exposed to the LLM to take actions

generate_schema

Generate a JSON Schema, TypeScript interface, or Zod validation schema from a natural language description of a data structure. Examples: 'a user profile with name, email, and signup date', 'a product listing with title, price, and inventory count'.
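For illustration, a plausible (hypothetical, not taken from the service) JSON Schema output for the first example prompt might look like this:

```python
# Hypothetical output for "a user profile with name, email, and signup date".
user_profile_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "email": {"type": "string", "format": "email"},
        "signup_date": {"type": "string", "format": "date"},
    },
    "required": ["name", "email", "signup_date"],
}
```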

extract_from_text

Extract structured data from raw text: emails, URLs, phone numbers, dates, currencies, addresses, names, or JSON blocks. Useful for parsing documents, emails, web content, or any unstructured text into clean structured data.
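A minimal sketch of this kind of extraction using stdlib regexes (the service's actual patterns are not published, and real-world patterns are considerably more robust):

```python
import re

# Deliberately simple patterns for illustration only.
PATTERNS = {
    "emails": r"[\w.+-]+@[\w-]+\.[\w.-]+",
    "urls": r"https?://[^\s)\"']+",
}

def extract(text: str) -> dict:
    """Return every match for each named pattern found in the text."""
    return {name: re.findall(rx, text) for name, rx in PATTERNS.items()}
```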

build_cron

Convert natural language schedule descriptions into cron expressions. Examples: 'every weekday at 9am', 'first Monday of each month at noon', 'every 5 minutes'. Returns the expression, human-readable confirmation, and next 5 run times.
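To make the mapping concrete, here are hypothetical input/output pairs using standard five-field cron syntax (these are not captured from the tool, but the expressions themselves are valid cron):

```python
# Natural-language phrase -> standard five-field cron expression.
EXAMPLES = {
    "every weekday at 9am": "0 9 * * 1-5",
    "every 5 minutes": "*/5 * * * *",
    "the first of each month at noon": "0 12 1 * *",
}

def looks_like_cron(expr: str) -> bool:
    # Fields: minute hour day-of-month month day-of-week.
    return len(expr.split()) == 5
```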

build_regex

Build and test regular expressions from natural language descriptions. Supports emails, URLs, phones, dates, IPs, colors, UUIDs, and 15+ more patterns. Returns the pattern, code snippets in JS/Python/TS, and optional test results.

generate_brand_kit

Generate a complete brand kit from a company name, industry, and aesthetic keywords. Returns a color palette with WCAG accessibility scores, curated typography pairings, and design tokens in JSON, CSS, or Tailwind format.

convert_markdown

Convert HTML to clean Markdown, or Markdown to HTML. Use HTML→Markdown when you've fetched a web page and need readable text for an LLM — strips tags, preserves headings, lists, code blocks, links, and tables. Use Markdown→HTML when rendering content in a web context.

fetch_url_metadata

Fetch a URL and extract its metadata: title, description, Open Graph tags (og:image, og:type), Twitter card tags, favicon, canonical URL, author, and publish date. Use to enrich links with context or understand what a page is about without reading the full content.

count_tokens

Count tokens for any text across multiple LLM models and get per-model cost estimates. Use before sending text to an LLM to check context window usage or compare costs across models. Supports GPT-4o, GPT-4, GPT-3.5-turbo, Claude 3.5 Sonnet, Claude 3 Opus, and 10+ more.
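The real tool uses model-specific tokenizers; as a rough pre-flight check, a common heuristic is about four characters per token for English text. This sketch does not replicate any actual tokenizer:

```python
def rough_token_estimate(text: str) -> int:
    # Crude ~4 chars/token heuristic; real counts vary by model and language.
    return max(1, round(len(text) / 4))
```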

csv_to_json

Convert CSV data to typed JSON. Auto-detects delimiters, uses the first row as headers, and casts values to proper types (numbers, booleans, nulls). Use when processing spreadsheet exports or any CSV-formatted data.
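A stdlib sketch of the type-casting step (this version assumes a comma delimiter, whereas the tool auto-detects delimiters):

```python
import csv
import io

def cast(value: str):
    """Cast a CSV cell to None, bool, int, or float where possible."""
    if value == "":
        return None
    if value.lower() in ("true", "false"):
        return value.lower() == "true"
    try:
        return int(value)
    except ValueError:
        pass
    try:
        return float(value)
    except ValueError:
        return value

def csv_to_json(text: str) -> list:
    # First row is treated as headers, matching the tool's behavior.
    reader = csv.DictReader(io.StringIO(text))
    return [{k: cast(v) for k, v in row.items()} for row in reader]
```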

normalize_address

Normalize a US mailing address to USPS standard format. Expands abbreviations (st→ST, ave→AVE), standardizes directionals, converts state names to codes. Returns parsed components and a confidence score (high/medium/low).

generate_color_palette

Generate a color palette from a description, mood, industry, or hex seed color. Accepts moods (calm, energetic, luxurious), industries (fintech, healthcare, fashion), nature themes (sunset, ocean, forest), or a specific hex color. Returns hex/RGB/HSL values, WCAG accessibility scores, and CSS custom properties.

compare_documents

Compare two versions of a document and produce a semantic diff with additions, deletions, and modifications. Works with contracts, READMEs, policies, essays, or any text. Powered by Claude.

extract_contract_clauses

Extract key clauses from a contract — parties, payment terms, termination, liability, IP ownership, confidentiality, and more. Optionally flags risky or one-sided clauses with severity ratings. Powered by Claude.

optimize_prompt

Analyze and improve an LLM prompt. Scores clarity, specificity, structure, and completeness. Returns an optimized rewrite with a summary of what changed and why. Powered by Claude.

extract_meeting_action_items

Extract structured action items, decisions, and a summary from meeting notes or transcripts. Identifies task owners, deadlines, and priorities. Powered by Claude.

strip_image_metadata

Strip EXIF, GPS, IPTC, XMP, and ICC metadata from an image for privacy. Use before uploading or sharing images to remove sensitive embedded data like GPS coordinates, camera model, timestamps, and editing history. Accepts base64-encoded JPEG, PNG, WebP, or TIFF. Returns cleaned base64 image with a removal report.

mock_api_response

Generate realistic mock API responses from a JSON Schema. Supports nested objects, arrays, string formats (email, uuid, date-time, url), field-name heuristics, enums, and min/max constraints. Set seed for reproducible output. Returns 1–100 records.
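A minimal, stdlib-only sketch of seeded schema-driven generation. It covers only a few of the features listed above (type, enum, integer bounds, uuid format) and is not the service's implementation:

```python
import random
import string
import uuid

def mock_value(schema: dict, rng: random.Random):
    """Generate one mock value for a (small subset of) JSON Schema."""
    if "enum" in schema:
        return rng.choice(schema["enum"])
    t = schema.get("type")
    if t == "object":
        return {k: mock_value(v, rng) for k, v in schema.get("properties", {}).items()}
    if t == "array":
        return [mock_value(schema.get("items", {}), rng) for _ in range(2)]
    if t == "integer":
        return rng.randint(schema.get("minimum", 0), schema.get("maximum", 100))
    if t == "boolean":
        return rng.random() < 0.5
    if t == "string":
        if schema.get("format") == "uuid":
            return str(uuid.UUID(int=rng.getrandbits(128)))
        return "".join(rng.choices(string.ascii_lowercase, k=8))
    return None
```

Passing `random.Random(seed)` makes the output reproducible, mirroring the tool's `seed` behavior.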

audit_dependencies

Audit npm and PyPI packages for known CVEs using the OSV database (GitHub Dependabot's source). Pass packages directly or paste package.json / requirements.txt content.
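For reference, the public OSV API accepts single-package queries as a POST to `https://api.osv.dev/v1/query`. This sketch only builds the request payload (it does not send it, and says nothing about how the tool itself batches or formats results):

```python
import json

def osv_query(name: str, version: str, ecosystem: str) -> str:
    """Build the JSON body for an OSV /v1/query request."""
    return json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    })
```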

earnings_analysis

Analyze a stock's earnings track record — EPS beat/miss history, revenue trend, and what it means for long-term investors. Returns verdict, beat rate, revenue trajectory, last quarter summary, and what to watch next.

insider_signal

Interpret insider trading activity for any stock. Classifies open-market purchases vs. routine sales/awards, identifies cluster buying, and explains whether the activity is a meaningful signal. Returns signal strength (strong_buy → strong_sell) and a plain-English verdict.

valuation_snapshot

Assess whether a stock is cheap, fair, or expensive. Pulls P/E, P/S, EV/EBITDA, FCF yield, ROE, and margins, then synthesizes them into a verdict with a specific buy zone price level.

bear_vs_bull

Generate a structured bull vs. bear case for any stock. Steelmans both sides with specific data, then delivers a net verdict and the key question investors need to answer before buying.

stock_thesis

Generate a long-term investment thesis for any stock. Pulls live financials, valuation metrics, insider trades, and analyst ratings, then synthesizes them into a Motley Fool-style research note. Returns a bullish/neutral/bearish verdict, thesis paragraphs, key strengths, risks, and valuation read. Use when you want fundamental analysis of a stock for long-term investing.

pack_context_window

Pack content chunks into a token budget for an LLM context window. Selects the best subset of chunks that fits within the token limit using priority, greedy, or balanced strategies. Use when you have more content than fits in the context window.
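The priority strategy can be approximated by a greedy pass over chunks sorted by priority. This is a sketch under two assumptions not stated above: chunks arrive as `(priority, text)` pairs, and token cost is estimated with a ~4 chars/token heuristic rather than a real tokenizer:

```python
def pack_greedy(chunks, budget, estimate=lambda s: max(1, len(s) // 4)):
    """Greedily pack highest-priority chunks whose estimated cost fits the budget.

    chunks: list of (priority, text) pairs; higher priority wins.
    Returns (packed_texts, tokens_used).
    """
    packed, used = [], 0
    for _, text in sorted(chunks, key=lambda c: -c[0]):
        cost = estimate(text)
        if used + cost <= budget:
            packed.append(text)
            used += cost
    return packed, used
```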

list_tools

List all tools available in the Agent Toolbelt API catalog, including descriptions and pricing.

Prompts

Interactive templates invoked by user choice

| Name | Description |
| --- | --- |
| generate-data-model | Guided workflow for creating a data model schema from a description |
| extract-and-analyze | Extract all structured data from text and provide analysis |

Resources

Contextual data attached and managed by the client

| Name | Description |
| --- | --- |
| api-docs | Full API documentation for the Agent Toolbelt service |

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/marras0914/agent-toolbelt'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.