UploadKit
Server Details
Official MCP server for UploadKit, the file-uploads platform for developers. Gives Claude Code, Cursor, Windsurf, and Zed first-class knowledge of UploadKit's 40+ open-source React components, Next.js route handler scaffolding, <UploadKitProvider> wiring, BYOS (S3/R2/GCS/B2) configuration, and full-text search across 88+ docs pages. Runs locally via npx — no API key, no telemetry, no config.
- Status
- Healthy
- Last Tested
- Transport
- Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score 3.6/5 across all 11 tools. Lowest: 2.8/5.
Most tools have distinct purposes, such as get_byos_config for storage setup, get_component for component details, and search_docs for documentation queries. However, there is some overlap between list_components and search_components, as both help find components, which could cause minor confusion for agents. The descriptions clarify their differences (listing vs. searching), but the boundary is not perfectly distinct.
All tool names follow a consistent verb_noun pattern using snake_case, such as get_byos_config, list_components, and search_docs. This predictability makes it easy for agents to understand and navigate the toolset without confusion from mixed naming conventions.
With 11 tools, the count is well-scoped for the UploadKit domain, covering configuration, documentation, components, and scaffolding. Each tool serves a specific role, such as get_quickstart for setup guidance or scaffold_route_handler for code generation, ensuring no tool feels redundant or unnecessary.
The toolset provides comprehensive coverage for UploadKit's core areas, including installation, configuration, documentation, components, and code scaffolding. Minor gaps exist, such as no tools for updating or deleting configurations or handling advanced API operations like webhook management, but agents can work around these with the available search and documentation tools.
Available Tools
11 tools

get_byos_config (Grade A)
Generate Bring-Your-Own-Storage (BYOS) configuration for an UploadKit Next.js handler — environment variables, handler code, and setup notes for a specific storage provider.
When to use: the user wants to store uploads in their own cloud bucket instead of UploadKit's managed R2. Typical triggers: compliance/data-residency requirements, existing bucket infra, desire to avoid vendor lock-in.
Returns: a plain-text string with three sections — provider-specific notes, the .env variable block, and the TypeScript handler code. Credentials are always server-side; the browser never sees them. Read-only, deterministic. No network calls, no secrets exposed.
| Name | Required | Description | Default |
|---|---|---|---|
| provider | Yes | The storage provider to configure. "s3" = AWS S3 (watch egress costs). "r2" = Cloudflare R2 (recommended — zero egress fees). "gcs" = Google Cloud Storage via HMAC interop. "b2" = Backblaze B2 (S3-compatible, cheap egress). Choose based on where the user's bucket already lives. | |
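The call itself is a standard MCP `tools/call` request. The sketch below builds that payload for the provider enum documented above; the JSON-RPC envelope follows the MCP specification, and everything beyond the tool name and `provider` argument is generic plumbing, not UploadKit-specific API.

```typescript
// Sketch of the MCP tools/call payload an agent sends for get_byos_config.
// The tool name and the "provider" enum come from the parameter table above.
type ByosProvider = "s3" | "r2" | "gcs" | "b2";

function buildByosConfigCall(provider: ByosProvider, id = 1) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "tools/call",
    params: {
      name: "get_byos_config",
      arguments: { provider },
    },
  };
}

// R2 is the provider the description recommends (zero egress fees):
const call = buildByosConfigCall("r2");
console.log(JSON.stringify(call, null, 2));
```

The response is the plain-text string described in the Returns line, so no structured output parsing is needed on the client side.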
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses security behavior ('Credentials never reach the browser'), which is valuable context. However, it doesn't mention other behavioral traits like whether this is a read-only operation, error handling, or response format. It adds some value but leaves gaps in behavioral disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the core purpose and followed by a critical security note. Every sentence earns its place by adding essential information without redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is moderately complete for a single-parameter tool. It covers purpose and security but lacks details on return values, error conditions, or prerequisites. For a configuration retrieval tool, this is adequate but has clear gaps in behavioral context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage, but the description compensates by explaining the parameter's purpose: 'with S3, R2, GCS, or Backblaze B2' maps to the 'provider' enum. Since there's only one parameter, this provides adequate semantic context beyond the bare schema, though it doesn't detail format or constraints beyond the enum values.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Return') and resource ('environment + handler configuration for Bring-Your-Own-Storage mode'), specifying the exact functionality. It distinguishes from siblings by focusing on BYOS configuration rather than components, docs, or scaffolding operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context ('with S3, R2, GCS, or Backblaze B2') and provides a security constraint ('Credentials never reach the browser'), but doesn't explicitly state when to use this tool versus alternatives like 'scaffold_provider' or 'get_install_command'. It offers clear context but lacks explicit exclusions or named alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_component (Grade A)
Fetch full metadata plus a ready-to-paste React usage example for one specific UploadKit component.
When to use: once you know the exact component name (from list_components or search_components) and need to show the user how to drop it into their code. The returned "usage" field is copy-pasteable TSX including the correct import line and the styles.css import.
Returns: JSON { name, category, description, inspiration, usage }. If the name does not match any component, returns a suggestion message with the 5 closest matches. Read-only, idempotent.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Exact PascalCase component name. Case-sensitive. Examples: "UploadDropzone", "UploadDropzoneAurora", "UploadProgressRadial", "UploadDataStream". Must match one of the names returned by list_components. | |
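Because a miss returns a suggestion message rather than the documented JSON shape, a client should guard before treating the result as component metadata. This sketch checks the five string fields named in the Returns line; the sample payload is illustrative, not real catalog data.

```typescript
// Documented JSON shape returned by get_component on a successful lookup.
interface ComponentInfo {
  name: string;
  category: string;
  description: string;
  inspiration: string;
  usage: string; // ready-to-paste TSX, including the import lines
}

// Distinguishes a hit from the "5 closest matches" suggestion message.
function isComponentInfo(value: unknown): value is ComponentInfo {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return ["name", "category", "description", "inspiration", "usage"].every(
    (key) => typeof v[key] === "string",
  );
}

// Hypothetical payload in the documented shape (field values made up):
const sample: unknown = {
  name: "UploadDropzone",
  category: "classic",
  description: "Drag-and-drop upload area",
  inspiration: "original primitives",
  usage: "<UploadDropzone />",
};
```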
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the return content (metadata and example) but does not describe error handling, rate limits, authentication needs, or response format details. For a tool with no annotations, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the key information (returning metadata and examples) with zero wasted words. It is appropriately sized for the tool's complexity and structured for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no annotations, no output schema, and 1 parameter with full schema coverage, the description is minimally adequate. It covers the purpose and output content but lacks details on behavioral aspects like errors or response structure, making it incomplete for full contextual understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the parameter 'name' documented as 'Exact component name, e.g. "UploadDropzoneAurora"'. The description adds no additional parameter semantics beyond what the schema provides, such as format constraints or examples beyond the schema's example. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Return') and the resource ('full metadata + a ready-to-paste usage example for a specific UploadKit component'), distinguishing it from siblings like 'list_components' (which lists) and 'search_components' (which searches). It specifies the exact output format, making the purpose highly specific and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when detailed metadata and examples for a specific component are needed, but it does not explicitly state when to use this tool versus alternatives like 'search_components' or 'list_components'. No exclusions or prerequisites are mentioned, leaving usage context somewhat vague.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_doc (Grade A)
Fetch the full markdown content of a single UploadKit docs page by its path, formatted with title, description, source URL, and the body.
When to use: after search_docs identifies a relevant page and you need its full contents to answer a deep question — prefer search_docs first, then get_doc on the top result. Reading the full page avoids relying on snippets that may omit critical context (callbacks, env vars, edge cases).
Returns: a plain-text string — "# {title}\n\n> {description}\n\nSource: {url}\n\n---\n\n{content}". If the path is unknown, returns a not-found message suggesting list_docs. Read-only, idempotent.
| Name | Required | Description | Default |
|---|---|---|---|
| path | Yes | Docs page path relative to /docs, WITHOUT leading slash and WITHOUT .mdx extension. Examples: "core-concepts/byos", "sdk/next/middleware", "api-reference/rest-api", "guides/avatar-upload". Get valid paths from search_docs results (the "path" field) or list_docs. | |
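The Returns line above fixes the exact layout of the plain-text reply, so a client can split it back into fields. The parser below mirrors that template; a not-found message will not match and yields null, which is the cue to fall back to list_docs.

```typescript
// Parses get_doc's documented layout:
// "# {title}\n\n> {description}\n\nSource: {url}\n\n---\n\n{content}"
function parseDocPage(raw: string) {
  const match = raw.match(
    /^# (.+)\n\n> (.+)\n\nSource: (\S+)\n\n---\n\n([\s\S]*)$/,
  );
  if (!match) return null; // e.g. the not-found message suggesting list_docs
  const [, title, description, url, content] = match;
  return { title, description, url, content };
}
```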
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It implies this is a read-only operation by stating 'Return the full content,' but does not specify aspects like authentication requirements, rate limits, error handling, or response format. The description adds basic context but lacks detailed behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by a concise usage guideline. Both sentences are necessary and efficient, with no redundant information. It effectively communicates key points without waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 parameter, no output schema, no annotations), the description is reasonably complete. It covers purpose, usage, and parameter context, though it could benefit from more behavioral details (e.g., response format). The absence of an output schema means the description should ideally hint at return values, but it's adequate for this simple tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the single parameter 'path' with its format. The description adds minimal value by providing example paths ('core-concepts/byos', etc.), but does not explain semantics beyond what the schema provides. Baseline 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Return the full content') and resource ('docs page by path'), distinguishing it from sibling tools like search_docs (which finds pages) and list_docs (which lists pages). It explicitly mentions the tool's role in retrieving detailed content after preliminary searches.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('Use this after search_docs to read a specific page in depth'), including a clear alternative (search_docs) and context for usage. This helps the agent understand the workflow and avoid misuse.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_install_command (Grade C)
Return the exact shell command to install UploadKit packages for a given package manager.
When to use: before asking the user to add dependencies — match their package manager (detect from the presence of pnpm-lock.yaml / package-lock.json / yarn.lock / bun.lockb if you can, otherwise ask or default to pnpm). Saves you from guessing pnpm vs npm vs yarn vs bun syntax.
Returns: a plain-text shell command as a single string (e.g. "pnpm add @uploadkitdev/react @uploadkitdev/next"). Read-only, idempotent, never modifies anything.
| Name | Required | Description | Default |
|---|---|---|---|
| packages | No | Which UploadKit packages to install. Omit to get the default full-stack set: ["@uploadkitdev/react", "@uploadkitdev/next"]. Pass a subset to scope the command, e.g. ["@uploadkitdev/core"] for a framework-agnostic project, or ["@uploadkitdev/react"] for a React app without Next.js. | |
| packageManager | No | Which package manager's syntax to output. Default: "pnpm". Pick the one the user's project actually uses — check their lockfile. | pnpm |
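The defaults in the table pin down the tool's output precisely enough to sketch it client-side. This reimplementation uses the documented default package set and the standard verb for each manager (npm uses `install`; pnpm, yarn, and bun use `add`); it is a model of the documented behavior, not the server's actual code.

```typescript
// Client-side sketch of the command get_install_command is documented to return.
type PackageManager = "pnpm" | "npm" | "yarn" | "bun";

// Default full-stack set, per the "packages" row above.
const DEFAULT_PACKAGES = ["@uploadkitdev/react", "@uploadkitdev/next"];

function installCommand(
  packageManager: PackageManager = "pnpm",
  packages: string[] = DEFAULT_PACKAGES,
): string {
  // npm's install verb differs from the "add" used by pnpm/yarn/bun.
  const verb = packageManager === "npm" ? "install" : "add";
  return `${packageManager} ${verb} ${packages.join(" ")}`;
}
```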
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes the tool as returning an install command, which implies a read-only operation, but does not address potential side effects (e.g., if it modifies state), error handling, rate limits, or authentication needs. The mention of defaults adds some context, but overall behavioral traits are under-specified for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, stating the core purpose in the first sentence. The second sentence adds useful default information without redundancy. Both sentences earn their place, making it efficient, though it could be slightly more structured by explicitly listing parameters.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (2 parameters, no output schema, no annotations), the description is somewhat complete but has gaps. It covers the purpose and defaults adequately, but lacks details on behavioral aspects (e.g., error cases) and does not fully explain parameter semantics, especially for the 'packageManager' enum. This makes it minimally viable but not fully comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 50%, with one parameter ('packages') having a description and the other ('packageManager') lacking one. The description adds value by explaining defaults for both parameters ('Defaults to pnpm + the full stack (react + next)'), which clarifies the 'packages' default and implies 'packageManager' default. However, it does not fully compensate for the coverage gap, as it doesn't detail the semantics of the 'packageManager' enum options beyond the default.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Return the install command for UploadKit packages.' It specifies the verb ('Return') and resource ('install command for UploadKit packages'), making the function unambiguous. However, it does not explicitly differentiate from sibling tools like 'get_component' or 'get_doc', which may also retrieve information but for different resources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal guidance: it mentions defaults ('Defaults to pnpm + the full stack (react + next)'), which implies when to use it for standard cases. However, it lacks explicit instructions on when to use this tool versus alternatives (e.g., 'scaffold_provider' or 'scaffold_route_handler' for setup tasks), prerequisites, or exclusions, leaving usage context vague.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_quickstart (Grade C)
Return the complete UploadKit quickstart walkthrough for Next.js — install, API key env, route handler, provider, first component, optional BYOS — in one markdown document.
When to use: the user is brand new to UploadKit and asks "how do I get started?", "set this up for me", or any variation that signals zero prior context. Prefer scaffold_route_handler + scaffold_provider + get_install_command when you already know which specific step they need.
Returns: a plain-text markdown document. Takes no parameters. Read-only, static content, idempotent.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions providing a guide but doesn't disclose behavioral traits such as whether it returns text, code snippets, interactive content, or if it requires authentication, has rate limits, or any side effects. This leaves significant gaps in understanding how the tool behaves.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that clearly states the tool's purpose without unnecessary words. It's appropriately sized and front-loaded, making it easy to understand quickly. However, it could be slightly more structured by including key details upfront.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description is incomplete. It doesn't explain what the quickstart guide contains, its format, or how to use the output. For a tool that likely returns instructional content, more context on the return type and usage would be helpful to compensate for missing structured data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so no parameters need documentation. The description doesn't add parameter details, which is acceptable here. Baseline is 4 for zero parameters, as there's nothing to compensate for.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool provides a 'full end-to-end quickstart guide for UploadKit on Next.js', which gives a general purpose but lacks specificity about what exactly the guide contains or delivers. It doesn't clearly distinguish this from sibling tools like 'get_doc' or 'get_install_command', which might also provide documentation or instructions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. For example, it doesn't specify if this is for beginners, for specific use cases, or how it differs from other documentation tools like 'get_doc' or 'search_docs'. The description implies usage for quickstart needs but offers no explicit context or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_components (Grade A)
List every React upload component shipped by @uploadkitdev/react with its name, category, one-line description, and design inspiration.
When to use: before recommending or scaffolding any UploadKit component, to confirm the exact name exists and to pick the right variant for the user's context (e.g. browse all "dropzone" variants when the user wants a drag-and-drop area).
Returns: JSON { count, components: [{ name, category, description, inspiration }] }. Read-only, no side effects, idempotent.
| Name | Required | Description | Default |
|---|---|---|---|
| category | No | Optional filter. Narrows the list to one category. Omit to get every component. Values: "classic" (the original 5 primitives like UploadButton/UploadDropzone), "dropzone" (styled drag-and-drop variants), "button" (styled button variants with motion), "progress" (upload progress indicators), "motion" (motion-forward visualizations like data streams, particles), "specialty" (avatars, chat composers, wizards, envelopes), "gallery" (multi-file layouts like grid, timeline, kanban). | |
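When the server-side category filter isn't enough (say, combining two categories), the documented `{ count, components }` payload is easy to post-filter on the client. The sample entries below reuse component names and categories that appear elsewhere on this page, with made-up descriptions.

```typescript
// Documented list_components return shape.
interface ComponentSummary {
  name: string;
  category: string;
  description: string;
  inspiration: string;
}

interface ListComponentsResult {
  count: number;
  components: ComponentSummary[];
}

// Extracts the names in one category from a listing.
function byCategory(result: ListComponentsResult, category: string): string[] {
  return result.components
    .filter((c) => c.category === category)
    .map((c) => c.name);
}

// Illustrative sample, not real catalog data:
const listing: ListComponentsResult = {
  count: 2,
  components: [
    {
      name: "UploadDropzone",
      category: "classic",
      description: "Drag-and-drop upload area",
      inspiration: "original primitives",
    },
    {
      name: "UploadDropzoneAurora",
      category: "dropzone",
      description: "Styled drag-and-drop variant",
      inspiration: "aurora gradients",
    },
  ],
};
```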
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the tool lists 'every component' and allows optional filtering, which covers basic functionality. However, it doesn't describe output format (e.g., structure of the list, pagination), error handling, or performance characteristics. For a tool with no annotations, this is adequate but lacks depth in behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the core purpose and followed by usage guidance. Every sentence earns its place by providing essential information without redundancy. It's efficiently structured and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 optional parameter, no output schema, no annotations), the description is mostly complete. It covers purpose, usage, and parameter context adequately. However, without annotations or output schema, it could benefit from mentioning the return format (e.g., list of component names or objects) to fully guide the agent. It's sufficient but has a minor gap in output details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the parameter 'category' fully documented in the schema (including enum values and optional nature). The description adds minimal value beyond the schema by mentioning filtering by category and listing the enum values, but doesn't provide additional semantics like default behavior or examples. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List') and resource ('every component available in @uploadkitdev/react'), making the purpose specific and unambiguous. It distinguishes this tool from siblings like 'get_component' (which retrieves a specific component) and 'search_components' (which likely performs keyword-based searches rather than listing all or filtered components).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('before suggesting a component so you pick one that actually exists') and provides an alternative context (filtering by category). It differentiates from 'search_components' by implying this tool is for comprehensive listing with optional filtering, not keyword-based searching. The guidance is direct and practical.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_docs (Grade A)
Enumerate every available UploadKit docs page with title, description, URL, and path.
When to use: to discover what documentation exists before targeted searching, or to orient yourself around the shape of the docs site. Prefer search_docs when you already have a concrete question.
Returns: JSON { count, generatedAt, pages: [{ path, url, title, description }] }. Pages are sorted alphabetically by path. Read-only, static at bundle time, idempotent.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
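Since the Returns line documents both the page shape and the alphabetical-by-path ordering, a client can look up a page by path and sanity-check the ordering invariant. Sample paths below are taken from the get_doc examples on this page.

```typescript
// Documented list_docs page shape.
interface DocsPage {
  path: string;
  url: string;
  title: string;
  description: string;
}

// Linear lookup by path (binary search would also work, since pages
// are documented as sorted alphabetically by path).
function findPage(pages: DocsPage[], path: string): DocsPage | undefined {
  return pages.find((p) => p.path === path);
}

// Checks the documented sort invariant.
function isSortedByPath(pages: DocsPage[]): boolean {
  return pages.every((p, i) => i === 0 || pages[i - 1].path <= p.path);
}
```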
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It describes the output format (title, description, URL, path) and the tool's exploratory purpose, but lacks details on behavioral traits like pagination, rate limits, permissions needed, or whether the list is comprehensive vs. filtered. It adequately covers the basic operation but misses deeper behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences that are front-loaded with the core purpose and efficiently add usage guidance. Every word earns its place, with no redundancy or wasted text, making it highly efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (0 parameters, no output schema, no annotations), the description is reasonably complete for basic use. However, it lacks details on output format (e.g., structure of the list, pagination) and behavioral aspects like performance or limitations, which could be helpful despite the simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters, and schema description coverage is 100%. The description doesn't need to add parameter details, so it meets the baseline of 4 for zero-parameter tools. It appropriately focuses on the tool's function without unnecessary parameter explanations.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('List') and resource ('every docs page'), and lists the returned fields (title, description, URL, path). It doesn't explicitly differentiate from sibling tools like 'get_doc' or 'search_docs', but the scope is well-defined.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('to discover what documentation is available before searching'), which implicitly suggests it's for exploration rather than targeted retrieval. However, it doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scaffold_provider (grade: B)
Return a ready-to-paste snippet that wraps the Next.js root layout with <UploadKitProvider> so React components can talk to the upload route handler.
When to use: right after scaffold_route_handler, to complete the wiring. The snippet goes in app/layout.tsx. Without the provider, UploadKit React components throw at runtime.
Returns: a plain-text string containing a short explanatory note followed by a fenced tsx code block. Takes no parameters — the endpoint path is always /api/uploadkit since that is what scaffold_route_handler produces. Read-only, deterministic, idempotent.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
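Per the description above, the tool returns a short note plus a fenced tsx block that wires the provider into app/layout.tsx. A minimal sketch of what that snippet likely looks like follows; the "uploadkit/react" import path and the endpoint prop name are assumptions for illustration, not confirmed UploadKit API:

```tsx
// app/layout.tsx: illustrative sketch only. The import path and the
// `endpoint` prop name are assumptions, not UploadKit's confirmed API.
import type { ReactNode } from "react";
import { UploadKitProvider } from "uploadkit/react";

export default function RootLayout({ children }: { children: ReactNode }) {
  return (
    <html lang="en">
      <body>
        {/* /api/uploadkit matches the route produced by scaffold_route_handler */}
        <UploadKitProvider endpoint="/api/uploadkit">{children}</UploadKitProvider>
      </body>
    </html>
  );
}
```

As the description notes, omitting this wrapper from the root layout makes UploadKit React components throw at runtime.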
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes the output ('ready-to-paste snippet') but lacks details on traits like whether this is a read-only operation, if it requires authentication, potential rate limits, or error conditions. For a tool with zero annotation coverage, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the key action ('Return a ready-to-paste snippet') and context ('wraps the Next.js root layout with <UploadKitProvider>'). There is no wasted verbiage, and every word contributes to understanding the tool's purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema, no annotations), the description is adequate but incomplete. It explains what the tool does but lacks context on behavioral traits, usage scenarios, or output format details. For a code-generation tool, more guidance on integration steps or example output would enhance completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters, and schema description coverage is 100%, so there are no parameters to document. The description doesn't need to add parameter semantics, but it correctly implies no inputs are required. This meets the baseline for tools with no parameters, though it doesn't explicitly state 'no parameters needed'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Return a ready-to-paste snippet that wraps the Next.js root layout with <UploadKitProvider>.' It specifies the verb ('Return'), resource ('ready-to-paste snippet'), and target context ('Next.js root layout'). However, it doesn't explicitly differentiate from sibling tools like 'scaffold_route_handler' or 'get_component' that might also generate code snippets.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing a Next.js project), exclusions (e.g., not for other frameworks), or comparisons to siblings like 'get_install_command' or 'get_quickstart' that might offer related setup instructions. Usage is implied from the purpose but not explicitly stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scaffold_route_handler (grade: B)
Generate the complete file content for a Next.js App Router upload route handler — typed file router, handler export, correct path comment.
When to use: when the user is setting up UploadKit server-side in a Next.js App Router project and needs the app/api/uploadkit/[...uploadkit]/route.ts file created. The returned string is a complete, compilable TypeScript file — write it to disk as-is.
Returns: a markdown-formatted string containing the target path and the complete TS source inside a fenced code block. You must create the file at the literal path app/api/uploadkit/[...uploadkit]/route.ts. Read-only — generates text, never touches the filesystem itself.
| Name | Required | Description | Default |
|---|---|---|---|
| routeName | Yes | The key for this file route in the `FileRouter` object. This exact string is what consumers pass as the `route` prop on components (e.g. `<UploadDropzone route="media" />`). Use a short lowercase identifier matching the file-category — examples: "media" for a general images+videos endpoint, "avatar" for user profile pictures, "attachments" for message/ticket attachments, "documents" for PDFs. | |
| maxFileSize | No | Maximum allowed size per uploaded file, expressed with a unit suffix. Examples: "4MB" (default), "512KB", "1GB", "100MB". Omit to use the default of "4MB". Rejects uploads larger than this value with a 413 response. | |
| allowedTypes | No | MIME types (or wildcard patterns) that this route accepts. Examples: ["image/*"] (default — any image), ["image/jpeg", "image/png"] (two specific types), ["application/pdf"] (PDF only), ["image/*", "video/mp4"] (images plus MP4). Omit for the default of ["image/*"]. Rejects mismatched uploads with a 415 response. | |
| maxFileCount | No | Maximum number of files per single upload request. Default: 1. Set to a larger number to enable multi-file drag-and-drop (e.g. 10 for gallery uploaders). Must be >= 1. | |
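The size, type, and count constraints above can be made concrete with a small self-contained sketch. None of this is UploadKit's actual implementation; parseSize, matchesType, and checkUpload are hypothetical helpers written only to illustrate the documented defaults and 413/415 rejection semantics:

```typescript
// Hypothetical helpers illustrating the parameter semantics above;
// not part of UploadKit's real API.
const UNIT_BYTES: Record<string, number> = { KB: 1024, MB: 1024 ** 2, GB: 1024 ** 3 };

// Parse a size string with a unit suffix, e.g. "4MB" or "512KB", into bytes.
function parseSize(size: string): number {
  const m = /^(\d+)(KB|MB|GB)$/.exec(size);
  if (!m) throw new Error(`invalid size: ${size}`);
  return Number(m[1]) * UNIT_BYTES[m[2]];
}

// Match a MIME type against patterns like "image/*" or "application/pdf".
function matchesType(mime: string, patterns: string[]): boolean {
  return patterns.some((p) =>
    p.endsWith("/*") ? mime.startsWith(p.slice(0, -1)) : mime === p
  );
}

// Mirror the documented rejections: 413 for oversize, 415 for a bad type.
function checkUpload(
  sizeBytes: number,
  mime: string,
  opts: { maxFileSize?: string; allowedTypes?: string[] } = {}
): number {
  const limit = parseSize(opts.maxFileSize ?? "4MB"); // documented default
  const types = opts.allowedTypes ?? ["image/*"];     // documented default
  if (sizeBytes > limit) return 413;
  if (!matchesType(mime, types)) return 415;
  return 200;
}
```

The defaults in checkUpload mirror the table: "4MB" and ["image/*"] apply whenever the optional parameters are omitted.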
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden but lacks behavioral disclosure. It states what the tool generates but doesn't mention whether this creates/modifies files, requires specific permissions, has side effects like overwriting existing files, or includes error handling. For a file generation tool with zero annotation coverage, this is a significant gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys the tool's purpose with zero waste. It front-loads the key action and target, making it easy to parse while including necessary technical details like the file path.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 parameters with full schema coverage but no output schema or annotations, the description is adequate but has clear gaps. It specifies what's generated but not the format or content of the output (e.g., TypeScript code), leaving the agent uncertain about the result. For a code generation tool, more context on output would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing full parameter documentation. The description doesn't add meaning beyond the schema, as it doesn't explain how parameters like routeName integrate with the generated code or affect the output. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Generate the file content') and target resource ('Next.js App Router route handler at app/api/uploadkit/[...uploadkit]/route.ts'), with precise technical details including the file path and router typing. It distinguishes from sibling tools like scaffold_provider by specifying route handler generation rather than provider scaffolding.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives is provided. While the description implies usage for creating route handlers in Next.js with UploadKit, it doesn't mention when not to use it or what alternatives exist among sibling tools like scaffold_provider or get_component for different scaffolding needs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_components (grade: B)
Fuzzy-search the UploadKit component catalog by any free-text keyword — component name, category, description, or design inspiration (e.g. "apple", "stripe", "vercel", "terminal", "progress ring", "kanban board", "matrix").
When to use: the user describes the vibe or use case but does not know the component name yet ("I want something like Stripe Checkout", "show me Apple-style uploaders"). Prefer this over list_components when the goal is discovery rather than enumeration.
Returns: JSON { query, count, matches: [{ name, category, description, inspiration }] }. Read-only, idempotent, case-insensitive.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Free-text search string. Case-insensitive substring match against name, category, description, and inspiration fields. Examples: "terminal", "apple", "progress ring", "kanban", "vercel", "matrix". | |
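The documented matching behaviour reduces to a case-insensitive substring test across four fields. The sketch below is a hypothetical re-implementation for illustration (searchComponents and the Component interface are invented names, and the real server may rank or fuzz differently), producing the documented { query, count, matches } shape:

```typescript
// Hypothetical re-implementation of the documented matching rules;
// not the server's actual code.
interface Component {
  name: string;
  category: string;
  description: string;
  inspiration: string;
}

function searchComponents(catalog: Component[], query: string) {
  const q = query.toLowerCase();
  // Case-insensitive substring match against all four documented fields.
  const matches = catalog.filter((c) =>
    [c.name, c.category, c.description, c.inspiration].some((field) =>
      field.toLowerCase().includes(q)
    )
  );
  // Mirrors the documented JSON shape: { query, count, matches }.
  return { query, count: matches.length, matches };
}
```

Because the inspiration field is searched too, a query like "stripe" can surface a component whose name never mentions it, which is why the docs recommend this tool for vibe-based discovery.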
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions 'fuzzy-search' and gives examples, but doesn't disclose behavioral traits such as how results are returned (e.g., format, pagination), performance characteristics (e.g., speed, limits), or any side effects. This is a significant gap for a search tool with no structured annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no wasted words. It front-loads the core purpose and includes helpful examples in parentheses, making it easy to scan and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a search tool with no annotations and no output schema, the description is incomplete. It doesn't explain what the tool returns (e.g., list of components, metadata), how results are structured, or any limitations (e.g., max results, fuzzy matching rules). For a tool with 1 parameter and 100% schema coverage, it should do more to compensate for the lack of output information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'query' parameter documented as 'Free-text search.' The description adds marginal value by providing examples of query types ('keyword, inspiration, or use case') and concrete examples, but doesn't explain syntax or constraints beyond what the schema implies. Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Fuzzy-search components by keyword, inspiration, or use case.' It specifies the verb (fuzzy-search) and resource (components), and provides concrete examples ('terminal', 'apple', 'progress ring', 'kanban'). However, it doesn't explicitly differentiate from sibling tools like 'list_components' or 'search_docs', which would be needed for a score of 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage through examples of search queries, suggesting when to use this tool for fuzzy searching. However, it lacks explicit guidance on when to choose this over alternatives like 'list_components' (which might list all without search) or 'search_docs' (which searches documentation). No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_docs (grade: A)
Full-text search across every UploadKit docs page (88+ pages — getting-started, core-concepts, SDK reference, API reference, dashboard, guides). Ranks matches by keyword frequency in title, description, and body.
When to use: any question about UploadKit behaviour, configuration, or integration that the component tools do not answer — middleware, onUploadComplete callbacks, REST API endpoints, webhooks, presigned URLs, CSS theming variables, type-safety setup, migration from UploadThing, rate limits, etc.
Returns: JSON { query, count, indexGeneratedAt, matches: [{ path, url, title, description, snippet, score }] }. Sorted by score descending. Read-only. Bundled index (no network call) — results reflect docs at build time.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of matches to return. Default: 8. Range 1-50. Use smaller values (3-5) when you already have a narrow query; use larger values (15-20) for exploratory scans across the whole docs site. | |
| query | Yes | Free-text search query. Multiple words are ANDed with per-field weighting (title matches score highest). Examples: "middleware onUploadComplete", "theming css variables", "presigned url", "migration uploadthing". | |
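The documented ranking (AND-ed words, title weighted highest, results sorted by score descending, capped by limit) can be sketched as follows. This is an illustrative approximation: the 3x title weight and the searchDocs helper are assumptions, and the real index also returns path, url, description, and snippet fields:

```typescript
// Hypothetical ranking sketch; the 3x title weight is an assumed value,
// not UploadKit's real weighting.
interface DocPage {
  title: string;
  body: string;
}

// Count literal, case-insensitive occurrences of a word in a text.
function countOccurrences(text: string, word: string): number {
  return text.toLowerCase().split(word.toLowerCase()).length - 1;
}

function searchDocs(pages: DocPage[], query: string, limit = 8) {
  const words = query.toLowerCase().split(/\s+/).filter(Boolean);
  const matches = pages
    // AND semantics: every query word must appear somewhere in the page.
    .filter((p) =>
      words.every((w) => (p.title + " " + p.body).toLowerCase().includes(w))
    )
    .map((p) => ({
      title: p.title,
      score: words.reduce(
        (sum, w) => sum + 3 * countOccurrences(p.title, w) + countOccurrences(p.body, w),
        0
      ),
    }))
    .sort((a, b) => b.score - a.score) // highest score first
    .slice(0, limit);                  // documented default limit of 8
  return { query, count: matches.length, matches };
}
```

A narrow two-word query drops any page missing either word, which matches the guidance to shrink limit for targeted queries and raise it for exploratory scans.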
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the search behavior (full-text across all docs), scope (88+ pages), and return format (ranked matches with title, URL, snippet, and path). It doesn't mention performance characteristics, rate limits, or authentication needs, but provides substantial operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first establishes purpose and scope, the second provides usage guidelines and return format. Every element serves a clear purpose with zero wasted words, making it easy to parse and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a search tool with no annotations and no output schema, the description provides excellent context about scope, usage scenarios, and return format. It could potentially mention pagination or result ordering details, but covers the essential operational context well given the tool's complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents both parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline expectation without providing additional semantic context about how parameters affect the search behavior.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Full-text search') and resource ('across every UploadKit docs page'), with explicit scope covering 88+ pages across multiple documentation sections. It distinguishes from sibling tools by specifying this is for documentation search rather than component search or other operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('whenever the user asks about UploadKit behaviour, APIs, middleware, webhooks, theming, BYOS, type safety, or any topic not covered by the component tools'), effectively distinguishing it from sibling tools like search_components and get_doc by specifying its broader documentation search scope.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.