AgentBuilders
Server Details
Deploy full-stack web apps with database, file storage, auth, and RBAC via a single API call.
- Status: Unhealthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across 13 of 13 tools scored. Lowest: 3.2/5.
Every tool has a clearly distinct purpose with no ambiguity. For example, create_app reserves an app name without deploying, while deploy_app actually deploys files; get_app retrieves app details, while get_deployment focuses on version history; and set_secret stores API keys, while verify_integration tests connectivity. The descriptions clearly differentiate overlapping concepts like deployment and previews.
All tools follow a consistent verb_noun pattern using snake_case throughout. Examples include claim_deployment, create_app, deploy_app, get_app, list_apps, update_app, set_secret, and verify_integration. This uniformity makes the toolset predictable and easy for agents to understand and navigate.
With 13 tools, the count is well-scoped for the server's purpose of app deployment and management. Each tool earns its place by covering distinct aspects like app creation, deployment, configuration, monitoring, and integration handling. This provides comprehensive coverage without being overwhelming or insufficient for the domain.
The toolset offers complete CRUD/lifecycle coverage for app deployment and management. It includes creation (create_app), deployment (deploy_app, deploy_preview), retrieval (get_app, list_apps, get_deployment, get_logs, get_viewkey), update (update_app), and deletion (implied via deployment management). It also covers advanced features like custom domains, secret management, and integration verification, leaving no obvious gaps for agent workflows.
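The creation-then-deployment lifecycle described above can be sketched as MCP `tools/call` payloads. The JSON-RPC envelope below is the standard MCP request shape; the argument names for `set_secret` and `verify_integration` are assumptions, since their parameter tables do not appear in this section.

```python
# Sketch of the create -> configure -> deploy workflow as MCP tool calls.
# Argument shapes for create_app and deploy_app follow the parameter
# tables on this page; set_secret/verify_integration args are guesses.

def tool_call(request_id, name, arguments):
    """Build a standard MCP tools/call JSON-RPC request envelope."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# 1. Reserve the app name and provision a table (no files yet).
create = tool_call(1, "create_app", {
    "name": "my-demo-app",
    "database_tables": {"notes": {"body": "TEXT", "created": "INTEGER"}},
})

# 2. Attach a third-party API key, then verify connectivity (hypothetical args).
secret = tool_call(2, "set_secret", {
    "name": "my-demo-app", "key": "STRIPE_KEY", "value": "sk_live_...",
})
verify = tool_call(3, "verify_integration", {"name": "my-demo-app"})

# 3. Push files to go live.
deploy = tool_call(4, "deploy_app", {
    "name": "my-demo-app",
    "files": {"index.html": "<h1>Hello</h1>"},
})
```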
Available Tools
13 tools

claim_deployment (A)

Claim ownership of an agent-deployed app using its claim code. Transfers the app from the deploying agent to your account.
| Name | Required | Description | Default |
|---|---|---|---|
| claim_code | Yes | The 8-character claim code (format: XXXX-XXXX) provided by the deploying agent | |
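A client can sanity-check the claim code shape before calling the tool. This is a minimal sketch: the page only specifies the XXXX-XXXX pattern, so the uppercase-alphanumeric character set is an assumption.

```python
import re

# Loose client-side check for the XXXX-XXXX claim code shape.
# The [A-Z0-9] charset is an assumption; the schema only gives the pattern.
CLAIM_CODE = re.compile(r"^[A-Z0-9]{4}-[A-Z0-9]{4}$")

def looks_like_claim_code(code: str) -> bool:
    """Return True if the string matches the XXXX-XXXX claim code shape."""
    return CLAIM_CODE.fullmatch(code) is not None
```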
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate non-readOnly (a write operation), which the description supports with 'Transfers' and 'Claim ownership'. It adds valuable behavioral detail about the transfer direction ('from the deploying agent to your account') beyond what the annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. First establishes the action and mechanism; second clarifies the ownership transfer outcome. Front-loaded with essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete for a single-parameter ownership transfer tool. Explains the business operation (transfer) and references the input method (claim code). No output schema exists, so no return value explanation expected.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, including the format (XXXX-XXXX). The description references 'claim code' but adds minimal semantic value beyond the schema's detailed parameter description; a baseline score of 3 is appropriate given the high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verbs ('Claim ownership', 'Transfers') and clear resource ('agent-deployed app'). Distinctly differs from siblings like create_app, deploy_app, or get_deployment by explicitly targeting ownership transfer using claim codes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies clear usage context ('agent-deployed app', 'provided by the deploying agent') indicating when to use it (taking over agent deployments). Lacks explicit 'when not to use' or named alternatives, but context effectively signals appropriate use case.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_app (A, Idempotent)
Create an app entry without deploying files. Reserves the name and sets initial configuration. Supports BYO (Bring Your Own) integrations via proxy secrets — after creation, use set_secret to attach third-party API keys and verify_integration to confirm connectivity. Use deploy_app later to push files.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | App subdomain name (lowercase alphanumeric + hyphens, 3-63 chars) | |
| public | No | Make app publicly accessible (default: false) | |
| enable_auth | No | Enable user registration/login for this app | |
| enable_files | No | Enable R2 file storage for this app | |
| database_tables | No | Database tables to provision: { table_name: { column: 'TEXT' \| 'INTEGER' \| 'REAL' \| 'BLOB' } } | |
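A hypothetical `create_app` argument object, following the parameter table above. The column types are the four values the schema allows (SQLite storage classes); the table and column names are invented for illustration.

```python
# Hypothetical create_app arguments matching the parameter table above.
# Column types must be one of the four SQLite storage classes the schema
# permits: TEXT, INTEGER, REAL, BLOB.
ALLOWED_TYPES = {"TEXT", "INTEGER", "REAL", "BLOB"}

args = {
    "name": "notes-app",   # lowercase alphanumeric + hyphens, 3-63 chars
    "public": False,       # matches the documented default
    "enable_auth": True,
    "database_tables": {
        "notes": {"title": "TEXT", "body": "TEXT", "created_at": "INTEGER"},
    },
}

# Validate the nested table spec before sending the tool call.
for table, columns in args["database_tables"].items():
    for column, ctype in columns.items():
        assert ctype in ALLOWED_TYPES, f"{table}.{column}: bad type {ctype}"
```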
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare idempotency and non-destructive nature. The description adds valuable behavioral context not in annotations: name reservation behavior, BYO integration patterns via proxy secrets, and the explicit two-phase creation-then-deployment workflow.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four tightly structured sentences with zero waste: sentence 1 defines the action, sentence 2 describes side effects, sentence 3 covers advanced BYO usage with specific follow-up tools, and sentence 4 directs to the deployment sibling. Information density is high with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 100% schema coverage and comprehensive annotations, the description successfully covers the workflow complexity and integration patterns. Minor gap: lacks description of return values (no output schema exists), but the stated side effects (name reservation, config setting) provide sufficient operational context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all 5 parameters. The description adds conceptual grouping ('initial configuration' covers the boolean flags and database_tables) and clarifies that 'name' involves reservation, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the core action ('Create an app entry'), the specific scope ('without deploying files'), and the side effect ('Reserves the name'). It clearly distinguishes this tool from the sibling 'deploy_app' by emphasizing that this only sets initial configuration.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly prescribes the workflow sequence: use 'set_secret' after creation for BYO integrations, 'verify_integration' to confirm connectivity, and 'deploy_app' later to push files. This clearly establishes when to use this tool versus its siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_custom_domain (A)
Attach a custom domain to an app. Returns DNS instructions (CNAME + TXT records) for verification. Requires Hobby tier or above.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | App name to attach the domain to | |
| hostname | Yes | Custom domain hostname (e.g., "app.example.com") | |
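Before calling `create_custom_domain`, a client might check that the hostname at least looks like a fully qualified domain. This is a convenience sketch following the standard hostname label rules (RFC 1123 length limits), not the server's actual validation.

```python
import re

# Loose client-side check that a hostname looks like a fully qualified
# domain (e.g. "app.example.com"). Not the server's validation logic.
LABEL = r"[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?"
HOSTNAME = re.compile(rf"^({LABEL}\.)+{LABEL}$", re.IGNORECASE)

def is_plausible_hostname(hostname: str) -> bool:
    """Require at least two dot-separated labels and a sane total length."""
    return len(hostname) <= 253 and HOSTNAME.fullmatch(hostname) is not None
```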
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses return value ('Returns DNS instructions (CNAME + TXT records)') which annotations do not cover, explaining the verification workflow. Does not explicitly address idempotency (annotation declares idempotentHint=false) but does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences efficiently structured: action statement, return value disclosure (critical given no output schema), and prerequisite. No redundant information; every sentence provides essential operational context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates for missing output schema by documenting return values (DNS records). Covers prerequisites (tier requirement) and core functionality. Adequate for a 2-parameter tool with full annotation coverage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions for both 'name' and 'hostname'. Description references 'app' and 'custom domain' conceptually aligning with parameters but does not add syntax or semantic details beyond the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action ('Attach') and resource ('custom domain') with clear scope ('to an app'). Distinct from sibling tools like create_app or deploy_app which handle different resources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit prerequisite ('Requires Hobby tier or above') defining when the tool can be used. Lacks explicit comparison to alternative approaches or siblings, though no direct domain-related siblings exist in the set.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
deploy_app (A, Idempotent)
Deploy a full-stack web app. Creates or redeploys an app with static files, optional database, file storage, user auth, and RBAC. Supports BYO (Bring Your Own) integrations — after deploy, use set_secret to attach third-party API keys (Stripe, Resend, Neon, etc.) and the app can proxy requests through them securely. Returns a live URL.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | App subdomain name (lowercase alphanumeric + hyphens, 1-63 chars) | |
| files | Yes | Map of filename to content string | |
| public | No | Make app publicly accessible (default: private with viewKey) | |
| version | No | Expected version for optimistic concurrency (CAS) | |
| rbac_roles | No | RBAC role definitions: { role_name: { permissions: [...], inherits: [...] } } | |
| enable_auth | No | Enable user registration/login for this app | |
| enable_files | No | Enable R2 file storage for this app | |
| database_tables | No | Database tables: { table_name: { column: 'TEXT' \| 'INTEGER' \| 'REAL' \| 'BLOB' } } | |
| rbac_default_role | No | Default role for new users | |
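A hypothetical `deploy_app` payload using the optimistic-concurrency `version` parameter. The assumption, based on the "CAS" note in the schema, is that the deploy is rejected if the server's current version no longer matches; reading the current version from `get_app` first is an inferred workflow, not one the page states.

```python
# Sketch of a deploy_app argument object with compare-and-swap versioning.
# current_version would come from a prior get_app call (assumption).
current_version = 7  # hypothetical value read from get_app

deploy_args = {
    "name": "notes-app",
    "files": {
        # Map of filename to content string, per the schema.
        "index.html": "<!doctype html><h1>Notes</h1>",
        "app.js": "console.log('ready');",
    },
    "version": current_version,  # expected version for optimistic concurrency
    "public": True,
}
```

If a concurrent deploy bumps the server-side version first, this call should fail rather than silently overwrite the newer deployment.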
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Aligns with annotations (idempotentHint supported by 'redeploys', openWorldHint supported by external API mentions). Adds crucial behavioral context not in annotations: BYO integration pattern, secure proxy behavior for secrets, and explicit return value (live URL). Could explicitly mention optimistic concurrency/version parameter significance.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero waste. Front-loaded with core purpose, followed by capabilities, integration workflow, and return value. Each sentence serves distinct purpose. Appropriate length for complex tool with 9 parameters and nested objects.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a complex infrastructure tool. Addresses deployment scope (full-stack), security model (BYO + proxy), output (live URL), and post-deployment workflow. No output schema provided, but description compensates by stating return value. Minor improvement possible by explicitly noting idempotent redeploy behavior or CAS version semantics.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage (baseline 3). Description adds functional grouping context, mapping parameters to features: 'static files' (files), 'optional database' (database_tables), 'file storage' (enable_files), 'user auth' (enable_auth), and 'RBAC' (rbac_roles). This semantic grouping helps agents understand parameter relationships beyond individual schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Deploy' with clear resource 'full-stack web app'. Distinguishes from siblings by stating 'Creates or redeploys' (differentiating from create_app) and explicitly referencing set_secret for BYO integrations, establishing a workflow relationship with that sibling tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit workflow guidance: 'after deploy, use set_secret' for third-party API keys. Clearly scopes capabilities (static files, database, auth, RBAC). Minor gap: does not explicitly distinguish when to use deploy_preview vs this production deploy tool, though 'live URL' implies production use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
deploy_preview (A)
Deploy an ephemeral preview of an app. The preview auto-expires after the specified TTL (default: 1 hour, max: 24 hours). Returns a unique preview URL.
| Name | Required | Description | Default |
|---|---|---|---|
| ttl | No | Preview lifetime in seconds (default: 3600, max: 86400) | |
| name | Yes | Base app name for the preview | |
| files | Yes | Map of filename to content string | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds significant behavioral context beyond annotations: specifies auto-expiration mechanics, TTL constraints (default/max), and return value format (unique preview URL). Annotations cover safety hints (readOnly: false, destructive: false) but description adds lifecycle and output semantics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three well-structured sentences: purpose statement first, behavioral constraints second, return value third. Zero redundancy; every sentence conveys distinct information not duplicated in schema or annotations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a 3-parameter deployment tool. Compensates for missing output schema by explicitly stating the return value (preview URL). Could benefit from explicit mention of idempotency behavior (creates new preview each time) but annotations cover this hint.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage, but the description enhances it by providing human-readable time units (hours) for the TTL constraints alongside the schema's seconds. It also implicitly contextualizes the 'files' parameter through the preview deployment scenario.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Deploy' with clear resource 'ephemeral preview of an app'. The term 'ephemeral' and 'preview' effectively distinguishes this from the sibling 'deploy_app' (permanent deployment) and 'create_app' (app creation vs preview generation).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit guidance through 'ephemeral' and 'auto-expires' concepts, suggesting temporary/testing use cases. However, lacks explicit when-to-use guidance or direct comparison to siblings like 'deploy_app' for permanent deployments.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_app (B, Read-only, Idempotent)
Get details of a specific app including URL, capabilities, and version.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | App name | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (readOnly, idempotent, non-destructive). Description adds valuable context about what data is returned (URL, capabilities, version) but omits error behavior (e.g., what happens if app not found) and response format details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single well-structured sentence front-loaded with the action. Every clause earns its place by specifying either the target resource or the specific fields returned. Minor improvement possible by explicitly mentioning the lookup key.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a low-complexity single-parameter tool with rich annotations. Since no output schema exists, the description partially compensates by listing returned fields (URL, capabilities, version), though it could elaborate on the response structure or error cases.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with 'name' described as 'App name'. Description does not add format constraints, validation rules, or clarify whether 'name' is a slug, ID, or display name, but baseline 3 is appropriate given complete schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb 'Get' and resource 'app' with specific scope (URL, capabilities, version). However, it does not explicitly distinguish from sibling 'list_apps' (which returns many apps) or 'get_deployment' (which returns deployment-specific details).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use versus alternatives (e.g., 'use list_apps first if you don't know the app name'), prerequisites, or error conditions. The word 'specific' implies lookup by identifier but does not explain how to obtain valid names.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_deployment (A, Read-only, Idempotent)
Get deployment history and version info for an app. Returns the current version, available rollback versions, and deploy trail entries.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | App name | |
| limit | No | Max number of deploy trail entries to return (default: 20, max: 100) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already establish this is read-only and safe (readOnlyHint=true, destructiveHint=false). The description adds valuable behavioral context by specifying exactly what data is returned: 'current version, available rollback versions, and deploy trail entries'—information not available in the structured annotations or input schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: the first states the purpose, the second states the return value. Information is front-loaded and appropriately sized for the tool's complexity. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description compensates effectively by enumerating the specific return values (current version, rollback versions, deploy trail). For a simple read-only tool with two parameters and comprehensive annotations, this provides sufficient context for invocation, though it could mention error cases or pagination behavior for a perfect score.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage ('App name' and detailed limit description), the schema carries the full burden of parameter documentation. The description mentions 'deploy trail entries' which conceptually links to the limit parameter, but adds no additional syntax, format, or semantic details beyond what the schema provides. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') with clear resources ('deployment history and version info for an app'). It effectively distinguishes from siblings like deploy_app (which performs deployments) and get_app (which retrieves general app configuration rather than deployment-specific history and rollback versions).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description clearly states what the tool does, it lacks explicit guidance on when to use it versus alternatives (e.g., when to use this vs. deploy_preview for checking versions) or prerequisites. Usage must be inferred from the description and sibling tool names.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_logs (A, Read-only, Idempotent)
Get recent deploy logs and operational events for an app. Includes deploy successes, failures, rollbacks, and errors.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | App name | |
| limit | No | Max log entries to return (default: 50, max: 200) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (readOnly, idempotent, non-destructive). Description adds valuable context about 'recent' data and specific event types included (rollbacks, errors), but omits return format details, time window definitions, or pagination behavior since no output schema exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-structured sentences with zero waste. First sentence establishes core purpose; second provides specific content details. Front-loaded and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 100% schema coverage and complete annotations, the description adequately covers intent. Lists specific log content types to compensate for missing output schema, though could benefit from mentioning return structure (array vs object) or log entry format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage ('App name' and 'Max log entries'), the baseline is appropriately met. Description implies the 'app' target but doesn't explicitly map to the 'name' parameter or explain the 'limit' parameter's role in retrieving 'recent' logs.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly identifies the resource (deploy logs and operational events) and scope (recent, for an app). The specific inclusion of 'deploy successes, failures, rollbacks, and errors' distinguishes it from sibling get_deployment (likely metadata) and get_app (configuration). Verb 'Get' is slightly generic but acceptable.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage through content description (troubleshooting deploys, monitoring operations), but lacks explicit when-to-use guidance or contrast with alternatives like get_deployment. No mention of prerequisites or filtering limitations beyond the 'recent' qualifier.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_viewkey (A, Read-only, Idempotent)
Get the viewKey for a private app. Users need this to access the app.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | App name | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering the safety profile. The description adds functional context about why the viewKey is needed ('to access the app'), but doesn't disclose additional behavioral traits like error handling when the app doesn't exist or the specific format of the returned key.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero redundancy. The first states the action and resource, the second provides user-centric justification. Every sentence earns its place with no filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple retrieval tool with one string parameter, complete schema coverage, and comprehensive annotations, the description provides adequate context. It successfully explains the business purpose (access control for private apps) even without an output schema defined, though it could briefly mention the expected return type.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage (the 'name' parameter is described as 'App name' in the schema), the baseline is 3. The description mentions 'private app' which contextualizes the input, but doesn't add syntax details, validation rules, or format specifications beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('viewKey') and clarifies the context ('for a private app'). It distinguishes from sibling tools like get_app or get_logs by specifying the viewKey retrieval purpose and mentioning user access requirements, though it doesn't explicitly name sibling alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use the tool ('for a private app', 'Users need this to access'), but lacks explicit guidance on when NOT to use it (e.g., for public apps) or which sibling tool to use instead for other app metadata (like get_app).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_apps (read-only, idempotent)
List all apps deployed with your API key.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (readOnly, idempotent, non-destructive). Description adds valuable auth scoping context ('with your API key') not present in the empty schema, but omits pagination, rate limits, or return structure details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with zero waste. Front-loaded action verb, every phrase earns its place by defining resource and auth scope.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for low complexity (0 params) with rich annotations provided. Lacks output schema but description compensates with auth scope clarification; sufficient for tool selection despite missing return value documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters present, establishing baseline 4 per rubric. No parameters to document beyond schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('List') and resource ('apps') with specific scope ('deployed with your API key'). Implicitly distinguishes from sibling get_app via plural form, but lacks explicit differentiation guidance.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this versus siblings like get_app or claim_deployment, nor any prerequisites or filtering constraints beyond the implied auth scope.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
set_secret (idempotent)
Store a BYO (Bring Your Own) proxy secret for an app. Used to attach third-party API keys so the app can proxy requests through AgentBuilders without exposing keys to the client. Common secret names: STRIPE_SECRET_KEY, RESEND_API_KEY, NEON_DATABASE_URL, SENTRY_DSN, POSTHOG_API_KEY, CLERK_SECRET_KEY, OPENAI_API_KEY, GITHUB_TOKEN. Domains are auto-detected from the secret name (e.g., STRIPE_* routes to api.stripe.com). Secret values are never returned in responses. Use verify_integration after setting a secret to confirm connectivity.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | App name to attach the secret to | |
| secret_name | Yes | Secret identifier — use the service convention: STRIPE_SECRET_KEY, RESEND_API_KEY, NEON_DATABASE_URL, SENTRY_DSN, POSTHOG_API_KEY, CLERK_SECRET_KEY, OPENAI_API_KEY, GITHUB_TOKEN (alphanumeric + dashes/underscores, max 64 chars) | |
| secret_value | Yes | The secret value (e.g., sk_live_..., re_...). Stored securely, never returned. | |
| allowed_domains | No | Restrict the secret to these target domains. Usually auto-detected from the secret name (e.g., STRIPE_* auto-allows api.stripe.com). Override only if needed. | |
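The name-based domain auto-detection the description mentions (STRIPE_* routing to api.stripe.com) can be sketched as a simple prefix lookup. The mapping table below is an illustrative assumption; AgentBuilders does not publish its actual routing table, and only the Stripe entry is stated in the description:

```python
# Illustrative sketch of the secret-name-to-domain auto-detection the
# description implies. The mapping is an assumption for illustration;
# only STRIPE_* -> api.stripe.com is confirmed by the tool description.
PREFIX_TO_DOMAIN = {
    "STRIPE_": "api.stripe.com",
    "RESEND_": "api.resend.com",
    "OPENAI_": "api.openai.com",
    "GITHUB_": "api.github.com",
}

def auto_detect_domains(secret_name: str) -> list:
    """Return the allowed domains inferred from a secret name's prefix."""
    for prefix, domain in PREFIX_TO_DOMAIN.items():
        if secret_name.startswith(prefix):
            return [domain]
    # No known prefix: the caller would pass allowed_domains explicitly.
    return []

print(auto_detect_domains("STRIPE_SECRET_KEY"))  # ['api.stripe.com']
```

This also explains why `allowed_domains` is optional: for conventionally named secrets the server can infer the target domain, and the override exists only for unconventional names or non-default endpoints.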
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover idempotency and read-only status, but description adds critical behavioral details: 'Secret values are never returned in responses' (security model) and 'Domains are auto-detected from the secret name' (routing logic). Explains the proxy architecture ('proxy requests through AgentBuilders') not captured in structured fields.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four well-structured sentences cover purpose, use case, specific examples/conventions, and security/next steps. Zero redundancy; every clause provides unique value (BYO definition, proxy explanation, naming examples, domain routing, security guarantee, integration pointer).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the security-sensitive nature and 100% schema coverage, the description adequately covers the essential behavioral and security context (values never returned, auto-routing). No output schema exists, but the description appropriately addresses this by noting secrets are never returned, which is sufficient for a storage operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage (baseline 3), description adds value by enumerating common secret_name conventions (STRIPE_SECRET_KEY, etc.) and explaining the auto-detection mechanism for allowed_domains ('Override only if needed'), which contextualizes the parameter relationship beyond the schema's individual field descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the specific action ('Store a BYO proxy secret') and resource (third-party API keys for an app). It distinguishes from siblings like create_app or deploy_app by focusing specifically on secret attachment for proxy requests, and explicitly references verify_integration as the complementary check tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context on when to use ('attach third-party API keys so the app can proxy requests... without exposing keys to the client') and explicitly names a sibling alternative/next step ('Use verify_integration after setting a secret'). Lacks explicit 'when-not-to-use' guidance, preventing a 5.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_app (idempotent)
Update app settings (e.g., toggle public/private) without full redeploy.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | App name | |
| public | No | Set public accessibility | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (destructive=false, idempotent=true, readOnly=false). Description adds valuable scope context that this modifies settings rather than code/deployment artifacts, but does not elaborate on idempotent behavior, partial update semantics, or error handling for missing apps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with zero waste. Front-loads action ('Update app settings'), parenthetical example provides immediate context, and trailing qualifier ('without full redeploy') addresses the critical sibling distinction efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple 2-parameter mutation with strong annotations. 'e.g.' implies other settings exist but leaves them undocumented. No output schema present; description appropriately does not attempt to document return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% ('App name', 'Set public accessibility'), establishing baseline 3. Description provides example usage ('toggle public/private') that aligns with but does not substantially extend the schema's description of the boolean parameter. No syntax, validation, or format details beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Update') and resource ('app settings') with specific scope example ('toggle public/private'). The phrase 'without full redeploy' effectively distinguishes this from sibling deployment tools (deploy_app, deploy_preview).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear contextual differentiation via 'without full redeploy,' implicitly guiding selection over deployment tools for configuration-only changes. Lacks explicit 'when not to use' (e.g., for code changes), but the exclusion is strongly implied.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
verify_integration (read-only, idempotent)
Verify a BYO (Bring Your Own) integration is correctly configured for an app. Checks that the proxy secret exists, the target service is reachable, and a lightweight health probe succeeds. Returns structured pass/fail per check. Supported services: stripe, resend, neon, sentry, posthog, clerk, openai, github, supabase.
| Name | Required | Description | Default |
|---|---|---|---|
| service | Yes | Service to verify (stripe, resend, neon, sentry, posthog, clerk, openai, github, supabase) | |
| app_name | Yes | The app to verify integration for | |
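Since no output schema is published, an agent must rely on the promised "structured pass/fail per check" shape. The sketch below shows one way to interpret such a result; the field names `checks`, `name`, and `passed` are assumptions, not a documented schema:

```python
# Hypothetical verify_integration result. The description promises a
# structured pass/fail per check but publishes no schema, so the field
# names below ("checks", "name", "passed") are illustrative assumptions.
result = {
    "service": "stripe",
    "checks": [
        {"name": "secret_exists", "passed": True},
        {"name": "service_reachable", "passed": True},
        {"name": "health_probe", "passed": False},
    ],
}

# Collect the names of any failed checks so the agent can react,
# e.g. by re-running set_secret with a corrected value.
failed = [c["name"] for c in result["checks"] if not c["passed"]]
print("all passed" if not failed else "failed: " + ", ".join(failed))
```

The three checks mirror the ones the description names: proxy secret existence, target service reachability, and a lightweight health probe.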
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable specifics beyond annotations: it details three distinct verification checks (proxy secret existence, service reachability, health probe) and discloses the return format ('structured pass/fail per check'), which compensates for the missing output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear front-loading: purpose first, followed by implementation details, returns, and constraints. Each sentence earns its place, though the services list is somewhat redundant with the schema description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description appropriately explains return values ('structured pass/fail'). It covers verification methodology and supported service constraints, providing sufficient context for an agent to interpret results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema adequately documents both parameters. The description repeats the supported services list but adds no additional semantic context, syntax examples, or format constraints beyond what the schema provides, meeting the baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Verify') and clear resource ('BYO integration'), distinguishing it from deployment-focused siblings like deploy_app or create_app. It scopes the operation to 'correctly configured' status checks.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description implies usage context (verification after configuration), it lacks explicit when-to-use guidance relative to siblings like set_secret (which configures secrets this tool verifies). No alternatives or exclusion criteria are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.