ai-agent-scratchpad
Server Details
Cloudflare Workers MCP server: ai-agent-scratchpad
| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | lazymac2x/ai-agent-scratchpad-api |
| GitHub Stars | 0 |
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.2/5 across 6 of 6 tools scored.
Each tool has a distinct purpose: save checkpoint, delete key, compute diff, list keys, read value, write value. No overlapping functionality; agents can easily distinguish which tool to use.
All tools follow a consistent 'scratchpad_' prefix with a clear verb (checkpoint, delete, diff, list, read, write), making the naming pattern predictable and easy to understand.
With 6 tools, the server is well-scoped for a scratchpad utility. It provides essential CRUD operations plus diff/checkpoint for state management, without unnecessary bloat.
The tool set covers all key operations: create/update (write), read, delete, list, plus snapshot (checkpoint) and comparison (diff). No dead ends; agents can fully manage a scratchpad session.
Available Tools
6 tools

scratchpad_checkpoint (Grade: A)
Saves a snapshot of the current session state under a checkpoint tag. If keys is specified, only those keys are snapshotted.
| Name | Required | Description | Default |
|---|---|---|---|
| tag | Yes | Checkpoint tag (e.g., "step-3", "before-refactor") | |
| keys | No | List of keys to snapshot (optional; all keys if omitted) | |
| session_id | Yes | Session identifier | |
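As a point of reference, here is a minimal sketch of what a call to this tool might look like as an MCP `tools/call` request over Streamable HTTP. The `session_id` and `keys` values are hypothetical placeholders; the `tag` reuses an example format from the parameter table above.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "scratchpad_checkpoint",
    "arguments": {
      "session_id": "agent-run-001",
      "tag": "before-refactor",
      "keys": ["plan", "draft"]
    }
  }
}
```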
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavioral traits. It describes the basic function but fails to mention important behaviors such as whether snapshots overwrite existing tags, whether they are durable, or what happens if the same tag is used again. Neither the impact on the current state nor other side effects are addressed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two short sentences that convey the core functionality and the optional keys behavior. There is no redundant or irrelevant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description explains the input parameters and basic behavior sufficiently for a simple tool. However, it does not mention any output or return value, which could be important for an agent to understand what happens after checkpointing (e.g., success/failure message or a checkpoint object). Given the absence of an output schema, this is a notable gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already explains all parameters. The description adds minor value by restating the optional keys behavior (already in schema) and providing an example tag format in the tag description. This is adequate but not a significant enhancement beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('saves a snapshot'), the resource ('current session state'), and the mechanism ('as a checkpoint tag'). It also distinguishes from siblings like scratchpad_read/write by specifying it creates a checkpoint snapshot. The verb 'saves' and resource 'session state' are specific and unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies a usage context (saving checkpoints for later restoration or comparison) but does not explicitly state when to use this tool over siblings. No exclusions or alternative tool names are mentioned. The purpose is clear but guidance on usage conditions is absent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scratchpad_delete (Grade: C)
Deletes a specific key from the session.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Key to delete | |
| session_id | Yes | Session identifier | |
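A hypothetical invocation in the same `tools/call` shape; the `key` and `session_id` values are placeholders only.

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "scratchpad_delete",
    "arguments": {
      "session_id": "agent-run-001",
      "key": "draft"
    }
  }
}
```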
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must fully convey behavioral traits. It states the tool deletes a key (mutation), but doesn't disclose irreversibility, permissions, side effects on other keys, or whether the deletion is permanent. The description is too terse for a mutating operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, short sentence that immediately conveys the action and resource. It is efficient and front-loaded, though it could slightly benefit from a brief elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given a simple two-parameter tool with full schema coverage and no output schema, the description is minimally viable. However, it lacks usage guidelines and behavioral depth, which are expected for completeness. It adequately covers the core functionality.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (both parameters have descriptions). The tool description adds no extra meaning beyond what's in the schema, so the baseline of 3 applies. The description does not repeat or enhance parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool deletes a specific key from a session, using a verb+resource structure. It distinguishes from sibling tools like scratchpad_write (add/update) and scratchpad_read (retrieve), but doesn't elaborate on the concept of 'session' which is shared across tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives (e.g., scratchpad_write for updating, scratchpad_checkpoint for saving state). There are no directions on prerequisites or typical use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scratchpad_diff (Grade: C)
Returns the diff between a key's current value and its checkpoint value, or compares two arbitrary values.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Key to compare (required for checkpoint comparison) | |
| value_a | No | First value to compare directly (optional) | |
| value_b | No | Second value to compare directly (optional) | |
| session_id | Yes | Session identifier | |
| checkpoint_tag | No | Checkpoint tag to compare against | |
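Below is a sketch of the checkpoint-comparison mode, which pairs `key` with `checkpoint_tag`; the direct-comparison mode would pass `value_a` and `value_b` instead. All argument values are hypothetical.

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "scratchpad_diff",
    "arguments": {
      "session_id": "agent-run-001",
      "key": "plan",
      "checkpoint_tag": "before-refactor"
    }
  }
}
```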
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It only states the tool returns a diff, but does not disclose output format, error conditions, or whether it modifies state. Critical behavioral traits (e.g., does the key need to exist? Are values limited?) are omitted.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose. No redundant words. Efficiently communicates the two modes. Highly concise for a 5-parameter tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema and no annotations, the description is incomplete. It lacks details on return format, error handling, parameter combinations, and relationship to sibling tools (e.g., scratchpad_checkpoint). A tool with 5 parameters requires more comprehensive guidance.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with all parameters described. The description adds minimal extra meaning: it maps 'key' and 'checkpoint_tag' to one mode, and 'value_a'/'value_b' to another. However, it does not explain interaction when both sets are provided, or add beyond schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns a diff between a key's current value and a checkpoint, or between two arbitrary values. It provides two specific use cases, distinguishing its core function. However, it does not fully clarify the context of 'current key value' or relate to sibling tools like checkpoint, reducing clarity slightly.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. It fails to explain which mode to choose, prerequisites (e.g., key must exist for checkpoint comparison), or when not to use it. No mention of sibling tools like scratchpad_checkpoint that likely create the checkpoints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scratchpad_list (Grade: B)
Returns a list of all keys in the session. Optional prefix filtering is supported.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of keys to return (max 500) | 100 |
| prefix | No | Key prefix filter (optional) | |
| session_id | Yes | Session identifier | |
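A hypothetical listing call that exercises the optional `prefix` filter and an explicit `limit`; the values shown are illustrative.

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "scratchpad_list",
    "arguments": {
      "session_id": "agent-run-001",
      "prefix": "plan",
      "limit": 50
    }
  }
}
```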
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations; the description states only that the tool returns keys and supports prefix filtering, but does not disclose behavior such as limit semantics, return format, or whether keys are sorted.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Very concise single sentence; front-loaded with the core purpose. Could be slightly more structured but efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple listing tool with full schema coverage, the description is adequate; lacks output schema but the purpose is clear.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema fully covers parameters with descriptions; description adds no additional semantics beyond the schema's implicit meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it returns a list of all keys in the session, with optional prefix filtering. This distinguishes it from sibling tools like read, write, delete, etc.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives; no when-not-to-use guidance or context is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scratchpad_read (Grade: B)
Retrieves the value of a specific key in the session.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Key to read | |
| session_id | Yes | Session identifier | |
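A minimal hypothetical read call; the `key` and `session_id` values are placeholders.

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "scratchpad_read",
    "arguments": {
      "session_id": "agent-run-001",
      "key": "plan"
    }
  }
}
```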
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description states only that it reads a key. It does not disclose side effects, its read-only nature, or behavior on missing keys, which the description should cover in the absence of annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, concise and front-loaded. Could be slightly expanded to include behavioral notes without losing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, simple operation, but missing details on behavior for missing keys, error conditions, and session management. Incomplete for safe agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage and describes parameters. Description adds no extra meaning beyond the schema, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool reads a specific key value of a session. It is distinct from sibling tools like scratchpad_delete, scratchpad_write, etc., which perform different operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. Usage is implied by the name and description but lacks when-not-to-use or comparisons.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scratchpad_write (Grade: B)
Stores a key-value pair in the session. The value may be a string or any JSON-serializable value.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Scratchpad key | |
| ttl | No | Expiration time in seconds (optional) | |
| value | Yes | Value to store (string or object) | |
| session_id | Yes | Session identifier (one per agent run) | |
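A hypothetical write call storing a JSON object with an optional `ttl`; all argument values are placeholders, and whether an existing key is overwritten is not documented by the tool.

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "scratchpad_write",
    "arguments": {
      "session_id": "agent-run-001",
      "key": "plan",
      "value": { "step": 3, "status": "in-progress" },
      "ttl": 3600
    }
  }
}
```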
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so description must carry full burden. It mentions value types but fails to disclose whether overwriting occurs, idempotency, or success/failure behavior, which is insufficient for a write operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with no wasted words, though additional structured information could be included without reducing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 4 parameters, no output schema, and no annotations, the description should cover return behavior, overwrite policy, and ttl implications, which are missing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so description adds marginal value (clarifies value can be object or string). However, it does not explain ttl behavior or session_id semantics beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (stores) and the resource (session key-value pairs), and sibling tools such as read, delete, and list make its purpose distinct.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies usage (storing values in session) but does not give explicit guidance on when to use this tool versus alternatives like scratchpad_read or scratchpad_delete.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.