Glama

faf_check

Read-only

Inspect human_context field quality with ratings from empty to excellent, and protect high-rated fields from being overwritten.

Instructions

๐Ÿ” Quality inspection for human_context fields + field protection - Shows empty/generic/good/excellent ratings ๐Ÿงกโšก๏ธ

Input Schema

Name     Required  Description                                               Default
protect  No        Lock good/excellent fields from being overwritten         —
unlock   No        Remove all field protections                              —
path     No        Project path. Sets session context for subsequent calls.  —
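
The parameter table can be read back as a JSON Schema. The sketch below is a hypothetical reconstruction from the table alone; the page's actual "JSON Schema" view is not reproduced here, so the value types ("boolean", "string") are assumptions:

```typescript
// Hypothetical reconstruction of faf_check's input schema, derived from
// the parameter table above. Value types are assumed, not confirmed.
const fafCheckInputSchema = {
  type: "object",
  properties: {
    protect: {
      type: "boolean", // assumed type
      description: "Lock good/excellent fields from being overwritten",
    },
    unlock: {
      type: "boolean", // assumed type
      description: "Remove all field protections",
    },
    path: {
      type: "string", // assumed type
      description: "Project path. Sets session context for subsequent calls.",
    },
  },
  required: [] as string[], // the table marks all three parameters optional
};

// All three parameters are optional, so an empty arguments object is valid.
console.log(Object.keys(fafCheckInputSchema.properties).length); // 3
```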
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description implies both read and write operations (inspecting 'human_context fields' while also offering 'field protection' via protect/unlock parameters), yet annotations declare readOnlyHint=true and destructiveHint=false. This contradiction undermines transparency. Additionally, the description does not reveal that the tool sets a session context via the path parameter or what happens when protect/unlock are used together.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
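
The annotation contradiction described above can be sketched concretely. This is an illustrative consistency check, not the server's actual code: the annotation values come from this page, and `writeCapableParams` is a hypothetical label, not part of the MCP spec.

```typescript
// Illustrative sketch: the annotations this page reports for faf_check,
// next to the parameters that imply state changes.
const fafCheck = {
  name: "faf_check",
  annotations: { readOnlyHint: true, destructiveHint: false },
  writeCapableParams: ["protect", "unlock"], // both mutate field protections
};

// An agent (or a description linter) could flag the mismatch like this:
const contradicts =
  fafCheck.annotations.readOnlyHint && fafCheck.writeCapableParams.length > 0;

console.log(contradicts); // true: read-only hint vs. state-changing parameters
```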

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise, consisting of a single sentence plus emojis. It front-loads the core purpose ('Quality inspection for human_context fields + field protection'). However, the emojis add slight noise and the structure is minimal, missing any breakdown of the different aspects (ratings, protection modes).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description omits crucial details: it does not explain the output (e.g., what the 'empty/generic/good/excellent ratings' look like, whether a report is returned, or what 'field protection' entails in terms of side effects). With no output schema, the agent cannot anticipate the tool's response. The interplay between protect and unlock is also unclear.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage, with detailed descriptions for each parameter. The tool description adds no semantic value beyond stating 'field protection'. Since the schema descriptions already explain the parameters, the tool description meets the baseline but does not enhance understanding of parameter usage or interplay.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs a quality inspection of human_context fields and shows ratings, with a mention of field protection. It uses specific verbs and resources ('quality inspection', 'human_context fields', 'ratings'), and the emojis add visual cues. However, it does not explicitly distinguish itself from sibling tools like faf_status or faf_doctor, which may also perform inspections.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives, nor any prerequisites, limitations, or contexts where it is appropriate. The description is purely declarative, leaving the agent to infer usage from sibling names. For example, it does not clarify when to use faf_check over faf_status or faf_doctor.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Wolfe-Jam/claude-faf-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.