search_compounds

Find supplement compounds by resolving misspellings, brands, and abbreviations across 584 aliases to identify ingredients accurately.

Instructions

Fuzzy search for supplement compounds. 584 aliases across 95 compounds. Resolves misspellings, brands, abbreviations.

Input Schema

Name   | Required | Description  | Default
query  | Yes      | Search query | (none)
limit  | No       | Max results  | 5
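For concreteness, here is a minimal sketch of what a call to this tool could look like, assuming the standard MCP `tools/call` request shape. The tool name and parameter names come from the schema above; the query value and request id are illustrative only.

```python
import json

# Hypothetical JSON-RPC payload for invoking search_compounds (illustrative).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_compounds",
        "arguments": {
            "query": "ashwaganda",  # a misspelling the fuzzy search should resolve
            "limit": 3,             # override the documented default of 5
        },
    },
}

print(json.dumps(request, indent=2))
```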
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool's fuzzy search capability and ability to resolve misspellings, brands, and abbreviations, which adds some behavioral context. However, it doesn't disclose important traits like whether this is a read-only operation, performance characteristics, error handling, or what the output format looks like. For a search tool with zero annotation coverage, this leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
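For context, MCP's optional tool annotations are where traits like read-only behavior would normally be flagged. The hint names below come from the MCP specification; the values are assumptions about this tool, since the listing provides no annotations at all.

```python
# Illustrative annotation hints for search_compounds. A search tool is
# presumably read-only and idempotent, but nothing in the listing confirms it.
annotations = {
    "readOnlyHint": True,     # does not modify its environment
    "destructiveHint": False,  # no irreversible effects
    "idempotentHint": True,    # same query yields the same results
    "openWorldHint": False,    # searches a fixed internal alias table
}

print(annotations)
```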

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is exceptionally concise and well-structured in a single sentence. Every element earns its place: the core function ('Fuzzy search for supplement compounds'), scale context ('584 aliases across 95 compounds'), and key capabilities ('Resolves misspellings, brands, abbreviations'). There's zero waste or redundancy, making it easy to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (search with fuzzy matching), no annotations, and no output schema, the description provides adequate but incomplete context. It covers what the tool does and some behavioral aspects, but lacks guidance on usage scenarios, output format, error conditions, and performance characteristics. The description is sufficient for basic understanding but leaves important operational details unspecified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with both parameters ('query' and 'limit') having clear descriptions in the schema. The description doesn't add any meaningful parameter semantics beyond what the schema already provides. It doesn't explain query syntax, search algorithms, or result ordering. The baseline score of 3 is appropriate when the schema does the heavy lifting for parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
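To illustrate the kind of algorithm and ordering detail the description leaves unstated, here is a generic sketch of fuzzy alias resolution using Python's difflib. This is not the tool's actual implementation; the alias table is a toy stand-in for the real 584-alias mapping.

```python
import difflib

# Toy alias table; the real tool maps 584 aliases to 95 compounds.
ALIASES = {
    "ashwagandha": "ashwagandha",
    "withania somnifera": "ashwagandha",
    "ksm-66": "ashwagandha",
    "creatine": "creatine",
    "creatine monohydrate": "creatine",
}

def search_compounds(query: str, limit: int = 5) -> list[str]:
    """Return compounds whose aliases fuzzily match the query, best first."""
    matches = difflib.get_close_matches(
        query.lower(), ALIASES.keys(), n=limit, cutoff=0.6
    )
    # Deduplicate compounds while preserving similarity order.
    results: list[str] = []
    for alias in matches:
        compound = ALIASES[alias]
        if compound not in results:
            results.append(compound)
    return results

print(search_compounds("ashwaganda"))  # misspelling still resolves
```

A real implementation would likely differ (e.g. trigram or edit-distance scoring), which is exactly why the description should say how results are matched and ordered.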

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Fuzzy search for supplement compounds' specifies both the verb (search) and resource (supplement compounds). It distinguishes itself from siblings by focusing on search functionality rather than checking or explaining interactions, getting detailed info, or retrieving evidence. However, it doesn't explicitly contrast itself with those sibling tools, preventing a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. While it mentions capabilities like handling misspellings, brands, and abbreviations, it doesn't indicate when to choose this search tool over sibling tools like 'get_compound_info' or 'get_evidence'. There's no mention of prerequisites, typical use cases, or exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

