
search_peps

Search active Python Enhancement Proposal (PEP) titles using a query string to find relevant Python language specifications and proposals.

Instructions

Search active PEP titles for a query string.

Input Schema

| Name  | Required | Description | Default |
|-------|----------|-------------|---------|
| query | Yes      |             |         |

Output Schema

| Name   | Required | Description | Default |
|--------|----------|-------------|---------|
| result | Yes      |             |         |

Implementation Reference

  • Registration of the search_peps MCP tool, delegating to the peps_client.
    @mcp.tool
    async def search_peps(query: str) -> list[dict]:
        """Search active PEP titles for a query string."""
        return await peps_client.search_active_peps(query)
  • The actual implementation of searching active PEPs.
    async def search_active_peps(self, query: str) -> list[dict[str, Any]]:
        """Search active PEP titles using case-insensitive substring matching."""
        normalized_query = query.strip().lower()
        if not normalized_query:
            return []
    
        active = await self.list_active_peps()
        return [
            pep
            for pep in active
            if normalized_query in str(pep.get("title", "")).lower()
        ]
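The filter above can be exercised standalone. The sketch below mirrors the tool's matching logic against illustrative sample data (the sample PEP entries are assumptions, not the live PEP index):

```python
def search_active_peps(peps, query):
    """Case-insensitive substring match on PEP titles, mirroring the tool's filter."""
    normalized_query = query.strip().lower()
    if not normalized_query:
        return []  # blank or whitespace-only queries match nothing
    return [
        pep
        for pep in peps
        if normalized_query in str(pep.get("title", "")).lower()
    ]

# Illustrative sample data, not the real PEP listing.
peps = [
    {"number": 8, "title": "Style Guide for Python Code"},
    {"number": 20, "title": "The Zen of Python"},
]
print(search_active_peps(peps, "STYLE"))  # case-insensitive: matches PEP 8 only
print(search_active_peps(peps, "   "))    # → []
```

Note that the match is a plain substring test, so a query like "python" would match both sample titles.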
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'active' PEPs and 'titles' as scope filters, but does not explain the matching logic (partial vs. exact), case sensitivity, or what 'active' specifically means. The presence of an output schema reduces some of that burden, but safety and permission hints are absent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with the action front-loaded. There is no redundant or wasted text; every word contributes to understanding the tool's function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (a single string parameter) and the presence of an output schema, the description is minimally adequate. However, the lack of annotations (safety hints) and the absence of any schema descriptions leave gaps that the description does not fully fill, preventing a higher score.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, requiring the description to compensate. It references a 'query string' which maps to the 'query' parameter and implies its purpose (searching titles), providing basic semantic context. However, it lacks format details, examples, or constraints that would fully compensate for the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
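One way to close that gap is to put the matching semantics directly into the parameter's own description field. The schema below is a hypothetical, annotated version of the tool's input schema, not its actual published schema:

```python
# Hypothetical annotated input schema; the tool's real schema has no descriptions.
INPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "query": {
            "type": "string",
            "description": (
                "Substring to match against active PEP titles, case-insensitively. "
                "Leading/trailing whitespace is stripped; a blank query returns no results."
            ),
        },
    },
    "required": ["query"],
}
print(INPUT_SCHEMA["properties"]["query"]["description"])
```

With a description like this in place, the tool description no longer has to compensate for a bare schema.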

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb (Search), resource (active PEP titles), and scope limitation (titles only, active status). It implicitly distinguishes from sibling 'get_pep' (retrieval by ID) and 'list_peps' (enumeration) by specifying a text search function, though it doesn't explicitly name the alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use the tool (when searching for PEPs by title text), but provides no explicit guidance on when to prefer 'get_pep' or 'list_peps' instead, nor does it mention prerequisites or limitations that would help an agent select correctly.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
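As one hedged illustration of such guidance, a revised description could name the sibling tools explicitly. The wording below is a suggestion, not the tool's actual text:

```python
# Suggested (hypothetical) description with explicit tool-selection guidance.
SUGGESTED_DESCRIPTION = (
    "Search active PEP titles using case-insensitive substring matching. "
    "Use this tool to discover PEPs by keyword; use get_pep when you already "
    "know a PEP number, or list_peps to enumerate all active PEPs."
)
print(SUGGESTED_DESCRIPTION)
```

A sentence of this shape gives an agent a direct decision rule between the three tools.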

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ribbit-br/mcp-pep-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server