Glama
jezweb

Smart Prompts MCP Server

search_prompts

Search for prompts by keyword, category, or tags to find existing prompts and avoid duplicates before creating new ones.

Instructions

🔍 ALWAYS START HERE: Search for prompts by keyword, category, or tags. Returns matching prompts with their metadata. This is the recommended first step before using get_prompt or creating new prompts. Helps avoid duplicates and find exactly what you need.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| category | No | Filter by specific category. Available: "development", "content-creation", "business", "ai-prompts", "devops", "documentation", "project-management" | |
| query | No | Search keywords to find in prompt title, description, or content. Examples: "api", "documentation", "code review", "testing" | |
| tags | No | Filter by tags for precise matching. Examples: ["api", "rest"], ["testing", "automation"], ["documentation", "technical-writing"] | |
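For illustration, a call that combines all three filters might pass arguments like these (the values are hypothetical, taken from the examples in the table above):

```typescript
// Hypothetical arguments for search_prompts; every field is optional per the schema.
const searchArgs = {
  query: 'api',            // matched against prompt title, description, and content
  category: 'development', // must be one of the categories listed above
  tags: ['api', 'rest'],   // narrows results to prompts carrying these tags
};

console.log(JSON.stringify(searchArgs));
```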

Implementation Reference

  • Defines the MCP tool schema for 'search_prompts' including name, description, and input parameters (query: required string, limit: optional number). This is returned by listPrompts() for tool discovery.
    ```typescript
    {
      name: 'search_prompts',
      description: 'Search for prompts by keyword',
      arguments: [
        {
          name: 'query',
          description: 'Search query',
          required: true,
        },
        {
          name: 'limit',
          description: 'Maximum number of results (default: 5)',
          required: false,
        },
      ],
    },
    ```
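A declared schema like this can drive argument validation before dispatch. The sketch below is hypothetical (the `toolSchema` constant simply re-states the definition above), but it shows how the `required` flags translate into a missing-argument check:

```typescript
// Hypothetical re-declaration of the schema above, used to validate a call's arguments.
const toolSchema = {
  name: 'search_prompts',
  arguments: [
    { name: 'query', required: true },
    { name: 'limit', required: false },
  ],
};

// Returns the names of required arguments absent from a call.
function missingRequired(args: Record<string, string>): string[] {
  return toolSchema.arguments
    .filter(a => a.required && !(a.name in args))
    .map(a => a.name);
}

console.log(missingRequired({ limit: '3' }));  // → ['query']
console.log(missingRequired({ query: 'api' })); // → []
```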
  • Main handler for search_prompts tool invocation. Validates inputs, performs search via cache, limits results, and returns formatted assistant message listing matching prompts with metadata.
    ```typescript
    private async handleSearchPrompts(args?: Record<string, string>): Promise<PromptMessage[]> {
      if (!args?.query) {
        throw new Error('query argument is required');
      }

      const limit = parseInt(args.limit || '5', 10);
      const results = this.cache.searchPrompts(args.query).slice(0, limit);

      const content = `# Search Results for "${args.query}"

Found ${results.length} prompts:

${results.map((prompt, i) => `
${i + 1}. **${prompt.metadata.title || prompt.name}**
   - Category: ${prompt.metadata.category || 'general'}
   - Tags: ${prompt.metadata.tags?.join(', ') || 'none'}
   - Description: ${prompt.metadata.description || 'No description'}
`).join('\n')}

To use a prompt, ask for it by name.`;

      return [
        {
          role: 'assistant',
          content: {
            type: 'text',
            text: content,
          },
        },
      ];
    }
    ```
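The formatting step of that handler can be exercised in isolation. The sketch below is a standalone approximation, with a minimal `PromptInfo` shape inferred from the fields the handler reads (the sample data is hypothetical):

```typescript
// Standalone sketch of the result formatting in handleSearchPrompts.
// PromptInfo is a hypothetical shape inferred from the fields used above.
interface PromptInfo {
  name: string;
  metadata: { title?: string; category?: string; tags?: string[]; description?: string };
}

function formatResults(query: string, results: PromptInfo[]): string {
  const list = results.map((prompt, i) => [
    `${i + 1}. **${prompt.metadata.title || prompt.name}**`,
    `   - Category: ${prompt.metadata.category || 'general'}`,
    `   - Tags: ${prompt.metadata.tags?.join(', ') || 'none'}`,
    `   - Description: ${prompt.metadata.description || 'No description'}`,
  ].join('\n')).join('\n\n');

  return `# Search Results for "${query}"\n\nFound ${results.length} prompts:\n\n${list}\n\nTo use a prompt, ask for it by name.`;
}

const out = formatResults('api', [
  { name: 'api-design', metadata: { title: 'API Design Review', category: 'development', tags: ['api', 'rest'] } },
]);
console.log(out);
```

One edge worth noting in the handler: `parseInt` returns `NaN` for a non-numeric `limit`, and `slice(0, NaN)` behaves like `slice(0, 0)`, so a malformed limit silently yields zero results.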
  • Helper method implementing prompt search logic: case-insensitive matching of query against prompt title, description, content, and tags.
    ```typescript
    searchPrompts(query: string): PromptInfo[] {
      const lowerQuery = query.toLowerCase();
      return this.getAllPrompts().filter(prompt => {
        const inTitle = prompt.metadata.title?.toLowerCase().includes(lowerQuery);
        const inDescription = prompt.metadata.description?.toLowerCase().includes(lowerQuery);
        const inContent = prompt.content?.toLowerCase().includes(lowerQuery);
        const inTags = prompt.metadata.tags?.some(tag =>
          tag.toLowerCase().includes(lowerQuery)
        );

        return inTitle || inDescription || inContent || inTags;
      });
    }
    ```
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states this is a search operation that returns metadata, implying it is read-only and non-destructive, which is adequate. However, it lacks details on pagination, rate limits, error handling, and response format, leaving an AI agent to infer those behaviors.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with key information ('ALWAYS START HERE') and uses only three sentences, each earning its place by explaining purpose, usage guidelines, and benefits. There is no wasted text, and the structure flows logically from action to context to rationale.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (search with three optional parameters), no annotations, and no output schema, the description is mostly complete. It covers purpose and usage well but lacks details on behavioral aspects like response structure or limitations. It compensates somewhat with strong guidance, but could be more comprehensive for full agent understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%: all three parameters (category, query, tags) are already documented in the schema with examples and constraints. The description adds no additional parameter semantics beyond implying these are search filters, which the schema covers. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('search for prompts') and resources ('by keyword, category, or tags'), distinguishing it from siblings like get_prompt (retrieves specific prompts) and create_github_prompt (creates new prompts). It explicitly mentions returning 'matching prompts with their metadata,' providing a complete picture of the operation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'ALWAYS START HERE' and 'recommended first step before using get_prompt or creating new prompts.' It also explains why: 'Helps avoid duplicates and find exactly what you need,' effectively positioning it as a discovery tool versus retrieval or creation alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

