clarkemn

prisma-cloud-docs-mcp-server

search_prisma_api_docs

Find specific information in Prisma Cloud API documentation by searching with keywords to get relevant documentation sections.

Instructions

Search Prisma Cloud API documentation

Input Schema

| Name  | Required | Description | Default |
|-------|----------|-------------|---------|
| query | Yes      |             |         |

Output Schema

| Name   | Required | Description | Default |
|--------|----------|-------------|---------|
| result | Yes      |             |         |

Implementation Reference

  • Primary handler function for the 'search_prisma_api_docs' MCP tool, registered with the @mcp.tool() decorator. It invokes the DocumentationIndexer.search_docs method filtered to the 'prisma_api' site and serializes the top results to JSON.
    @mcp.tool()
    async def search_prisma_api_docs(query: str) -> str:
        """Search Prisma Cloud API documentation"""
        results = await indexer.search_docs(query, site='prisma_api')
        return json.dumps(results, indent=2)
  • Duplicate handler function (likely an HTTP-deployment variant) for the 'search_prisma_api_docs' MCP tool; the implementation is identical to the server.py version.
    @mcp.tool()
    async def search_prisma_api_docs(query: str) -> str:
        """Search Prisma Cloud API documentation"""
        results = await indexer.search_docs(query, site='prisma_api')
        return json.dumps(results, indent=2)
  • Core helper method in DocumentationIndexer class that implements the document search logic: relevance scoring based on title/content matches, snippet extraction, and returns top 10 results. Called by the tool handler with site='prisma_api'.
    async def search_docs(self, query: str, site: str = None) -> List[Dict]:
        """Search indexed documentation"""
        if not self.cached_pages:
            return []
        
        query_lower = query.lower()
        results = []
        
        for url, page in self.cached_pages.items():
            # Filter by site if specified
            if site and page.site != site:
                continue
            
            # Calculate relevance score
            score = 0
            title_lower = page.title.lower()
            content_lower = page.content.lower()
            
            # Higher score for title matches
            if query_lower in title_lower:
                score += 10
                # Even higher for exact title matches
                if query_lower == title_lower:
                    score += 20
            
            # Score for content matches
            content_matches = content_lower.count(query_lower)
            score += content_matches * 2
            
            # Score for partial word matches in title
            query_words = query_lower.split()
            for word in query_words:
                if word in title_lower:
                    score += 5
                if word in content_lower:
                    score += 1
            
            if score > 0:
                # Extract snippet around first match
                snippet = self._extract_snippet(page.content, query, max_length=200)
                
                results.append({
                    'title': page.title,
                    'url': page.url,
                    'site': page.site,
                    'snippet': snippet,
                    'score': score
                })
        
        # Sort by relevance score (highest first) and limit results
        results.sort(key=lambda x: x['score'], reverse=True)
        return results[:10]
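  • The `_extract_snippet` helper called above is not included in the reference. A minimal sketch of what such a helper might look like (the name, signature, and centering behavior are assumptions, not the project's actual implementation):

    ```python
    def extract_snippet(content: str, query: str, max_length: int = 200) -> str:
        """Return up to max_length characters centered on the first query match."""
        idx = content.lower().find(query.lower())
        if idx == -1:
            # No match found: fall back to the start of the document
            return content[:max_length]
        # Center the window on the match, clamped to the start of the content
        start = max(0, idx - max_length // 2)
        snippet = content[start:start + max_length]
        # Mark truncation on either side with ellipses
        if start > 0:
            snippet = "..." + snippet
        if start + max_length < len(content):
            snippet = snippet + "..."
        return snippet
    ```

    A real implementation might also snap the window to word boundaries so snippets never begin or end mid-word.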
  • Dataclass used by DocumentationIndexer to cache indexed documentation pages, including expiration logic.
    @dataclass
    class CachedPage:
        title: str
        content: str
        url: str
        site: str
        timestamp: float
        ttl: float = 3600  # 1 hour default TTL
        
        @property
        def is_expired(self) -> bool:
            return time.time() > self.timestamp + self.ttl
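  • The expiry check can be exercised with a small standalone script (the dataclass is repeated here so the snippet runs on its own):

    ```python
    import time
    from dataclasses import dataclass

    @dataclass
    class CachedPage:
        title: str
        content: str
        url: str
        site: str
        timestamp: float
        ttl: float = 3600  # 1 hour default TTL

        @property
        def is_expired(self) -> bool:
            return time.time() > self.timestamp + self.ttl

    # A page stamped now is still fresh; one stamped two hours ago
    # has outlived its one-hour TTL and reads as expired.
    fresh = CachedPage("t", "c", "u", "prisma_api", timestamp=time.time())
    stale = CachedPage("t", "c", "u", "prisma_api", timestamp=time.time() - 7200)
    ```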
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It only states what the tool does at a high level, without mentioning any behavioral traits like authentication requirements, rate limits, response format, pagination, or error handling. This leaves significant gaps in understanding how the tool behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with just one sentence that directly states the tool's purpose. There's zero waste or unnecessary information, making it front-loaded and efficient for the agent to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has an output schema (which handles return values) and only one simple parameter, the description's minimal approach is somewhat adequate. However, with no annotations and multiple similar sibling tools, the description should provide more context about scope and differentiation to be truly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage for its single parameter 'query', and the tool description provides no additional information about what the query parameter should contain, its format, or examples of valid values. The description doesn't compensate for the complete lack of parameter documentation in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
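One way to close this gap would be to document the parameter directly in the input schema. An illustrative (not actual) revision of the schema with a description and example added:

```json
{
  "type": "object",
  "properties": {
    "query": {
      "type": "string",
      "description": "Keywords or phrase to match against Prisma Cloud API page titles and content, e.g. \"compliance posture endpoint\"",
      "examples": ["list alerts", "CSPM policy API"]
    }
  },
  "required": ["query"]
}
```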

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as 'Search Prisma Cloud API documentation', which is a specific verb+resource combination. However, it doesn't distinguish this tool from its sibling 'search_prisma_docs' or 'search_all_docs', leaving some ambiguity about scope and differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'search_prisma_docs' or 'search_all_docs'. There's no mention of prerequisites, context, or exclusions, leaving the agent with no usage direction beyond the basic purpose.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
