
mcp-google-agent-platform-docs

by OpenGerwin

search_docs

Search Google AI platform documentation for topics like function calling, Agent Development Kit, or Gemini Pro. Returns matching pages with titles, paths, and excerpts. Supports GEAP and Vertex AI sources. Use get_doc to read full content.

Instructions

Search Google AI platform documentation.

Args:
  - query: Search terms (e.g. "function calling", "Memory Bank setup", "Agent Development Kit", "Gemini 3.1 Pro")
  - max_results: Number of results to return (default: 5, max: 20)
  - source: Documentation source:
    - "geap" (default) — Gemini Enterprise Agent Platform (current)
    - "vertex-ai" — Vertex AI Generative AI (legacy)

Returns: Matching documentation pages with titles, paths, and excerpts. Use get_doc(path) to read the full content of any result.
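
As an illustration (the query is arbitrary and the titles, paths, scores, and excerpts below are placeholders, not actual documentation pages), a call and the shape of the markdown string it returns look roughly like this:

    Example call: search_docs(query="function calling", max_results=2, source="geap")

    ## Search results for: function calling

    Source: geap | 2 results

    ### 1. <page title>
    **Path:** `<page path>`
    **Score:** <tf-idf score>
    **Excerpt:** <text around the first query match>

    ### 2. ...

    💡 Use `get_doc(path)` to read the full content of any page above.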

Input Schema

Name          Required   Description   Default
query         Yes
max_results   No
source        No                       geap
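
The JSON Schema view of the input is not reproduced here. Judging from the table and the handler signature, it would look roughly like the sketch below (the schema FastMCP actually publishes may carry extra metadata such as property titles; note that no property descriptions are defined):

    {
      "type": "object",
      "properties": {
        "query": {"type": "string"},
        "max_results": {"type": "integer", "default": 5},
        "source": {"type": "string", "default": "geap"}
      },
      "required": ["query"]
    }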

Output Schema

Name     Required   Description   Default
result   Yes
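
Likewise, the output JSON Schema is not shown. Since the handler returns a plain string and the table lists a single required field, a schema along these lines would be expected (a hedged sketch, not the published schema):

    {
      "type": "object",
      "properties": {
        "result": {"type": "string"}
      },
      "required": ["result"]
    }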

Implementation Reference

  • The MCP tool handler for 'search_docs'. This async function is decorated with @mcp.tool() to expose it as an MCP tool. It takes query, max_results, and source parameters, ensures the server is initialized, delegates to the SearchEngine, and returns formatted results with titles, paths, scores, and excerpts. A hedged sketch of the module-level wiring this handler assumes (mcp, _search, _ensure_initialized) appears at the end of this reference list.
    @mcp.tool()
    async def search_docs(
        query: str,
        max_results: int = 5,
        source: str = "geap",
    ) -> str:
        """Search Google AI platform documentation.
    
        Args:
            query: Search terms (e.g. "function calling", "Memory Bank setup",
                   "Agent Development Kit", "Gemini 3.1 Pro")
            max_results: Number of results to return (default: 5, max: 20)
            source: Documentation source:
                    - "geap" (default) — Gemini Enterprise Agent Platform (current)
                    - "vertex-ai" — Vertex AI Generative AI (legacy)
    
        Returns:
            Matching documentation pages with titles, paths, and excerpts.
            Use get_doc(path) to read the full content of any result.
        """
        await _ensure_initialized()
    
        max_results = min(max_results, 20)
        results = _search.search(query, max_results=max_results, source_id=source)
    
        if not results:
            return f"No results found for '{query}' in source '{source}'."
    
        lines = [f"## Search results for: {query}\n"]
        lines.append(f"Source: {source} | {len(results)} results\n")
    
        for i, r in enumerate(results, 1):
            lines.append(f"### {i}. {r.title}")
            lines.append(f"**Path:** `{r.path}`")
            lines.append(f"**Score:** {r.score}")
            lines.append(f"**Excerpt:** {r.excerpt}")
            lines.append("")
    
        lines.append(
            "💡 Use `get_doc(path)` to read the full content of any page above."
        )
    
        return "\n".join(lines)
  • Registration of the 'search_docs' tool: the @mcp.tool() decorator on the FastMCP instance registers the function as an MCP tool.
    @mcp.tool()
  • Input schema parameters for search_docs: query (str, required), max_results (int, default 5, capped at 20), source (str, default 'geap', options: 'geap' or 'vertex-ai').
    async def search_docs(
        query: str,
        max_results: int = 5,
        source: str = "geap",
  • SearchEngine class with TF-IDF search implementation. Contains the build_index() method for indexing documents and the search() method that performs TF-IDF ranked retrieval. Also includes helper methods for tokenization, title extraction, and excerpt generation. Used by the search_docs handler via the _search global instance.
    class SearchEngine:
        """Simple TF-IDF search engine across cached documents."""
    
        def __init__(self):
            # {token: {path: count}}
            self._inverted_index: dict[str, dict[str, int]] = {}
            # {path: total_token_count}
            self._doc_lengths: dict[str, int] = {}
            # {path: raw_content}
            self._documents: dict[str, str] = {}
            # {path: title}
            self._titles: dict[str, str] = {}
            # {path: source_id}
            self._sources: dict[str, str] = {}
            # Total document count
            self._num_docs: int = 0
    
        def build_index(self, pages: dict[str, str], source_id: str) -> None:
            """Build (or extend) the index from {path: content} dict.
    
            Can be called multiple times for different sources.
            """
            for path, content in pages.items():
                unique_key = f"{source_id}:{path}"
    
                # Extract title (first H1 or first line)
                title = self._extract_title(content)
                self._titles[unique_key] = title
                self._documents[unique_key] = content
                self._sources[unique_key] = source_id
    
                # Tokenize
                tokens = self._tokenize(content)
                self._doc_lengths[unique_key] = len(tokens)
    
                # Build inverted index
                token_counts = Counter(tokens)
                for token, count in token_counts.items():
                    if token not in self._inverted_index:
                        self._inverted_index[token] = {}
                    self._inverted_index[token][unique_key] = count
    
            self._num_docs = len(self._documents)
            logger.info(
                "Index built/updated: %d total docs, %d unique tokens",
                self._num_docs,
                len(self._inverted_index),
            )
    
        def search(
            self,
            query: str,
            max_results: int = 5,
            source_id: str | None = None,
        ) -> list[SearchResult]:
            """Search for documents matching the query.
    
            Args:
                query: Search terms.
                max_results: Max number of results to return.
                source_id: Filter by source (None = search all).
    
            Returns:
                Ranked list of SearchResult objects.
            """
            query_tokens = self._tokenize(query)
            if not query_tokens:
                return []
    
            # Calculate TF-IDF scores for each document
            scores: dict[str, float] = {}
    
            for token in query_tokens:
                if token not in self._inverted_index:
                    continue
    
                posting = self._inverted_index[token]
                # IDF: log(N / df)
                df = len(posting)
                idf = math.log(self._num_docs / df) if df > 0 else 0
    
                for unique_key, tf in posting.items():
                    # Filter by source if specified
                    if source_id and self._sources.get(unique_key) != source_id:
                        continue
    
                    # TF: normalized by document length
                    doc_len = self._doc_lengths.get(unique_key, 1)
                    normalized_tf = tf / doc_len
    
                    score = normalized_tf * idf
                    scores[unique_key] = scores.get(unique_key, 0.0) + score
    
            # Sort by score (descending) and take top results
            ranked = sorted(scores.items(), key=lambda x: x[1], reverse=True)
            ranked = ranked[:max_results]
    
            results = []
            for unique_key, score in ranked:
                src_id = self._sources[unique_key]
                path = unique_key.split(":", 1)[1]
                content = self._documents[unique_key]
    
                results.append(
                    SearchResult(
                        path=path,
                        title=self._titles.get(unique_key, path),
                        score=round(score, 6),
                        excerpt=self._extract_excerpt(content, query),
                        source_id=src_id,
                    )
                )
    
            return results
    
        def _tokenize(self, text: str) -> list[str]:
            """Tokenize text: lowercase, split on non-alpha, remove stop words."""
            # Convert to lowercase and split on non-alphanumeric
            tokens = re.findall(r"[a-z0-9]+", text.lower())
            # Remove stop words and very short tokens
            return [t for t in tokens if t not in STOP_WORDS and len(t) > 1]
    
        def _extract_title(self, content: str) -> str:
            """Extract the first H1 heading as the title."""
            for line in content.split("\n"):
                line = line.strip()
                if line.startswith("# ") and not line.startswith("##"):
                    return line[2:].strip()
            # Fallback: first non-empty line
            for line in content.split("\n"):
                line = line.strip()
                if line and not line.startswith(">") and not line.startswith("<!--"):
                    return line[:100]
            return ""
    
        def _extract_excerpt(
            self, content: str, query: str, chars: int = 300
        ) -> str:
            """Extract a relevant excerpt around the first query match."""
            content_lower = content.lower()
            query_lower = query.lower()
    
            # Try to find exact phrase match first
            idx = content_lower.find(query_lower)
    
            if idx == -1:
                # Try individual words
                for word in query_lower.split():
                    if len(word) > 2:
                        idx = content_lower.find(word)
                        if idx != -1:
                            break
    
            if idx == -1:
                # No match found — return start of document
                return content[:chars].strip() + "..."
    
            # Extract context around the match
            start = max(0, idx - chars // 3)
            end = min(len(content), idx + chars * 2 // 3)
    
            excerpt = content[start:end].strip()
    
            # Clean up: don't start/end mid-word
            if start > 0:
                space_idx = excerpt.find(" ")
                if space_idx != -1 and space_idx < 30:
                    excerpt = "..." + excerpt[space_idx + 1 :]
    
            if end < len(content):
                space_idx = excerpt.rfind(" ")
                if space_idx != -1 and space_idx > len(excerpt) - 30:
                    excerpt = excerpt[:space_idx] + "..."
    
            return excerpt
    
        @property
        def doc_count(self) -> int:
            """Total number of indexed documents."""
            return self._num_docs
  • SearchResult dataclass used as the return type from the SearchEngine. Contains fields: path, title, score, excerpt, and source_id. The search_docs handler iterates over these to build the formatted output.
    @dataclass
    class SearchResult:
        """A single search result."""
    
        path: str
        title: str
        score: float
        excerpt: str
        source_id: str
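  • Hedged reconstruction (not taken from the server's source) of the module-level wiring the excerpts above assume: the FastMCP instance mcp, the shared SearchEngine instance _search, the _ensure_initialized() helper, and the STOP_WORDS set and logger used by SearchEngine. The class excerpt additionally relies on math, re, collections.Counter, and dataclasses.dataclass being imported at module level. In this sketch, fetch_pages() and the stop-word list are placeholders, not names from the actual implementation.
    import logging

    from mcp.server.fastmcp import FastMCP

    logger = logging.getLogger(__name__)

    # Placeholder stop-word set; the real module presumably defines a fuller list.
    STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "for"}

    # Assumes SearchEngine (shown above) is defined earlier in the module.
    mcp = FastMCP("mcp-google-agent-platform-docs")
    _search = SearchEngine()
    _initialized = False


    async def fetch_pages(source_id: str) -> dict[str, str]:
        """Placeholder for however the server loads {path: content} pages per source."""
        return {}


    async def _ensure_initialized() -> None:
        """Build the search index on first use; later calls are no-ops."""
        global _initialized
        if _initialized:
            return
        for source_id in ("geap", "vertex-ai"):
            pages = await fetch_pages(source_id)
            _search.build_index(pages, source_id=source_id)
        _initialized = True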
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description details the return format (titles, paths, excerpts) and parameter behavior (defaults, max results, source options). It does not mention rate limits or authorization, but for a search tool with no annotations, the description provides sufficient transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and well-structured with bullet points for arguments. Every sentence adds value, and there is no redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers purpose, parameters, return value, and sibling tools. It could mention result ordering or pagination, but overall it is comprehensive for a search tool with an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description thoroughly explains each parameter: query with examples, max_results with default and maximum, and source with options and defaults. Since schema coverage is 0%, the description fully compensates by providing clear semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Search Google AI platform documentation' and specifies that it returns matching pages with titles, paths, and excerpts. It also distinguishes the tool from its sibling get_doc, which is used to read full content.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool (searching documentation) and recommends an alternative (get_doc for reading full content). It also differentiates between documentation sources (geap vs vertex-ai).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

