
Documentation Search MCP Server

by gemini2026

get_code_examples

Find curated code examples for a specific library and topic, with clear explanations, to help implement programming features.

Instructions

Get curated code examples for a specific topic and library.

Args:
    library: The library to search for examples
    topic: The specific topic or feature
    language: Programming language for examples
    version: Library version to search (e.g., "4.2", "stable", "latest"). Default: "latest"
    auto_detect_version: Automatically detect installed package version. Default: False

Returns:
    Curated code examples with explanations
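For illustration, a successful response follows the shape of the dicts returned by the implementation further down; the concrete values here are hypothetical, not real search output:

```python
# Hypothetical response matching the return shape of get_code_examples'
# fallback path; library/topic/code values are illustrative only.
response = {
    "library": "flask",
    "topic": "routing",
    "language": "python",
    "total_examples": 1,
    "examples": [
        {
            "example": 1,
            "code": "@app.route('/')\ndef index():\n    return 'Hello'",
            "language": "python",
            "source_url": "https://example.com/flask-docs",
        }
    ],
}

# The implementation guarantees total_examples matches the examples list.
assert response["total_examples"] == len(response["examples"])
```

On error (unsupported library, no results, or an exception), the tool instead returns a dict with a single `error` key.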

Input Schema

Name                 Required  Description                                Default
library              Yes       The library to search for examples
topic                Yes       The specific topic or feature
language             No        Programming language for examples          python
version              No        Library version to search                  latest
auto_detect_version  No        Auto-detect the installed package version  false
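A minimal invocation payload, with the schema defaults spelled out for the optional fields (the library and topic values are hypothetical examples, not a list of supported libraries):

```python
# Hypothetical arguments for a get_code_examples call; only "library"
# and "topic" are required, the rest fall back to schema defaults.
args = {
    "library": "django",           # required
    "topic": "model queries",      # required
    "language": "python",          # default: "python"
    "version": "latest",           # default: "latest"
    "auto_detect_version": False,  # default: False
}

# Required fields per the input schema.
required = {"library", "topic"}
assert required <= set(args)
```

Note that library support ultimately depends on the server's `docs_urls` mapping: the fallback path returns an error for libraries not listed there.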

Implementation Reference

  • The @mcp.tool()-decorated async function that implements the core logic of the 'get_code_examples' tool. It performs filtered searches for code examples using smart_search helpers, falls back to web search and regex extraction if needed, and returns structured examples with metadata. The function signature defines the input schema via type hints.
    async def get_code_examples(
        library: str,
        topic: str,
        language: str = "python",
        version: str = "latest",
        auto_detect_version: bool = False,
    ):
        """
        Get curated code examples for a specific topic and library.
    
        Args:
            library: The library to search for examples
            topic: The specific topic or feature
            language: Programming language for examples
            version: Library version to search (e.g., "4.2", "stable", "latest"). Default: "latest"
            auto_detect_version: Automatically detect installed package version. Default: False
    
        Returns:
            Curated code examples with explanations
        """
    
        await enforce_rate_limit("get_code_examples")
    
        # Enhanced query for code-specific search
        code_query = f"{library} {topic} example code {language}"
    
        try:
            # Use filtered search to find examples with code
            from .smart_search import filtered_search, SearchFilters
    
            filters = SearchFilters(content_type="example", has_code_examples=True)
    
            results = await filtered_search.search_with_filters(
                code_query, library, filters
            )
    
            if not results:
                # Fallback to regular search
                if library not in docs_urls:
                    return {"error": f"Library {library} not supported"}
    
                query = f"site:{docs_urls[library]} {code_query}"
                search_results = await search_web(query)
    
                if not search_results.get("organic"):
                    return {"error": "No code examples found"}
    
                # Process first result for code extraction
                first_result = search_results["organic"][0]
                content = await fetch_url(first_result["link"])
    
                # Extract code snippets (simplified)
                code_blocks = []
                import re
    
                code_pattern = r"```(?:python|javascript|typescript|js)?\n(.*?)```"
                matches = re.finditer(code_pattern, content, re.DOTALL)
    
                for i, match in enumerate(matches):
                    if i >= 3:  # Limit to 3 examples
                        break
                    code_blocks.append(
                        {
                            "example": i + 1,
                            "code": match.group(1).strip(),
                            "language": language,
                            "source_url": first_result["link"],
                        }
                    )
    
                return {
                    "library": library,
                    "topic": topic,
                    "language": language,
                    "total_examples": len(code_blocks),
                    "examples": code_blocks,
                }
    
            else:
                # Process enhanced results
                examples = []
                for i, result in enumerate(results[:3]):
                    examples.append(
                        {
                            "example": i + 1,
                            "title": result.title,
                            "description": (
                                result.snippet[:200] + "..."
                                if len(result.snippet) > 200
                                else result.snippet
                            ),
                            "url": result.url,
                            "difficulty": result.difficulty_level,
                            "estimated_read_time": f"{result.estimated_read_time} min",
                        }
                    )
    
                return {
                    "library": library,
                    "topic": topic,
                    "language": language,
                    "total_examples": len(examples),
                    "examples": examples,
                }
    
        except Exception as e:
            return {"error": f"Failed to get code examples: {str(e)}"}
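The fallback extraction step can be exercised in isolation. The helper below is a hypothetical standalone version of the regex logic from the listing above (same pattern, same three-example cap); it is a sketch, not part of the server's API:

```python
import re

def extract_code_blocks(content, language="python", source_url="", limit=3):
    """Extract up to `limit` fenced code blocks from fetched page content.

    Mirrors the fallback path of get_code_examples: a markdown-fence
    regex applied with re.DOTALL, stopping after `limit` matches.
    """
    code_pattern = r"```(?:python|javascript|typescript|js)?\n(.*?)```"
    blocks = []
    for i, match in enumerate(re.finditer(code_pattern, content, re.DOTALL)):
        if i >= limit:
            break
        blocks.append({
            "example": i + 1,
            "code": match.group(1).strip(),
            "language": language,
            "source_url": source_url,
        })
    return blocks

# Two fenced blocks, one with a language tag and one without.
sample = "Intro\n```python\nprint('a')\n```\ntext\n```\nraw block\n```\n"
blocks = extract_code_blocks(sample, source_url="https://example.com/docs")
print(len(blocks))        # 2
print(blocks[0]["code"])  # print('a')
```

Note the pattern only recognizes a fixed set of language tags after the opening fence; blocks tagged with any other language (e.g. ```` ```go ````) are silently skipped.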
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions that examples are 'curated' and include 'explanations,' which adds some context, but fails to describe critical behaviors like whether this is a read-only operation, if it requires network access, rate limits, authentication needs, or how results are formatted. For a tool with 5 parameters and no annotation coverage, this is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with a clear purpose statement followed by organized sections for Args and Returns. Every sentence earns its place by directly contributing to understanding the tool's functionality without redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (5 parameters, no annotations, no output schema), the description is partially complete. It covers parameters well but lacks behavioral details like safety, performance, or output format. Without an output schema, the Returns section is vague ('curated code examples with explanations'), leaving the agent uncertain about the response structure. This is adequate but has clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description compensates well by explaining all 5 parameters in the Args section, adding meaning beyond the schema. It clarifies that 'version' accepts values like '4.2', 'stable', or 'latest', and that 'auto_detect_version' checks installed packages. This provides essential semantic context not present in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Get') and resource ('curated code examples'), specifying the target ('for a specific topic and library'). It distinguishes itself from siblings like 'get_docs' (documentation) or 'semantic_search' (general search) by focusing exclusively on code examples with explanations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'get_docs' for documentation or 'semantic_search' for broader searches, nor does it specify prerequisites or exclusions. Usage is implied only through the parameter descriptions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
