
Documentation Search MCP Server

by gemini2026

get_learning_path

Generate structured learning paths for programming libraries based on your experience level, providing progressive topics and resources for effective skill development.

Instructions

Get a structured learning path for a library based on experience level.

Args:
    library: The library to create a learning path for
    experience_level: Your current level ("beginner", "intermediate", "advanced")

Returns:
    Structured learning path with progressive topics and resources

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| library | Yes | | |
| experience_level | No | | beginner |
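
The table above can be expressed as a JSON Schema, reconstructed here as a Python dict. This is an inference from the table, not the schema as published by the server, which may differ in details; field descriptions are left out because the published schema provides none.

```python
# Plausible reconstruction of the tool's input schema, inferred from the
# parameter table: one required string ("library") and one optional string
# ("experience_level") defaulting to "beginner".
input_schema = {
    "type": "object",
    "properties": {
        "library": {"type": "string"},
        "experience_level": {
            "type": "string",
            "default": "beginner",
        },
    },
    "required": ["library"],
}

print(input_schema["required"])  # ['library']
```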

Implementation Reference

  • The core handler function for the 'get_learning_path' tool, decorated with @mcp.tool(). It generates a structured learning path based on the library and experience level using predefined topic lists for each level. Returns a dictionary with the learning path details.
    @mcp.tool()
    async def get_learning_path(library: str, experience_level: str = "beginner"):
        """
        Get a structured learning path for a library based on experience level.
    
        Args:
            library: The library to create a learning path for
            experience_level: Your current level ("beginner", "intermediate", "advanced")
    
        Returns:
            Structured learning path with progressive topics and resources
        """
        # Dynamic learning path generation based on difficulty
        level_topics = {
            "beginner": [
                "Getting Started",
                "Basic Concepts",
                "First Examples",
                "Common Patterns",
            ],
            "intermediate": [
                "Advanced Features",
                "Best Practices",
                "Integration",
                "Testing",
            ],
            "advanced": [
                "Performance Optimization",
                "Advanced Architecture",
                "Production Deployment",
                "Monitoring",
            ],
        }
    
        if experience_level not in level_topics:
            return {"error": f"Experience level {experience_level} not supported"}
    
        learning_steps = []
        for i, topic in enumerate(level_topics[experience_level]):
            learning_steps.append(
                {
                    "step": i + 1,
                    "topic": f"{library.title()} - {topic}",
                    "content_type": "tutorial",
                    "search_query": f"{library} {topic.lower()}",
                    "target_library": library,
                    "estimated_time": "2-4 hours",
                }
            )
    
        return {
            "library": library,
            "experience_level": experience_level,
            "total_topics": len(learning_steps),
            "estimated_total_time": f"{len(learning_steps) * 2}-{len(learning_steps) * 4} hours",
            "learning_path": learning_steps,
            "next_level": {
                "beginner": "intermediate",
                "intermediate": "advanced",
                "advanced": "Consider specializing in specific areas or exploring related technologies",
            }.get(experience_level, ""),
        }
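
A minimal standalone sketch shows the shape of the dictionary the handler returns. The `@mcp.tool()` decorator is removed and only the "beginner" topic list is kept so the snippet runs outside the MCP runtime; the library name "fastapi" is just an example. Field names match the reference implementation above.

```python
import asyncio

# Trimmed, standalone copy of the handler above, runnable without an MCP
# server. It numbers each topic, derives a search query, and reports a
# 2-4 hour estimate per topic.
async def get_learning_path(library: str, experience_level: str = "beginner"):
    level_topics = {
        "beginner": [
            "Getting Started", "Basic Concepts",
            "First Examples", "Common Patterns",
        ],
    }
    if experience_level not in level_topics:
        return {"error": f"Experience level {experience_level} not supported"}
    steps = [
        {
            "step": i + 1,
            "topic": f"{library.title()} - {topic}",
            "search_query": f"{library} {topic.lower()}",
        }
        for i, topic in enumerate(level_topics[experience_level])
    ]
    return {
        "library": library,
        "experience_level": experience_level,
        "total_topics": len(steps),
        "estimated_total_time": f"{len(steps) * 2}-{len(steps) * 4} hours",
        "learning_path": steps,
    }

result = asyncio.run(get_learning_path("fastapi"))
print(result["estimated_total_time"])       # 8-16 hours
print(result["learning_path"][0]["topic"])  # Fastapi - Getting Started
```

Note that an unsupported level (e.g. "expert") takes the error branch and returns a dictionary with a single "error" key rather than raising, so callers should check for that key before reading the path fields.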

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool returns a 'structured learning path with progressive topics and resources,' which hints at a read-only, non-destructive operation, but lacks details on permissions, rate limits, error handling, or output format. For a tool with zero annotation coverage, this is insufficient to fully inform the agent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with the core purpose stated first, followed by brief parameter and return explanations. Each sentence adds value without redundancy, though it could be slightly more structured (e.g., bullet points).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no annotations, no output schema), the description is minimally adequate. It covers the basic purpose and parameters but lacks details on behavioral traits, usage context, and output specifics, which are needed for full agent understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 0%, so the description must compensate. It adds meaning by explaining that 'library' is 'The library to create a learning path for' and 'experience_level' is 'Your current level' with enum values, which clarifies semantics beyond the bare schema. However, it doesn't detail parameter constraints or examples, leaving some gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get a structured learning path for a library based on experience level.' It specifies the verb ('Get'), resource ('structured learning path'), and key inputs ('library', 'experience level'), making the function evident. However, it doesn't explicitly differentiate from sibling tools like 'get_docs' or 'get_code_examples', which might also provide learning resources, so it's not a perfect 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions the tool's function but doesn't specify scenarios, prerequisites, or exclusions, nor does it reference sibling tools like 'get_docs' or 'suggest_libraries' that might overlap. This leaves the agent without clear usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/gemini2026/documentation-search-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.