
RTFD (Read The F*****g Docs)

by aserper

search_library_docs

Find library documentation using PyPI and GitHub data to prevent outdated code generation and API hallucinations.

Instructions

Find docs for a library using PyPI metadata and GitHub repos combined. Returns data in JSON format with token statistics.
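For illustration, an aggregated result might look like the following. The top-level keys (`library`, `pypi`, `godocs`, `github_repos`, and per-provider `<name>_error` entries) come from the implementation reference below; the field values inside each provider payload are invented for this sketch:

```python
# Hypothetical aggregated result for a query like "requests".
# Top-level keys mirror the aggregator's key_mapping; inner values are examples only.
example_result = {
    "library": "requests",
    "pypi": {"name": "requests", "docs_url": "https://requests.readthedocs.io"},
    "github_repos": [{"full_name": "psf/requests"}],
    "godocs_error": "library not found",  # a failing provider reports an error key
}

print(sorted(example_result.keys()))
```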

Input Schema

| Name | Required | Description | Default |
|---------|----------|----------------------------------------------------|---------|
| library | Yes | Name of the library to look up | |
| limit | No | Maximum number of results to return per provider | 5 |
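As a sketch of how a client would invoke this tool, the JSON-RPC payload below follows the standard MCP `tools/call` shape; the chosen library name and `id` are arbitrary, and the exact transport depends on your MCP client SDK:

```python
import json

# Minimal JSON-RPC payload for calling search_library_docs via MCP's
# tools/call method. "arguments" matches the input schema above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_library_docs",
        "arguments": {"library": "requests", "limit": 5},
    },
}

print(json.dumps(request, indent=2))
```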

Implementation Reference

  • MCP tool registration for search_library_docs using FastMCP's @mcp.tool decorator, including the tool description.
    @mcp.tool(
        description="Find docs for a library using PyPI metadata and GitHub repos combined. Returns data in JSON format with token statistics."
    )
  • Core handler function that executes the tool logic by aggregating provider results and serializing the output.
    async def search_library_docs(library: str, limit: int = 5) -> CallToolResult:
        """Aggregated library documentation search across all providers."""
        result = await _locate_library_docs(library, limit=limit)
        return serialize_response_with_meta(result)
  • Key helper function that implements the aggregation logic: checks cache, queries supporting providers via their search_library method, handles results and errors, and updates cache.
    async def _locate_library_docs(library: str, limit: int = 5) -> dict[str, Any]:
        """
        Try to find documentation links for a given library using all available providers.
        This is the aggregator function that combines results from PyPI, GoDocs, and GitHub.
        """
        result: dict[str, Any] = {"library": library}

        # Check cache first
        cache_enabled, cache_ttl = get_cache_config()
        cache_key = f"search:{library}:{limit}"
        if cache_enabled:
            # Cleanup expired entries occasionally (could be optimized)
            # For now, we rely on lazy cleanup or external process,
            # but let's do a quick check on read if we wanted strict TTL.
            # The CacheManager.get() returns None if not found.
            # We can also run cleanup on startup or periodically.
            # Here we just check if we have a valid entry.
            cached_entry = _cache_manager.get(cache_key)
            if cached_entry:
                # Check TTL
                age = __import__("time").time() - cached_entry.timestamp
                if age < cache_ttl:
                    return cached_entry.data

        providers = _get_provider_instances()

        # Query each provider that supports library search
        for provider_name, provider in providers.items():
            metadata = provider.get_metadata()
            if not metadata.supports_library_search:
                continue

            provider_result = await provider.search_library(library, limit=limit)

            if provider_result.success:
                # Success: add data to result
                # Map provider name to appropriate result key
                key_mapping = {
                    "pypi": "pypi",
                    "godocs": "godocs",
                    "github": "github_repos",
                }
                result_key = key_mapping.get(provider_name, provider_name)
                result[result_key] = provider_result.data
            elif provider_result.error:
                # Error: add error message (skip if error is None - silent fail)
                error_key = f"{provider_name}_error"
                result[error_key] = provider_result.error

        # Update cache if enabled
        if cache_enabled:
            _cache_manager.set(cache_key, result)

        return result

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/aserper/RTFD'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.