get_server_docs
Retrieve structured documentation for the DevLens MCP server, including tool usage guides, workflow examples, and best practices to integrate web context into development environments.
Instructions
Get documentation about the WebDocx MCP server. Provides guidance on server capabilities, tool usage, workflows, and best practices.

Args:
- topic: Documentation topic - 'overview', 'tools', 'workflows', 'orchestration', or 'examples'

Returns: Formatted documentation for the requested topic.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| topic | No | Documentation topic: 'overview', 'tools', 'workflows', 'orchestration', or 'examples' | overview |
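A minimal sketch of how a client might invoke this tool over the MCP JSON-RPC protocol. The envelope shape follows the MCP `tools/call` method; the exact wire format depends on the client library you use.

```python
# Hypothetical MCP tools/call request payload for get_server_docs.
# Only "topic" is accepted; omitting it falls back to the default "overview".
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_server_docs",
        "arguments": {"topic": "tools"},
    },
}
```

A client that omits `"arguments"` entirely should receive the overview documentation, since the parameter is optional with a declared default.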
Implementation Reference
- src/devlens/server.py:241-295 (handler): The core implementation of the `get_server_docs` tool, which provides documentation for the MCP server based on the requested topic.
```python
def get_server_docs(topic: str = "overview") -> str:
    """Get documentation about the WebDocx MCP server.

    Provides guidance on server capabilities, tool usage, workflows, and best practices.

    Args:
        topic: Documentation topic - 'overview', 'tools', 'workflows', 'orchestration', or 'examples'

    Returns:
        Formatted documentation for the requested topic.
    """
    docs = {
        "overview": """
# WebDocx MCP Server
MCP server for intelligent web research. 12 tools in 3 layers.

## Tools
Primitives: search_web, scrape_url, crawl_docs, summarize_page, extract_links
Composed: deep_dive, compare_sources, find_related, monitor_changes
Meta: suggest_workflow, classify_research_intent, get_server_docs

## Design
- Composable: small tools combine powerfully
- Smart: auto-orchestration via suggest_workflow
- Efficient: Markdown output, token-optimized
- Context-aware: workflows adapt to research state

## Usage
search_web → scrape_url (simple)
suggest_workflow (auto-recommends)
deep_dive (multi-source aggregation)

## Topics
tools, philosophy, workflows, orchestration, examples
""",
        "tools": """
# Tools

## Primitives (fast, focused)
search_web(query, limit=5) - DuckDuckGo search, returns [{title,url,snippet}]
scrape_url(url) - Extract clean Markdown with metadata
summarize_page(url) - Headings only, triage before full scrape
extract_links(url, filter_external=True) - Categorize internal/external links
crawl_docs(root_url, max_pages=5) - Follow links, aggregate docs with TOC

## Composed (workflows)
deep_dive(topic, depth=3) - Search + parallel scraping + aggregation
compare_sources(topic, sources) - Analyze consensus/differences across 2-5 URLs
find_related(url, limit=5) - Discover similar resources via content analysis
monitor_changes(url, previous_hash) - Track content changes via hashing

## Meta (intelligence)
suggest_workflow(query, known_urls=[]) - Auto-recommend optimal tool sequence
classify_research_intent(query) - Detect research goal (7 patterns)
""",
        # 'workflows', 'orchestration', and 'examples' entries elided in this excerpt
    }
```

- src/devlens/server.py:240 (registration): Tool registration using the `@mcp.tool()` decorator.

```python
@mcp.tool()
```
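The `@mcp.tool()` decorator registers the function with the server so the MCP runtime can expose it via `tools/list` and dispatch `tools/call` requests to it. The pattern behind this can be illustrated with a toy registry; the `ToolRegistry` class below is hypothetical, not the real FastMCP API.

```python
# Toy sketch of decorator-based tool registration, illustrating the pattern
# behind @mcp.tool(). Names here are hypothetical illustrations.
class ToolRegistry:
    def __init__(self) -> None:
        self.tools = {}  # tool name -> callable

    def tool(self):
        """Return a decorator that registers the function under its own name."""
        def register(fn):
            self.tools[fn.__name__] = fn
            return fn  # leave the function itself unchanged
        return register


mcp = ToolRegistry()


@mcp.tool()
def get_server_docs(topic: str = "overview") -> str:
    # Abbreviated stand-in for the real handler's docs dict.
    docs = {"overview": "# WebDocx MCP Server"}
    # Fall back to the overview for unknown topics (an assumed behavior;
    # the real handler's return statement is not shown in the excerpt).
    return docs.get(topic, docs["overview"])
```

Because the decorator returns the function unchanged, `get_server_docs` remains directly callable in tests while also being discoverable by name in `mcp.tools`.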