
Nexus MCP Server

by rcdelacruz

nexus_read

Extracts clean content from URLs by parsing articles or documentation with intelligent focus modes for general text or code-focused reading.

Instructions

Reads a URL with intelligent parsing logic.

Args:
    url: The URL to visit.
    focus:
        'general' = Returns clean article text (Exa style).
        'code'    = Returns only headers, code blocks, and tables (Ref style).
        'auto'    = Detects if it's a doc site and switches to 'code' mode.

Returns:
    Parsed and cleaned content from the URL.
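
A hypothetical direct invocation, for orientation (in practice an MCP client calls the tool over the protocol rather than importing the function):

    # Illustrative only; "docs" in the URL makes focus="auto" resolve to code mode.
    content = await nexus_read("https://docs.python.org/3/library/asyncio.html")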

Input Schema

Name  | Required | Description | Default
url   | Yes      |             |
focus | No       |             | auto
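
Illustrative arguments for a call (values are placeholders):

    # Example arguments object for a tools/call request to nexus_read.
    arguments = {
        "url": "https://example.com/docs/api",  # required
        "focus": "code",                        # optional; defaults to "auto"
    }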

Output Schema

Name   | Required | Description | Default
result | Yes      |             |
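
The structured output wraps the tool's return string in a single required field; an illustrative value:

    # Example structured content matching the output schema.
    output = {
        "result": "=== SOURCE: https://example.com/docs/api ===\n=== MODE: CODE ===\n..."
    }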

Implementation Reference

  • The complete implementation of the 'nexus_read' tool handler, including registration via the @mcp.tool() decorator. It fetches the given URL, parses the HTML with BeautifulSoup, and extracts content according to the 'focus' parameter: 'general' returns full text, 'code' returns headers, code blocks, and tables, and 'auto' detects the page type and picks between them.
    @mcp.tool()
    async def nexus_read(url: str, focus: str = "auto") -> str:
        """
        Reads a URL with intelligent parsing logic.
    
        Args:
            url: The URL to visit.
            focus:
                'general' = Returns clean article text (Exa style).
                'code'    = Returns only headers, code blocks, and tables (Ref style).
                'auto'    = Detects if it's a doc site and switches to 'code' mode.
    
        Returns:
            Parsed and cleaned content from the URL.
        """
        logger.info(f"Read requested - URL: '{url}', Focus: {focus}")
    
        # Validate inputs
        if not url or not url.strip():
            error_msg = "URL cannot be empty"
            logger.error(error_msg)
            return f"Error: {error_msg}"
    
        if focus not in ["auto", "general", "code"]:
            error_msg = f"Invalid focus '{focus}'. Must be 'auto', 'general', or 'code'"
            logger.error(error_msg)
            return f"Error: {error_msg}"
    
        url = url.strip()
    
        # Validate URL format
        if not url.startswith(("http://", "https://")):
            error_msg = "URL must start with http:// or https://"
            logger.error(error_msg)
            return f"Error: {error_msg}"
    
        # Auto-detection logic
        original_focus = focus
        if focus == "auto":
            technical_indicators = ["docs", "api", "reference", "github", "guide", "documentation"]
            if any(ind in url.lower() for ind in technical_indicators):
                focus = "code"
                logger.debug(f"Auto-detected technical site, switching to code focus")
            else:
                focus = "general"
                logger.debug(f"Auto-detected general site, using general focus")
    
        async with httpx.AsyncClient(
            follow_redirects=True,
            headers={"User-Agent": "NexusMCP/1.0"},
            timeout=REQUEST_TIMEOUT
        ) as client:
            try:
                response = await client.get(url)
                response.raise_for_status()
    
                logger.debug(f"Successfully fetched URL: {url} (Status: {response.status_code})")
    
                soup = bs4.BeautifulSoup(response.text, 'html.parser')
    
                # Pre-cleaning (Remove junk common to all modes)
                for trash in soup(["script", "style", "nav", "footer", "iframe", "svg", "noscript"]):
                    trash.decompose()
    
                output = []
                output.append(f"=== SOURCE: {url} ===")
                output.append(f"=== MODE: {focus.upper()} ===\n")
    
                if focus == "code":
                    # REF MODE (High Signal / Low Noise)
                    relevant_tags = soup.find_all(['h1', 'h2', 'h3', 'h4', 'pre', 'code', 'table'])
    
                    for tag in relevant_tags:
                        if tag.name in ['h1', 'h2', 'h3', 'h4']:
                            header_text = tag.get_text(strip=True)
                            if header_text:
                                output.append(f"\n## {header_text}")
                        elif tag.name == 'pre':
                            code_text = tag.get_text()
                            if code_text.strip():
                                output.append(f"```\n{code_text}\n```")
                        elif tag.name == 'code' and tag.parent.name != 'pre':
                            code_text = tag.get_text(strip=True)
                            if code_text:
                                output.append(f"`{code_text}`")
                        elif tag.name == 'table':
                            # Enhanced table extraction
                            try:
                                rows = tag.find_all('tr')
                                if rows:
                                    output.append("\n[Table]")
                                    for row in rows[:10]:  # Limit to first 10 rows
                                        cells = row.find_all(['td', 'th'])
                                        cell_texts = [cell.get_text(strip=True) for cell in cells]
                                        if cell_texts:
                                            output.append(" | ".join(cell_texts))
                            except Exception as table_error:
                                logger.warning(f"Table parsing failed: {table_error}")
                                output.append("\n[Table - parsing failed]")
    
                    if len(output) < MIN_CODE_ELEMENTS_THRESHOLD:
                        fallback_msg = f"Code-focused extraction found minimal content ({len(output)} elements). The page may not contain structured documentation. Try focus='general' for better results."
                        logger.warning(f"Insufficient code elements found at {url}")
                        return fallback_msg
    
                else:
                    # GENERAL MODE (Full Context)
                    text = soup.get_text(separator='\n')
                    lines = [line.strip() for line in text.split('\n') if line.strip()]
                    output.append("\n".join(lines))
    
                result = "\n".join(output)[:MAX_CONTENT_LENGTH]
                logger.info(f"Read successful - Extracted {len(result)} characters from {url}")
    
                if len("\n".join(output)) > MAX_CONTENT_LENGTH:
                    result += f"\n\n[Content truncated at {MAX_CONTENT_LENGTH} characters]"
                    logger.debug(f"Content truncated for {url}")
    
                return result
    
            except httpx.TimeoutException:
                error_msg = f"Request timed out after {REQUEST_TIMEOUT}s"
                logger.error(f"Timeout reading {url}")
                return f"Error: {error_msg}"
            except httpx.HTTPStatusError as e:
                error_msg = f"HTTP error {e.response.status_code}: {e.response.reason_phrase}"
                logger.error(f"HTTP error reading {url}: {error_msg}")
                return f"Error: {error_msg}"
            except httpx.RequestError as e:
                error_msg = f"Network error: {str(e)}"
                logger.error(f"Network error reading {url}: {error_msg}")
                return f"Error: {error_msg}"
            except Exception as e:
                error_msg = f"Unexpected error reading URL: {str(e)}"
                logger.exception(f"Unexpected error reading {url}")
                return f"Error: {error_msg}"
  • The docstring that supplies the nexus_read tool's description, which FastMCP uses when generating the tool schema.
    """
    Reads a URL with intelligent parsing logic.
    
    Args:
        url: The URL to visit.
        focus:
            'general' = Returns clean article text (Exa style).
            'code'    = Returns only headers, code blocks, and tables (Ref style).
            'auto'    = Detects if it's a doc site and switches to 'code' mode.
    
    Returns:
        Parsed and cleaned content from the URL.
    """
  • nexus_server.py:97-97 (registration)
    The @mcp.tool() decorator that registers the nexus_read function as an MCP tool in the FastMCP server.
    @mcp.tool()
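  • Roughly the input schema FastMCP derives from the signature above, shown as a Python dict (a sketch; exact output can vary by SDK version).
    input_schema = {
        "type": "object",
        "properties": {
            "url": {"type": "string"},
            "focus": {"type": "string", "default": "auto"},
        },
        "required": ["url"],
    }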
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the tool's parsing logic and return behavior ('Parsed and cleaned content from the URL'), which adds value beyond the input schema. However, it does not cover important aspects like error handling, rate limits, authentication needs, or performance characteristics, resulting in moderate transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
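
One way to close part of this gap is with tool annotations. A sketch assuming the MCP Python SDK's ToolAnnotations hints (advisory metadata, not enforced by the protocol):

    from mcp.types import ToolAnnotations

    # Advisory hints: the tool only reads remote content (no writes or
    # deletions), but it does reach out to arbitrary hosts on the open web.
    @mcp.tool(annotations=ToolAnnotations(readOnlyHint=True, openWorldHint=True))
    async def nexus_read(url: str, focus: str = "auto") -> str:
        ...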

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and well-structured, with a brief purpose statement followed by 'Args:' and 'Returns:' sections. Each sentence adds value without redundancy, making it easy to scan and understand. The formatting enhances readability without unnecessary verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, the absence of annotations, and the presence of an output schema, the description is largely complete. It covers the purpose, parameters, and return value adequately. However, it could improve by addressing behavioral aspects like error cases or limitations, which slightly reduces completeness for a tool with non-trivial parsing logic.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
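
The error-case gap could be closed with one sentence in the docstring's Returns section; the wording below mirrors what the handler actually returns:

    Returns:
        Parsed and cleaned content from the URL, or a string beginning
        with "Error:" on invalid input, timeout, HTTP, or network failure.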

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds significant meaning beyond the input schema, which has 0% description coverage. It explains the 'url' parameter as 'The URL to visit' and details the 'focus' parameter with three modes ('general', 'code', 'auto'), including their effects. This fully compensates for the schema's lack of descriptions, providing clear semantics for both parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
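
If the schema's missing field descriptions were ever worth fixing at the source, they can ride along in the signature; a sketch assuming FastMCP's pydantic-based schema generation:

    from typing import Annotated

    from pydantic import Field

    @mcp.tool()
    async def nexus_read(
        url: Annotated[str, Field(description="URL to read; must start with http:// or https://")],
        focus: Annotated[str, Field(description="'auto', 'general', or 'code'")] = "auto",
    ) -> str:
        ...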

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Reads a URL with intelligent parsing logic.' It specifies the verb ('Reads') and resource ('URL'), and the behavior it describes (parsing content from a given URL rather than searching) implicitly separates it from the sibling tool 'nexus_search'. However, the description never explicitly differentiates the two tools, which slightly limits clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through the 'focus' parameter options ('general', 'code', 'auto'), suggesting when to use each mode based on content type. However, it offers no explicit guidance on when to prefer this tool over 'nexus_search' or other alternatives, and mentions no prerequisites or exclusions, leaving the usage context vague.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
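
An illustrative rewrite of the docstring's opening that would add that guidance (wording is hypothetical):

    """
    Reads a single known URL and returns its parsed content.

    Use this tool when you already have a URL in hand; use nexus_search
    to find candidate pages first. Not suited to pages that require
    login or client-side JavaScript rendering.
    """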
