Glama
debtstack-ai

DebtStack MCP Server

search_documents

Search SEC filing sections for debt-related terms like covenant language and credit agreement terms. Specify section type to target specific debt documents.

Instructions

Search SEC filing sections for specific terms. Section types: debt_footnote, credit_agreement, indenture, covenants, mda_liquidity. Use to find covenant language, credit agreement terms, or debt descriptions.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| query | Yes | Search terms | |
| ticker | No | Company ticker(s) | |
| section_type | No | Section type to search | |
| limit | No | Maximum results | 10 |
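For illustration, here is a hypothetical set of arguments an agent might pass (argument names match the schema above; the values are invented), along with the normalization the handler applies:

```python
# Hypothetical arguments for search_documents; names match the schema,
# values are illustrative only.
arguments = {
    "query": "springing lien covenant",
    "ticker": "ACME",            # hypothetical ticker
    "section_type": "covenants",
    "limit": None,               # omitted by the agent
}

# The handler drops None values and falls back to the default limit of 10.
params = {k: v for k, v in arguments.items() if v is not None}
params.setdefault("limit", 10)
print(params)  # limit defaults to 10 because the agent left it unset
```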

Implementation Reference

  • MCP server handler for the search_documents tool. Parses arguments, calls api_get('/documents/search'), and formats results using the format_document_result helper.
    elif name == "search_documents":
        params = {k: v for k, v in arguments.items() if v is not None}
        params.setdefault("limit", 10)
        result = api_get("/documents/search", params)
    
        docs = result.get("data", [])
        if not docs:
            return [TextContent(type="text", text=f"No documents found for '{params.get('q', '')}'.")]
    
        text = f"Found {len(docs)} matching sections:\n\n"
        text += "\n\n---\n\n".join(format_document_result(d) for d in docs)
        return [TextContent(type="text", text=text)]
  • MCP tool registration with input schema for search_documents: query (required string), ticker, section_type (enum), limit (integer).
    Tool(
        name="search_documents",
        description=(
            "Search SEC filing sections for specific terms. "
            "Section types: debt_footnote, credit_agreement, indenture, covenants, mda_liquidity. "
            "Use to find covenant language, credit agreement terms, or debt descriptions."
        ),
        inputSchema={
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "Search terms"
                },
                "ticker": {
                    "type": "string",
                    "description": "Company ticker(s)"
                },
                "section_type": {
                    "type": "string",
                    "enum": ["debt_footnote", "credit_agreement", "indenture", "covenants", "mda_liquidity", "exhibit_21", "guarantor_list"],
                    "description": "Section type to search"
                },
                "limit": {
                    "type": "integer",
                    "description": "Maximum results (default 10)"
                }
            },
            "required": ["query"]
        }
    )
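Client code can sanity-check arguments against this schema before calling the tool. A hand-rolled sketch of such a check (the function name is assumed; the enum values are copied from the schema above):

```python
SECTION_TYPES = [
    "debt_footnote", "credit_agreement", "indenture", "covenants",
    "mda_liquidity", "exhibit_21", "guarantor_list",
]

def validate_args(args: dict) -> list:
    """Return a list of schema problems for search_documents arguments (sketch)."""
    errors = []
    if not isinstance(args.get("query"), str):
        errors.append("query is required and must be a string")
    section = args.get("section_type")
    if section is not None and section not in SECTION_TYPES:
        errors.append(f"section_type must be one of {SECTION_TYPES}")
    if "limit" in args and not isinstance(args["limit"], int):
        errors.append("limit must be an integer")
    return errors
```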
  • LangChain toolkit registration: DebtStackSearchDocumentsTool is included in the get_tools() list.
        DebtStackSearchDocumentsTool(api_wrapper=self.api_wrapper),
        DebtStackGetChangesTool(api_wrapper=self.api_wrapper),
    ]
  • Helper function to format a single document search result into readable text with ticker, filing info, and snippet.
    def format_document_result(d: dict) -> str:
        """Format document search result."""
        lines = [
            f"**{d.get('section_type', 'Document')}** - {d.get('ticker', '?')}",
            f"Filing: {d.get('doc_type', '?')} ({d.get('filing_date', '?')})"
        ]
    
        if d.get('snippet'):
            # Clean up HTML tags in snippet
            snippet = d['snippet'].replace('<b>', '**').replace('</b>', '**')
            lines.append(f"...{snippet}...")
    
        return "\n".join(lines)
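Applied to a hypothetical search hit (the helper is repeated here so the snippet is self-contained; the field values are invented), the helper produces output like:

```python
def format_document_result(d: dict) -> str:
    """Format document search result."""
    lines = [
        f"**{d.get('section_type', 'Document')}** - {d.get('ticker', '?')}",
        f"Filing: {d.get('doc_type', '?')} ({d.get('filing_date', '?')})"
    ]
    if d.get('snippet'):
        # Clean up HTML tags in snippet
        snippet = d['snippet'].replace('<b>', '**').replace('</b>', '**')
        lines.append(f"...{snippet}...")
    return "\n".join(lines)

# Hypothetical search hit; field values are illustrative only.
hit = {
    "section_type": "covenants",
    "ticker": "ACME",
    "doc_type": "10-K",
    "filing_date": "2024-02-15",
    "snippet": "a <b>springing lien</b> is triggered when",
}
print(format_document_result(hit))
# **covenants** - ACME
# Filing: 10-K (2024-02-15)
# ...a **springing lien** is triggered when...
```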
  • LangChain API wrapper method that makes the actual HTTP GET request to /documents/search endpoint.
    def search_documents(self, **kwargs) -> Dict[str, Any]:
        """Search SEC filing sections."""
        return self._get("/documents/search", params=kwargs)
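A stubbed illustration of this kwargs-forwarding pattern (the `_get` stub below merely echoes its inputs in place of the real HTTP call; everything except the `search_documents` method body is an assumption):

```python
from typing import Any, Dict, Optional

class DebtStackAPIWrapper:
    """Minimal stand-in for the real API wrapper (sketch)."""

    def _get(self, path: str, params: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
        # The real wrapper performs an HTTP GET; stubbed here to echo its inputs.
        return {"path": path, "params": params or {}}

    def search_documents(self, **kwargs) -> Dict[str, Any]:
        """Search SEC filing sections."""
        return self._get("/documents/search", params=kwargs)

result = DebtStackAPIWrapper().search_documents(query="covenant", limit=5)
print(result)
# {'path': '/documents/search', 'params': {'query': 'covenant', 'limit': 5}}
```

Keyword arguments pass straight through as query parameters, which is why the MCP handler above can filter out `None` values before the request is made.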
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description states only that this is a search operation, with no additional behavioral traits (e.g., authentication, rate limits, or side effects). It relies on the tool name and common knowledge.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, but its section-type list is incomplete: it omits two of the enum values (exhibit_21 and guarantor_list). Efficient, but it could be more accurate.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description is adequate for a simple search tool but lacks details on behavior such as pagination, limits, or output structure. The incomplete enum listing and the absence of guidance on which sibling search tool to use leave gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers 100% of parameter descriptions, so the baseline is 3. The description adds some usage context for section_type (listing part of the enum) and overall purpose, but does not significantly enhance understanding of each parameter beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool searches SEC filing sections for specific terms and gives examples of section types. However, the schema enum includes two additional types not listed in the description, slightly reducing completeness.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions 'Use to find covenant language, credit agreement terms, or debt descriptions,' giving context but not explicitly differentiating this tool from siblings like search_bonds or search_companies. No when-not-to-use guidance is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
