
Insights Knowledge Base MCP Server

search_content_detail

Query detailed report pages with multi-criteria combinations on the Insights Knowledge Base MCP Server. Supports keyword, title, content, publisher, and date filters for precise information retrieval. Results include report summaries, content, and access paths.

Instructions

This method queries report detail pages that match a combination of conditions. The LLM should extract the parameters below from the user's message (user_message). ⚠️Note: when citing results returned by this method, the LLM must state clearly and prominently, in markdown, which report is being cited and the specific access address! For example: "Viewpoint cited from 《21世纪CEO的成功法则》, pages 10 and 16. Path: <insert the full file_uri here if "file_uri" is non-empty>"

Parameters:
    keywords: List[str] = None, keywords for the report detail pages.
        ⚠️Note:
        - Each keyword is automatically translated into both Chinese and English
        - e.g. user input "帮我查询下科技上市公司前景哈?" → should become ["科技", "technology", "上市公司", "publicly listed company", "前景", "prospect"]
    
    title: str = "", words the detail-page title must contain.
    content: str = "", words the detail-page content must contain.
    publisher: str = "", publisher of the report.
    start_date: Optional[datetime] = None, start date of the query range.
    end_date: Optional[datetime] = None, end date of the query range.
    match_logic: str = "OR", match logic: "OR" or "AND", **prefer "OR"**.
    page_index: int = 1, page number; only the first page is shown by default.
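
The bilingual keyword expansion described above can be sketched as a small preprocessing step. The translation table below is a hypothetical stand-in; in practice the LLM itself produces the English counterpart for each extracted keyword:

```python
# Sketch of the Chinese/English keyword expansion rule above.
# TRANSLATIONS is an illustrative lookup, not part of the server.
TRANSLATIONS = {
    "科技": "technology",
    "上市公司": "publicly listed company",
    "前景": "prospect",
}

def expand_keywords(raw_keywords):
    """Return each keyword followed by its English translation, when known."""
    expanded = []
    for kw in raw_keywords:
        expanded.append(kw)
        if kw in TRANSLATIONS:
            expanded.append(TRANSLATIONS[kw])
    return expanded

print(expand_keywords(["科技", "上市公司", "前景"]))
# → ['科技', 'technology', '上市公司', 'publicly listed company', '前景', 'prospect']
```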

Returns:
    results: report details
      - file_name: name of the report the detail page comes from
      - page_number: page number
      - page_abstract: abstract
      - page_content: full content
      - page_keywords: detail-page keywords
      - published_by: publishing organization
      - published_date: publication date
      - file_full_path: local path where the report is stored
      - matched_keywords: matched keyword group
    current_page: current page number. ⚠️When it is less than total_pages, the LLM should end by telling the user they can type "下一页" (next page) to fetch more records.
    total_pages: total number of pages
    total_matches: total number of matching records
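
The pagination rule above (prompt for "下一页" whenever more pages remain) can be sketched as follows; `result` is assumed to be a dict shaped like the return structure listed here:

```python
def pagination_hint(result):
    """Return a 'next page' prompt when current_page < total_pages,
    per the return-structure note above; otherwise an empty string."""
    if result.get("current_page", 1) < result.get("total_pages", 1):
        return "可输入“下一页”查询更多记录。"  # "type '下一页' (next page) for more records"
    return ""

sample = {"current_page": 1, "total_pages": 3, "total_matches": 25}
print(bool(pagination_hint(sample)))  # → True
```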

Input Schema

Name         Required   Description   Default
content      No
end_date     No
keywords     No
match_logic  No                       OR
page_index   No
publisher    No
start_date   No
title        No
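
Assuming a standard MCP tools/call request, the arguments object for this schema might look like the following (all values illustrative; every field is optional):

```python
import json

# Illustrative arguments payload for calling search_content_detail.
arguments = {
    "keywords": ["科技", "technology", "前景", "prospect"],
    "publisher": "",        # blank matches any publisher
    "match_logic": "OR",    # "OR" or "AND"; "OR" is preferred
    "page_index": 1,        # first page by default
}
print(json.dumps(arguments, ensure_ascii=False))
```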

Implementation Reference

  • The handler function for the 'search_content_detail' tool. It is registered via the @mcp.tool() decorator. Constructs a SearchCriteria object from inputs and delegates to ContentRetriever.run() to perform the search and return paginated results. The docstring defines the expected input parameters and output format.
    @mcp.tool()
    async def search_content_detail(
            keywords: List[str] = None,
            title: str = "",
            content: str = "",
            publisher: str = "",
            start_date: Optional[datetime] = None,
            end_date: Optional[datetime] = None,
            match_logic: str = "OR",
            page_index: int = 1
    ):
        """Query report detail pages that match a combination of conditions. The LLM should extract the parameters below from the user's message (user_message).
        ⚠️Note: when citing results returned by this method, the LLM must state clearly and prominently, in markdown, which report is being cited and the specific access address!
        e.g. "**Viewpoint cited from 《21世纪CEO的成功法则》, pages 10 and 16. [View full report](<insert download_url here if "download_url" is non-empty>)**"
        !!!List a separate download_url for each report; do not point vaguely at a single address that may not exist.
    
        Parameters:
            keywords: List[str] = None, keywords for the report detail pages.
            title: str = "", words the detail-page title must contain.
            content: str = "", words the detail-page content must contain.
            publisher: str = "", publisher of the report.
            start_date: Optional[datetime] = None, start date of the query range.
            end_date: Optional[datetime] = None, end date of the query range.
            match_logic: str = "OR", match logic: "OR" or "AND", **prefer "OR"**.
            page_index: int = 1, page number; only the first page is shown by default.
    
        Returns:
            results: report details
              - file_name: name of the report the detail page comes from
              - page_number: page number
              - page_abstract: abstract
              - page_content: full content
              - page_keywords: detail-page keywords
              - published_by: publishing organization
              - published_date: publication date
              - local_path: local path where the report is stored
              - download_url: web link to the report
              - matched_keywords: matched keyword group
            current_page: current page number. ⚠️When it is less than total_pages, the LLM should end by telling the user they can type "下一页" (next page) to fetch more records.
            total_pages: total number of pages
            total_matches: total number of matching records
    
        The LLM should present the returned results to the user in fluent, natural language.
        """
        keywords = [] if not keywords else keywords
        criteria = SearchCriteria(
            keywords=keywords,
            title=title,
            content=content,
            publisher=publisher,
            start_date=start_date,
            end_date=end_date,
            match_logic=match_logic,  # type: ignore
        )
        retriever = ContentRetriever()
        result = retriever.run(criteria, page_index)
        return result
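
`SearchCriteria` and `ContentRetriever` are not shown on this page; a minimal `SearchCriteria` sketch consistent with the keyword arguments passed above might look like this (a hypothetical reconstruction, not the server's actual class):

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class SearchCriteria:
    """Hypothetical reconstruction: fields mirror the arguments that
    search_content_detail passes above; the real class may differ."""
    keywords: List[str] = field(default_factory=list)
    title: str = ""
    content: str = ""
    publisher: str = ""
    start_date: Optional[datetime] = None
    end_date: Optional[datetime] = None
    match_logic: str = "OR"

criteria = SearchCriteria(keywords=["前景", "prospect"], match_logic="OR")
print(criteria.match_logic)  # → OR
```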
  • The @mcp.tool() decorator registers the search_content_detail function as an MCP tool.
    async def search_content_detail(
  • The function signature provides the input schema with type annotations and defaults. The docstring elaborates on parameters and expected return structure.
    async def search_content_detail(
            keywords: List[str] = None,
            title: str = "",
            content: str = "",
            publisher: str = "",
            start_date: Optional[datetime] = None,
            end_date: Optional[datetime] = None,
            match_logic: str = "OR",
            page_index: int = 1
    ):
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It does well by specifying: 1) How results should be cited (markdown format with specific structure), 2) The automatic translation requirement for keywords, 3) Pagination behavior (when to prompt for 'next page'), 4) Default match logic preference ('优先用 OR'). These are valuable behavioral traits not evident from the schema alone.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is comprehensive but somewhat verbose and could be more front-loaded. The core purpose is stated first, but much of the text is parameter documentation that could potentially be streamlined. However, given the 0% schema coverage, the detail is necessary. The structure is logical but not optimally concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 8 parameters, 0% schema description coverage, no annotations, and no output schema, the description does an excellent job of providing context. It explains parameters thoroughly, describes the return structure in detail, and specifies behavioral requirements. The main gap is lack of explicit sibling tool differentiation, but overall it's quite complete given the complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by providing detailed semantic explanations for all 8 parameters. It explains: keyword translation requirements, title/content/publisher filtering, date range usage, match logic options with preference guidance, and pagination behavior. The description adds substantial value beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: '查询符合多条件组合的报告详情页面' (query report detail pages matching multiple conditions). It specifies the resource (report detail pages) and verb (query/search), though it doesn't explicitly differentiate from the sibling 'search_report_profile' tool. The purpose is specific but lacks sibling comparison context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage context through the parameter explanations and the note about how LLMs should handle results, but doesn't explicitly state when to use this tool versus alternatives. The sibling tool 'search_report_profile' exists but no comparison is made. Guidelines are present but not comprehensive.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
