
Insights Knowledge Base MCP Server

search_report_profile

Search and retrieve multi-criteria report summaries from the Insights Knowledge Base MCP Server. Extract key details like titles, topics, publishers, and matched keywords to analyze and reference reports efficiently.

Instructions

This method queries report profiles matching a combination of criteria. The LLM must extract the parameters below from the user's message (user_message). ⚠️ Note: whenever the LLM cites results returned by this method, it must tell the user clearly and prominently, in Markdown, which report it is citing and where to access it, e.g. "Viewpoint cited from *Open source technology in the age of AI*. Path <insert the full file_uri here if "file_uri" is non-empty>"

Parameters:
    keywords: List[str] = None — keywords for the whole report.
    ⚠️ Note:
        - Automatically translate each keyword into both Chinese and English.
        - For example, the user input "帮我查询下科技上市公司前景哈?" ("look up the prospects of listed tech companies for me") should become ["科技", "technology", "上市公司", "publicly listed company", "前景", "prospect"]

    title: str = "" — words the report title must contain.
    content: str = "" — words the report content must contain.
    publisher: str = "" — report publisher.
    start_date: Optional[datetime] = None — start date of the query range.
    end_date: Optional[datetime] = None — end date of the query range.
    match_logic: str = "OR" — matching logic, "OR" or "AND"; **prefer "OR"**.
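The bilingual-keyword rule above can be sketched as follows. This is an illustration only: the translation table and the `expand_keywords` helper are hypothetical, since in practice the LLM itself performs the translation before calling the tool.

```python
# Hypothetical translation table, for illustration only.
BILINGUAL = {
    "科技": "technology",
    "上市公司": "publicly listed company",
    "前景": "prospect",
}

def expand_keywords(zh_terms):
    """Return each Chinese term followed by its English counterpart."""
    out = []
    for term in zh_terms:
        out.append(term)
        out.append(BILINGUAL[term])
    return out

# Arguments an LLM client might assemble for search_report_profile.
args = {
    "keywords": expand_keywords(["科技", "上市公司", "前景"]),
    "match_logic": "OR",  # the preferred default per the instructions above
    "page_index": 1,
}
print(args["keywords"])
# → ['科技', 'technology', '上市公司', 'publicly listed company', '前景', 'prospect']
```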

Returns:
    results: report overviews
      - "file_name": report name
      - "topic": report topic
      - "content": overall report summary
      - "published_by": publishing organization
      - "published_date": publication date
      - "file_full_path": local path where the report is stored
      - "matched_keywords": matched keyword groups
    current_page: current page number. ⚠️ When the current page is less than the total page count, the LLM must end by telling the user they can type "下一页" ("next page") to retrieve more records.
    total_pages: total number of pages
    total_matches: total number of matching records
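A response with the field names listed above might look like the sketch below; every value is made up for illustration, and the final `if` applies the pagination rule from the description.

```python
# Illustrative response shape; all values are hypothetical.
response = {
    "results": [
        {
            "file_name": "Open source technology in the age of AI",
            "topic": "technology",
            "content": "An overall summary of the report.",
            "published_by": "Example Institute",              # hypothetical publisher
            "published_date": "2024-01-01",                   # hypothetical date
            "file_full_path": "/reports/open-source-ai.pdf",  # hypothetical path
            "matched_keywords": ["technology"],
        }
    ],
    "current_page": 1,
    "total_pages": 3,
    "total_matches": 25,
}

# Pagination rule: when more pages remain, invite the user to type "下一页".
if response["current_page"] < response["total_pages"]:
    print('可输入"下一页"查询更多记录')  # "type '下一页' to retrieve more records"
```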

Input Schema

| Name        | Required | Description | Default |
| ----------- | -------- | ----------- | ------- |
| content     | No       |             |         |
| end_date    | No       |             |         |
| keywords    | No       |             |         |
| match_logic | No       |             | OR      |
| page_index  | No       |             |         |
| publisher   | No       |             |         |
| start_date  | No       |             |         |
| title       | No       |             |         |

Implementation Reference

  • Handler function for the 'search_report_profile' MCP tool. Includes registration via @mcp.tool(), input schema via type hints and docstring, and core logic delegating to FileRetriever.
    from datetime import datetime
    from typing import List, Optional

    # SearchCriteria and FileRetriever are project-local imports, and `mcp`
    # is the server's FastMCP instance; none of them are shown on this page.

    @mcp.tool()
    async def search_report_profile(
            keywords: List[str] = None,
            title: str = "",
            content: str = "",
            publisher: str = "",
            start_date: Optional[datetime] = None,
            end_date: Optional[datetime] = None,
            match_logic: str = "OR",
            page_index: int = 1
    ):
        """Query the overall profile of reports matching a combination of criteria.
        The LLM must extract the parameters below from the user's message (user_message).
        ⚠️ Note: whenever the LLM cites results returned by this method, it must tell
        the user clearly and prominently, in Markdown, which report it is citing and
        where to access it, e.g. "**Viewpoint cited from *Open source technology in
        the age of AI*. (View the full report)[<insert download_url here if
        "download_url" is non-empty>]**"
        !!! List a separate download_url for each report; do not point vaguely at a
        single address that may not exist.

        Parameters:
            keywords: List[str] = None — keywords for the whole report.
            title: str = "" — words the report title must contain.
            content: str = "" — words the report content must contain.
            publisher: str = "" — report publisher.
            start_date: Optional[datetime] = None — start date of the query range.
            end_date: Optional[datetime] = None — end date of the query range.
            match_logic: str = "OR" — matching logic, "OR" or "AND"; prefer "OR".

        Returns:
            results: report overviews
              - file_name: report name
              - topic: report topic
              - content: overall report summary
              - published_by: publishing organization
              - published_date: publication date
              - local_path: local path where the report is stored
              - download_url: web link to the report
              - matched_keywords: matched keyword groups
            current_page: current page number. ⚠️ When the current page is less than
              the total page count, the LLM must end by telling the user they can
              type "下一页" ("next page") to retrieve more records.
            total_pages: total number of pages
            total_matches: total number of matching records

        The LLM must organize the returned results into fluent language for the user.
        """
        keywords = keywords or []
        criteria = SearchCriteria(
            keywords=keywords,
            title=title,
            content=content,
            publisher=publisher,
            start_date=start_date,
            end_date=end_date,
            match_logic=match_logic,  # type: ignore
        )

        retriever = FileRetriever()
        return retriever.run(criteria, page_index)
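The `FileRetriever` and `SearchCriteria` internals are not shown on this page. Under that caveat, a rough sketch of how the `match_logic` parameter could combine per-keyword hits over a report summary:

```python
def keyword_hits(text, keywords):
    """Return the subset of keywords found in the text (case-insensitive)."""
    lowered = text.lower()
    return [kw for kw in keywords if kw.lower() in lowered]

def matches(text, keywords, match_logic="OR"):
    """OR: any keyword suffices; AND: every keyword must appear."""
    if not keywords:
        return True  # no keyword filter supplied
    hits = keyword_hits(text, keywords)
    return bool(hits) if match_logic == "OR" else len(hits) == len(keywords)

summary = "Open source technology in the age of AI"
print(matches(summary, ["technology", "prospect"], "OR"))   # → True
print(matches(summary, ["technology", "prospect"], "AND"))  # → False
```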
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and delivers substantial behavioral context: it specifies mandatory citation formatting requirements, automatic keyword translation rules, pagination behavior with '下一页' prompts, and match_logic defaults. It doesn't cover rate limits, authentication needs, or error conditions, but provides more than minimum behavioral disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with purpose but contains extensive operational instructions (citation formatting, translation rules, pagination prompts) that could be streamlined. While all content is relevant, the mix of tool purpose, LLM instructions, parameter details, and return format creates a somewhat dense structure that could be better organized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 8 parameters, 0% schema coverage, no annotations, and no output schema, the description provides comprehensive parameter semantics, return format details, and behavioral requirements. The main gap is lack of error handling or edge case information, but it covers core functionality thoroughly given the absence of structured metadata.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage for 8 parameters, the description fully compensates by explaining every parameter's purpose, providing keyword translation examples, clarifying match_logic options and priority, and detailing date parameter usage. It adds significant meaning beyond the bare schema, including implementation requirements like automatic bilingual translation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as '查询多条件组合的报告概况' (query multi-condition combined report profiles), which is a specific verb+resource combination. It distinguishes from the sibling tool 'search_content_detail' by focusing on profile-level results rather than detailed content. However, it doesn't explicitly contrast with the sibling beyond the name difference.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage guidance through parameter explanations and the match_logic priority ('优先用 "OR"', "prefer OR"), but lacks explicit when-to-use directives or comparisons with the sibling tool. The LLM instructions about result citation and pagination prompts are operational rather than selection guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
