
Amazon Bedrock Knowledge Base MCP Server

by r3-yamauchi

retrieve

Execute RAG queries against Amazon Bedrock Knowledge Bases to retrieve relevant documents using vector search for enhanced information retrieval.

Instructions

Executes a RAG (Retrieval-Augmented Generation) query against a Knowledge Base.

Uses vector search to retrieve documents relevant to the query.

Args:
    knowledge_base_id: ID of the Knowledge Base to query
    query: Search query text
    number_of_results: Number of results to return (default: 5, range: 1-100)

Returns:
    RetrieveResponseDict: Query results
    - results: List of search results (each result includes content, location, score, and metadata)
    - query: The query text that was executed

Raises:
    ValueError: If an input is invalid (e.g., knowledge_base_id or query is empty, or number_of_results is out of range)
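The response shape described above can be sketched as a TypedDict. This is a hypothetical reconstruction for illustration; the actual RetrieveResponseDict definition lives in the server source.

```python
from typing import Any, TypedDict

class RetrievalResult(TypedDict, total=False):
    content: dict[str, Any]   # document chunk text, e.g. {"text": "..."}
    location: dict[str, Any]  # source location, e.g. an S3 URI
    score: float              # relevance score
    metadata: dict[str, Any]  # arbitrary metadata attached to the chunk

class RetrieveResponseDict(TypedDict):
    results: list[RetrievalResult]  # search results, sorted by relevance
    query: str                      # the query text that was executed

# A minimal well-formed response
response: RetrieveResponseDict = {"results": [], "query": "what is RAG?"}
```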

Input Schema

Name               Required  Description                          Default
knowledge_base_id  Yes       ID of the Knowledge Base to query    —
query              Yes       Search query text                    —
number_of_results  No        Number of results to return (1-100)  5

Output Schema

Name     Required  Description
query    Yes       The query text that was executed
results  Yes       List of search results

Implementation Reference

  • The 'retrieve' MCP tool handler function registered in the MCP server. It validates inputs and delegates the actual RAG query to the bedrock_client.
    def retrieve(
        knowledge_base_id: str, query: str, number_of_results: int = 5
    ) -> RetrieveResponseDict:
        """
        Execute a RAG (Retrieval-Augmented Generation) query against a Knowledge Base.

        Uses vector search to retrieve documents relevant to the query.

        Args:
            knowledge_base_id: ID of the Knowledge Base to query
            query: Search query text
            number_of_results: Number of results to return (default: 5, range: 1-100)

        Returns:
            RetrieveResponseDict: Query results
                - results: List of search results (each includes content, location, score, and metadata)
                - query: The query text that was executed

        Raises:
            ValueError: If an input is invalid (knowledge_base_id or query is empty,
                number_of_results is out of range, etc.)
        """
        # Validate inputs using the shared helpers.
        # knowledge_base_id and query are required parameters.
        knowledge_base_id = validate_required_string(knowledge_base_id, "knowledge_base_id")
        query = validate_required_string(query, "query")

        # number_of_results must be between 1 and 100, per the AWS API limits.
        if number_of_results < 1 or number_of_results > 100:
            raise ValueError("number_of_results must be between 1 and 100")

        # Run the RAG query via the Bedrock client. Vector search retrieves the
        # documents most relevant to the query; results are sorted by relevance
        # score, and the requested number of results is returned.
        result = bedrock_client.retrieve(
            knowledge_base_id,  # leading/trailing whitespace already stripped
            query,              # leading/trailing whitespace already stripped
            number_of_results,  # number of results to return
        )
        return result
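The validate_required_string helper used above is not shown on this page. A minimal sketch consistent with how the handler uses it (stripping surrounding whitespace and rejecting empty values) might look like this; the real implementation may differ.

```python
def validate_required_string(value: str, name: str) -> str:
    """Strip surrounding whitespace and reject empty or non-string values."""
    if not isinstance(value, str):
        raise ValueError(f"{name} must be a string")
    stripped = value.strip()
    if not stripped:
        raise ValueError(f"{name} must not be empty")
    return stripped
```

Returning the stripped value is what lets the handler pass the cleaned parameters straight through to the Bedrock client.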
  • The bedrock_client's implementation of 'retrieve', which interacts with the Bedrock Agent Runtime API to perform the actual vector search.
    def retrieve(
        self,
        knowledge_base_id: str,
        query: str,
        number_of_results: int = 5,
    ) -> RetrieveResponseDict:
        """
        Execute a RAG (Retrieval-Augmented Generation) query against a Knowledge Base.

        Uses vector search to retrieve documents relevant to the query.

        Args:
            knowledge_base_id: ID of the Knowledge Base to query
            query: Search query text
            number_of_results: Number of results to return (default: 5, maximum: 100)

        Returns:
            RetrieveResponseDict: Query results
                - results: List of search results, each containing:
                    - content: Document content
                    - location: Document location (e.g., an S3 URI)
                    - score: Relevance score
                    - metadata: Metadata
                - query: The query text that was executed

        Raises:
            ClientError: If the AWS API call fails
        """
        try:
            # Run the RAG query via the Bedrock Agent Runtime API. The retrieve
            # API performs a vector search against the Knowledge Base and
            # returns the document chunks most relevant to the query.
            response = self.bedrock_agent_runtime.retrieve(
                knowledgeBaseId=knowledge_base_id,  # ID of the Knowledge Base to query
                retrievalConfiguration={
                    "vectorSearchConfiguration": {
                        "numberOfResults": number_of_results,  # number of results (1-100)
                        # Optional: "overrideSearchType": "HYBRID"  # hybrid search (future extension)
                        # Optional: "filter": {...}  # metadata filters (future extension)
                    }
                },
                retrievalQuery={"text": query},  # search query text
                # Optional: nextToken for pagination when there are many results
            )

            # Log the number of retrieved results (useful for debugging and monitoring).
            retrieval_results = response.get("retrievalResults", [])
            logger.info(f"Retrieved {len(retrieval_results)} results")

            # Shape the response: extract the search results from the AWS API
            # response and return them along with the query text. Each result
            # includes content, location (e.g., S3 URI), score, and metadata.
            return {
                "results": retrieval_results,  # search results, sorted by relevance
                "query": query,                # the query text that was executed
            }
        except ClientError:
            # Surface AWS API failures to the caller, as documented above.
            logger.exception("Bedrock retrieve call failed")
            raise
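Downstream consumers often flatten the returned results into readable text. A small, hypothetical helper (the function name and the sample data below are illustrative, not part of the server) might look like:

```python
from typing import Any

def format_results(response: dict[str, Any]) -> str:
    """Render a retrieve response as a numbered, human-readable listing."""
    lines = [f"Query: {response['query']}"]
    for i, result in enumerate(response.get("results", []), start=1):
        text = result.get("content", {}).get("text", "")
        score = result.get("score", 0.0)
        uri = result.get("location", {}).get("s3Location", {}).get("uri", "unknown")
        lines.append(f"{i}. [{score:.2f}] {text} ({uri})")
    return "\n".join(lines)

# Sample response in the shape returned by the tool
sample = {
    "query": "what is RAG?",
    "results": [
        {
            "content": {"text": "RAG combines retrieval with generation."},
            "location": {"s3Location": {"uri": "s3://docs/rag.md"}},
            "score": 0.91,
            "metadata": {},
        }
    ],
}
print(format_results(sample))
```

The nested `.get()` chains keep the helper tolerant of results whose location is not an S3 URI.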
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It does mention that it 'raises ValueError' for invalid inputs, which adds some error-handling context. However, it doesn't describe important behavioral aspects like authentication requirements, rate limits, performance characteristics, or what happens when no results are found. The description provides basic operational context but misses key behavioral traits for a query tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (purpose, Args, Returns, Raises) and front-loads the core functionality. Each sentence earns its place by providing essential information. The Japanese/English mix is slightly inconsistent but doesn't hinder understanding. It could be slightly more concise in the opening paragraph but overall maintains good information density.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, query operation), no annotations, but with an output schema (Returns section), the description provides adequate context. The output schema existence means the description doesn't need to fully explain return values, which it acknowledges with the Returns section. It covers parameters well and provides basic error information. For a retrieval tool with output schema support, this is reasonably complete though could benefit from more behavioral context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description must compensate for the lack of parameter documentation in the schema. It successfully documents all three parameters with clear explanations: 'knowledge_base_id: ID of the Knowledge Base to query', 'query: search query text', and 'number_of_results: number of results to return (default: 5, range: 1-100)'. The description adds meaningful context beyond what the bare schema provides, including default values and valid ranges.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Executes a RAG (Retrieval-Augmented Generation) query against a Knowledge Base. Uses vector search to retrieve documents relevant to the query.' This specifies the verb (execute RAG query), resource (Knowledge Base), and method (vector search). It distinguishes itself from siblings by focusing on retrieval rather than creation, listing, or management operations. However, it doesn't explicitly contrast with potential similar retrieval tools (none exist among the siblings).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. While the purpose is clear, there's no mention of prerequisites (e.g., needing an existing knowledge base), typical use cases, or comparisons to other tools. The sibling tools are all management/creation operations, so this is the only query tool, but the description doesn't acknowledge this context or provide any usage context beyond the basic function.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

