jp_lit_refine_results

Refine saved Japanese literature search results locally: sort, filter, and perform set operations (union, intersection, minus) without re-searching upstream. Optionally retrieve duplicate clusters.

Instructions

Sort, filter, and apply set operations to saved jp_lit_search results locally, without re-querying upstream; duplicate-candidate clusters are returned only when requested.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| cache_key | No | Non-empty string (trimmed) | |
| cache_keys | No | Array of non-empty strings, at least 1 entry | |
| session_id | No | String matching `\d{4}-\d{2}-\d{2}-\d{6}` | |
| combine | No | One of `union`, `intersection`, `minus` | `union` |
| key_by | No | One of `source_record`, `duplicate_key`, `title_author_year` | `source_record` |
| sort_by | No | One of `issued_at`, `title` | |
| sort_order | No | One of `asc`, `desc` | `asc` |
| limit | No | Positive integer, max 200 | 30 |
| offset | No | Non-negative integer | 0 |
| include_duplicate_clusters | No | Boolean | false |
| cluster_limit | No | Positive integer | 20 |
| cluster_offset | No | Non-negative integer | 0 |
| cluster_member_limit | No | Positive integer | 5 |
| filters | No | Object; see refineResultsFiltersSchema below | |
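Put together, a call might look like the following. This is a hypothetical payload: the `cache_keys` values are placeholders, while field names, enum values, and constraints come from the schema above.

```typescript
// Hypothetical input: intersect two cached searches, keep records issued
// from 1900 onward that are available online, and sort newest first.
// "search-aaa" / "search-bbb" are placeholder cache keys.
const refineInput = {
  cache_keys: ["search-aaa", "search-bbb"],
  combine: "intersection",
  key_by: "duplicate_key",
  sort_by: "issued_at",
  sort_order: "desc",
  limit: 50,   // must be a positive integer, max 200
  offset: 0,
  filters: { issued_from: "1900", online: true }
};
```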

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| base_cache_key | Yes | String (first resolved cache key) | |
| base_cache_keys | Yes | Array of strings | |
| combine | Yes | One of `union`, `intersection`, `minus` | |
| key_by | Yes | One of `source_record`, `duplicate_key`, `title_author_year` | |
| totals_by_base | Yes | Array of `{ cache_key, total }` objects | |
| total_before | Yes | Non-negative integer | |
| total_after | Yes | Non-negative integer | |
| limit | Yes | Positive integer | |
| offset | Yes | Non-negative integer | |
| items | Yes | Array of search items | |
| cluster_summary | No | Duplicate cluster summary | |
| clusters | No | Array of duplicate clusters | |
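The totals fields read as a small pipeline: `totals_by_base` counts each cached result set, `total_before` is the size after the set operation, `total_after` the size after filtering (sorting does not change the count), and `items` is the `limit`/`offset` slice. A hypothetical example with made-up counts:

```typescript
// Hypothetical output for an intersection of two cached searches of 120 and
// 80 items, where 45 keys overlap and 30 of those survive the filters.
const exampleOutput = {
  base_cache_key: "search-aaa",            // placeholder cache key
  base_cache_keys: ["search-aaa", "search-bbb"],
  combine: "intersection",
  key_by: "duplicate_key",
  totals_by_base: [
    { cache_key: "search-aaa", total: 120 },
    { cache_key: "search-bbb", total: 80 }
  ],
  total_before: 45, // size after the set operation
  total_after: 30,  // size after filtering
  limit: 30,
  offset: 0,
  items: []         // the sliced page of search items would go here
};
```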

Implementation Reference

  • Main handler function createJpLitRefineResultsTool - parses input, resolves cache keys, fetches cached search results, combines/filters/sorts items, builds duplicate clusters, and returns structured output
    export function createJpLitRefineResultsTool(
      cache: FileCache,
      sessions: SessionStore
    ) {
      return async (input: unknown) => {
        const parsed = refineResultsInputSchema.parse(input);
        const cacheKeys = await resolveBaseCacheKeys(parsed, sessions);
        const cachedResults = await Promise.all(
          cacheKeys.map(async (cacheKey) => {
            const cached = await cache.read<SearchOutput>("jp_lit_search", cacheKey);
            if (!cached) {
              throw new Error(`cache_key=${cacheKey} のキャッシュが見つかりません`);
            }
            return {
              cache_key: cacheKey,
              items: cached.structured_content.items
            };
          })
        );
        const combinedItems = combineItems(
          cachedResults.map((result) => result.items),
          parsed
        );
        const filteredItems = applyFilters(combinedItems, parsed);
        const sortedItems = applySort(filteredItems, parsed);
        const slicedItems = sortedItems.slice(parsed.offset, parsed.offset + parsed.limit);
        const rawUnionClusterCandidates = applySort(
          applyFilters(cachedResults.flatMap((result) => result.items), parsed),
          parsed
        );
        const clusterCandidates =
          parsed.combine === "union" ? rawUnionClusterCandidates : sortedItems;
        const clusterOutput = parsed.include_duplicate_clusters
          ? buildDuplicateClusters(clusterCandidates, {
              clusterLimit: parsed.cluster_limit,
              clusterOffset: parsed.cluster_offset,
              memberLimit: parsed.cluster_member_limit
            })
          : null;
    
        const structuredContent: RefineResultsOutput = refineResultsOutputSchema.parse({
          base_cache_key: cacheKeys[0],
          base_cache_keys: cacheKeys,
          combine: parsed.combine,
          key_by: parsed.key_by,
          totals_by_base: cachedResults.map((entry) => ({
            cache_key: entry.cache_key,
            total: entry.items.length
          })),
          total_before: combinedItems.length,
          total_after: sortedItems.length,
          limit: parsed.limit,
          offset: parsed.offset,
          items: slicedItems,
          ...(clusterOutput
            ? {
                cluster_summary: clusterOutput.summary,
                clusters: clusterOutput.clusters
              }
            : {})
        });
    
        return {
          content: [
            {
              type: "text" as const,
              text: JSON.stringify(structuredContent, null, 2)
            }
          ],
          structuredContent
        };
      };
    }
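The `combineItems` helper the handler calls is not shown on this page. A minimal sketch of what keyed union/intersection/minus could look like, assuming each item is compared by a string key derived from the `key_by` setting:

```typescript
type KeyFn<T> = (item: T) => string;

// Sketch only (the real combineItems is not shown on this page):
// union keeps the first occurrence of each key across all lists,
// intersection keeps items whose key appears in every list, and
// minus keeps items from the first list whose key appears in no later list.
function combineItems<T>(
  lists: T[][],
  mode: "union" | "intersection" | "minus",
  keyOf: KeyFn<T>
): T[] {
  if (lists.length === 0) return [];
  const keySets = lists.map((list) => new Set(list.map(keyOf)));
  // Drop repeated keys while preserving first-seen order.
  const dedupe = (items: T[]): T[] => {
    const seen = new Set<string>();
    return items.filter((item) => {
      const k = keyOf(item);
      if (seen.has(k)) return false;
      seen.add(k);
      return true;
    });
  };
  switch (mode) {
    case "union":
      return dedupe(lists.flat());
    case "intersection":
      return dedupe(lists[0]).filter((item) =>
        keySets.every((set) => set.has(keyOf(item)))
      );
    case "minus":
      return dedupe(lists[0]).filter((item) =>
        keySets.slice(1).every((set) => !set.has(keyOf(item)))
      );
  }
}
```

For example, with lists `[{id:"x"},{id:"y"}]` and `[{id:"y"},{id:"z"}]` keyed by `id`, `minus` keeps only `{id:"x"}`.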
  • refineResultsInputSchema defines input params: cache_key, cache_keys, session_id, combine (union/intersection/minus), key_by, sort_by/order, limit/offset, include_duplicate_clusters, cluster_* config, and filters
    export const refineResultsInputSchema = z.object({
      cache_key: z.string().trim().min(1).optional(),
      cache_keys: z.array(z.string().trim().min(1)).min(1).optional(),
      session_id: z.string().trim().regex(/^\d{4}-\d{2}-\d{2}-\d{6}$/).optional(),
      combine: z.enum(["union", "intersection", "minus"]).default("union"),
      key_by: z
        .enum(["source_record", "duplicate_key", "title_author_year"])
        .default("source_record"),
      sort_by: z.enum(["issued_at", "title"]).optional(),
      sort_order: z.enum(["asc", "desc"]).default("asc"),
      limit: z.number().int().positive().max(200).default(30),
      offset: z.number().int().nonnegative().default(0),
      include_duplicate_clusters: z.boolean().default(false),
      cluster_limit: z.number().int().positive().default(20),
      cluster_offset: z.number().int().nonnegative().default(0),
      cluster_member_limit: z.number().int().positive().default(5),
      filters: refineResultsFiltersSchema.optional()
    });
  • refineResultsOutputSchema defines output shape: base_cache_key(s), totals_by_base, total_before/after, limit, offset, items, cluster_summary, clusters
    export const refineResultsOutputSchema = z.object({
      base_cache_key: z.string(),
      base_cache_keys: z.array(z.string()),
      combine: z.enum(["union", "intersection", "minus"]),
      key_by: z.enum(["source_record", "duplicate_key", "title_author_year"]),
      totals_by_base: z.array(
        z.object({
          cache_key: z.string(),
          total: z.number().int().nonnegative()
        })
      ),
      total_before: z.number().int().nonnegative(),
      total_after: z.number().int().nonnegative(),
      limit: z.number().int().positive(),
      offset: z.number().int().nonnegative(),
      items: z.array(searchItemSchema),
      cluster_summary: duplicateClusterSummarySchema.optional(),
      clusters: z.array(duplicateClusterSchema).optional()
    });
  • refineResultsFiltersSchema defines filter fields: source, issued_from/to, online, digital_collection, title_contains, author_contains
    const refineResultsFiltersSchema = z.object({
      source: sourceSchema.optional(),
      issued_from: z.string().optional(),
      issued_to: z.string().optional(),
      online: z.boolean().optional(),
      digital_collection: z.boolean().optional(),
      title_contains: z.string().trim().min(1).optional(),
      author_contains: z.string().trim().min(1).optional()
    });
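How `applyFilters` interprets these fields is not shown here. A minimal sketch of a matching predicate, assuming an item shape with the corresponding fields and lexicographic comparison of ISO-style date strings for `issued_from`/`issued_to` (both the item shape and the matching rules are assumptions):

```typescript
// Assumed item shape; the real searchItemSchema is not shown on this page.
interface SearchItem {
  source?: string;
  issued_at?: string; // ISO-style date string, e.g. "1925" or "1925-04-01"
  online?: boolean;
  digital_collection?: boolean;
  title?: string;
  authors?: string[];
}

interface RefineFilters {
  source?: string;
  issued_from?: string;
  issued_to?: string;
  online?: boolean;
  digital_collection?: boolean;
  title_contains?: string;
  author_contains?: string;
}

// Every defined filter must match; undefined filters are ignored.
function matchesFilters(item: SearchItem, f: RefineFilters): boolean {
  if (f.source !== undefined && item.source !== f.source) return false;
  // Lexicographic comparison works for ISO-style date strings.
  if (f.issued_from !== undefined && (item.issued_at ?? "") < f.issued_from) return false;
  if (f.issued_to !== undefined && (item.issued_at ?? "\uffff") > f.issued_to) return false;
  if (f.online !== undefined && item.online !== f.online) return false;
  if (f.digital_collection !== undefined && item.digital_collection !== f.digital_collection) return false;
  if (f.title_contains !== undefined && !(item.title ?? "").includes(f.title_contains)) return false;
  if (f.author_contains !== undefined && !(item.authors ?? []).some((a) => a.includes(f.author_contains!))) return false;
  return true;
}
```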
  • src/server.ts:417-425 (registration)
    Registration of 'jp_lit_refine_results' tool in server.ts with description, input/output schemas, and handler
    server.registerTool(
      "jp_lit_refine_results",
      {
        description: "保存済み jp_lit_search 結果を upstream 再検索せずローカルでソート・フィルタ・集合演算し、必要時だけ重複候補クラスタも返す",
        inputSchema: refineResultsInputSchema,
        outputSchema: refineResultsOutputSchema
      },
      refineResultsTool
    );
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so description carries full burden. Discloses local operation, set operations, and duplicate clusters but lacks details on side effects, permissions, or cache modification. Adequate but not comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Very concise single sentence in Japanese, front-loaded with core operations. Could structure by operation type, but remains efficient and readable.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 14 parameters, nested filters, and multiple operations, the description is incomplete. It omits details about cache_key/cache_keys distinction, key_by options, filter fields, and cluster parameters. Output schema may fill gaps, but description alone is insufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%. Description mentions sort, filter, set operations, and duplicate clusters, covering only a subset of 14 parameters. Many parameters (e.g., cache_key, session_id, limit, offset) are not explained.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool refines saved jp_lit_search results locally with sorting, filtering, set operations, and optional duplicate cluster return. It distinguishes from siblings by specifying no upstream re-search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage on saved results from jp_lit_search, providing clear context but no explicit when-not-to-use or alternatives. The sibling jp_lit_search is implied as the prerequisite.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
