check_terminology

Identify and resolve inconsistent terminology usage in writing projects to maintain consistent language throughout documents.

Instructions

Find inconsistent term usage

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| project_path | No | Path to manuscript directory (defaults to current directory) | |
| scope | No | File scope pattern | |
| auto_detect | No | Auto-detect variants | true |
| terms | No | Specific terms to check | |
| limit | No | Maximum term groups to return | 20 |
| examples_per_variant | No | Examples per term variant | 3 |

Implementation Reference

  • MCP tool handler: parses input arguments and delegates to WritersAid.checkTerminology with resolved pagination limit
    private async checkTerminology(args: Record<string, unknown>) {
      const scope = args.scope as string | undefined;
      const autoDetect = (args.auto_detect as boolean) ?? true;
      const terms = args.terms as string[] | undefined;
      const limit = resolvePaginationLimit("check_terminology", args.limit as number | undefined);
      const examplesPerVariant = (args.examples_per_variant as number) || 3;
    
      return this.writersAid.checkTerminology({ scope, autoDetect, terms, limit, examplesPerVariant });
    }
  • Tool schema definition with input schema for MCP registration
    {
      name: "check_terminology",
      description: "Find inconsistent term usage",
      inputSchema: {
        type: "object",
        properties: {
          project_path: { type: "string", description: "Path to manuscript directory (defaults to current directory)" },
          scope: { type: "string", description: "File scope pattern" },
          auto_detect: { type: "boolean", description: "Auto-detect variants", default: true },
          terms: {
            type: "array",
            items: { type: "string" },
            description: "Specific terms to check",
          },
          limit: { type: "number", description: "Maximum term groups to return", default: 20 },
          examples_per_variant: { type: "number", description: "Examples per term variant", default: 3 },
        },
      },
    },
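An agent invoking this tool would pass an arguments object conforming to the schema above. A minimal sketch (the scope pattern and term values are illustrative, and the defaulting lines mirror the handler's own logic shown earlier):

```typescript
// Illustrative arguments for a check_terminology call.
// Every field is optional; omitted fields fall back to the schema defaults.
const args: Record<string, unknown> = {
  scope: "chapters/**/*.md",   // hypothetical: restrict the check to chapter files
  terms: ["email", "e-mail"],  // check these specific variants
  limit: 10,                   // cap the number of term groups returned
  examples_per_variant: 2,
};

// Reproduce the handler's defaulting for fields the caller omitted.
const autoDetect = (args.auto_detect as boolean) ?? true;
const examplesPerVariant = (args.examples_per_variant as number) || 3;

console.log(autoDetect, examplesPerVariant); // true 2
```

Note that `examples_per_variant` uses `||`, so an explicit `0` would also fall back to `3`, whereas `auto_detect` uses `??` and only defaults when the field is absent.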
  • WritersAid orchestrator method that delegates to TerminologyChecker
    async checkTerminology(options?: {
      scope?: string;
      autoDetect?: boolean;
      terms?: string[];
      limit?: number;
      examplesPerVariant?: number;
    }) {
      return this.terminologyChecker.checkTerminology(options || {});
    }
  • Core terminology consistency checker: handles specific terms or auto-detection, computes variants and reports inconsistencies
    async checkTerminology(options: {
      scope?: string;
      autoDetect?: boolean;
      terms?: string[];
      limit?: number;
      examplesPerVariant?: number;
    }): Promise<TerminologyReport> {
      const { scope, autoDetect = true, terms, limit, examplesPerVariant = 3 } = options;
    
      if (terms && terms.length > 0) {
        return this.checkSpecificTerms(terms, scope, limit, examplesPerVariant);
      }
    
      if (autoDetect) {
        return this.autoDetectVariants(scope, limit, examplesPerVariant);
      }
    
      return {
        groups: [],
        totalIssues: 0,
        filesAffected: 0,
      };
    }
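The excerpt above dispatches to `checkSpecificTerms` or `autoDetectVariants`, but neither detection strategy is shown. A plausible sketch of variant grouping, purely hypothetical (the real checker's normalization rules are not in the excerpt): terms that normalize to the same key after case-folding and stripping hyphens and spaces are treated as variants of one another.

```typescript
// Hypothetical normalization: case-fold and drop hyphens/whitespace,
// so "E-mail", "email", and "e mail" collapse to one key.
function normalizeTerm(term: string): string {
  return term.toLowerCase().replace(/[-\s]+/g, "");
}

// Group surface forms by normalized key; keys with more than one
// surface form would be reported as inconsistencies.
function groupVariants(terms: string[]): Map<string, string[]> {
  const groups = new Map<string, string[]>();
  for (const term of terms) {
    const key = normalizeTerm(term);
    const bucket = groups.get(key) ?? [];
    bucket.push(term);
    groups.set(key, bucket);
  }
  return groups;
}

const groups = groupVariants(["email", "E-mail", "e mail", "login", "log-in"]);
console.log(groups.get("email")); // [ 'email', 'E-mail', 'e mail' ]
```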
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. 'Find inconsistent term usage' implies a read-only analysis operation, but it doesn't specify whether this tool modifies files, requires specific permissions, has rate limits, or what the output format looks like. For a tool with 6 parameters and no output schema, this is a significant gap in behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
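One way to close this gap, sketched here under the assumption that the server adopts the MCP tool `annotations` field with its `readOnlyHint`/`idempotentHint` behavioral hints, is to register the tool with both hints and a description that states the tool is non-destructive:

```typescript
// Sketch only: tool registration with MCP behavioral annotations.
// The hint names follow the MCP ToolAnnotations shape; the expanded
// description is an illustration, not the server's actual text.
const tool = {
  name: "check_terminology",
  description:
    "Find inconsistent term usage across manuscript files. Read-only: " +
    "analyzes text and returns grouped variants without modifying files.",
  annotations: {
    readOnlyHint: true,   // no side effects on the project
    idempotentHint: true, // repeated calls yield the same report
  },
};

console.log(tool.annotations.readOnlyHint); // true
```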

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient phrase with zero wasted words. It's appropriately sized for a tool name that's already descriptive, and it's front-loaded with the core purpose. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (6 parameters, no annotations, no output schema), the description is insufficiently complete. It doesn't explain what 'inconsistent term usage' means operationally, what the tool returns, or how results are structured. For a tool that presumably performs textual analysis across files, more context about behavior and output is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so all parameters are documented in the schema itself. The description adds no additional parameter semantics beyond what's already in the schema descriptions. According to scoring rules, when schema coverage is high (>80%), the baseline is 3 even with no parameter information in the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Find inconsistent term usage' clearly states the tool's purpose with a specific verb ('Find') and target ('inconsistent term usage'), but it doesn't differentiate from sibling tools like 'find_duplicates' or 'find_concept_contradictions' that might have overlapping semantic domains. It's not tautological but remains somewhat vague about what constitutes 'inconsistent term usage' in this context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'find_duplicates' or 'find_concept_contradictions'. There's no mention of prerequisites, typical use cases, or exclusions. The agent must infer usage solely from the tool name and parameters.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
