get_annotation_confidence

Retrieve quality scores for UniProt protein annotations to assess reliability and support data-driven decisions in biological research.

Instructions

Quality scores for different annotations

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| accession | Yes | UniProt accession number | (none) |
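Under this schema, a call supplies only the accession. A minimal argument object (the accession value is illustrative):

```typescript
// Minimal valid arguments for get_annotation_confidence.
// 'P01308' (human insulin) is an illustrative UniProt accession.
const args = { accession: 'P01308' };
console.log(JSON.stringify(args)); // {"accession":"P01308"}
```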

Implementation Reference

  • The main handler function for the 'get_annotation_confidence' tool. Validates input, fetches UniProt protein data via API, extracts quality metrics like entryType, proteinExistence, annotationScore, evidence codes, review status, and reference count, then returns formatted JSON.
    private async handleGetAnnotationConfidence(args: any) {
      if (!isValidProteinInfoArgs(args)) {
        throw new McpError(ErrorCode.InvalidParams, 'Invalid annotation confidence arguments');
      }
    
      try {
        const response = await this.apiClient.get(`/uniprotkb/${args.accession}`, {
          params: { format: 'json' },
        });
    
        const protein = response.data;
        const confidenceInfo = {
          accession: protein.primaryAccession,
          entryType: protein.entryType,
          proteinExistence: protein.proteinExistence,
          annotationScore: protein.annotationScore || 'Not available',
          evidenceCodes: protein.features?.map((f: any) => f.evidences).flat().filter(Boolean) || [],
          reviewStatus: protein.entryType === 'UniProtKB reviewed (Swiss-Prot)' ? 'Reviewed' : 'Unreviewed',
          referenceCount: protein.references?.length || 0,
        };
    
        return {
          content: [
            {
              type: 'text',
              text: JSON.stringify(confidenceInfo, null, 2),
            },
          ],
        };
      } catch (error) {
        return {
          content: [
            {
              type: 'text',
              text: `Error fetching annotation confidence: ${error instanceof Error ? error.message : 'Unknown error'}`,
            },
          ],
          isError: true,
        };
      }
    }
  • src/index.ts:777-778 (registration)
    Tool dispatch registration in the CallToolRequestSchema handler switch statement, routing calls to the specific handler function.
    case 'get_annotation_confidence':
      return this.handleGetAnnotationConfidence(args);
  • src/index.ts:674-684 (registration)
    Tool registration in the ListToolsRequestSchema response, defining name, description, and input schema.
    {
      name: 'get_annotation_confidence',
      description: 'Quality scores for different annotations',
      inputSchema: {
        type: 'object',
        properties: {
          accession: { type: 'string', description: 'UniProt accession number' },
        },
        required: ['accession'],
      },
    },
  • Input schema definition for the tool, specifying required 'accession' parameter of type string.
    inputSchema: {
      type: 'object',
      properties: {
        accession: { type: 'string', description: 'UniProt accession number' },
      },
      required: ['accession'],
    },
  • Input validation helper function used by the handler to check if arguments contain a valid accession (reused across multiple tools). Note: format validation is present but not used in this tool.
    const isValidProteinInfoArgs = (
      args: any
    ): args is { accession: string; format?: string } => {
      return (
        typeof args === 'object' &&
        args !== null &&
        typeof args.accession === 'string' &&
        args.accession.length > 0 &&
        (args.format === undefined || ['json', 'tsv', 'fasta', 'xml'].includes(args.format))
      );
    };
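The guard's accept/reject behavior can be exercised directly. A small sketch, with the guard reproduced so it is self-contained (accession values are illustrative):

```typescript
// Same type guard as above, reproduced so this sketch is self-contained.
const isValidProteinInfoArgs = (
  args: any
): args is { accession: string; format?: string } => {
  return (
    typeof args === 'object' &&
    args !== null &&
    typeof args.accession === 'string' &&
    args.accession.length > 0 &&
    (args.format === undefined || ['json', 'tsv', 'fasta', 'xml'].includes(args.format))
  );
};

console.log(isValidProteinInfoArgs({ accession: 'P01308' }));                  // true
console.log(isValidProteinInfoArgs({ accession: 'P01308', format: 'fasta' })); // true
console.log(isValidProteinInfoArgs({ accession: '' }));                        // false (empty accession)
console.log(isValidProteinInfoArgs({ format: 'json' }));                       // false (missing accession)
console.log(isValidProteinInfoArgs({ accession: 'P01308', format: 'csv' }));   // false (unsupported format)
```

Note that a guard like this validates shape only: any non-empty string passes as an accession, so malformed identifiers still reach the UniProt API and fail there.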
Behavior 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. However, it only states 'Quality scores for different annotations' without explaining what 'quality scores' are (e.g., confidence values, metrics), how they are returned, or any behavioral traits like rate limits, permissions, or response format. This is inadequate for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single phrase, 'Quality scores for different annotations', which is concise but under-specified—it lacks necessary detail for clarity. While it is front-loaded and wastes no words, the brevity comes at the cost of usefulness, making it more of a placeholder than an informative description.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity implied by the tool name (involving 'annotation confidence') and the lack of annotations and output schema, the description is incomplete. It does not explain what 'quality scores' are, how they are structured, or what annotations are covered, leaving significant gaps for the agent to understand the tool's functionality and output.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the single parameter 'accession' clearly documented as a 'UniProt accession number'. The description adds no additional meaning beyond this, such as examples or constraints. Since schema coverage is high, the baseline score of 3 is appropriate, as the schema adequately handles parameter semantics without description enhancement.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
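One way to add the examples and constraints the review mentions is to enrich the schema itself. A hypothetical sketch — the regex is the commonly documented UniProt accession format, and neither it nor the example value appears in the server's actual schema:

```typescript
// Hypothetical enriched schema: adds an example value and a format constraint
// for the accession parameter. Not part of the actual server code.
const enrichedSchema = {
  type: 'object',
  properties: {
    accession: {
      type: 'string',
      description: "UniProt accession number, e.g. 'P01308'",
      // Standard UniProt accession pattern (6- or 10-character forms),
      // as documented by UniProt; included here as an assumption.
      pattern:
        '^(?:[OPQ][0-9][A-Z0-9]{3}[0-9]|[A-NR-Z][0-9](?:[A-Z][A-Z0-9]{2}[0-9]){1,2})$',
    },
  },
  required: ['accession'],
};

const accessionPattern = new RegExp(enrichedSchema.properties.accession.pattern);
console.log(accessionPattern.test('P01308'));           // true
console.log(accessionPattern.test('not-an-accession')); // false
```

A `pattern` constraint lets clients reject malformed accessions before the API call, instead of relying on the length-only check in the type guard.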

Purpose 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Quality scores for different annotations' is vague and tautological—it essentially restates the tool name 'get_annotation_confidence' without specifying what resource it acts on or what 'quality scores' entail. It does not clearly distinguish this tool from siblings like 'get_protein_info' or 'get_protein_features', which might also provide annotation-related data. The purpose lacks a specific verb and target resource.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention any context, prerequisites, or exclusions, nor does it refer to sibling tools. This leaves the agent with no information to decide between this tool and others like 'get_protein_info' or 'get_protein_features' for annotation-related queries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
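Taken together, the dimensions above point toward a fuller registration entry. A hypothetical rewrite — all wording is illustrative, not taken from the server; the sibling tool names are the ones the review itself mentions:

```typescript
// Hypothetical improved registration entry addressing the review's criticisms.
// The description text is a suggestion, not the server's actual description.
const improvedRegistration = {
  name: 'get_annotation_confidence',
  description:
    'Retrieve annotation quality metrics for one UniProt entry as JSON: ' +
    'annotation score, protein existence level, review status (Swiss-Prot vs TrEMBL), ' +
    'feature evidence codes, and reference count. Read-only; performs a single ' +
    "UniProt REST API request. Use this tool to assess how reliable an entry's " +
    'annotations are; use get_protein_info or get_protein_features for the ' +
    'annotations themselves.',
  inputSchema: {
    type: 'object',
    properties: {
      accession: { type: 'string', description: 'UniProt accession number' },
    },
    required: ['accession'],
  },
};
console.log(improvedRegistration.name); // get_annotation_confidence
```

A description in this style front-loads the resource and verb, discloses read-only behavior and the single outbound API call, and names sibling tools so an agent can choose between them.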
