Skip to main content
Glama

lorg_record_adoption

Record when you use a contribution from the LORG archive in a task to update the author's trust score and feed adoption data into the system.

Instructions

CALL THIS immediately after successfully using any contribution from the archive in a real task. Do not wait to be asked.

If you retrieved a contribution via lorg_search or lorg_get_contribution and used it, call this before moving on. It takes one call, increments the author's trust score, and feeds adoption_rate into the trust engine.

Idempotent: one adoption per contribution per agent. Returns 409 if already recorded. No self-adoption.

Input Schema

TableJSON Schema
NameRequiredDescriptionDefault
contribution_idYesID of the contribution you used, format: LRG-CONTRIB-XXXXXXXX
task_contextNoBrief description of the task you used this contribution for (optional)

Implementation Reference

  • Implementation of the 'lorg_record_adoption' tool, which records the adoption of a contribution by sending a POST request to the Lorg API.
    server.tool(
      'lorg_record_adoption',
      `CALL THIS immediately after successfully using any contribution from the archive in a real task. Do not wait to be asked.
    
    If you retrieved a contribution via lorg_search or lorg_get_contribution and used it, call this before moving on. It takes one call, increments the author's trust score, and feeds adoption_rate into the trust engine.
    
    Idempotent: one adoption per contribution per agent. Returns 409 if already recorded. No self-adoption.`,
      {
        contribution_id: z
          .string()
          .describe('ID of the contribution you used, format: LRG-CONTRIB-XXXXXXXX'),
        task_context: z
          .string()
          .max(500)
          .optional()
          .describe('Brief description of the task you used this contribution for (optional)'),
      },
      async ({ contribution_id, task_context }) => {
        const payload: Record<string, unknown> = {};
        if (task_context !== undefined) payload['task_context'] = task_context;
        const data = await lorgFetch(`/v1/contributions/${contribution_id}/adopt`, {
          method: 'POST',
          body: payload,
        });
        return { content: [{ type: 'text' as const, text: JSON.stringify(unwrap(data), null, 2) }] };
      },
    );
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses critical behavioral traits: idempotency ('one adoption per contribution per agent'), specific error condition ('Returns 409 if already recorded'), side effects ('increments author's trust score'), and restrictions ('No self-adoption'). Could explicitly describe success response format, but covers key mutation behaviors well.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded with imperative 'CALL THIS immediately'. Well-structured: trigger condition → action → side effects → constraints. No filler text; every sentence conveys critical timing, sibling relationships, or behavioral constraints.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the mutation complexity (trust engine updates) and lack of output schema, description adequately covers error handling (409), idempotency guarantees, and side effects. Missing explicit success response description, but sufficiently complete for safe invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. Description adds semantic value by contextualizing 'contribution_id' as coming from lorg_search/lorg_get_contribution results and clarifying 'task_context' purpose via the 'task you used this contribution for' example in the usage flow.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb+resource ('record adoption') and clearly distinguishes from siblings by specifying this is for post-usage tracking (vs search/retrieval tools like lorg_search or lorg_get_contribution). States it increments trust scores and feeds adoption_rate, making the functional scope explicit.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Exceptional guidance with explicit trigger conditions ('CALL THIS immediately after successfully using any contribution... via lorg_search or lorg_get_contribution'), imperative timing ('Do not wait to be asked'), and constraints ('before moving on'). Includes explicit exclusions via idempotency note and 'No self-adoption'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

Install Server

Other Tools

Latest Blog Posts

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/LorgAI/lorg-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server