contribute_project

Add debugging solutions and reusable skills to a project's knowledge base. Store error fixes, patterns, and architecture decisions for future reference.

Instructions

Add knowledge to project hive. TRIGGERS: 'add to hive', 'update hive', 'contribute to hive', 'store in hive'. When user says 'update hive', analyze recent work and contribute automatically. When user says 'add to hive', ask what they want to store. Stores solutions, patterns, pitfalls, architecture decisions, etc. Private by default, optionally public. Categories are dynamic - user can create any category name.

Input Schema

Name          Required   Description                                                        Default
user_id       No         User ID (auto-detected from .user_id in cwd if not provided)
project_id    Yes        Project identifier
query         Yes        Error message or problem description
solution      Yes        What fixed it
category      No         Category (auto-detected if not provided)
is_public     No         Make this entry public (default: false/private)
project_path  No         Project directory path (required for local storage)
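
A sketch of a tool call's arguments (the project ID, paths, and values are hypothetical; `user_id` is omitted so it is auto-detected from the `.user_id` file):

```json
{
  "project_id": "hivemind-mcp",
  "query": "TypeError: fetch failed (ECONNREFUSED) when calling the Supabase edge function",
  "solution": "Set API_BASE to the deployed functions URL instead of localhost",
  "category": "networking",
  "is_public": false,
  "project_path": "/home/dev/hivemind-mcp"
}
```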

Implementation Reference

  • Core handler for 'contribute_project' MCP tool. Supports local file storage (.hive.json) or cloud (Supabase). Auto-detects user_id from .user_id file. For cloud, POSTs to /contribute-project endpoint.
    export async function contributeProject(
      userId: string | null,
      projectId: string,
      query: string,
      solution: string,
      category?: string,
      isPublic: boolean = false,
      projectPath?: string
    ): Promise<ContributeProjectResult> {
      // Auto-detect user_id if not provided
      if (!userId) {
        userId = await getUserId(projectPath);
        if (!userId) {
          throw new Error('No .user_id file found. Run init_hive first.');
        }
      }
    
      // Check if local storage
      if (userId.startsWith('local-') && projectPath) {
        const hive = await readLocalHive(projectPath);
        if (!hive) {
          throw new Error('Local hive not found. Run init_hive first.');
        }
    
        const newEntry: LocalHiveEntry = {
          id: hive.entries.length + 1,
          query,
          solution,
          category: category || 'general',
          created_at: new Date().toISOString()
        };
    
        hive.entries.push(newEntry);
        await writeLocalHive(projectPath, hive);
    
        return {
          success: true,
          entry_id: newEntry.id,
          message: `Added to ${projectId} hive (local)`
        };
      }
    
      // Cloud storage - use API
      const response = await fetch(`${API_BASE}/contribute-project`, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          user_id: userId,
          project_id: projectId,
          query,
          solution,
          category,
          is_public: isPublic
        }),
      });
    
      if (!response.ok) {
        throw new Error(`Contribute project failed: ${response.statusText}`);
      }
    
      return response.json();
    }
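The local-storage branch above appends an entry to the `.hive.json` structure and assigns a sequential ID. That append step can be isolated as a pure function for illustration; the `LocalHive` and `LocalHiveEntry` shapes below are inferred from the handler and are a sketch, not the project's actual helpers:

```typescript
// Entry shape inferred from the handler's local-storage branch.
interface LocalHiveEntry {
  id: number;
  query: string;
  solution: string;
  category: string;
  created_at: string;
}

interface LocalHive {
  entries: LocalHiveEntry[];
}

// Append a new entry, mirroring the handler: sequential ID based on
// current length, 'general' as the fallback category, ISO timestamp.
function appendEntry(
  hive: LocalHive,
  query: string,
  solution: string,
  category?: string
): LocalHiveEntry {
  const entry: LocalHiveEntry = {
    id: hive.entries.length + 1,
    query,
    solution,
    category: category ?? "general",
    created_at: new Date().toISOString(),
  };
  hive.entries.push(entry);
  return entry;
}

const hive: LocalHive = { entries: [] };
const e = appendEntry(hive, "ECONNREFUSED on port 5432", "Start the Postgres container first");
console.log(e.id, e.category); // prints: 1 general
```

Note that the sequential-ID scheme assumes entries are never removed from the file; deleting an entry and appending a new one would produce a duplicate ID.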
  • Tool schema definition including name, description, and inputSchema for validation in MCP ListToolsRequestHandler.
    {
      name: "contribute_project",
      description:
        "Add knowledge to project hive. TRIGGERS: 'add to hive', 'update hive', 'contribute to hive', 'store in hive'. When user says 'update hive', analyze recent work and contribute automatically. When user says 'add to hive', ask what they want to store. Stores solutions, patterns, pitfalls, architecture decisions, etc. Private by default, optionally public. Categories are dynamic - user can create any category name.",
      inputSchema: {
        type: "object",
        properties: {
          user_id: {
            type: "string",
            description: "Optional: User ID (auto-detected from .user_id in cwd if not provided)",
          },
          project_id: {
            type: "string",
            description: "Project identifier",
          },
          query: {
            type: "string",
            description: "Error message or problem description",
          },
          solution: {
            type: "string",
            description: "What fixed it",
          },
          category: {
            type: "string",
            description: "Optional category (auto-detected if not provided)",
          },
          is_public: {
            type: "boolean",
            description: "Make this entry public (default: false/private)",
          },
          project_path: {
            type: "string",
            description: "Optional: Project directory path (required for local storage)",
          },
        },
        required: ["project_id", "query", "solution"],
      },
    },
  • MCP CallToolRequestSchema switch case dispatcher that extracts arguments and calls the contributeProject handler function.
    case "contribute_project": {
      const result = await contributeProject(
        args?.user_id as string,
        args?.project_id as string,
        args?.query as string,
        args?.solution as string,
        args?.category as string | undefined,
        args?.is_public as boolean | undefined,
        args?.project_path as string | undefined
      );
      return {
        content: [{ type: "text", text: JSON.stringify(result, null, 2) }],
      };
    }
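The dispatcher wraps whatever the handler returns in the MCP text-content envelope. A minimal sketch of that wrapping step (the `ContributeProjectResult` shape is assumed from the local-storage branch above):

```typescript
// Result shape assumed from the handler's local-storage return value.
interface ContributeProjectResult {
  success: boolean;
  entry_id: number;
  message: string;
}

// Serialize the handler result into the MCP text-content envelope,
// as the CallToolRequestSchema case does.
function toToolResponse(result: ContributeProjectResult) {
  return {
    content: [{ type: "text", text: JSON.stringify(result, null, 2) }],
  };
}

const res = toToolResponse({
  success: true,
  entry_id: 7,
  message: "Added to demo hive (local)",
});
console.log(res.content[0].type); // prints: text
```

The client receives the JSON payload as pretty-printed text, so any structured fields (like `entry_id`) must be re-parsed from the string if the caller needs them programmatically.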
  • Backend Supabase Edge Function handler for HTTP POST /contribute-project. Validates input, inserts new knowledge entry into 'knowledge_entries' table with project-specific fields.
    async function handleContributeProject(supabase: any, body: any, corsHeaders: any) {
      const { user_id, project_id, query, solution, category, is_public = false } = body;
    
      if (!user_id || !project_id || !query || !solution) {
        return new Response(JSON.stringify({ error: 'user_id, project_id, query, and solution required' }), {
          status: 400,
          headers: { ...corsHeaders, 'Content-Type': 'application/json' }
        });
      }
    
      // Get project name from first entry or use project_id as fallback
      const { data: existingEntry } = await supabase
        .from('knowledge_entries')
        .select('project_name')
        .eq('user_id', user_id)
        .eq('project_id', project_id)
        .limit(1)
        .single();
    
      const project_name = existingEntry?.project_name || project_id;
    
      // Insert project knowledge entry (solutions is JSONB array)
      const { data, error } = await supabase
        .from('knowledge_entries')
        .insert({
          user_id,
          project_id,
          project_name,
          query,
          solutions: [{
            solution,
            success_rate: null,
            command: null,
            note: null
          }],
          category: category || 'general',
          is_public,
          type: 'fix'
        })
        .select()
        .single();
    
      if (error) {
        console.error('Contribute project error:', error);
        return new Response(JSON.stringify({ error: 'Failed to store project knowledge' }), {
          status: 500,
          headers: { ...corsHeaders, 'Content-Type': 'application/json' }
        });
      }
    
      return new Response(JSON.stringify({
        success: true,
        entry_id: data.id,
        message: `Added to ${project_name} KB${is_public ? ' (public)' : ' (private)'}`,
        _ctx: "HIVE COMPOUNDS: Bigger hive = smarter Claude, less re-explaining. CONTRIBUTE AFTER WORK: Add solutions, patterns, pitfalls when done."
      }), {
        headers: { ...corsHeaders, 'Content-Type': 'application/json' }
      });
    }
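On success, the edge function returns a payload shaped like the following (the `entry_id` and project name are illustrative; `_ctx` is the fixed string from the handler):

```json
{
  "success": true,
  "entry_id": 1042,
  "message": "Added to hivemind-mcp KB (private)",
  "_ctx": "HIVE COMPOUNDS: Bigger hive = smarter Claude, less re-explaining. CONTRIBUTE AFTER WORK: Add solutions, patterns, pitfalls when done."
}
```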
  • HTTP route registration in Supabase public gateway switch that directs /contribute-project requests to handleContributeProject.
    case 'contribute-project':
      return await handleContributeProject(supabase, body, corsHeaders);
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the tool's behavior well for different triggers, mentions privacy defaults, and explains dynamic categories. However, it doesn't cover important aspects like error handling, response format, whether this is a write operation (implied but not stated), or any rate limits or authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core purpose. It efficiently covers usage scenarios, storage content types, and privacy settings in a few sentences. While slightly dense, every sentence adds value and there's minimal redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a write operation with 7 parameters and no annotations or output schema, the description provides good usage context but lacks important details. It doesn't clarify what happens after contribution (success/failure indicators), doesn't mention potential side effects, and while it describes what can be stored, it doesn't explain the relationship between 'query' and 'solution' parameters. The description is adequate but has clear gaps for a mutation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 7 parameters thoroughly. The description adds no meaningful parameter semantics beyond the schema; it mentions that categories are dynamic but does not explain how that relates to the 'category' parameter. The baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Add knowledge to project hive' with specific examples of what can be stored (solutions, patterns, pitfalls, architecture decisions). It distinguishes from siblings like 'delete_hive' or 'search_kb' by focusing on contribution rather than retrieval or deletion. However, it doesn't explicitly differentiate from 'contribute_solution' or 'update_project_entry' which appear to be similar operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines with trigger phrases ('add to hive', 'update hive', etc.) and different behaviors for each trigger. It specifies when to ask for user input versus automatic analysis, and mentions privacy settings (private by default, optionally public). This gives clear context for when and how to use this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Kevthetech143/hivemind-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server