
Orcho MCP Server

by guardd

assess_risk

Analyze coding prompts for security risks by assessing potential dangers, blast radius, and complexity before code execution in Cursor editor.

Instructions

Assess the risk level of your coding prompt using Orcho risk analysis API. CRITICAL: You (Cursor AI) have access to the editor state - ALWAYS include context when available: 1) Pass the currently open/active file path as current_file (you can see this in the editor tabs), 2) Analyze the user prompt to determine which files will be modified and pass them as other_files array. Without context, only basic risk assessment is available. With context, you get blast radius and complexity analysis.
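
For example, a context-aware call from the editor might pass arguments shaped like the following sketch (the file paths are hypothetical, chosen only to illustrate the parameters):

```javascript
// Illustrative arguments for an assess_risk call with editor context.
// The paths below are example values, not part of the tool specification.
const args = {
  task: 'Refactor the session handling in the auth module',
  current_file: 'src/auth/session.js', // the file currently open in the editor
  other_files: ['src/auth/login.js'],  // files the prompt is expected to touch
};

console.log(JSON.stringify(args));
```

Omitting `current_file` is valid, but the tool then falls back to basic assessment without blast radius or complexity analysis.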

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| task | Yes | The coding task or prompt you want to assess for risk. | |
| current_file | No | STRONGLY RECOMMENDED: Path to the currently open/active file in the Cursor editor (e.g., "src/main.js" or "mcp-server.js"). You (Cursor AI) can see which file is open in the editor tabs - always pass this if available. This enables context-aware assessment with blast radius and complexity analysis. If no file is open or unknown, omit this parameter. | |
| other_files | No | STRONGLY RECOMMENDED: Array of file paths that will be touched/modified by this prompt. Analyze the user prompt to determine which files will be affected (e.g., if prompt says "update login.js and auth.js", include ["login.js", "auth.js"]). If no other files will be touched, pass an empty array []. This enables accurate blast radius calculation. Always try to include this based on prompt analysis. | |
| dependency_graph | No | Optional JSON dependency graph of the project. Can be generated from package.json, requirements.txt, etc. | |
| weights | No | Optional custom weights for risk calculation factors. | |
| aiignore_file | No | Optional path to .aiignore file for excluding files from analysis. | |

Implementation Reference

  • Executes the assess_risk tool: extracts arguments, builds context from inputs, calls checkRiskLevel helper, formats markdown response with risk level, score, and details.
    if (name === 'assess_risk') {
      const task = args.task;
      
      // Build context object if context parameters are provided
      const context = {};
      if (args.current_file) {
        context.current_file = args.current_file;
      }
      if (args.other_files && Array.isArray(args.other_files) && args.other_files.length > 0) {
        context.other_files = args.other_files;
      }
      if (args.dependency_graph) {
        context.dependency_graph = args.dependency_graph;
      }
      if (args.weights) {
        context.weights = args.weights;
      }
      if (args.aiignore_file) {
        context.aiignore_file = args.aiignore_file;
      }
      
      // Assess risk level (with or without context)
      const riskAssessment = await checkRiskLevel(task, Object.keys(context).length > 0 ? context : null);
      
      // Format response
      let response = `šŸ” **Orcho - Risk Assessment**\n\n`;
      response += `**Your Prompt:**\n${task}\n\n`;
      
      // Show context if used
      if (context.current_file) {
        response += `**Context Used:**\n`;
        response += `- Current File: ${context.current_file}\n`;
        if (context.other_files && context.other_files.length > 0) {
          response += `- Other Files: ${context.other_files.join(', ')}\n`;
        }
        response += `\n`;
      }
      
      response += `---\n`;
      response += `**Risk Level:** ${riskAssessment.level.toUpperCase()}\n`;
      response += `**Risk Score:** ${riskAssessment.score}/100\n`;
      
      if (riskAssessment.details) {
        response += `\n**Details:**\n`;
        response += `\`\`\`json\n${JSON.stringify(riskAssessment.details, null, 2)}\n\`\`\`\n`;
      } else {
        response += `\nāš ļø Assessment unavailable (API error or empty prompt)\n`;
      }
      
      return {
        content: [
          {
            type: 'text',
            text: response,
          },
        ],
      };
    }
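
The context-building step at the top of the handler can be read as a small standalone helper. This is a sketch that mirrors the handler's logic, including the final check that collapses an empty context to `null`:

```javascript
// Sketch: assemble the optional context object the way the handler does.
// Returns null when no context fields were provided, matching the
// `Object.keys(context).length > 0 ? context : null` expression above.
function buildContext(args) {
  const context = {};
  if (args.current_file) context.current_file = args.current_file;
  if (Array.isArray(args.other_files) && args.other_files.length > 0) {
    context.other_files = args.other_files;
  }
  if (args.dependency_graph) context.dependency_graph = args.dependency_graph;
  if (args.weights) context.weights = args.weights;
  if (args.aiignore_file) context.aiignore_file = args.aiignore_file;
  return Object.keys(context).length > 0 ? context : null;
}
```

One consequence of this shape: passing `other_files: []` alone yields a `null` context, so the API receives a basic (context-free) request.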
  • Input schema defining the parameters for the assess_risk tool, including required 'task' and optional context fields like current_file, other_files.
    inputSchema: {
      type: 'object',
      properties: {
        task: {
          type: 'string',
          description: 'The coding task or prompt you want to assess for risk.',
        },
        current_file: {
          type: 'string',
          description: 'STRONGLY RECOMMENDED: Path to the currently open/active file in the Cursor editor (e.g., "src/main.js" or "mcp-server.js"). You (Cursor AI) can see which file is open in the editor tabs - always pass this if available. This enables context-aware assessment with blast radius and complexity analysis. If no file is open or unknown, omit this parameter.',
        },
        other_files: {
          type: 'array',
          items: {
            type: 'string'
          },
          description: 'STRONGLY RECOMMENDED: Array of file paths that will be touched/modified by this prompt. Analyze the user prompt to determine which files will be affected (e.g., if prompt says "update login.js and auth.js", include ["login.js", "auth.js"]). If no other files will be touched, pass an empty array []. This enables accurate blast radius calculation. Always try to include this based on prompt analysis.',
        },
        dependency_graph: {
          type: 'object',
          description: 'Optional JSON dependency graph of the project. Can be generated from package.json, requirements.txt, etc.',
        },
        weights: {
          type: 'object',
          description: 'Optional custom weights for risk calculation factors.',
        },
        aiignore_file: {
          type: 'string',
          description: 'Optional path to .aiignore file for excluding files from analysis.',
        },
      },
      required: ['task'],
    },
  • mcp-server.js:153-189 (registration)
    Tool registration in the listTools response, including name, description, and inputSchema.
    {
      name: 'assess_risk',
      description: 'Assess the risk level of your coding prompt using Orcho risk analysis API. CRITICAL: You (Cursor AI) have access to the editor state - ALWAYS include context when available: 1) Pass the currently open/active file path as current_file (you can see this in the editor tabs), 2) Analyze the user prompt to determine which files will be modified and pass them as other_files array. Without context, only basic risk assessment is available. With context, you get blast radius and complexity analysis.',
      inputSchema: {
        type: 'object',
        properties: {
          task: {
            type: 'string',
            description: 'The coding task or prompt you want to assess for risk.',
          },
          current_file: {
            type: 'string',
            description: 'STRONGLY RECOMMENDED: Path to the currently open/active file in the Cursor editor (e.g., "src/main.js" or "mcp-server.js"). You (Cursor AI) can see which file is open in the editor tabs - always pass this if available. This enables context-aware assessment with blast radius and complexity analysis. If no file is open or unknown, omit this parameter.',
          },
          other_files: {
            type: 'array',
            items: {
              type: 'string'
            },
            description: 'STRONGLY RECOMMENDED: Array of file paths that will be touched/modified by this prompt. Analyze the user prompt to determine which files will be affected (e.g., if prompt says "update login.js and auth.js", include ["login.js", "auth.js"]). If no other files will be touched, pass an empty array []. This enables accurate blast radius calculation. Always try to include this based on prompt analysis.',
          },
          dependency_graph: {
            type: 'object',
            description: 'Optional JSON dependency graph of the project. Can be generated from package.json, requirements.txt, etc.',
          },
          weights: {
            type: 'object',
            description: 'Optional custom weights for risk calculation factors.',
          },
          aiignore_file: {
            type: 'string',
            description: 'Optional path to .aiignore file for excluding files from analysis.',
          },
        },
        required: ['task'],
      },
    },
  • Core helper function that makes HTTP request to Orcho risk assessment API, processes response into level, score, and details.
    async function checkRiskLevel(prompt, context = null) {
      // Empty prompt handling
      if (!prompt || prompt.trim().length === 0) {
        return {
          level: 'low',
          score: 0,
          details: null
        };
      }
    
      try {
        // Determine which endpoint to use
        const useContextEndpoint = context && context.current_file;
        const apiUrl = useContextEndpoint ? ORCHO_API_URL_WITH_CONTEXT : ORCHO_API_URL;
    
        // Build request body
        let requestBody;
        if (useContextEndpoint) {
          requestBody = {
            prompt: prompt,
            context: {
              current_file: context.current_file,
              ...(context.dependency_graph && { dependency_graph: context.dependency_graph }),
              ...(context.other_files && { other_files: context.other_files }),
              ...(context.weights && { weights: context.weights }),
              ...(context.aiignore_file && { aiignore_file: context.aiignore_file }),
            }
          };
        } else {
          requestBody = {
            prompt: prompt
          };
        }
      
        // Debug logging - Full API call details (only if DEBUG_MODE enabled)
        const requestHeaders = {
          'X-API-Key': ORCHO_API_KEY,
          'Content-Type': 'application/json'
        };
        
        if (DEBUG_MODE) {
          console.error('=== Orcho API Call Debug ===');
          console.error('URL:', apiUrl);
          console.error('Method: POST');
          console.error('Headers:', JSON.stringify(requestHeaders, null, 2));
          console.error('Body:', JSON.stringify(requestBody, null, 2));
          console.error('===========================');
        }
    
        // Make API request
        const response = await fetch(apiUrl, {
          method: 'POST',
          headers: requestHeaders,
          body: JSON.stringify(requestBody)
        });
    
        // Error handling - non-200 status
        if (!response.ok) {
          const errorText = await response.text();
          console.error(`API error: ${response.status} ${response.statusText}`);
          console.error('Error response:', errorText);
          return {
            level: 'low',
            score: 0,
            details: null
          };
        }
    
        const data = await response.json();
        
        // Debug logging - API response (only if DEBUG_MODE enabled)
        if (DEBUG_MODE) {
          console.error('=== Orcho API Response ===');
          console.error('Status:', response.status);
          console.error('Response:', JSON.stringify(data, null, 2));
          console.error('=========================');
        }
    
        // Process overall_risk_level
        let level = 'low';
        const riskLevel = data.overall_risk_level?.toLowerCase();
        if (riskLevel === 'high' || riskLevel === 'critical') {
          level = 'high';
        }
    
        // Process overall_score
        let score = data.overall_score || 0;
        if (score < 1) {
          score = score * 100;
        }
        score = Math.round(score);
    
        return {
          level: level,
          score: score,
          details: data
        };
    
      } catch (error) {
        // Error handling - API fails
        console.error('API request failed:', error.message);
        return {
          level: 'low',
          score: 0,
          details: null
        };
      }
    }
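
The score handling above accepts either a 0–1 fraction or a 0–100 value: anything below 1 is treated as a fraction and scaled up before rounding. Isolated, the logic looks like this (note the edge case inherited from the `< 1` check: a raw score of exactly 1 stays 1 rather than becoming 100):

```javascript
// Sketch of the score normalization performed in checkRiskLevel:
// fractional API scores (e.g. 0.72) are scaled to a 0-100 range,
// whole-number scores pass through unchanged.
function normalizeScore(raw) {
  let score = raw || 0;    // missing or undefined scores become 0
  if (score < 1) {
    score = score * 100;   // treat sub-1 values as fractions
  }
  return Math.round(score);
}
```
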
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It effectively discloses key behavioral traits: the tool's dependency on context for enhanced analysis ('Without context, only basic risk assessment is available'), the importance of editor state access, and the specific outputs enabled by context ('blast radius and complexity analysis'). It doesn't mention rate limits or authentication needs, but covers the core operational behavior well.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the purpose, followed by critical usage instructions. Every sentence earns its place by providing essential guidance. It could be slightly more concise by integrating some details, but overall it's well-structured and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, nested objects) and no annotations or output schema, the description does a good job of explaining how to use it effectively. It covers the importance of context, parameter usage, and the enhanced analysis available. It doesn't describe the return format, but with no output schema, this is a minor gap rather than a critical omission.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds some value by emphasizing the importance of 'current_file' and 'other_files' for context-aware assessment, but doesn't provide additional semantic details beyond what's in the schema. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Assess the risk level of your coding prompt using Orcho risk analysis API.' It specifies the verb ('assess'), resource ('risk level of your coding prompt'), and method ('using Orcho risk analysis API'). With no sibling tools, this level of specificity is excellent.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'CRITICAL: You (Cursor AI) have access to the editor state - ALWAYS include context when available.' It details when to use specific parameters (current_file when available, other_files based on prompt analysis) and explains the benefits of context ('With context, you get blast radius and complexity analysis').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
