
Cerebra Legal MCP Server

by yoda-digital

legal_ask_followup_question

Ask precise follow-up questions to gather additional information for legal analysis, with domain-specific options and terminology.

Instructions

A specialized tool for asking follow-up questions in legal contexts. This tool helps gather additional information needed for legal analysis by formulating precise questions with domain-specific options.

When to use this tool:

  • When you need additional information to complete a legal analysis

  • When clarification is needed on specific legal points

  • When gathering evidence or documentation for a legal case

  • When exploring alternative legal interpretations

Key features:

  • Automatic detection of legal domains

  • Domain-specific question suggestions

  • Legal terminology formatting

  • Structured options for efficient information gathering

Input Schema

| Name | Required | Description | Default |
| ---- | -------- | ----------- | ------- |
| question | Yes | The legal question to ask the user | |
| options | No | An array of 2-5 options for the user to choose from | |
| context | No | Additional context about the legal issue | |
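To make the schema concrete, here is a hedged example of a valid argument object (the question, options, and context values are invented for illustration):

```typescript
// Hypothetical arguments for legal_ask_followup_question.
// Only `question` is required; `options` (2-5 strings) and `context` are optional.
const exampleArgs = {
  question: "Which warranty period applies to the purchased product?",
  options: [
    "The statutory 24-month warranty",
    "The retailer's extended 36-month warranty",
  ],
  context: "Consumer dispute over a refused repair claim",
};

console.log(JSON.stringify(exampleArgs, null, 2));
```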

Implementation Reference

  • src/index.ts:94-131 (registration)
    Defines the LEGAL_ASK_FOLLOWUP_QUESTION_TOOL constant with the tool's name 'legal_ask_followup_question', detailed description, and inputSchema for MCP registration.
    const LEGAL_ASK_FOLLOWUP_QUESTION_TOOL: Tool = {
      name: "legal_ask_followup_question",
      description: `A specialized tool for asking follow-up questions in legal contexts.
    This tool helps gather additional information needed for legal analysis by formulating precise questions with domain-specific options.
    
    When to use this tool:
    - When you need additional information to complete a legal analysis
    - When clarification is needed on specific legal points
    - When gathering evidence or documentation for a legal case
    - When exploring alternative legal interpretations
    
    Key features:
    - Automatic detection of legal domains
    - Domain-specific question suggestions
    - Legal terminology formatting
    - Structured options for efficient information gathering`,
      inputSchema: {
        type: "object",
        properties: {
          question: {
            type: "string",
            description: "The legal question to ask the user"
          },
          options: {
            type: "array",
            items: {
              type: "string"
            },
            description: "An array of 2-5 options for the user to choose from (optional)"
          },
          context: {
            type: "string",
            description: "Additional context about the legal issue (optional)"
          }
        },
        required: ["question"]
      }
    };
  • The inputSchema object (reproduced in the registration above) specifies the tool's parameters: a required 'question' string, an optional 'options' array of 2-5 strings, and an optional 'context' string.
  • The processLegalAskFollowupQuestion function that implements the tool logic: validates input, detects legal domain from question/context, suggests domain-specific options if none provided, formats the question with legal terminology, and returns a JSON response with question, options, and detectedDomain.
    function processLegalAskFollowupQuestion(input: unknown): { content: Array<{ type: string; text: string }>; isError?: boolean } {
      try {
        const data = input as Record<string, unknown>;
        
        // Validate input
        if (!data.question || typeof data.question !== 'string') {
          throw new Error('Invalid question: must be a string');
        }
        
        // Detect domain from question and context
        const textToAnalyze = data.context 
          ? `${data.question} ${data.context}`
          : data.question as string;
        const domain = detectDomain(textToAnalyze);
        
        // If no options provided, suggest domain-specific options
        let options = data.options as string[];
        if (!options || options.length === 0) {
          options = domainQuestionTemplates[domain] || domainQuestionTemplates["legal_reasoning"];
          options = options.slice(0, 4); // Return up to 4 options
        }
        
        // Format the question with legal terminology
        let question = data.question as string;
        
        // Add domain-specific legal terminology if not present
        if (domain === 'ansc_contestation') {
          if (!question.includes('procurement') && !question.includes('tender') && !question.includes('ANSC')) {
            question = question.replace(/specifications/i, 'procurement specifications');
            question = question.replace(/requirements/i, 'tender requirements');
          }
        } else if (domain === 'consumer_protection') {
          if (!question.includes('warranty') && !question.includes('consumer')) {
            question = question.replace(/product/i, 'consumer product');
            question = question.replace(/refund/i, 'consumer refund');
          }
        } else if (domain === 'contract_analysis') {
          if (!question.includes('clause') && !question.includes('contract')) {
            question = question.replace(/agreement/i, 'contractual agreement');
            question = question.replace(/terms/i, 'contractual terms');
          }
        }
        
        // Ensure question ends with a question mark
        if (!question.endsWith('?')) {
          question += '?';
        }
        
        // Prepare response
        return {
          content: [{
            type: "text",
            text: JSON.stringify({
              question,
              options,
              detectedDomain: domain
            }, null, 2)
          }]
        };
      } catch (error) {
        return {
          content: [{
            type: "text",
            text: JSON.stringify({
              error: error instanceof Error ? error.message : String(error),
              status: 'failed'
            }, null, 2)
          }],
          isError: true
        };
      }
    }
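Since the server publishes no output schema, the shape of a successful response is worth sketching. The snippet below mocks the handler's happy path; domain detection is stubbed to the fallback domain, so this is illustrative, not the server's actual code:

```typescript
// Illustrative sketch of the success-response shape; detectDomain is stubbed.
type ToolResponse = { content: Array<{ type: string; text: string }>; isError?: boolean };

function sketchResponse(question: string, options: string[]): ToolResponse {
  if (!question.endsWith("?")) question += "?"; // same normalization as the real handler
  return {
    content: [{
      type: "text",
      text: JSON.stringify({ question, options, detectedDomain: "legal_reasoning" }, null, 2),
    }],
  };
}

const res = sketchResponse("What termination provisions exist in the contract", [
  "Termination for convenience",
  "Termination for cause",
]);
const payload = JSON.parse(res.content[0].text);
console.log(payload.question);
```

The `content[0].text` field carries a JSON string, so a client must parse it a second time to reach `question`, `options`, and `detectedDomain`.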
  • src/index.ts:836-838 (registration)
    Dispatch logic in the CallToolRequestSchema handler: checks if request.params.name === 'legal_ask_followup_question' and calls the processLegalAskFollowupQuestion handler.
    if (request.params.name === "legal_ask_followup_question") {
      return processLegalAskFollowupQuestion(request.params.arguments);
    }
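For orientation, the message that reaches this dispatch branch follows the MCP `tools/call` convention; a hypothetical JSON-RPC request body (the id and question text are invented) might look like:

```typescript
// Hypothetical MCP tools/call request that would hit the dispatch branch above.
const callToolRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "legal_ask_followup_question",
    arguments: {
      question: "Has the claimant documented that the specifications are restrictive?",
    },
  },
};

console.log(callToolRequest.params.name);
```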
  • domainQuestionTemplates: a hardcoded map of domain-specific follow-up question options, used when no options are provided in the input.
    const domainQuestionTemplates: Record<string, string[]> = {
      ansc_contestation: [
        "What specific provisions of Law 131/2015 are relevant to this contestation?",
        "Are there any ANSC precedent decisions that address similar technical specifications?",
        "What documentation has the contracting authority provided to justify the specifications?",
        "Has the claimant provided evidence that the specifications are unnecessarily restrictive?"
      ],
      consumer_protection: [
        "What are the exact terms of the warranty as stated in the documentation?",
        "Has the retailer provided any evidence to support their claim of improper use?",
        "When exactly did the product failure occur relative to the purchase date?",
        "Are there any similar cases or precedents regarding this type of product failure?"
      ],
      contract_analysis: [
        "What are the key obligations of each party under this contract?",
        "Are there any ambiguous terms that require clarification?",
        "What termination provisions exist in the contract?",
        "Are there any potentially unenforceable clauses in this agreement?"
      ],
      legal_reasoning: [
        "What are the key facts that need to be established for this analysis?",
        "Which legal provisions are most relevant to this situation?",
        "Are there any precedents that should be considered?",
        "What counterarguments should be anticipated?"
      ]
    };
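The page never shows `detectDomain` itself. As a rough, entirely hypothetical sketch, a keyword matcher consistent with the template keys above could look like this, with `legal_reasoning` as the fallback to match the template lookup in the handler:

```typescript
// Hypothetical keyword matcher; the server's actual detectDomain may differ.
const domainKeywords: Record<string, string[]> = {
  ansc_contestation: ["procurement", "tender", "ansc", "contestation"],
  consumer_protection: ["warranty", "consumer", "refund", "retailer"],
  contract_analysis: ["contract", "clause", "agreement", "termination"],
};

function detectDomain(text: string): string {
  const lower = text.toLowerCase();
  for (const [domain, keywords] of Object.entries(domainKeywords)) {
    if (keywords.some((k) => lower.includes(k))) return domain;
  }
  return "legal_reasoning"; // default domain used for the option templates
}

console.log(detectDomain("Is this tender specification unnecessarily restrictive?"));
```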
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It lists 'Key features' like 'Automatic detection of legal domains' and 'Legal terminology formatting,' which hint at functionality, but it fails to disclose critical behavioral traits: whether the tool modifies data, requires specific permissions, or is rate-limited, and what its output looks like. For a tool with no annotations, this is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections ('When to use this tool,' 'Key features'), making it easy to scan. It is appropriately sized with no redundant sentences, though a few bullets could be merged for brevity. Every sentence adds value, such as the explanation of domain-specific options.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (legal domain tool with 3 parameters, no annotations, no output schema), the description is partially complete. It covers purpose and usage well but lacks details on behavioral aspects and output. Without annotations or an output schema, the description should do more to explain what happens when the tool is invoked, but it provides enough context for basic use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%: the schema already documents all three parameters (question, options, context). The description adds nothing beyond the schema, such as examples, value constraints, or usage tips for the parameters. The baseline score of 3 is appropriate since the schema does the heavy lifting, but the description does not compensate for the schema's limits or enhance understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as 'asking follow-up questions in legal contexts' and 'gather additional information needed for legal analysis,' which is specific and actionable. However, it doesn't explicitly differentiate from sibling tools like legal_attempt_completion or legal_think, leaving some ambiguity about when to choose this tool over alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes a 'When to use this tool' section with four clear scenarios (e.g., 'When you need additional information to complete a legal analysis'), providing explicit guidance on appropriate contexts. It lacks explicit exclusions or comparisons to sibling tools, but the scenarios are well-defined and helpful.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
