Glama

generate_impact_report

Read-only · Idempotent

Generate economic and governance impact reports with pilot ROI data, including time saved, cost avoided, risks blocked, success rate, autonomy trend, and confidence levels.

Instructions

Generate a full economic + governance impact report. Returns pilot ROI data: time saved, cost avoided, risks blocked, success rate, autonomy trend, and confidence levels.

Input Schema

Name           Required  Description                                           Default
period_days    No        Report period in days                                 14
set_baselines  No        Optional: override default baselines for this report  (none)
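A minimal sketch of valid call arguments, assuming the field names and the 14-day default published in the zod schema below (the `withDefaults` helper is illustrative, not part of the server):

```typescript
// Shape of the tool's input, mirroring the published schema.
interface ImpactReportInput {
  period_days?: number; // 1-365; defaults to 14 when omitted
  set_baselines?: {
    human_hourly_rate?: number;
    avg_manual_minutes?: number;
    estimated_incident_cost?: number;
    model_cost_per_run?: number;
  };
}

// Mirror the zod default: period_days falls back to 14 when omitted.
function withDefaults(input: ImpactReportInput): ImpactReportInput & { period_days: number } {
  return { ...input, period_days: input.period_days ?? 14 };
}

// Example call body: a 30-day report with two baseline overrides.
const args: ImpactReportInput = {
  period_days: 30,
  set_baselines: { human_hourly_rate: 95, model_cost_per_run: 0.12 },
};

console.log(withDefaults({}).period_days);   // 14
console.log(withDefaults(args).period_days); // 30
```

Omitted baseline fields keep the server's defaults; only the fields present in `set_baselines` are overridden for the report.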

Implementation Reference

  • The handler that implements the generate_impact_report tool. Registered as an MCP tool, it builds a full economic + governance impact report from the in-memory metric and governance event stores: it accepts period_days and optional baseline overrides, then computes ROI, time saved, risks blocked, autonomy trends, and confidence levels.
    export function registerGenerateImpactReportTool(server: McpServer, engine: GovernanceEngine): void {
      server.tool(
        'generate_impact_report',
        'Generate a full economic + governance impact report. Returns pilot ROI data: time saved, cost avoided, risks blocked, success rate, autonomy trend, and confidence levels.',
        {
          period_days: z.number().min(1).max(365).default(14).describe('Report period in days'),
          set_baselines: z.object({
            human_hourly_rate: z.number().optional(),
            avg_manual_minutes: z.number().optional(),
            estimated_incident_cost: z.number().optional(),
            model_cost_per_run: z.number().optional(),
          }).optional().describe('Optional: override default baselines for this report'),
        },
        { title: 'Generate Impact Report', readOnlyHint: true, idempotentHint: true, destructiveHint: false, openWorldHint: false },
        async (input) => {
          try {
            // Apply baseline overrides if provided
            if (input.set_baselines) {
              // Explicit undefined checks so a 0 override is still applied
              if (input.set_baselines.human_hourly_rate !== undefined) baselines.humanHourlyRate = input.set_baselines.human_hourly_rate;
              if (input.set_baselines.avg_manual_minutes !== undefined) baselines.avgManualMinutes = input.set_baselines.avg_manual_minutes;
              if (input.set_baselines.estimated_incident_cost !== undefined) baselines.estimatedIncidentCost = input.set_baselines.estimated_incident_cost;
              if (input.set_baselines.model_cost_per_run !== undefined) baselines.modelCostPerRun = input.set_baselines.model_cost_per_run;
            }
    
            const cutoff = new Date(Date.now() - input.period_days * 24 * 3600000);
            const cutoffStr = cutoff.toISOString();
            const relevant = metricsStore.filter(m => new Date(m.timestamp) >= cutoff);
            const relevantEvents = governanceEventsStore.filter(e => new Date(e.timestamp) >= cutoff);
    
            // Economic impact
            const timeSaved = relevant.reduce((sum, m) => sum + m.timeSavedMinutes, 0);
            const modelCosts = relevant.length * baselines.modelCostPerRun;
            const humanCostSaved = (timeSaved / 60) * baselines.humanHourlyRate;
            const costAvoided = humanCostSaved - modelCosts;
            const successfulRuns = relevant.filter(m => m.success).length;
            const totalRuns = relevant.length || 1;
            const reworkPrevented = Math.round((successfulRuns / totalRuns) * 100);
            const risksBlocked = relevant.reduce((sum, m) => sum + m.riskBlockedCount, 0);
            const failureCostAvoided = risksBlocked * baselines.estimatedIncidentCost;
    
            // Throughput
            const periodWeeks = Math.max(1, input.period_days / 7);
            const throughputGain = Math.round(relevant.length / periodWeeks);
    
            // Governance impact
            const governance = {
              unsafeActionsBlocked: relevantEvents.filter(e => e.type === 'gate_triggered').length,
              scopeDriftPrevented: relevantEvents.filter(e => e.type === 'drift_prevented').length,
              policyViolationsAvoided: relevantEvents.filter(e => e.type === 'violation_blocked').length,
              redTeamFindingsResolved: relevantEvents.filter(e => e.type === 'redteam_resolved').length,
              humanInterventions: relevantEvents.filter(e => e.type === 'human_intervention').length,
            };
    
            // Autonomy trend
            const delegated = relevant.filter(m => m.autonomyLevel === 'delegate').length;
            const automated = relevant.filter(m => m.autonomyLevel === 'automate').length;
            const assisted = relevant.filter(m => m.autonomyLevel === 'assist').length;
            const incidents = relevant.filter(m => !m.success && m.riskBlockedCount === 0).length;
    
            // Confidence levels
            const measuredCount = relevant.filter(m => m.measurementSource === 'measured').length;
            const measuredRatio = measuredCount / totalRuns;
            const timeSavedConfidence = measuredRatio > 0.7 ? 'high' : measuredRatio > 0.3 ? 'medium' : 'low';
            const costConfidence = measuredRatio > 0.5 ? 'medium' : 'low';
            const riskConfidence = governance.unsafeActionsBlocked > 0 ? 'high' : 'low';
    
            // Summary line
            const hours = Math.round(timeSaved / 60);
            const summary = [
              `${input.period_days}-day pilot results:`,
              `${hours} hours saved`,
              `$${Math.round(costAvoided).toLocaleString()} cost avoided`,
              `${governance.unsafeActionsBlocked} unsafe actions blocked`,
              `${incidents} incidents shipped`,
              `${reworkPrevented}% first-pass success rate`,
            ].join(' | ');
    
            const report = {
              reportId: `IMPACT-${Date.now().toString(36)}`,
              generatedAt: new Date().toISOString(),
              periodStart: cutoffStr,
              periodEnd: new Date().toISOString(),
              periodDays: input.period_days,
              summary,
              economic: {
                timeSavedMinutes: Math.round(timeSaved),
                timeSavedHours: hours,
                costAvoidedUSD: Math.round(costAvoided * 100) / 100,
                reworkPrevented,
                throughputGainPerWeek: throughputGain,
                failureCostAvoidedUSD: failureCostAvoided,
                modelCostUSD: Math.round(modelCosts * 100) / 100,
                netROI: Math.round((costAvoided + failureCostAvoided) * 100) / 100,
              },
              governance,
              autonomyTrend: {
                delegatedTasks: delegated,
                automatedTasks: automated,
                assistedTasks: assisted,
                incidentCount: incidents,
              },
              assumptions: {
                baselineId: baselines.baselineId,
                humanHourlyRate: baselines.humanHourlyRate,
                avgManualMinutes: baselines.avgManualMinutes,
                estimatedIncidentCost: baselines.estimatedIncidentCost,
                modelCostPerRun: baselines.modelCostPerRun,
              },
              confidence: {
                timeSaved: timeSavedConfidence,
                costAvoided: costConfidence,
                riskBlocked: riskConfidence,
              },
              dataPoints: {
                totalMetrics: relevant.length,
                totalGovernanceEvents: relevantEvents.length,
              },
            };
    
            // Tool accountability tracking
            engine.telemetryService.emitToolCall('generate_impact_report', `impact-${Date.now().toString(36)}`, 'INFORMATIONAL', true);
    
            return { content: [{ type: 'text' as const, text: JSON.stringify(report, null, 2) }] };
          } catch (error) {
            // Tool accountability tracking
            engine.telemetryService.emitToolCall('generate_impact_report', `impact-${Date.now().toString(36)}`, 'INFORMATIONAL', false);
            return { content: [{ type: 'text' as const, text: JSON.stringify({ error: 'REPORT_FAILED', message: String(error) }) }], isError: true };
          }
        }
      );
    }
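The handler's economic math reduces to a few lines: time saved is summed across metrics, converted to hours, and priced at the human hourly rate; model costs are subtracted; and each blocked risk adds the estimated incident cost. A standalone sketch with invented sample metrics (the field and baseline names mirror the handler, the values do not come from the source):

```typescript
// Minimal reproduction of the handler's ROI arithmetic on sample data.
interface Metric { timeSavedMinutes: number; success: boolean; riskBlockedCount: number }

const baselines = { humanHourlyRate: 90, modelCostPerRun: 0.1, estimatedIncidentCost: 5000 };

const metrics: Metric[] = [
  { timeSavedMinutes: 45, success: true,  riskBlockedCount: 1 },
  { timeSavedMinutes: 30, success: true,  riskBlockedCount: 0 },
  { timeSavedMinutes: 15, success: false, riskBlockedCount: 0 },
];

const timeSaved    = metrics.reduce((s, m) => s + m.timeSavedMinutes, 0);          // 90 minutes
const modelCosts   = metrics.length * baselines.modelCostPerRun;                   // 0.30
const costAvoided  = (timeSaved / 60) * baselines.humanHourlyRate - modelCosts;    // 134.70
const risksBlocked = metrics.reduce((s, m) => s + m.riskBlockedCount, 0);          // 1
const netROI       = costAvoided + risksBlocked * baselines.estimatedIncidentCost; // 5134.70

console.log(netROI.toFixed(2)); // "5134.70"
```

Note how a single blocked risk dominates the total: `failureCostAvoided` is a per-incident multiplier, so the `estimated_incident_cost` baseline is the most sensitive assumption in the report.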
  • Input schema for generate_impact_report: period_days (1-365, default 14) and optional baseline overrides for hourly rate, manual minutes, incident cost, and model cost.
    {
      period_days: z.number().min(1).max(365).default(14).describe('Report period in days'),
      set_baselines: z.object({
        human_hourly_rate: z.number().optional(),
        avg_manual_minutes: z.number().optional(),
        estimated_incident_cost: z.number().optional(),
        model_cost_per_run: z.number().optional(),
      }).optional().describe('Optional: override default baselines for this report'),
    },
  • Registration entry in the MCP server's TOOL_REGISTRY array. The generate_impact_report tool is registered as part of the value_metrics group at the 'tenant' visibility tier (accessible to authenticated paying customers).
      { tier: 'tenant', register: registerValueMetricsTools, description: 'value_metrics (record_value_metric, record_governance_event, generate_impact_report)' },
      { tier: 'tenant', register: registerMemoryPackTools, description: 'memory_packs (seal, load, transfer, compose, distill, promote)' },
      { tier: 'tenant', register: registerPhoenixRecoveryTools, description: 'phoenix (snapshot, verify_integrity, recovery_health)' },
      { tier: 'public', register: registerContextAuthorityTool, description: 'request_context (governed context authority)' },
      { tier: 'tenant', register: (server, _engine) => registerInstitutionTools(server), description: 'board (list_institutions, list_charters, convene_session, get_session, install_kit)' },
    
      // --- OPERATOR: Internal infrastructure — never exposed to external clients ---
      { tier: 'operator', register: registerApproveGateTool, description: 'approve_gate' },
      { tier: 'tenant', register: registerAgentRightsTool, description: 'agent_rights (Colony Phase 3 — constitutional rights)' },
      { tier: 'tenant', register: registerPrecedentTools, description: 'board_search_precedent (Colony Layer 1 — precedent case law)' },
      { tier: 'tenant', register: registerCitizenshipTools, description: 'agent_citizenship_status (Colony Layer 5 — merit-based trust)' },
      { tier: 'tenant', register: registerBranchAuthorityTools, description: 'branch_authority_status (Colony Layer 4 — separation of powers)' },
      { tier: 'tenant', register: registerColonyTools, description: 'colony (convene_request, suggestion, health — Colony Autonomy)' },
      { tier: 'tenant', register: registerContextReviveTool, description: 'context_revive (status, compact, verify, history)' },
      { tier: 'tenant', register: registerGovernedSamplingTool, description: 'governed_sample (client-mediated governed cognition via MCP Sampling)' },
      { tier: 'tenant', register: registerChainOfReasoningTools, description: 'chain_of_reasoning (Governed Cognition provenance trail)' },
      { tier: 'operator', register: registerSRTTools, description: 'srt (run_watchdog, diagnose, approve_repair, generate_postmortem)' },
      { tier: 'operator', register: registerRemediationPackTools, description: 'remediation (scan_environment, list_packs, dry_run_pack, apply_pack, run_patrol)' },
    ];
    
    /** Governed retrieval tools need special handling (no engine param) */
    const GOVERNED_RETRIEVAL_TIER: ToolVisibility = 'tenant';
    
    /**
     * Determine which tier ceiling applies based on visibility level.
     * A level includes all tiers at or below it:
     *   operator → public + tenant + operator (all)
     *   tenant   → public + tenant
     *   public   → public only
     */
    const TIER_CEILING: Record<ToolVisibility, Set<ToolVisibility>> = {
      public:   new Set(['public']),
      tenant:   new Set(['public', 'tenant']),
      operator: new Set(['public', 'tenant', 'operator']),
    };
    
    /**
     * Create and configure a GIA MCP Server instance.
     *
     * Factory function shared by all transport entry points (stdio, HTTP, SSE).
     * Returns the configured server + engine without connecting any transport.
     *
     * @param maxVisibility — controls which tools are registered:
     *   'operator' (default, stdio) — all 32 tools
     *   'tenant'  — public + tenant tools (paying HTTP clients)
     *   'public'  — public tools only (Smithery gateway, free/legacy keys)
     *
     * Startup sequence (per mcp-standards.md):
     * 1. Load configuration
     * 2. Initialize CORE governance engine
     * 3. Validate CORE initialization
     * 4. Register MCP tools (filtered by visibility)
     * 5. Register MCP resources
     * 6. Register MCP prompts
     * 7. Log server start to forensic ledger
     *
     * If ANY step fails, throws — caller decides how to handle.
     */
    export async function createGIAServer(maxVisibility: ToolVisibility = 'operator'): Promise<{
      server: McpServer;
      engine: GovernanceEngine;
    }> {
      // Step 1: Load configuration
      const config = GOVERNANCE_CONFIG;
    
      // Step 2: Initialize CORE governance engine
      const engine = new GovernanceEngine();
      engine.classifier.registerVertical(ACE_MAI_CONFIG);
      if (config.autoRunMode) {
        engine.enableAutoRun();
      }
    
      // Step 3: Validate CORE initialization (now async — recovers ledger from PostgreSQL)
      await engine.initialize();
      if (!engine.isHealthy()) {
        throw new Error('Governance engine failed initialization.');
      }
    
      // Step 4: Create MCP server
      const server = new McpServer({
        name: GIA_SERVER_NAME,
        version: GIA_VERSION,
      });
    
      // Step 4a: Wrap the server with the runtime-accountability Proxy so every
      // tool registration is transparently instrumented. All `.tool()` calls below
      // — whether from TOOL_REGISTRY entries, governed retrieval, or the inline
      // list_available_tools — go through this Proxy and are bracketed with
      // runtimeService.startSession()/endSession() at invocation time.
      const instrumentedServer = wrapServerWithRuntimeAccountability(server, engine);
    
      // Step 4b: Initialize Governed Sampling (needs Server ref from McpServer).
      // Sampling uses the underlying server.server; instrumentation is at the tool
      // surface, not the sampling surface (sampling has its own governance path).
      const sampling = new GovernedSampling(engine, server.server);
      engine.setSampling(sampling);
    
      // Step 5: Register MCP tools — filtered by visibility tier.
      // All registrations route through `instrumentedServer` so handlers are wrapped.
      const allowedTiers = TIER_CEILING[maxVisibility];
      let registeredCount = 0;
    
      for (const entry of TOOL_REGISTRY) {
        if (allowedTiers.has(entry.tier)) {
          entry.register(instrumentedServer, engine);
          registeredCount++;
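The tier-ceiling filter in Step 5 can be sketched in isolation. The `TIER_CEILING` table is copied from the excerpt above; the three-entry registry is invented for illustration:

```typescript
// Visibility filtering as applied in Step 5 of createGIAServer.
type ToolVisibility = 'public' | 'tenant' | 'operator';

// A level includes all tiers at or below it (copied from the source).
const TIER_CEILING: Record<ToolVisibility, Set<ToolVisibility>> = {
  public:   new Set<ToolVisibility>(['public']),
  tenant:   new Set<ToolVisibility>(['public', 'tenant']),
  operator: new Set<ToolVisibility>(['public', 'tenant', 'operator']),
};

// Illustrative registry entries (names taken from TOOL_REGISTRY above).
const registry: { tier: ToolVisibility; name: string }[] = [
  { tier: 'tenant',   name: 'generate_impact_report' },
  { tier: 'public',   name: 'request_context' },
  { tier: 'operator', name: 'approve_gate' },
];

function visibleTools(maxVisibility: ToolVisibility): string[] {
  const allowed = TIER_CEILING[maxVisibility];
  return registry.filter(e => allowed.has(e.tier)).map(e => e.name);
}

console.log(visibleTools('tenant')); // ['generate_impact_report', 'request_context']
console.log(visibleTools('public')); // ['request_context']
```

So generate_impact_report is registered for 'tenant' and 'operator' servers but never for a 'public' (Smithery gateway / free-key) instance.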
  • Aggregate registration function that registers all 3 Value Metrics tools including generate_impact_report.
    export function registerValueMetricsTools(server: McpServer, engine: GovernanceEngine): void {
      registerRecordValueMetricTool(server, engine);
      registerRecordGovernanceEventTool(server, engine);
      registerGenerateImpactReportTool(server, engine);
    }
  • Tool accountability profile for generate_impact_report in the governed tool registry. Classified as 'read' tool with 'low' risk tier, 'ADVISORY' MAI default, no human approval required, in the 'metrics' category.
    { toolName: 'generate_impact_report', toolClass: 'read',     riskTier: 'low',      maiDefault: 'ADVISORY',       requiresHumanApproval: false, category: 'metrics' },
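One way a registry consumer might act on this profile, sketched under assumptions: the `ToolProfile` shape is taken from the fields shown above, but the `needsGate` helper and fail-closed policy are hypothetical, not the repo's API:

```typescript
// Hedged sketch: gating a call on the accountability profile fields.
interface ToolProfile {
  toolName: string;
  toolClass: 'read' | 'write';
  riskTier: 'low' | 'medium' | 'high';
  maiDefault: string;
  requiresHumanApproval: boolean;
  category: string;
}

const profiles: ToolProfile[] = [
  { toolName: 'generate_impact_report', toolClass: 'read', riskTier: 'low',
    maiDefault: 'ADVISORY', requiresHumanApproval: false, category: 'metrics' },
];

// Hypothetical policy: unknown tools fail closed; known tools gate on the flag.
function needsGate(toolName: string): boolean {
  const p = profiles.find(x => x.toolName === toolName);
  return p ? p.requiresHumanApproval : true;
}

console.log(needsGate('generate_impact_report')); // false
console.log(needsGate('unknown_tool'));           // true
```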
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, and idempotentHint=true. The description adds that it returns specific data fields (e.g., time saved, costs avoided), but does not disclose potential performance implications or data freshness.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with no wasted words. The first sentence states purpose, the second lists outputs. Could be slightly more structured (e.g., separate purpose and outputs), but overall efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Description mentions output fields but lacks details on output structure, especially given nested input parameters and no output schema. The relationship between input (baselines) and output is not explained, leaving some gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema covers 100% of parameters with descriptions. The description adds no additional meaning beyond the schema, as it only lists output fields, not parameter details. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool generates a 'full economic + governance impact report' and specifies the returned pilot ROI data fields. This distinguishes it from the generic sibling 'generate_report' by focusing on impact and ROI.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives like 'generate_report' or when not to use it. The description does not provide context for choosing this tool over siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/knowledgepa3/gia-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server