Glama

localnest_server_status

Checks the runtime status and active configuration of the LocalNest MCP server, letting clients monitor local-first AI agent operations and verify that safe codebase access is working.

Instructions

Return runtime status and active configuration summary for this MCP server.

Input Schema

Name              Required   Description   Default
----------------  ---------  -----------   -------
response_format   No                       json
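
Because no field is required, a minimal MCP `tools/call` request suffices. A sketch of the JSON-RPC payload (the `id` and the optional `response_format` argument are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "localnest_server_status",
    "arguments": { "response_format": "json" }
  }
}
```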

Implementation Reference

  • Registration of localnest_server_status tool.
    registerJsonTool(
      'localnest_server_status',
      {
        title: 'Server Status',
        description: 'Return runtime status and active configuration summary for this MCP server.',
        inputSchema: {},
        annotations: {
          readOnlyHint: true,
          destructiveHint: false,
          idempotentHint: true,
          openWorldHint: false
        }
      },
      async () => buildServerStatus()
    );
  • Implementation of buildServerStatus, invoked by the localnest_server_status handler. The leading return shows it is produced by a factory function that closes over runtime, workspace, memory, vectorIndex, and updates.
    return async function buildServerStatus() {
      const indexStatus = vectorIndex?.getStatus?.() || null;
      const activeIndexBackend = getActiveIndexBackend();
      const memoryStatus = await memory.getStatus();
      const updateStatus = updates.getCachedStatus
        ? updates.getCachedStatus()
        : await updates.getStatus({ force: false });
    
      return {
        name: serverName,
        version: serverVersion,
        mode: runtime.mcpMode,
        roots: workspace.listRoots(),
        has_ripgrep: runtime.hasRipgrep,
        health: buildHealthSummary({
          runtime,
          memoryStatus,
          indexStatus,
          activeIndexBackend
        }),
        memory: buildMemorySummary(memoryStatus),
        search: {
          auto_project_split: runtime.autoProjectSplit,
          max_auto_projects: runtime.maxAutoProjects,
          force_split_children: runtime.forceSplitChildren,
          rg_timeout_ms: runtime.rgTimeoutMs
        },
        vector_index: {
          backend: activeIndexBackend,
          requested_backend: runtime.indexBackend,
          index_path: runtime.vectorIndexPath,
          db_path: runtime.sqliteDbPath,
          chunk_lines: runtime.vectorChunkLines,
          chunk_overlap: runtime.vectorChunkOverlap,
          max_terms_per_chunk: runtime.vectorMaxTermsPerChunk,
          max_indexed_files: runtime.vectorMaxIndexedFiles,
          embedding_provider: runtime.embeddingProvider,
          embedding_model: runtime.embeddingModel,
          embedding_cache_dir: runtime.embeddingCacheDir,
          embedding_cache_status: runtime.embeddingCacheStatus || null,
          embedding_dimensions: runtime.embeddingDimensions,
          reranker_provider: runtime.rerankerProvider,
          reranker_model: runtime.rerankerModel,
          reranker_cache_dir: runtime.rerankerCacheDir,
          reranker_cache_status: runtime.rerankerCacheStatus || null,
          diagnostics: {
            sqlite_vec_loaded: indexStatus?.sqlite_vec_loaded ?? indexStatus?.sqlite_vec_extension?.loaded ?? null,
            sqlite_vec_extension_path: indexStatus?.sqlite_vec_extension?.path || runtime.sqliteVecExtensionPath || '',
            sqlite_vec_extension_configured: Boolean(indexStatus?.sqlite_vec_extension?.configured || runtime.sqliteVecExtensionPath),
            sqlite_vec_table_ready: indexStatus?.sqlite_vec_table_ready ?? null,
            index_sweep_interval_minutes: runtime.indexSweepIntervalMinutes
          },
          upgrade_recommended: indexStatus?.upgrade_recommended || false,
          upgrade_reason: indexStatus?.upgrade_reason || null
        },
        updates: updateStatus
      };
    };
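
A client can inspect the returned object to surface degraded configurations. A small sketch, using a sample object shaped like the fields `buildServerStatus` emits (the values themselves are illustrative):

```javascript
// Sample status object mirroring the vector_index fields returned by
// localnest_server_status; values here are illustrative only.
const status = {
  name: 'localnest',
  version: '1.0.0',
  vector_index: {
    backend: 'sqlite',
    diagnostics: { sqlite_vec_loaded: false },
    upgrade_recommended: true,
    upgrade_reason: 'sqlite-vec extension not loaded'
  }
};

// Collect human-readable notes about the vector index's health.
function summarizeIndex(status) {
  const vi = status.vector_index || {};
  const notes = [];
  if (vi.diagnostics && vi.diagnostics.sqlite_vec_loaded === false) {
    notes.push('sqlite-vec extension is not loaded');
  }
  if (vi.upgrade_recommended) {
    notes.push(`upgrade recommended: ${vi.upgrade_reason || 'unspecified'}`);
  }
  return notes.length ? notes.join('; ') : 'vector index OK';
}

console.log(summarizeIndex(status));
```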

MCP directory API

Information about this server is also available via the Glama MCP directory API:

curl -X GET 'https://glama.ai/api/mcp/v1/servers/wmt-mobile/localnest'
