# get_llm_full_context
Retrieve the complete Slice.js documentation bundle, giving AI assistants comprehensive context for the framework's capabilities.
## Instructions
Fetches the complete documentation bundle (~2000 lines, consumes considerable tokens but provides all documentation in one go). IMPORTANT: Ask the user for confirmation before executing this tool as it will add the entire Slice.js documentation to the context.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| *(no arguments)* | — | — | — |
## Implementation Reference
- **src/tools/get-llm-full-context.ts:4-53** (handler) — Main implementation of the `get_llm_full_context` tool. Defines the tool with its name, description, empty parameters schema, and an `execute` function that fetches and caches the complete `llm.txt` documentation from GitHub (~2000 lines).
  ```ts
  export const getLlmFullContextTool = {
    name: "get_llm_full_context",
    description:
      "Fetches the complete documentation bundle (~2000 lines, consumes considerable tokens but provides all documentation in one go). IMPORTANT: Ask the user for confirmation before executing this tool as it will add the entire Slice.js documentation to the context.",
    parameters: z.object({}),
    execute: async () => {
      const cached = getCached('llm.txt');
      if (cached) {
        console.error('[MCP] Returning cached llm.txt');
        return cached;
      }
      console.error('[MCP] Fetching llm.txt from GitHub');
      const url = `${BASE_URL}llm.txt`;
      try {
        const response = await fetch(url);
        if (!response.ok) throw new Error(`HTTP ${response.status}`);
        const content = await response.text();
        setCache('llm.txt', content);
        console.error('[MCP] Fetched and cached llm.txt, now populating individual doc cache');

        // Populate cache with individual docs from llm.txt
        const sections = content.split(/\n=== /).slice(1); // Skip first empty
        let populatedCount = 0;
        for (const section of sections) {
          const lines = section.split('\n');
          const filePath = lines[0].replace(' ===', ''); // e.g., 'markdown/getting-started.md'
          const docContent = lines.slice(1).join('\n').trim();
          if (filePath && docContent) {
            // Compute cache key as doc id: remove 'markdown/' prefix and '.md' suffix
            const docId = filePath.replace(/^markdown\//, '').replace(/\.md$/, '');
            setCache(docId, docContent);
            populatedCount++;
          }
        }
        console.error(`[MCP] Populated cache with ${populatedCount} individual docs`);

        // Update DOCS_STRUCTURE if already initialized
        if (isInitialized) {
          DOCS_STRUCTURE.length = 0; // Clear the array
          DOCS_STRUCTURE.push(...parseDocsFromLlmTxt(content));
          console.error(`[MCP] Updated DOCS_STRUCTURE to ${DOCS_STRUCTURE.length} documents from llm.txt`);
        }

        return content;
      } catch (error) {
        console.error(`[MCP] Error fetching llm.txt: ${error}`);
        return `Error fetching llm.txt: ${error}`;
      }
    },
  };
  ```

- **Schema definition** — Zod schema with an empty parameters object, indicating this tool takes no input parameters.
  ```ts
  parameters: z.object({}),
  ```

- **src/index.ts:17** (registration) — Tool registration where `getLlmFullContextTool` is added to the FastMCP server instance.
  ```ts
  server.addTool(getLlmFullContextTool);
  ```

- **src/utils.ts:5-15** (helper) — Cache retrieval helper `getCached`, used by the tool to check for cached `llm.txt` content.
  ```ts
  export function getCached(key: string): string | null {
    const cached = cache.get(key);
    if (!cached) return null;
    if (Date.now() - cached.timestamp > CACHE_TTL) {
      cache.delete(key);
      return null;
    }
    return cached.content;
  }
  ```

- **src/utils.ts:17-19** (helper) — Cache storage helper `setCache`, used by the tool to cache the fetched `llm.txt` content and the individual documentation sections.
  ```ts
  export function setCache(key: string, content: string): void {
    cache.set(key, { content, timestamp: Date.now() });
  }
  ```
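The helpers in `src/utils.ts` reference a `cache` store and a `CACHE_TTL` constant that are not shown here. The following is a minimal self-contained sketch of that surrounding cache module, assuming a `Map`-backed store and an illustrative one-hour TTL (the actual names and TTL value in the project may differ):

```typescript
// Sketch of the assumed cache module: a Map keyed by doc id,
// with lazy expiry checked on read.
interface CacheEntry {
  content: string;
  timestamp: number;
}

const CACHE_TTL = 60 * 60 * 1000; // assumed: 1 hour, in milliseconds
const cache = new Map<string, CacheEntry>();

function getCached(key: string): string | null {
  const cached = cache.get(key);
  if (!cached) return null;
  if (Date.now() - cached.timestamp > CACHE_TTL) {
    cache.delete(key); // evict the stale entry on read
    return null;
  }
  return cached.content;
}

function setCache(key: string, content: string): void {
  cache.set(key, { content, timestamp: Date.now() });
}

// Fresh entries are returned; stale ones are evicted lazily.
setCache("llm.txt", "docs...");
getCached("llm.txt"); // → "docs..."
cache.set("old", { content: "expired", timestamp: Date.now() - CACHE_TTL - 1 });
getCached("old");     // → null (and the entry is deleted)
```

Note the expiry is lazy: nothing is evicted until a stale key is read, which keeps the helpers trivial at the cost of stale entries lingering in memory.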
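The section-splitting step inside the handler's `execute` function can be exercised in isolation. This sketch assumes the `llm.txt` layout implied by the handler, where each document begins with a `=== markdown/<name>.md ===` marker line; the `parseSections` helper and the sample input are illustrative, not part of the project:

```typescript
// Split an llm.txt-style bundle into (docId, content) pairs,
// mirroring the parsing logic in the tool's execute function.
function parseSections(content: string): Map<string, string> {
  const docs = new Map<string, string>();
  const sections = content.split(/\n=== /).slice(1); // skip text before the first marker
  for (const section of sections) {
    const lines = section.split("\n");
    const filePath = lines[0].replace(" ===", ""); // e.g. "markdown/getting-started.md"
    const docContent = lines.slice(1).join("\n").trim();
    if (filePath && docContent) {
      // Doc id = path without the 'markdown/' prefix and '.md' suffix
      const docId = filePath.replace(/^markdown\//, "").replace(/\.md$/, "");
      docs.set(docId, docContent);
    }
  }
  return docs;
}

// Example input in the assumed llm.txt layout:
const sample =
  "\n=== markdown/getting-started.md ===\n# Getting Started\nInstall slice.\n" +
  "\n=== markdown/components.md ===\n# Components\nBuild UIs.\n";

const docs = parseSections(sample);
// docs.get("getting-started") → "# Getting Started\nInstall slice."
```

Because each parsed section is written back through `setCache`, a single fetch of `llm.txt` warms the cache for the per-document lookup tools as well.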