
read_symbol

Extract symbol blocks by name from files, supporting formats like TS, JS, GraphQL, and CSS. Use wildcards for flexible searching, specify file paths, and control result limits for efficient analysis.

Instructions

Finds and extracts symbol blocks by name from files. Supports many file formats (TS, JS, GraphQL, CSS, and most languages that use braces for blocks). Uses streaming with concurrency control for better performance.

Input Schema

  • file_paths (optional, default "." — current directory): File paths to search (supports relative paths and globs). IMPORTANT: Be specific with paths when possible; minimize broad patterns like "node_modules/**" to avoid mismatches
  • limit (optional, default 5): Maximum number of results to return
  • symbol (required): Symbol name to find (functions, classes, types, etc.); case-sensitive, supports * as a wildcard
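The `*` wildcard in the `symbol` parameter can be understood as a glob over declaration names. The sketch below is a hypothetical illustration of that semantics (the names `symbolToRegex` and `matches` are not from the actual source; the real implementation may match differently):

```typescript
// Hypothetical sketch: turn a symbol pattern with `*` wildcards into a
// case-sensitive, whole-name regular expression.
function symbolToRegex(symbol: string): RegExp {
  // Escape regex metacharacters except `*`, which becomes `.*`
  const escaped = symbol.replace(/[.+?^${}()|[\]\\]/g, '\\$&')
  return new RegExp(`^${escaped.replace(/\*/g, '.*')}$`)
}

// True when a declaration name matches the requested symbol pattern
const matches = (symbol: string, name: string): boolean =>
  symbolToRegex(symbol).test(name)
```

For example, `matches('read_*', 'read_symbol')` is true, while `matches('read_*', 'Read_symbol')` is false because matching is case-sensitive.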

Implementation Reference

  • The main handler for the read_symbol tool. It parses the input arguments, scans files via the scanForSymbol generator, collects and scores matching symbol blocks, optionally optimizes the output, sorts by score, limits the result count, and formats the output with file and line info.

```typescript
handler: async (args) => {
  const { symbols, file_paths: filePaths = [], limit = DEFAULT_MAX_RESULTS, optimize = true } = args
  if (!filePaths.length) {
    filePaths.push('.')
  }
  const patterns = filePaths.map(mapPattern)
  const results: Block[] = []
  let totalFound = 0
  const maxMatches = MAX_MATCHES * symbols.length
  try {
    for await (const result of scanForSymbol(symbols, patterns)) {
      totalFound++
      results.push(result)
      if (results.length >= maxMatches) {
        break
      }
    }
  } catch (err) {
    if (!results.length) {
      throw err
    }
  }
  if (!results.length) {
    if (env.COLLECT_MISMATCHES) {
      collectMismatch({ symbols, file_paths: filePaths, limit })
    }
    throw new Error(`Failed to find the \`${symbols.join(', ')}\` symbol(s) in any files`)
  }
  let output = results
    .sort((a, b) => b.score - a.score)
    .slice(0, limit * symbols.length)
    .map(block => formatResult(block, optimize))
    .join('\n\n')
  if (totalFound > results.length) {
    output += `\n\n--- Showing ${results.length} matches out of ${totalFound} ---`
  }
  return output
},
```
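The sort/slice step of the handler can be illustrated in isolation. This is a simplified stand-in: the minimal `Block` shape and the `selectResults` helper below are hypothetical names, reduced from the handler's inline `.sort(...).slice(...)` chain:

```typescript
// Hypothetical minimal shape of a matched symbol block
interface Block { path: string; line: number; score: number; text: string }

// Sort matches by score descending and cap the output at
// `limit` entries per requested symbol, without mutating the input.
function selectResults(results: Block[], limit: number, symbolCount: number): Block[] {
  return [...results]
    .sort((a, b) => b.score - a.score)
    .slice(0, limit * symbolCount)
}
```

With `limit = 5` (the default) and two requested symbols, up to 10 blocks survive this step before formatting.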
  • Zod schema defining the input parameters for the read_symbol tool: symbols (required array of strings), optional file_paths, limit, and optimize.
```typescript
schema: z.object({
  symbols: z.array(z.string().min(1)).describe('Symbol name(s) to find (functions, classes, types, etc.), case-sensitive, supports * for wildcard'),
  file_paths: z.array(z.string().min(1)).optional().describe('File paths to search (supports relative and glob). Defaults to "." (current directory). IMPORTANT: Be specific with paths when possible, minimize broad patterns like "node_modules/**" to avoid mismatches'),
  limit: z.number().optional().describe(`Maximum number of results to return. Defaults to ${DEFAULT_MAX_RESULTS}`),
  optimize: z.boolean().optional().describe('Unless explicitly false, this tool will strip comments and spacing to preserve AI\'s context window, omit unless you REALLY it unchanged (default: true)'),
}),
```
  • src/tools.ts:22-29 (registration)
    Registers the read_symbol tool (imported as readSymbol) into the central tools object that is exported for use.
```typescript
const tools = {
  read_symbol: readSymbol,
  import_symbol: importSymbol,
  search_replace: searchReplace,
  insert_text: insertText,
  os_notification: osNotification,
  utils_debug: utilsDebug,
} as const satisfies Record<string, Tool<any>>
```
  • Core helper generator that concurrently scans files matching patterns (with ignores), reads contents, finds symbol blocks, and yields matches with concurrency limits and early stopping.
```typescript
async function* scanForSymbol(symbols: string[], patterns: string[]): AsyncGenerator<Block> {
  const limit = pLimit(MAX_CONCURRENCY)
  let shouldStop = false
  const ignorePatterns = generateIgnorePatterns(patterns)
  const allPatterns = [...patterns, ...ignorePatterns]
  const entries = fg.stream(allPatterns, {
    cwd: util.CWD,
    onlyFiles: true,
    absolute: false,
    stats: true,
    suppressErrors: true,
    deep: 4,
  }) as AsyncIterable<fg.Entry>
  const pendingTasks = new Set<Promise<Block[]>>()
  let filesProcessed = 0
  try {
    for await (const entry of entries) {
      if (++filesProcessed === MAX_FILE_COUNT) break
      if (shouldStop) break
      if (entry.stats && entry.stats.size > MAX_FILE_SIZE) continue
      const fileIndex = filesProcessed
      const task = limit(async () => {
        if (shouldStop) return []
        try {
          const content = await fs.promises.readFile(util.resolve(entry.path), 'utf8')
          if (shouldStop) return []
          return findBlocks(content, symbols, entry.path, fileIndex)
        } catch {
          return []
        }
      })
      pendingTasks.add(task)
      const taskResults = await task
      pendingTasks.delete(task)
      for (const result of taskResults) {
        yield result
      }
    }
  } finally {
    shouldStop = true
    await Promise.allSettled([...pendingTasks])
  }
}
```
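The concurrency gate used by the generator can be sketched in a self-contained form. Assumption: the `pLimit` call above most likely comes from the `p-limit` npm package; the stand-in below only illustrates the idea of capping in-flight async tasks and queuing the rest:

```typescript
// Minimal p-limit-style concurrency gate: at most `concurrency` wrapped
// tasks run at once; extra tasks wait in a FIFO queue.
function pLimit(concurrency: number) {
  let active = 0
  const queue: Array<() => void> = []
  const next = () => {
    active--
    // Start the oldest queued task, if any
    queue.shift()?.()
  }
  return <T>(fn: () => Promise<T>): Promise<T> =>
    new Promise<T>((resolve, reject) => {
      const run = () => {
        active++
        fn().then(resolve, reject).finally(next)
      }
      if (active < concurrency) run()
      else queue.push(run)
    })
}
```

Each file-reading task is wrapped with `limit(...)`, so no more than `MAX_CONCURRENCY` files are read at the same time even while the glob stream keeps producing entries.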


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/flesler/mcp-tools'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.