Glama

read_symbol

Read-only

Extract symbol blocks by name from files, supporting formats like TS, JS, GraphQL, and CSS. Use wildcards for flexible searching, specify file paths, and control result limits for efficient analysis.

Instructions

Find and extract symbol blocks by name from files. Supports many file formats (TS, JS, GraphQL, CSS, and most others that use braces for blocks). Uses streaming with concurrency control for better performance.

Input Schema

symbols (required)
    Symbol name(s) to find (functions, classes, types, etc.). Case-sensitive; supports * as a wildcard.
file_paths (optional, default ".")
    File paths to search (supports relative paths and globs). IMPORTANT: be specific with paths when possible and avoid broad patterns like "node_modules/**" to reduce mismatches.
limit (optional, default 5)
    Maximum number of results to return.
optimize (optional, default true)
    Strips comments and spacing to preserve the AI's context window; pass false only when the unmodified source is required.
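Under this schema, a call might pass arguments like the following. The symbol names and glob are illustrative, and the shape check below is a minimal sketch of the "symbols is required and non-empty" rule, not the tool's actual validation:

```typescript
// Illustrative read_symbol arguments (symbol names and paths are hypothetical).
const args = {
  symbols: ['formatResult', 'scan*'], // exact name plus a wildcard
  file_paths: ['src/**/*.ts'],        // be specific; avoid broad globs
  limit: 3,                           // cap on returned results
}

// Minimal shape check mirroring the required-symbols constraint.
function isValidArgs(a: { symbols?: string[] }): boolean {
  return Array.isArray(a.symbols)
    && a.symbols.length > 0
    && a.symbols.every(s => s.length > 0)
}
```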

Implementation Reference

  • The main handler function for the read_symbol tool. It processes input arguments, scans files using scanForSymbol generator, collects and scores matching symbol blocks, optimizes output if requested, sorts by score, limits results, and formats the output with file/line info.
    handler: async (args) => {
      const { symbols, file_paths: filePaths = [], limit = DEFAULT_MAX_RESULTS, optimize = true } = args
      if (!filePaths.length) {
        filePaths.push('.')
      }
      const patterns = filePaths.map(mapPattern)
      const results: Block[] = []
      let totalFound = 0
      const maxMatches = MAX_MATCHES * symbols.length
      try {
        for await (const result of scanForSymbol(symbols, patterns)) {
          totalFound++
          results.push(result)
          if (results.length >= maxMatches) {
            break
          }
        }
      } catch (err) {
        if (!results.length) {
          throw err
        }
      }
    
      if (!results.length) {
        if (env.COLLECT_MISMATCHES) {
          collectMismatch({ symbols, file_paths: filePaths, limit })
        }
    throw new Error(`Failed to find the \`${symbols.join(', ')}\` symbol(s) in any files`)
      }
    
      let output = results
        .sort((a, b) => b.score - a.score)
        .slice(0, limit * symbols.length)
        .map(block => formatResult(block, optimize))
        .join('\n\n')
      if (totalFound > results.length) {
        output += `\n\n--- Showing ${results.length} matches out of ${totalFound} ---`
      }
      return output
    },
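The wildcard support advertised for `symbols` (`*`) is not shown in the handler snippet. One common way to implement it is to escape the pattern's regex metacharacters and translate each `*` into `.*`; the helper below is a sketch under that assumption, not the tool's actual matching code:

```typescript
// Hypothetical helper: compile a symbol pattern with "*" wildcards to a RegExp.
// Case-sensitive, matching the documented behavior of the `symbols` parameter.
function symbolToRegExp(pattern: string): RegExp {
  // Escape regex metacharacters except "*", which becomes ".*".
  const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, '\\$&')
  return new RegExp(`^${escaped.replace(/\*/g, '.*')}$`)
}

function matchesSymbol(pattern: string, name: string): boolean {
  return symbolToRegExp(pattern).test(name)
}
```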
  • Zod schema defining the input parameters for the read_symbol tool: symbols (required array of strings), optional file_paths, limit, and optimize.
    schema: z.object({
      symbols: z.array(z.string().min(1)).describe('Symbol name(s) to find (functions, classes, types, etc.), case-sensitive, supports * for wildcard'),
      file_paths: z.array(z.string().min(1)).optional().describe('File paths to search (supports relative and glob). Defaults to "." (current directory). IMPORTANT: Be specific with paths when possible, minimize broad patterns like "node_modules/**" to avoid mismatches'),
      limit: z.number().optional().describe(`Maximum number of results to return. Defaults to ${DEFAULT_MAX_RESULTS}`),
      optimize: z.boolean().optional().describe('Unless explicitly false, this tool will strip comments and spacing to preserve AI\'s context window; omit unless you REALLY need it unchanged (default: true)'),
    }),
  • src/tools.ts:22-29 (registration)
    Registers the read_symbol tool (imported as readSymbol) into the central tools object that is exported for use.
    const tools = {
      read_symbol: readSymbol,
      import_symbol: importSymbol,
      search_replace: searchReplace,
      insert_text: insertText,
      os_notification: osNotification,
      utils_debug: utilsDebug,
    } as const satisfies Record<string, Tool<any>>
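Given a registry shaped like this, dispatching a tool by name is a plain property lookup. The sketch below uses a stand-in `Tool` interface and a dummy handler, since the real `Tool` type and the imported handlers are not shown on this page:

```typescript
// Stand-in Tool shape; the real interface in src/tools.ts is not reproduced here.
interface Tool<A> { handler: (args: A) => Promise<string> }

// Dummy registry entry for illustration only.
const tools = {
  read_symbol: {
    handler: async (a: { symbols: string[] }) => `found ${a.symbols.join(', ')}`,
  },
} satisfies Record<string, Tool<any>>

// Look up a registered tool and invoke its handler.
async function callTool(name: keyof typeof tools, args: any): Promise<string> {
  const tool = tools[name]
  if (!tool) throw new Error(`Unknown tool: ${String(name)}`)
  return tool.handler(args)
}
```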
  • Core helper generator that concurrently scans files matching patterns (with ignores), reads contents, finds symbol blocks, and yields matches with concurrency limits and early stopping.
    async function* scanForSymbol(symbols: string[], patterns: string[]): AsyncGenerator<Block> {
      const limit = pLimit(MAX_CONCURRENCY)
      let shouldStop = false
      const ignorePatterns = generateIgnorePatterns(patterns)
      const allPatterns = [...patterns, ...ignorePatterns]
      const entries = fg.stream(allPatterns, {
        cwd: util.CWD, onlyFiles: true, absolute: false, stats: true, suppressErrors: true, deep: 4,
      }) as AsyncIterable<fg.Entry>
      const pendingTasks = new Set<Promise<Block[]>>()
      let filesProcessed = 0
    
      try {
        for await (const entry of entries) {
          if (++filesProcessed === MAX_FILE_COUNT) break
          if (shouldStop) break
          if (entry.stats && entry.stats.size > MAX_FILE_SIZE) continue
    
          const fileIndex = filesProcessed
          const task = limit(async () => {
            if (shouldStop) return []
            try {
              const content = await fs.promises.readFile(util.resolve(entry.path), 'utf8')
              if (shouldStop) return []
              return findBlocks(content, symbols, entry.path, fileIndex)
            } catch {
              return []
            }
          })
    
          pendingTasks.add(task)
          const taskResults = await task
          pendingTasks.delete(task)
          for (const result of taskResults) {
            yield result
          }
        }
      } finally {
        shouldStop = true
        await Promise.allSettled([...pendingTasks])
      }
    }
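`findBlocks` itself is not reproduced on this page. For brace-delimited formats, a simplified extractor can locate the line declaring the symbol and then count braces to find where the block closes. This is a sketch of that idea only; the real implementation also scores matches and would need to handle braces inside strings and comments:

```typescript
// Simplified brace-matching extractor (hypothetical; not the tool's findBlocks).
// Returns the lines from the symbol's declaration through its closing brace.
function extractBlock(content: string, symbol: string): string | null {
  const lines = content.split('\n')
  // Naive declaration detection: a line mentioning the symbol that opens a block.
  const start = lines.findIndex(l => l.includes(symbol) && l.includes('{'))
  if (start === -1) return null
  let depth = 0
  for (let i = start; i < lines.length; i++) {
    for (const ch of lines[i]) {
      if (ch === '{') depth++
      else if (ch === '}' && --depth === 0) {
        return lines.slice(start, i + 1).join('\n')
      }
    }
  }
  return null // unbalanced braces: no complete block found
}
```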
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true and openWorldHint=false, indicating a safe, bounded operation. The description adds valuable context beyond annotations by mentioning streaming with concurrency control for performance, which helps the agent understand execution behavior, though it lacks details on error handling or output format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by supporting details in a second sentence. Both sentences earn their place by clarifying functionality and performance, with no wasted words or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, annotations cover safety, and schema fully describes inputs, the description is mostly complete. However, the lack of an output schema means the description could better explain return values or error cases, leaving a minor gap in contextual understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents parameters like 'file_paths', 'limit', and 'symbol'. The description adds minimal semantics by noting support for many file formats and performance features, but it does not significantly enhance parameter understanding beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Find and extract') and resource ('symbol block by name from files'), and distinguishes it from siblings like 'insert_text' and 'os_notification' by focusing on read-only symbol extraction rather than insertion or system notifications. It also specifies the supported file formats, adding precision.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for extracting symbols from files in various formats, but it does not explicitly state when to use this tool versus alternatives or provide exclusions. No sibling-specific guidance is given, leaving the agent to infer context from tool names alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/flesler/mcp-tools'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.