MCP File Context Server

by bsmi021

Server Configuration

Describes the environment variables required to run the server.

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| CACHE_TTL | No | Cache time-to-live in milliseconds | 3600000 |
| MAX_FILE_SIZE | No | Maximum file size in bytes for reading | |
| MAX_CACHE_SIZE | No | Maximum number of cached entries | 1000 |
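
If you launch the server yourself rather than through a client's config file, these variables can be passed via the launching process's environment. The sketch below assumes the official TypeScript MCP SDK (@modelcontextprotocol/sdk) and a node dist/index.js entry point; both, along with the MAX_FILE_SIZE value, are illustrative assumptions rather than documented defaults.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the server over stdio with the cache and size limits set explicitly.
// The command/args are placeholders; point them at however you build and run
// the server. The MAX_FILE_SIZE value below is only an example.
const transport = new StdioClientTransport({
  command: "node",
  args: ["dist/index.js"],
  env: {
    CACHE_TTL: "3600000",     // cache entries live for one hour
    MAX_FILE_SIZE: "1048576", // example cap: 1 MiB per file
    MAX_CACHE_SIZE: "1000",   // keep at most 1000 cached entries
  },
});

const client = new Client(
  { name: "example-client", version: "0.1.0" },
  { capabilities: {} }
);
await client.connect(transport);
```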

Schema

Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

No resources

Tools

Functions exposed to the LLM to take actions

read_context

Read and analyze code files with advanced filtering and chunking. The server automatically ignores common artifact directories and files:

  • Version Control: .git/
  • Python: .venv/, __pycache__/, *.pyc, etc.
  • JavaScript/Node.js: node_modules/, bower_components/, .next/, dist/, etc.
  • IDE/Editor: .idea/, .vscode/, .env, etc.

For large files or directories, use get_chunk_count first to determine total chunks, then request specific chunks using chunkNumber parameter.

get_chunk_count

Get the total number of chunks that will be returned for a read_context request. Use this tool FIRST before reading content to determine how many chunks you need to request. The parameters should match what you'll use in read_context.
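
A minimal sketch of that two-step flow, assuming a client already connected with the TypeScript MCP SDK (as in the configuration sketch above). The path argument name, the 1-based chunk numbering, and the way the chunk count is read out of the response text are assumptions made for illustration; only the tool names and the chunkNumber parameter come from the listing.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Shape of a text-only tool result; how the server actually encodes its
// responses is an assumption here.
type TextToolResult = { content?: Array<{ type: string; text?: string }> };

// Read a large file or directory in chunks: ask for the chunk count first,
// then request each chunk with the same arguments plus chunkNumber.
async function readInChunks(client: Client, path: string): Promise<string[]> {
  const countResult = (await client.callTool({
    name: "get_chunk_count",
    arguments: { path }, // "path" is an assumed argument name
  })) as TextToolResult;
  const totalChunks = Number(countResult.content?.[0]?.text ?? "1") || 1;

  const chunks: string[] = [];
  for (let chunkNumber = 1; chunkNumber <= totalChunks; chunkNumber++) {
    const result = (await client.callTool({
      name: "read_context",
      arguments: { path, chunkNumber }, // chunk numbering assumed 1-based
    })) as TextToolResult;
    chunks.push(result.content?.[0]?.text ?? "");
  }
  return chunks;
}
```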

set_profile

Set the active profile for context generation

get_profile_context

Get repository context based on current profile settings
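
One possible way to chain the two profile tools from a connected TypeScript MCP SDK client; the "profile" argument name is a guess, since the listing does not document either tool's parameters.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Switch the active profile, then pull the repository context generated
// under it. The "profile" argument name is an assumption.
async function contextForProfile(client: Client, profile: string) {
  await client.callTool({
    name: "set_profile",
    arguments: { profile },
  });
  return client.callTool({
    name: "get_profile_context",
    arguments: {},
  });
}
```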

generate_outline

Generate a code outline for a file, showing its structure (classes, functions, imports, etc.). Supports TypeScript/JavaScript and Python files.
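
For example, a connected TypeScript MCP SDK client might request an outline as sketched below; the "path" argument name is an assumption.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Request an outline of a single TypeScript/JavaScript or Python file.
// The "path" argument name is an assumption.
async function outlineFile(client: Client, filePath: string) {
  return client.callTool({
    name: "generate_outline",
    arguments: { path: filePath },
  });
}
```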

getFiles

Retrieve multiple files by their paths, returning content and metadata for each file
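
A corresponding sketch for batch retrieval with the same connected client; the "paths" argument name is an assumption.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Fetch several files in one call; each result entry should carry the file's
// content and metadata. The "paths" argument name is an assumption.
async function fetchFiles(client: Client, paths: string[]) {
  return client.callTool({
    name: "getFiles",
    arguments: { paths },
  });
}
```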

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/bsmi021/mcp-file-context-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.