
MCP Claude Code

by SDGLBL

Server Configuration

Describes the environment variables required to run the server.

Name | Required | Description | Default

No arguments


Prompts

Interactive templates invoked by user choice

Name | Description
Compact current conversation | Summarize the conversation so far.
Create a new release | Create a new release for my project.
Continue todo by session id | Continue from the last todo list for the current session.
Continue latest todo | Continue from the last todo list for the current session.
System prompt | Detailed system prompt including env, git, and other information about the specified project.

Resources

Contextual data attached and managed by the client

Name | Description

No resources

Tools

Functions exposed to the LLM to take actions

read

Reads a file from the local filesystem. You can access any file directly by using this tool. Assume this tool is able to read all files on the machine. If the User provides a path to a file, assume that path is valid. It is okay to read a file that does not exist; an error will be returned.

Usage:

  • The file_path parameter must be an absolute path, not a relative path
  • By default, it reads up to 2000 lines starting from the beginning of the file
  • You can optionally specify a line offset and limit (especially handy for long files), but it's recommended to read the whole file by not providing these parameters
  • Any lines longer than 2000 characters will be truncated
  • Results are returned using cat -n format, with line numbers starting at 1
  • For Jupyter notebooks (.ipynb files), use the notebook_read tool instead
  • When reading multiple files, you MUST use the batch tool to read them all at once
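
Example usage (an illustrative sketch; offset and limit refer to the optional line offset and limit described above, and the exact parameter names may differ):

read(file_path="/path/to/project/main.py", offset=100, limit=200)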
write

Writes a file to the local filesystem.

Usage:

  • This tool will overwrite the existing file if there is one at the provided path.
  • If this is an existing file, you MUST use the Read tool first to read the file's contents. This tool will fail if you did not read the file first.
  • ALWAYS prefer editing existing files in the codebase. NEVER write new files unless explicitly required.
  • NEVER proactively create documentation files (*.md) or README files. Only create documentation files if explicitly requested by the User.
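
Example usage (an illustrative sketch; the name of the content parameter is an assumption, since this listing only documents the path behaviour):

write(file_path="/path/to/project/settings.py", content="DEBUG = False\n")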
edit

Performs exact string replacements in files with strict occurrence count validation.

Usage:

  • When editing text from Read tool output, ensure you preserve the exact indentation (tabs/spaces) as it appears AFTER the line number prefix. The line number prefix format is: spaces + line number + tab. Everything after that tab is the actual file content to match. Never include any part of the line number prefix in the old_string or new_string.
  • ALWAYS prefer editing existing files in the codebase. NEVER write new files unless explicitly required.
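
Example usage (an illustrative sketch; old_string must match the file contents exactly, as described above):

edit(file_path="/path/to/project/utils.py", old_string="def get_cwd():", new_string="def get_current_working_directory():")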
multi_edit

This is a tool for making multiple edits to a single file in one operation. It is built on top of the Edit tool and allows you to perform multiple find-and-replace operations efficiently. Prefer this tool over the Edit tool when you need to make multiple edits to the same file.

Before using this tool:

  1. Use the Read tool to understand the file's contents and context
  2. Verify the directory path is correct

To make multiple file edits, provide the following:

  1. file_path: The absolute path to the file to modify (must be absolute, not relative)
  2. edits: An array of edit operations to perform, where each edit contains:
    • old_string: The text to replace (must match the file contents exactly, including all whitespace and indentation)
    • new_string: The edited text to replace the old_string
    • expected_replacements: The number of replacements you expect to make. Defaults to 1 if not specified.

IMPORTANT:

  • All edits are applied in sequence, in the order they are provided
  • Each edit operates on the result of the previous edit
  • All edits must be valid for the operation to succeed - if any edit fails, none will be applied
  • This tool is ideal when you need to make several changes to different parts of the same file
  • For Jupyter notebooks (.ipynb files), use the notebook_edit tool instead

CRITICAL REQUIREMENTS:

  1. All edits follow the same requirements as the single Edit tool
  2. The edits are atomic - either all succeed or none are applied
  3. Plan your edits carefully to avoid conflicts between sequential operations

WARNING:

  • The tool will fail if edits.old_string matches multiple locations and edits.expected_replacements isn't specified
  • The tool will fail if the number of matches doesn't equal edits.expected_replacements when it's specified
  • The tool will fail if edits.old_string doesn't match the file contents exactly (including whitespace)
  • The tool will fail if edits.old_string and edits.new_string are the same
  • Since edits are applied in sequence, ensure that earlier edits don't affect the text that later edits are trying to find

When making edits:

  • Ensure all edits result in idiomatic, correct code
  • Do not leave the code in a broken state
  • Always use absolute file paths (starting with /)

If you want to create a new file, use:

  • A new file path, including dir name if needed
  • First edit: empty old_string and the new file's contents as new_string
  • Subsequent edits: normal edit operations on the created content
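
Example usage (an illustrative sketch; both edits are applied in sequence to the same file, and the second sets expected_replacements because the pattern occurs more than once):

multi_edit(file_path="/path/to/project/app.py", edits=[{"old_string": "from utils import get_cwd", "new_string": "from utils import get_current_working_directory"}, {"old_string": "get_cwd()", "new_string": "get_current_working_directory()", "expected_replacements": 3}])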
directory_tree

Get a recursive tree view of files and directories with customizable depth and filtering.

Returns a structured view of the directory tree with files and subdirectories. Directories are marked with trailing slashes. The output is formatted as an indented list for readability. By default, common development directories like .git, node_modules, and venv are noted but not traversed unless explicitly requested. Only works within allowed directories.
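
Example usage (an illustrative sketch; the depth parameter name is an assumption based on the description above):

directory_tree(path="/path/to/project", depth=3)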

grep

Fast content search tool that works with any codebase size. Searches file contents using regular expressions. Supports full regex syntax (e.g. "log.Error", "function\s+\w+", etc.). Filter files by pattern with the include parameter (e.g. "*.js", "*.{ts,tsx}"). Returns matching file paths sorted by modification time. Use this tool when you need to find files containing specific patterns. When you are doing an open-ended search that may require multiple rounds of globbing and grepping, use the Agent tool instead.
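
Example usage (an illustrative sketch; pattern and include follow the description above, while the path parameter is an assumption):

grep(pattern="function\s+\w+", include="*.{ts,tsx}", path="/path/to/project")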

content_replace

Replace a pattern in file contents across multiple files.

Searches for text patterns across all files in the specified directory that match the file pattern and replaces them with the specified text. Can be run in dry-run mode to preview changes without applying them. Only works within allowed directories.
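
Example usage (an illustrative sketch only; these parameter names are assumptions inferred from the description above, including the dry-run flag):

content_replace(pattern="getCwd", replacement="getCurrentWorkingDirectory", path="/path/to/project/src", file_pattern="*.py", dry_run=True)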

grep_ast

Search through source code files and see matching lines with useful AST (Abstract Syntax Tree) context. This tool helps you understand code structure by showing how matched lines fit into functions, classes, and other code blocks.

Unlike traditional search tools like search_content that only show matching lines, grep_ast leverages the AST to reveal the structural context around matches, making it easier to understand the code organization.

When to use this tool:

  1. When you need to understand where a pattern appears within larger code structures
  2. When searching for function or class definitions that match a pattern
  3. When you want to see not just the matching line but its surrounding context in the code
  4. When exploring unfamiliar codebases and need structural context
  5. When examining how a specific pattern is used across different parts of the codebase

This tool is superior to regular grep/search_content when you need to understand code structure, not just find text matches.

Example usage:

grep_ast(pattern="function_name", path="/path/to/file.py", ignore_case=False, line_number=True)
notebook_read

Reads a Jupyter notebook (.ipynb file) and returns all of the cells with their outputs. Jupyter notebooks are interactive documents that combine code, text, and visualizations, commonly used for data analysis and scientific computing. The notebook_path parameter must be an absolute path, not a relative path.

notebook_edit

Completely replaces the contents of a specific cell in a Jupyter notebook (.ipynb file) with new source. Jupyter notebooks are interactive documents that combine code, text, and visualizations, commonly used for data analysis and scientific computing. The notebook_path parameter must be an absolute path, not a relative path. The cell_number is 0-indexed. Use edit_mode=insert to add a new cell at the index specified by cell_number. Use edit_mode=delete to delete the cell at the index specified by cell_number.
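
Example usage (an illustrative sketch; notebook_path and cell_number follow the description above, while the name of the new-source parameter is an assumption):

notebook_edit(notebook_path="/path/to/analysis.ipynb", cell_number=2, new_source="df.describe()")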

run_command

Executes a given bash command in a shell with optional timeout, ensuring proper handling and security measures.

Before executing the command, please follow these steps:

  1. Directory Verification:
    • If the command will create new directories or files, first use the directory_tree tool to verify the parent directory exists and is the correct location
    • For example, before running "mkdir foo/bar", first use directory_tree to check that "foo" exists and is the intended parent directory
  2. Command Execution:
    • After ensuring proper quoting, execute the command.
    • Capture the output of the command.

Usage notes:

  • The command argument is required.
  • You can specify an optional timeout in milliseconds (up to 600000ms / 10 minutes). If not specified, commands will timeout after 120000ms (2 minutes).
  • It is very helpful if you write a clear, concise description of what this command does in 5-10 words.
  • If the output exceeds 30000 characters, output will be truncated before being returned to you.
  • VERY IMPORTANT: You MUST avoid using search commands like find and grep. Instead use Grep, Glob, or Task to search. You MUST avoid read tools like cat, head, tail, and ls, and use read and directory_tree to read files.
  • If you still need to run grep, STOP. ALWAYS USE ripgrep (rg, or /opt/homebrew/Cellar/ripgrep/14.1.1/bin/rg) first, which all Claude Code users have pre-installed.
  • When issuing multiple commands, use the ';' or '&&' operator to separate them. DO NOT use newlines (newlines are ok in quoted strings).<good-example> cd /foo/bar && pytest tests </good-example>
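
Example usage (an illustrative sketch; command and timeout follow the usage notes above, and the timeout value shown is just a sample):

run_command(command="cd /path/to/project && pytest tests", timeout=300000)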

Committing changes with git

When the user asks you to create a new git commit, follow these steps carefully:

  1. You have the capability to call multiple tools in a single response. When multiple independent pieces of information are requested, batch your tool calls together for optimal performance. ALWAYS run the following bash commands in parallel, each using the Bash tool:
    • Run a git status command to see all untracked files.
    • Run a git diff command to see both staged and unstaged changes that will be committed.
    • Run a git log command to see recent commit messages, so that you can follow this repository's commit message style.
  2. Analyze all staged changes (both previously staged and newly added) and draft a commit message. Wrap your analysis process in <commit_analysis> tags:
<commit_analysis>
  • List the files that have been changed or added
  • Summarize the nature of the changes (e.g. new feature, enhancement to an existing feature, bug fix, refactoring, test, docs, etc.)
  • Brainstorm the purpose or motivation behind these changes
  • Assess the impact of these changes on the overall project
  • Check for any sensitive information that shouldn't be committed
  • Draft a concise (1-2 sentences) commit message that focuses on the "why" rather than the "what"
  • Ensure your language is clear, concise, and to the point
  • Ensure the message accurately reflects the changes and their purpose (i.e. "add" means a wholly new feature, "update" means an enhancement to an existing feature, "fix" means a bug fix, etc.)
  • Ensure the message is not generic (avoid words like "Update" or "Fix" without context)
  • Review the draft message to ensure it accurately reflects the changes and their purpose
</commit_analysis>
  3. You have the capability to call multiple tools in a single response. When multiple independent pieces of information are requested, batch your tool calls together for optimal performance. ALWAYS run the following commands in parallel:
    • Add relevant untracked files to the staging area.
    • Create the commit with a message ending with: 🤖 Generated with Claude Code

    Co-Authored-By: Claude <noreply@anthropic.com>

    • Run git status to make sure the commit succeeded.
  4. If the commit fails due to pre-commit hook changes, retry the commit ONCE to include these automated changes. If it fails again, it usually means a pre-commit hook is preventing the commit. If the commit succeeds but you notice that files were modified by the pre-commit hook, you MUST amend your commit to include them.

Important notes:

  • Use the git context at the start of this conversation to determine which files are relevant to your commit. Be careful not to stage and commit files (e.g. with git add .) that aren't relevant to your commit.
  • NEVER update the git config
  • DO NOT run additional commands to read or explore code, beyond what is available in the git context
  • DO NOT push to the remote repository
  • IMPORTANT: Never use git commands with the -i flag (like git rebase -i or git add -i) since they require interactive input which is not supported.
  • If there are no changes to commit (i.e., no untracked files and no modifications), do not create an empty commit
  • Ensure your commit message is meaningful and concise. It should explain the purpose of the changes, not just describe them.
  • Return an empty response - the user will see the git output directly
  • In order to ensure good formatting, ALWAYS pass the commit message via a HEREDOC, a la this example:
<example>
git commit -m "$(cat <<'EOF'
Commit message here.

🤖 Generated with MCP Claude Code
EOF
)"
</example>

Creating pull requests

Use the gh command via the Bash tool for ALL GitHub-related tasks including working with issues, pull requests, checks, and releases. If given a GitHub URL, use the gh command to get the information needed.

IMPORTANT: When the user asks you to create a pull request, follow these steps carefully:

  1. You have the capability to call multiple tools in a single response. When multiple independent pieces of information are requested, batch your tool calls together for optimal performance. ALWAYS run the following bash commands in parallel using the Bash tool, in order to understand the current state of the branch since it diverged from the main branch:
    • Run a git status command to see all untracked files
    • Run a git diff command to see both staged and unstaged changes that will be committed
    • Check if the current branch tracks a remote branch and is up to date with the remote, so you know if you need to push to the remote
    • Run a git log command and git diff main...HEAD to understand the full commit history for the current branch (from the time it diverged from the main branch)
  2. Analyze all changes that will be included in the pull request, making sure to look at all relevant commits (NOT just the latest commit, but ALL commits that will be included in the pull request!!!), and draft a pull request summary. Wrap your analysis process in <pr_analysis> tags:
<pr_analysis>
  • List the commits since diverging from the main branch
  • Summarize the nature of the changes (e.g. new feature, enhancement to an existing feature, bug fix, refactoring, test, docs, etc.)
  • Brainstorm the purpose or motivation behind these changes
  • Assess the impact of these changes on the overall project
  • Do not use tools to explore code, beyond what is available in the git context
  • Check for any sensitive information that shouldn't be committed
  • Draft a concise (1-2 bullet points) pull request summary that focuses on the "why" rather than the "what"
  • Ensure the summary accurately reflects all changes since diverging from the main branch
  • Ensure your language is clear, concise, and to the point
  • Ensure the summary accurately reflects the changes and their purpose (i.e. "add" means a wholly new feature, "update" means an enhancement to an existing feature, "fix" means a bug fix, etc.)
  • Ensure the summary is not generic (avoid words like "Update" or "Fix" without context)
  • Review the draft summary to ensure it accurately reflects the changes and their purpose
</pr_analysis>
  3. You have the capability to call multiple tools in a single response. When multiple independent pieces of information are requested, batch your tool calls together for optimal performance. ALWAYS run the following commands in parallel:
    • Create new branch if needed
    • Push to remote with -u flag if needed
    • Create PR using gh pr create with the format below. Use a HEREDOC to pass the body to ensure correct formatting.
<example>
gh pr create --title "the pr title" --body "$(cat <<'EOF'
## Summary
<1-3 bullet points>

## Test plan
[Checklist of TODOs for testing the pull request...]

🤖 Generated with Claude Code
EOF
)"
</example>

Important:

  • NEVER update the git config
  • Return the PR URL when you're done, so the user can see it

Other common operations

  • View comments on a GitHub PR: gh api repos/foo/bar/pulls/123/comments
todo_read

Use this tool to read the current to-do list for the session. This tool should be used proactively and frequently to ensure that you are aware of the status of the current task list. You should make use of this tool as often as possible, especially in the following situations:

  • At the beginning of conversations to see what's pending
  • Before starting new tasks to prioritize work
  • When the user asks about previous tasks or plans
  • Whenever you're uncertain about what to do next
  • After completing tasks to update your understanding of remaining work
  • After every few messages to ensure you're on track

Usage:

  • This tool requires a session_id parameter to identify the Claude Desktop conversation
  • Returns a list of todo items with their status, priority, and content
  • Use this information to track progress and plan next steps
  • If no todos exist yet for the session, an empty list will be returned
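
Example usage (an illustrative sketch; the session_id value is a placeholder):

todo_read(session_id="example-session-id")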
todo_write

Use this tool to create and manage a structured task list for your current coding session. This helps you track progress, organize complex tasks, and demonstrate thoroughness to the user. It also helps the user understand the progress of the task and overall progress of their requests.

When to Use This Tool

Use this tool proactively in these scenarios:

  1. Complex multi-step tasks - When a task requires 3 or more distinct steps or actions
  2. Non-trivial and complex tasks - Tasks that require careful planning or multiple operations
  3. User explicitly requests todo list - When the user directly asks you to use the todo list
  4. User provides multiple tasks - When users provide a list of things to be done (numbered or comma-separated)
  5. After receiving new instructions - Immediately capture user requirements as todos. Feel free to edit the todo list based on new information.
  6. After completing a task - Mark it complete and add any new follow-up tasks
  7. When you start working on a new task, mark the todo as in_progress. Ideally you should only have one todo as in_progress at a time. Complete existing tasks before starting new ones.

When NOT to Use This Tool

Skip using this tool when:

  1. There is only a single, straightforward task
  2. The task is trivial and tracking it provides no organizational benefit
  3. The task can be completed in less than 3 trivial steps
  4. The task is purely conversational or informational

NOTE that you should not use this tool if there is only one trivial task to do. In this case you are better off just doing the task directly.

Examples of When to Use the Todo List

<example>
User: I want to add a dark mode toggle to the application settings. Make sure you run the tests and build when you're done!
Assistant: I'll help add a dark mode toggle to your application settings. Let me create a todo list to track this implementation.
*Creates todo list with the following items:*
1. Create dark mode toggle component in Settings page
2. Add dark mode state management (context/store)
3. Implement CSS-in-JS styles for dark theme
4. Update existing components to support theme switching
5. Run tests and build process, addressing any failures or errors that occur
*Begins working on the first task*
<reasoning>
The assistant used the todo list because:
1. Adding dark mode is a multi-step feature requiring UI, state management, and styling changes
2. The user explicitly requested tests and build be run afterward
3. The assistant inferred that tests and build need to pass by adding "Ensure tests and build succeed" as the final task
</reasoning>
</example>

<example>
User: Help me rename the function getCwd to getCurrentWorkingDirectory across my project
Assistant: Let me first search through your codebase to find all occurrences of 'getCwd'.
*Uses grep or search tools to locate all instances of getCwd in the codebase*
Assistant: I've found 15 instances of 'getCwd' across 8 different files. Let me create a todo list to track these changes.
*Creates todo list with specific items for each file that needs updating*
<reasoning>
The assistant used the todo list because:
1. First, the assistant searched to understand the scope of the task
2. Upon finding multiple occurrences across different files, it determined this was a complex task with multiple steps
3. The todo list helps ensure every instance is tracked and updated systematically
4. This approach prevents missing any occurrences and maintains code consistency
</reasoning>
</example>

<example>
User: I need to implement these features for my e-commerce site: user registration, product catalog, shopping cart, and checkout flow.
Assistant: I'll help implement these features. First, let's add all the features to the todo list.
*Creates a todo list breaking down each feature into specific tasks based on the project architecture*
Assistant: Let's start with user registration. This will involve creating database models, API endpoints, and frontend forms that integrate with your existing authentication system.
<reasoning>
The assistant used the todo list because:
1. The user provided multiple complex features to implement in a comma separated list
2. The todo list helps organize these large features into manageable tasks
3. This approach allows for tracking progress across the entire implementation
</reasoning>
</example>

<example>
User: Can you help optimize my React application? It's rendering slowly and has performance issues.
Assistant: I'll help optimize your React application. First, let me examine your codebase to identify potential performance bottlenecks.
*Reviews component structure, render patterns, state management, and data fetching*
Assistant: After analyzing your codebase, I've identified several performance issues. Let me create a todo list to track our optimization efforts.
*Creates todo list with items like: 1) Implement memoization for expensive calculations in ProductList, 2) Add virtualization for long lists in Dashboard, 3) Optimize image loading in Gallery component, 4) Fix state update loops in ShoppingCart, 5) Review bundle size and implement code splitting*
Let's start by implementing memoization for the expensive calculations in your ProductList component.
<reasoning>
The assistant used the todo list because:
1. First, the assistant examined the codebase to identify specific performance issues
2. Based on this analysis, it identified multiple optimization opportunities
3. Performance optimization is a non-trivial task requiring multiple steps
4. The todo list helps methodically track improvements across different components
5. This systematic approach ensures all performance bottlenecks are addressed
</reasoning>
</example>

Examples of When NOT to Use the Todo List

<example>
User: How do I print 'Hello World' in Python?
Assistant: In Python, you can print "Hello World" with this simple code:

print("Hello World")

This will output the text "Hello World" to the console when executed.
<reasoning>
The assistant did not use the todo list because this is a single, trivial task that can be completed in one step. There's no need to track multiple tasks or steps for such a straightforward request.
</reasoning>
</example>

<example>
User: What does the git status command do?
Assistant: The git status command shows the current state of your working directory and staging area. It displays which changes have been staged, which haven't, and which files aren't being tracked by Git. This is a fundamental command for understanding the current state of your repository before making commits.
<reasoning>
The assistant did not use the todo list because this is an informational request with no actual coding task to complete. The user is simply asking for an explanation, not for the assistant to perform multiple steps or tasks.
</reasoning>
</example>

<example>
User: Can you add a comment to the calculateTotal function to explain what it does?
Assistant: Sure, let me add a comment to the calculateTotal function to explain what it does.
*Uses the Edit tool to add a comment to the calculateTotal function*
<reasoning>
The assistant did not use the todo list because this is a single, straightforward task confined to one location in the code. Adding a comment doesn't require tracking multiple steps or systematic organization.
</reasoning>
</example>

<example>
User: Run npm install for me and tell me what happens.
Assistant: I'll run the npm install command for you.

Executes: npm install

The command completed successfully. Here's the output: [Output of npm install command]

All dependencies have been installed according to your package.json file.

<reasoning>
The assistant did not use the todo list because this is a single command execution with immediate results. There are no multiple steps to track or organize, making the todo list unnecessary for this straightforward task.
</reasoning>
</example>

Task States and Management

  1. Task States: Use these states to track progress:
    • pending: Task not yet started
    • in_progress: Currently working on (limit to ONE task at a time)
    • completed: Task finished successfully
    • cancelled: Task no longer needed
  2. Task Management:
    • Update task status in real-time as you work
    • Mark tasks complete IMMEDIATELY after finishing (don't batch completions)
    • Only have ONE task in_progress at any time
    • Complete current tasks before starting new ones
    • Cancel tasks that become irrelevant
  3. Task Breakdown:
    • Create specific, actionable items
    • Break complex tasks into smaller, manageable steps
    • Use clear, descriptive task names

When in doubt, use this tool. Being proactive with task management demonstrates attentiveness and ensures you complete all requirements successfully.
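
Example usage (an illustrative sketch only; the shape of the todos parameter is an assumption inferred from the status, priority, and content fields mentioned under todo_read, and session_id is a placeholder):

todo_write(session_id="example-session-id", todos=[{"content": "Create dark mode toggle component in Settings page", "status": "in_progress", "priority": "high"}, {"content": "Run tests and build process", "status": "pending", "priority": "medium"}])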

think

Use the tool to think about something. It will not obtain new information or make any changes to the repository, but just log the thought. Use it when complex reasoning or brainstorming is needed. Ensure thinking content is concise and accurate, without needing to include code details

Common use cases:

  1. When exploring a repository and discovering the source of a bug, call this tool to brainstorm several unique ways of fixing the bug, and assess which change(s) are likely to be simplest and most effective
  2. After receiving test results, use this tool to brainstorm ways to fix failing tests
  3. When planning a complex refactoring, use this tool to outline different approaches and their tradeoffs
  4. When designing a new feature, use this tool to think through architecture decisions and implementation details
  5. When debugging a complex issue, use this tool to organize your thoughts and hypotheses
  6. When considering changes to the plan or shifts in thinking that the user has not previously mentioned, consider whether it is necessary to confirm with the user.
<think_example>
Feature Implementation Planning
- New code search feature requirements:
  * Search for code patterns across multiple files
  * Identify function usages and references
  * Analyze import relationships
  * Generate summary of matching patterns
- Implementation considerations:
  * Need to leverage existing search mechanisms
  * Should use regex for pattern matching
  * Results need consistent format with other search methods
  * Must handle large codebases efficiently
- Design approach:
  1. Create new CodeSearcher class that follows existing search patterns
  2. Implement core pattern matching algorithm
  3. Add result formatting methods
  4. Integrate with file traversal system
  5. Add caching for performance optimization
- Testing strategy:
  * Unit tests for search accuracy
  * Integration tests with existing components
  * Performance tests with large codebases
</think_example>
batch

Batch execution tool that runs multiple tool invocations in a single request.

Tools are executed in parallel when possible, and otherwise serially. Takes a list of tool invocations (tool_name and input pairs). Returns the collected results from all invocations. Use this tool when you need to run multiple independent tool operations at once -- it is awesome for speeding up your workflow, reducing both context usage and latency. Each tool will respect its own permissions and validation rules. The tool's outputs are NOT shown to the user; to answer the user's query, you MUST send a message with the results after the tool call completes, otherwise the user will not see the results.

<batch_example>
When dispatching multiple agents to find necessary information:
batch(
  description="Update import statements across modules",
  invocations=[
    {tool_name: "dispatch_agent", input: {prompt: "Search for all instances of 'logger' configuration in /app/config directory"}},
    {tool_name: "dispatch_agent", input: {prompt: "Find all test files that reference 'UserService' in /app/tests"}},
  ]
)
</batch_example>

Common scenarios for effective batching:

  1. Reading multiple related files in one operation
  2. Performing a series of simple mechanical changes
  3. Running multiple diagnostic commands
  4. Dispatch multiple agents to complete the task

To make a batch call, provide the following:

  1. description: A short (3-5 word) description of the batch operation
  2. invocations: A list of invocations of the form [{"tool_name": "...", "input": "..."}], where tool_name is the name of the tool to invoke and input is the input to pass to that tool

Available tools in a batch call: dispatch_agent, read, directory_tree, grep, grep_ast, run_command, notebook_read. Not available: think, write, edit, multi_edit, notebook_edit.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/SDGLBL/mcp-claude-code'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.