Glama

Server Configuration

Describes the environment variables required to run the server.

No environment variables are required.

Capabilities

Features and capabilities supported by this server

Capability | Details
tools | {}
prompts | {}
resources | {}

Tools

Functions exposed to the LLM to take actions

create_browser_session: Create a new browser session with intelligent auto-close and session management
navigate_and_scrape: Navigate to a URL and optionally scrape content in one operation. Auto-creates a session if needed.
interact_with_page: Perform multiple interactions with a page: click, type, hover, select, screenshot, wait, scroll
manage_browser_sessions: Manage browser sessions: list, close, clean up idle sessions, get status
navigate_to_url: [LEGACY] Navigate to a URL in an existing browser session. Use navigate_and_scrape instead.
scrape_content: [LEGACY] Scrape content from the current page. Use navigate_and_scrape instead.
take_screenshot: [LEGACY] Take a screenshot of the current page. Use interact_with_page instead.
execute_browser_script: [LEGACY] Execute JavaScript in the browser context. Use interact_with_page instead.
interact_with_element: [LEGACY] Interact with a page element. Use interact_with_page instead.
close_browser_session: [LEGACY] Close a browser session. Use manage_browser_sessions instead.
list_browser_sessions: [LEGACY] List all browser sessions. Use manage_browser_sessions instead.
analyze_dom_structure: AI-guided exploration and analysis of DOM structure using goal-oriented patterns. Analyzes stored DOM JSON to identify interactive elements, content areas, and navigation patterns.
navigate_dom_path: Navigate to specific elements in DOM JSON using dot-notation paths (e.g., 'body.main.article[0].paragraphs[2]'). Extracts content and provides element information.
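A dot-notation path like the one navigate_dom_path accepts can be resolved against nested JSON with a small helper. The sketch below is illustrative only: the helper name and the sample DOM shape are assumptions, not the server's actual implementation.

```python
import re

def resolve_path(dom: dict, path: str):
    """Resolve a dot-notation path like 'body.main.article[0].paragraphs[2]'
    against a nested dict/list structure (illustrative sketch only)."""
    node = dom
    for part in path.split("."):
        # Split 'article[0]' into the key 'article' and its index list ['0']
        m = re.match(r"^(\w+)((?:\[\d+\])*)$", part)
        if not m:
            raise ValueError(f"bad path segment: {part!r}")
        key, indexes = m.group(1), re.findall(r"\[(\d+)\]", m.group(2))
        node = node[key]
        for i in indexes:
            node = node[int(i)]
    return node

# Hypothetical stored DOM JSON, for demonstration:
dom = {"body": {"main": {"article": [{"paragraphs": ["p0", "p1", "p2"]}]}}}
resolve_path(dom, "body.main.article[0].paragraphs[2]")  # → "p2"
```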

search_dom_elements: Search for DOM elements by type, content, keywords, or attributes. Returns matching elements with their paths for further navigation.
get_page_screenshot: Retrieve the stored screenshot for a page. Returns a file path or base64-encoded image data for AI visual analysis.
analyze_screenshot: AI-powered analysis of page screenshots with custom prompts. Can focus on specific regions and provide contextual insights.
scrape_documentation: Scrape documentation from a website using intelligent sub-agents. Jobs are queued and processed automatically by the background worker. Supports plain string selectors for content extraction.
get_scraping_status: Get the status of active and recent scraping jobs (the worker runs automatically)
cancel_scrape_job: Cancel an active or pending scraping job
force_unlock_job: Force-unlock a stuck scraping job; useful for debugging and recovery
force_unlock_stuck_jobs: Force-unlock all stuck scraping jobs (jobs that haven't been updated recently)
list_documentation_sources: List all configured documentation sources
delete_pages_by_pattern: Delete website pages matching URL patterns (useful for cleaning up version URLs and static assets)
delete_pages_by_ids: Delete specific pages by their IDs
delete_all_website_pages: Delete all pages for a website (useful for a clean slate before re-scraping)
analyze_project_structure: Analyze project structure and generate a comprehensive overview
generate_project_summary: Generate an AI-optimized project overview and analysis
analyze_file_symbols: Extract and analyze symbols (functions, classes, etc.) from code files
list_files: List files in a directory with smart ignore patterns
find_files: Search for files by pattern with optional content matching
easy_replace: Fuzzy string replacement in files with smart matching
cleanup_orphaned_projects: Clean up orphaned or unused project directories
store_knowledge_memory: Store a knowledge-graph memory with entity creation
create_knowledge_relationship: Create a relationship between two entities in the knowledge graph
search_knowledge_graph: Search the knowledge graph using semantic or basic search
find_related_entities: Find related entities through relationship traversal
update_file_analysis: Update or create analysis data for a specific file in the TreeSummary system
remove_file_analysis: Remove analysis data for a deleted file from the TreeSummary system
update_project_metadata: Update project metadata in the TreeSummary system
get_project_overview: Get a comprehensive project overview from TreeSummary analysis
cleanup_stale_analyses: Clean up stale analysis files older than a specified number of days
join_room: Join a communication room for coordination
send_message: Send a message to a coordination room
wait_for_messages: Wait for messages in a room
close_room: Close a communication room (soft delete: marks the room as closed but keeps its data)
delete_room: Permanently delete a communication room and all its messages
list_rooms: List communication rooms with filtering and pagination
list_room_messages: List messages from a specific room with pagination
create_delayed_room: Create a delayed room for coordination when agents realize they need it
analyze_coordination_patterns: Analyze coordination patterns and suggest improvements
broadcast_message_to_agents: Broadcast a message to multiple agents with auto-resume functionality
create_execution_plan: Create a high-level execution plan that generates coordinated Tasks for implementation
get_execution_plan: Get an execution plan with progress derived from linked Tasks
execute_with_plan: Execute a plan by creating Tasks and spawning coordinated agents
list_execution_plans: List execution plans for discovery and monitoring
delete_execution_plan: Delete an execution plan by ID
update_execution_plan: Update an execution plan's status, priority, title, description, objectives, acceptanceCriteria, constraints, sections array, or metadata
orchestrate_objective: Spawn an architect agent to coordinate multi-agent objective completion
orchestrate_objective_structured: Execute structured, phased orchestration with intelligent model selection (Research → Plan → Execute → Monitor → Cleanup)
spawn_agent: Spawn a fully autonomous Claude agent with complete tool access
create_task: Create and assign a task to agents with enhanced capabilities
list_agents: Get a list of active agents
terminate_agent: Terminate one or more agents
monitor_agents: Monitor agents with real-time updates using the EventBus system
continue_agent_session: Continue an agent session using a stored conversation session ID, with additional instructions
cleanup_stale_agents: Clean up stale agents with enhanced options and optional room cleanup
cleanup_stale_rooms: Clean up stale rooms based on activity and participant criteria
run_comprehensive_cleanup: Run comprehensive cleanup for both agents and rooms with detailed reporting
get_cleanup_configuration: Get the current cleanup configuration and settings for agents and rooms
create_execution_plan: Create a comprehensive execution plan using sequential thinking before spawning agents
get_execution_plan: Retrieve a previously created execution plan
execute_with_plan: Execute an objective using a pre-created execution plan with well-defined agent tasks
report_progress: Report progress updates for agent tasks and status changes

Prompts

Interactive templates invoked by user choice

multi-agent-bug-fix: Coordinate multiple agents to investigate and fix a bug with comprehensive testing
issue-fixer: Single-agent focused problem solver for specific issues
code-review: Comprehensive code review with security, performance, and quality analysis
feature-implementer: Systematic feature implementation with planning and testing
documentation-writer: Create comprehensive documentation for code, APIs, and features
performance-optimizer: Analyze and optimize code performance with benchmarking
semantic-search: Perform semantic search across documentation using vector embeddings
vector-db-setup: Set up and configure a ChromaDB vector database for semantic search
api-documenter: Generate comprehensive API documentation with examples and schemas
db-migration: Plan and execute database schema migrations safely
sequential-thinking-architect: Use sequential thinking for complex objective decomposition and planning
sequential-thinking-problem-solver: Use sequential thinking for complex problem analysis and solution development
sequential-thinking-analysis: Use sequential thinking for step-by-step analysis of complex topics
sequential-thinking-decision: Use sequential thinking for complex decision-making processes
architect-orchestration: Architect agent for complex multi-agent orchestration with sequential thinking
knowledge-graph-integration: Integrate knowledge-graph memory for better context and decision making
task-breakdown-specialist: Specialized agent for hierarchical task breakdown and planning
architect-planning-template: Template for architect agents to use sequential thinking for planning

Resources

Contextual data attached and managed by the client

Agent List: List of all active agents with their status and metadata (use ?limit=50&cursor=token&status=active&type=backend)
Communication Rooms: List of all active communication rooms (use ?limit=50&cursor=token&search=text)
Room Messages: Recent messages from communication rooms (use ?room=name&limit=50)
Scraper Jobs: List of web scraping jobs and their status (use ?limit=50&cursor=token&status=active&search=text)
Documentation Sources: List of scraped documentation sources (use ?limit=50&cursor=token&sourceType=api&search=text)
Documentation Websites: List of all scraped websites (use ?limit=50&cursor=token&search=text)
Website Pages: List of pages for a specific website (use docs://{websiteId}/pages?limit=50&cursor=token&search=text)
Agent Insights: Aggregated insights and learnings from agents (use ?limit=100&cursor=token&memoryType=insight&agentId=id&search=text)
Vector Collections: List of LanceDB vector collections and their statistics (use ?limit=50&cursor=token&search=text)
Vector Search: Semantic search across vector collections (use ?query=text&collection=name&limit=10)
Vector Database Status: ChromaDB connection status and health information
Documentation Search: Search documentation content (use ?query=text&source_id=id&limit=10)
Logs Directory: List directories and files in ~/.mcptools/logs/
Log Files: List files in a specific log directory (use logs://{dirname}/files)
Log File Content: Read the content of a specific log file (use logs://{dirname}/content?file=filename)
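The resource URIs above share a common query-parameter convention (limit, cursor, search, and so on). A query string like those can be composed with the standard library; the helper name and the websiteId value "abc123" below are illustrative assumptions.

```python
from urllib.parse import urlencode

def resource_uri(base: str, **params) -> str:
    """Compose a resource URI with query parameters (illustrative sketch;
    parameter names follow the listing above)."""
    query = urlencode({k: v for k, v in params.items() if v is not None})
    return f"{base}?{query}" if query else base

uri = resource_uri("docs://abc123/pages", limit=50, search="install")
# → "docs://abc123/pages?limit=50&search=install"
```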

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ZachHandley/ZMCPTools'
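The same request can be made from Python with the standard library. This is a minimal sketch of the documented endpoint; the response schema is not shown here.

```python
import urllib.request

BASE = "https://glama.ai/api/mcp/v1/servers"

def server_request(server_id: str) -> urllib.request.Request:
    # Build (but do not send) a GET request for a server's directory entry.
    return urllib.request.Request(f"{BASE}/{server_id}", method="GET")

req = server_request("ZachHandley/ZMCPTools")
# urllib.request.urlopen(req) would fetch the JSON entry.
```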

If you have feedback or need assistance with the MCP directory API, please join our Discord server.