
Server Configuration

Describes the environment variables required to run the server.

No environment variables are required.

Capabilities

Features and capabilities supported by this server

Capability      Details
tools           { "listChanged": false }
prompts         { "listChanged": false }
resources       { "subscribe": false, "listChanged": false }
experimental    { }

Tools

Functions exposed to the LLM to take actions

search_documents

Hybrid (semantic + keyword) search across indexed workspace documents (PRDs, decision logs, session logs, etc.). Call this tool first when the user asks about project-related content.

Results can be filtered by project ID or doc_type (prd, session_log, decision_log, document).
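The hybrid ranking described above can be sketched as a weighted blend of a keyword score and a semantic (vector-similarity) score. The function names, weights, and scoring formulas here are illustrative assumptions, not Tessera's actual implementation:

```python
import math

def keyword_score(query: str, doc: str) -> float:
    """Fraction of query terms present in the document (a toy stand-in for BM25)."""
    q_terms = query.lower().split()
    d_terms = set(doc.lower().split())
    return sum(t in d_terms for t in q_terms) / len(q_terms)

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_score(query: str, doc: str,
                 q_vec: list[float], d_vec: list[float],
                 alpha: float = 0.5) -> float:
    """Blend semantic similarity and keyword overlap; alpha=1.0 is purely semantic."""
    return alpha * cosine(q_vec, d_vec) + (1 - alpha) * keyword_score(query, doc)
```

In a real deployment the embedding vectors would come from an embedding model and the keyword side from a proper inverted index; the blend itself works the same way.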

list_sources

List all indexed source files.

read_file

Read file contents by absolute path.

organize_files

Organize files in the workspace. The action parameter accepts 'move', 'archive', 'rename', or 'list'. Always call suggest_cleanup first and get user confirmation before organizing.

suggest_cleanup

Generate cleanup suggestions for the workspace. Detects root-level files, backup files, large files, empty directories. Call this tool first when cleanup is requested.
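The four detection categories above can be sketched as a pure function over a workspace listing. The suffix list, size threshold, and suggestion labels are assumptions for illustration, not Tessera's actual rules:

```python
BACKUP_SUFFIXES = (".bak", ".old", "~")  # assumed backup-file conventions

def suggest_cleanup(files, empty_dirs=(), large_bytes=10 * 1024 * 1024):
    """Flag likely cleanup candidates.

    files:      iterable of (relative_path, size_bytes) tuples.
    empty_dirs: iterable of directories known to contain nothing.
    Returns a list of (kind, path) suggestions.
    """
    suggestions = [("empty_dir", d) for d in empty_dirs]
    for path, size in files:
        if path.endswith(BACKUP_SUFFIXES):
            suggestions.append(("backup_file", path))
        elif size >= large_bytes:
            suggestions.append(("large_file", path))
        elif "/" not in path:  # no separator => sits at the workspace root
            suggestions.append(("root_level_file", path))
    return suggestions
```

Keeping detection pure (listing in, suggestions out) makes the confirm-before-organize flow easy: the suggestions are shown to the user, and only confirmed ones are passed on to organize_files.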

project_status

Get project status including HANDOFF.md, recent changes, and file statistics. Call automatically when asked about project status.

extract_decisions

Extract decisions from session logs and decision logs. Call automatically when asked about past decisions.

audit_prd

Audit a PRD file for quality and completeness against a 13-section structure. Checks section coverage, Mermaid syntax, wireframes, versioning, and changelog.

check_sprawl=True: detect multiple versions of the same PRD and suggest archiving old ones.
check_consistency=True: check cross-PRD consistency for period selectors and tiers.

remember

Save a piece of knowledge for cross-session persistence. Use this when the user says 'remember this' or when important decisions, preferences, or facts should be preserved across conversations.

recall

Search past memories from previous sessions. Call this when the user asks 'what did I say about...', 'do you remember...', or references past conversations.

learn

Auto-learn: save new knowledge and immediately index it for search. Use this to capture insights, patterns, or facts discovered during conversation.
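The remember/recall/learn trio above amounts to an append-and-search memory store. A minimal in-memory sketch (the class name, entry shape, and substring matching are assumptions; Tessera's persistence and search are not documented here):

```python
class MemoryStore:
    """Minimal cross-session memory: append entries, search by substring."""

    def __init__(self):
        self.entries = []

    def remember(self, text: str, tags=()):
        """Save a piece of knowledge with optional tags."""
        self.entries.append({"text": text, "tags": list(tags)})

    def recall(self, query: str):
        """Return every entry whose text or tags contain the query (case-insensitive)."""
        q = query.lower()
        return [e for e in self.entries
                if q in e["text"].lower() or q in " ".join(e["tags"]).lower()]
```

A learn-style tool would call remember and then push the same entry into the search index so it is immediately retrievable, rather than waiting for the next full sync.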

knowledge_graph

Build a knowledge graph from indexed documents showing relationships between concepts, decisions, and entities. Returns a Mermaid diagram of the document relationships.

scope: 'project' (single project) or 'all' (entire workspace).
max_nodes: limit the number of nodes in the graph (default 30).
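Emitting a Mermaid diagram from extracted relationships is mostly string assembly. A sketch under the assumption that relationships arrive as (source, relation, target) triples, with the max_nodes cap applied before each edge is drawn:

```python
def to_mermaid(edges, max_nodes=30):
    """Render (source, relation, target) triples as a Mermaid flowchart.

    Stops adding edges once drawing the next one would exceed max_nodes.
    """
    lines = ["graph TD"]
    nodes = set()
    for src, rel, dst in edges:
        if len(nodes | {src, dst}) > max_nodes:
            break
        nodes.update((src, dst))
        lines.append(f"    {src} -->|{rel}| {dst}")
    return "\n".join(lines)
```

Node identifiers would need sanitizing in practice (Mermaid is picky about spaces and punctuation in bare node IDs); the capping logic is the part this sketch is meant to show.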

explore_connections

Show connections for a specific document or concept in the knowledge graph. Returns related documents, shared topics, and a focused Mermaid subgraph.

ingest_documents

Index (or re-index) all workspace documents into the vector store. Run this when setting up Tessera for the first time, or when you want to rebuild the entire index from scratch. Optionally pass specific directory paths to index only those.

sync_documents

Incrementally sync the index with your workspace. Only processes new, changed, or deleted files since the last sync. Much faster than full ingestion. Run this when you've updated some documents.
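Incremental sync boils down to diffing a stored snapshot of content hashes against the current workspace. A sketch of that diff, assuming the index persists a {path: sha256} map between runs (the snapshot shape is an assumption, not Tessera's format):

```python
import hashlib

def diff_index(previous, current):
    """Report what needs re-indexing since the last sync.

    previous: {path: content_hash} saved at the last sync.
    current:  {path: content} as read from the workspace now.
    Returns (added, changed, deleted, new_snapshot).
    """
    hashes = {p: hashlib.sha256(c.encode()).hexdigest() for p, c in current.items()}
    added = [p for p in hashes if p not in previous]
    changed = [p for p in hashes if p in previous and previous[p] != hashes[p]]
    deleted = [p for p in previous if p not in hashes]
    return added, changed, deleted, hashes
```

Only the added and changed paths get re-embedded, and deleted paths are dropped from the vector store, which is why this is much cheaper than a full re-ingest.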

Prompts

Interactive templates invoked by user choice

No prompts.

Resources

Contextual data attached and managed by the client

document_index: Provides a browsable index of all indexed documents.
workspace_status: Provides current workspace status across all projects.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/besslframework-stack/project-tessera'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.