Trailmark MCP Server
Trailmark MCP Server is a standalone MCP wrapper around .
While I understand ToB's usage with , my use case requires an MCP server that can analyze and serve multiple graphs. The server can scan multiple repositories, and the LLM can request information from each separately.
Mostly created with OpenAI GPT-5.5 via GitHub Copilot in VS Code. Point your LLM to the `ai-docs` directory for documentation and development support.
Requirements
Python 3.12+
uv
Project metadata:
- package name: `trailmark-mcp`
- CLI command: `trailmark-mcp`
Install
Install runtime and development dependencies:
```
uv sync --group dev
```
Quick Start
Start server over stdio:
```
uv run trailmark-mcp serve --transport stdio
```
Smoke-test the direct scan path without an MCP client:
```
uv run trailmark-mcp scan /path/to/repo
```
Skip preanalysis during a scan when needed:
```
uv run trailmark-mcp scan /path/to/repo --skip-preanalysis
```
How The Server Works
Primary lifecycle entrypoint is open_repository(...).
Behavior summary:
- if no snapshot exists, the server scans source, optionally runs preanalysis, and saves the first snapshot
- if a snapshot exists and `rescan=False`, the server reloads the latest snapshot into a live session
- if `rescan=True`, the server rebuilds from source and saves a fresh snapshot
This means the common flow is:
1. call `open_repository`
2. use graph tools against the returned session
3. call `save_snapshot` after meaningful in-memory mutations when you want persistence
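The branching described above can be sketched in plain Python. Everything here is illustrative: the `Session` dataclass, the `snapshots` dict standing in for the on-disk snapshot store, and the fake scan are hypothetical stand-ins for the real Trailmark-backed runtime.

```python
from dataclasses import dataclass
from itertools import count

_session_ids = count(1)

@dataclass
class Session:
    session_id: int
    graph: dict
    source: str  # "scan" or "snapshot": where the graph came from

def open_repository(path, snapshots, rescan=False):
    """Sketch of the open_repository lifecycle (hypothetical helpers).

    `snapshots` stands in for the on-disk store: repo path -> latest graph.
    """
    if path not in snapshots or rescan:
        graph = {"repo": path, "nodes": []}   # stand-in for a real source scan
        snapshots[path] = graph               # save the first/fresh snapshot
        return Session(next(_session_ids), graph, "scan")
    # snapshot exists and rescan=False: reload it into a live session
    return Session(next(_session_ids), snapshots[path], "snapshot")
```

The first call scans and saves; a repeat call reloads the snapshot; `rescan=True` forces a rebuild. Each call hands back a fresh session id.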
Session Model
session_id is MCP wrapper state, not Trailmark core state.
Current semantics:
- each `open_repository(...)` call creates a new session id
- multiple live sessions can coexist
- tools accept `session_id` to target a specific graph
- an omitted `session_id` uses the most recently opened still-open session
- closing the default session promotes the most recently opened remaining session
Use `current_repository(session_id=...)` to verify which repository a session points to.
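A minimal sketch of these default-session semantics, assuming a hypothetical `SessionRegistry` class (the real session tracking lives in `src/trailmark_mcp/services/registry.py` and will differ in detail):

```python
class SessionRegistry:
    """Illustrative model of default-session promotion (hypothetical name)."""

    def __init__(self):
        self._open = []   # open session ids, oldest first
        self._next = 1

    def open_session(self):
        sid = self._next
        self._next += 1
        self._open.append(sid)
        return sid

    @property
    def default(self):
        # an omitted session_id resolves to the most recently opened session
        return self._open[-1] if self._open else None

    def close(self, sid):
        # closing the default naturally promotes the most recently
        # opened remaining session, since default is just the list tail
        self._open.remove(sid)
```

Opening two sessions makes the second the default; closing it promotes the first.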
Public MCP Tools
Lifecycle:
`open_repository`, `current_repository`, `close_repository`, `save_snapshot`
Navigation:
`graph_summary`, `diff_graphs`, `search_nodes`, `callers_of`, `callees_of`, `ancestors_of`, `reachable_from`, `paths_between`, `entrypoint_paths_to`, `attack_surface`, `complexity_hotspots`, `functions_that_raise`
Context and mutation:
`subgraph`, `annotations_of`, `findings`, `nodes_with_annotation`, `run_preanalysis`, `annotate_node`, `clear_annotations`, `augment_findings`
Notes:
- `diff_graphs(before_session_id, after_session_id)` treats `after` as the new state
- `search_nodes` supports `contains`, `exact`, and `suffix` match modes
- removed helper surfaces like `scan_repository` and `tool_manifest` are intentionally no longer part of the public runtime
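The three `search_nodes` match modes can be illustrated with a small stand-in matcher. Both `match` and the sample `nodes` list are hypothetical; the real tool runs these comparisons against graph node names.

```python
def match(name: str, query: str, mode: str = "contains") -> bool:
    """Illustrative stand-in for the three search_nodes match modes."""
    if mode == "exact":
        return name == query
    if mode == "suffix":
        return name.endswith(query)
    return query in name  # default mode: substring containment

# hypothetical node names to search over
nodes = ["app.main", "app.main_loop", "tests.test_main"]
suffix_hits = [n for n in nodes if match(n, "main", "suffix")]
```

With `suffix`, only names ending in `main` survive; `contains` would keep all three, and `exact` would keep none.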
Snapshot Behavior
Snapshots are written under the analyzed repository, not under this server repository:
```
<target-repo>/.trailmark/snapshots/<timestamp>/
```
Current snapshot artifacts include:
- `graph.json`
- `summary.json`
- `entrypoints.json`
- `hotspots.json`
- `subgraphs.json`
- `scan-metadata.json`
Snapshots can be reloaded into a live session. Pass `rescan=True` when you explicitly need a fresh rebuild from source.
Repository Layout
Key files:
- `src/trailmark_mcp/cli.py`: CLI entrypoint for `scan` and `serve`
- `src/trailmark_mcp/mcp_app.py`: MCP tool registration
- `src/trailmark_mcp/tool_catalog.py`: declarative metadata for exposed tools
- `src/trailmark_mcp/services/registry.py`: session tracking
- `src/trailmark_mcp/services/runtime.py`: main Trailmark-backed runtime behavior
Development
Run focused test suite:
```
uv run --group dev pytest tests/test_tool_catalog.py tests/test_registry.py tests/test_stdio_server.py
```
Current CI runs the same focused suite on Python 3.12.
Extension rule:
1. add or change runtime behavior
2. register the tool in `mcp_app.py`
3. update metadata in `tool_catalog.py`
4. update tests
5. update docs if public behavior changed
Use In VS Code
VS Code can launch this server directly through MCP using a workspace-level mcp.json file.
Typical setup:
- open this repository in VS Code
- make sure dependencies are installed with `uv sync --group dev`
- keep the server definition in `.vscode/mcp.json`
- let the MCP client start the server over `stdio`
This repository already includes .vscode/mcp.json for local use.
Example mcp.json:
```json
{
  "servers": {
    "trailmark-mcp": {
      "type": "stdio",
      "command": "uv",
      "args": [
        "run",
        "trailmark-mcp",
        "serve",
        "--transport",
        "stdio"
      ]
    }
  }
}
```
If you use this server from a larger multi-project workspace, copy the same definition into that workspace root's `.vscode/mcp.json` and make sure the command runs in an environment where `uv` and this project are available.