sgx-labs

Stateless Agent Memory Engine (SAME)

recent_activity

Read-only

Retrieve recently modified notes to track changes and orient yourself at the start of a session. Shows what's changed with titles and paths.

Instructions

Get recently modified notes. Use this to see what's changed recently or to orient yourself at the start of a session.

Args: limit: Number of recent notes (default 10, max 50)

Returns list of recently modified notes with titles and paths.

Input Schema

| Name  | Required | Description                     | Default |
| ----- | -------- | ------------------------------- | ------- |
| limit | Yes      | Number of recent notes (max 50) | 10      |
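For reference, an MCP client invokes a tool like this one with a standard `tools/call` request. A minimal illustration (the `id` and argument value are arbitrary):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "recent_activity",
    "arguments": { "limit": 10 }
  }
}
```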

Implementation Reference

  • The provided files do not define a tool named "recent_activity"; the closest match is the `same_search` helper below, which wraps the `same search` CLI and serves as the retrieval mechanism for this codebase.

    import json
    import subprocess

    # SAME_BIN, SEARCH_TOP_K, and QUESTION_TIMEOUT are module-level
    # constants defined elsewhere in the codebase, as is `log`.

    def same_search(vault_dir: str, query: str, top_k: int = SEARCH_TOP_K) -> list[str]:
        """
        Run `same search` and return the top-k result texts.
        Returns a list of result strings.
        """
        try:
            result = subprocess.run(
                [SAME_BIN, "search", "--json", "--top-k", str(top_k), query],
                cwd=vault_dir,
                capture_output=True,
                text=True,
                timeout=QUESTION_TIMEOUT,
            )
        except subprocess.TimeoutExpired:
            log(f"    TIMEOUT: same search for '{query[:50]}...'")
            return []

        if result.returncode != 0:
            return []

        # The snippet in the source ends here; a plausible completion,
        # assuming `same search --json` emits a JSON list of objects
        # with a "text" field:
        try:
            results = json.loads(result.stdout)
        except json.JSONDecodeError:
            return []
        return [r.get("text", "") for r in results]
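The tool itself is simple to sketch: list the vault's notes ordered by modification time, newest first, capped at the limit. A minimal illustration, not the server's actual implementation (the markdown-only glob and the title-from-filename convention are assumptions):

```python
from pathlib import Path


def recent_activity(vault_dir: str, limit: int = 10) -> list[dict]:
    """Return the most recently modified notes as {'title', 'path'} dicts."""
    limit = max(1, min(limit, 50))  # schema: default 10, max 50
    notes = sorted(
        Path(vault_dir).rglob("*.md"),    # assumes notes are markdown files
        key=lambda p: p.stat().st_mtime,  # order by modification time
        reverse=True,                     # most recent first
    )
    return [{"title": p.stem, "path": str(p)} for p in notes[:limit]]
```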
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations already declare readOnlyHint=true, indicating a safe read operation. The description adds useful context about the tool's purpose (recent modifications and session orientation) and mentions the return format ('list of recently modified notes with titles and paths'), which provides behavioral insight beyond the annotations. However, it lacks details on ordering, pagination, or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose, followed by brief, relevant details. Each sentence earns its place by clarifying usage, parameters, and returns without redundancy or unnecessary elaboration, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, read-only, no output schema), the description is largely complete, covering purpose, usage, parameters, and return format. However, it could enhance completeness by specifying the order of results (e.g., most recent first) or handling of ties, which are minor gaps for a straightforward tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, fully documenting the 'limit' parameter with its type, default, and max. The description repeats this information in the Args section but does not add significant semantic value beyond what the schema provides, such as explaining why the limit matters or how it affects performance.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get recently modified notes') and resource ('notes'), distinguishing it from siblings like search_notes or get_note by focusing on recency rather than content or single retrieval. It provides explicit context ('to see what's changed recently or to orient yourself at the start of a session') that reinforces its unique purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description offers clear guidance on when to use this tool ('to see what's changed recently or to orient yourself at the start of a session'), which helps differentiate it from alternatives. However, it does not explicitly state when not to use it or name specific sibling tools as alternatives, such as search_notes for content-based queries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/sgx-labs/statelessagent'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.