get_all_pages

Retrieves all pages from a Logseq graph. Journal pages are identified by the "journal?" attribute set to true and include a "journalDay" attribute in YYYYMMDD format.

Instructions

Gets all pages from the Logseq graph.

Journal pages can be identified by the "journal?" attribute set to true and 
will include a "journalDay" attribute in the format YYYYMMDD.

Returns:
    List of all pages in the Logseq graph.
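The journal-page attributes described above can be used to separate and order journal entries in the returned list. A minimal sketch, assuming each page is a dict carrying the "journal?" and "journalDay" keys as documented (the full key set per page depends on the graph):

```python
def journal_pages(pages):
    """Return only journal pages, sorted chronologically by journalDay."""
    journals = [p for p in pages if p.get("journal?")]
    # YYYYMMDD integers sort chronologically when compared numerically.
    return sorted(journals, key=lambda p: p.get("journalDay", 0))

pages = [
    {"name": "Project Notes", "journal?": False},
    {"name": "2024-03-15", "journal?": True, "journalDay": 20240315},
    {"name": "2024-01-02", "journal?": True, "journalDay": 20240102},
]
print([p["name"] for p in journal_pages(pages)])
# → ['2024-01-02', '2024-03-15']
```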

Input Schema

No arguments

Implementation Reference

  • The MCP tool handler for 'get_all_pages', decorated with @mcp.tool(). It has no input parameters and returns a list of page dictionaries. Includes documentation on journal pages. Delegates execution to the LogseqAPIClient instance.
    @mcp.tool()
    def get_all_pages() -> List[Dict]:
        """
        Gets all pages from the Logseq graph.
        
        Journal pages can be identified by the "journal?" attribute set to true and 
        will include a "journalDay" attribute in the format YYYYMMDD.
        
        Returns:
            List of all pages in the Logseq graph.
        """
        return logseq_client.get_all_pages()
  • Supporting helper method in LogseqAPIClient that calls the Logseq API endpoint 'logseq.Editor.getAllPages', handles the response format, and ensures a list of pages is returned.
    def get_all_pages(self) -> List[Dict]:
        """Get all pages in the graph"""
        response = self.call_api("logseq.Editor.getAllPages")
        if isinstance(response, list):
            return response
        return response.get("result", []) if isinstance(response, dict) else []
  • Import and exposure of the get_all_pages tool function in the tools package __init__ for registration and usage.
    from .pages import get_all_pages, get_page, create_page, delete_page, get_page_linked_references
    from .blocks import get_page_blocks, get_block, create_block, update_block, remove_block, insert_block, move_block, search_blocks
    
    __all__ = [
        "get_all_pages", 
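The `call_api` helper that the client method above delegates to is not shown. A hedged sketch of what it might look like, assuming Logseq's local HTTP API server (default `http://127.0.0.1:12315/api`, bearer-token auth) — the actual client in this repository may differ:

```python
import json
import urllib.request


def build_payload(method, *args):
    """Build the JSON body Logseq's HTTP API expects: a method name plus args."""
    return {"method": method, "args": list(args)}


def call_api(method, *args, host="http://127.0.0.1:12315/api", token="YOUR_TOKEN"):
    """POST a method call to the local Logseq HTTP API and decode the JSON reply."""
    req = urllib.request.Request(
        host,
        data=json.dumps(build_payload(method, *args)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

With this shape, `call_api("logseq.Editor.getAllPages")` would send `{"method": "logseq.Editor.getAllPages", "args": []}`, and the list-vs-dict check in `get_all_pages` guards against either response envelope.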
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states what the tool returns but lacks critical details: whether it's paginated for large graphs, if it requires specific permissions, potential rate limits, or error conditions. The journal attribute detail is useful but doesn't cover core behavioral traits like performance or access control.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and concise: it opens with the core purpose, adds a clarifying detail about journal pages, and ends with the return value. Each sentence earns its place, and there's no fluff. However, it could be slightly more front-loaded by merging the first and last sentences for immediate clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no annotations, no output schema), the description is minimally adequate. It explains what 'pages' are and hints at structure via journal attributes, but lacks output format details (e.g., list structure, fields per page) and behavioral context. For a read-only list operation, this is passable but leaves gaps an agent might need.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters, and schema description coverage is 100% (empty schema). The description appropriately doesn't discuss parameters, focusing instead on output semantics (journal page attributes). This meets the baseline of 4 for zero-parameter tools, as it adds value by explaining what 'pages' include without redundant parameter info.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Gets') and resource ('all pages from the Logseq graph'), making the purpose unambiguous. It distinguishes itself from siblings like get_page (single page) and get_page_blocks (blocks within a page) by specifying 'all pages'. However, it doesn't explicitly contrast with search_blocks or get_page_linked_references, which serve different but related purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, performance considerations for large graphs, or when to prefer get_page (for a specific page) or search_blocks (for filtered content). The journal page detail is informational but not a usage guideline.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

