
list_pages

Retrieve all pages from your Logseq knowledge graph with metadata including paths, names, tags, links, and backlinks. Filter results by pages or journals folders.

Instructions

List all pages in the graph. Returns each page's metadata (path, name, tags, links, backlinks).

Input Schema

Name: folder
Required: No
Description: Folder filter: 'pages' or 'journals'
Default: (none)

Implementation Reference

  • Core handler function in GraphService that lists all pages/journals, extracts metadata (tags, links), computes backlinks, and filters by folder
    async listPages(folder?: string): Promise<PageMetadata[]> {
      const pages: PageMetadata[] = [];
      const backlinksMap = new Map<string, string[]>();
    
      // Collect all pages first to build backlinks
      const allPages = await this.collectAllPages();
    
      // Build backlinks map
      for (const page of allPages) {
        for (const link of page.links) {
          const existing = backlinksMap.get(link) || [];
          existing.push(page.name);
          backlinksMap.set(link, existing);
        }
      }
    
      // Filter by folder if specified
      let filteredPages = allPages;
      if (folder) {
        if (folder === 'journals') {
          filteredPages = allPages.filter(p => p.isJournal);
        } else if (folder === 'pages') {
          filteredPages = allPages.filter(p => !p.isJournal);
        }
      }
    
      // Add backlinks to each page
      for (const page of filteredPages) {
        pages.push({
          ...page,
          backlinks: backlinksMap.get(page.name) || [],
        });
      }
    
      return pages;
    }
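The handler depends on a `PageMetadata` shape and a `collectAllPages` helper that aren't shown in the excerpt. A minimal, self-contained sketch of the backlinks pass follows; the `PageMetadata` field names here are assumptions inferred from the description, not taken from the actual source:

```typescript
// Hypothetical shape of PageMetadata; field names are illustrative.
interface PageMetadata {
  path: string;
  name: string;
  isJournal: boolean;
  tags: string[];
  links: string[];
  backlinks?: string[];
}

// Standalone version of the backlinks pass above: invert the
// page -> links relation into a link-target -> referring-pages map.
function buildBacklinksMap(pages: PageMetadata[]): Map<string, string[]> {
  const backlinks = new Map<string, string[]>();
  for (const page of pages) {
    for (const link of page.links) {
      const existing = backlinks.get(link) ?? [];
      existing.push(page.name);
      backlinks.set(link, existing);
    }
  }
  return backlinks;
}

const demoPages: PageMetadata[] = [
  { path: 'pages/a.md', name: 'A', isJournal: false, tags: [], links: ['B'] },
  { path: 'pages/b.md', name: 'B', isJournal: false, tags: [], links: ['A'] },
  { path: 'journals/2024_01_01.md', name: '2024-01-01', isJournal: true, tags: [], links: ['A'] },
];

const map = buildBacklinksMap(demoPages);
console.log(map.get('A')); // pages that link to 'A'
```

Note that backlinks are computed over all pages before the folder filter is applied, so a journal-only listing still reports backlinks coming from regular pages.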
  • MCP server tool handler for 'list_pages': parses input arguments using Zod schema and delegates to GraphService.listPages
    case 'list_pages': {
      const { folder } = ListPagesSchema.parse(args);
      const pages = await graph.listPages(folder);
      return {
        content: [{ type: 'text', text: JSON.stringify(pages, null, 2) }],
      };
    }
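The handler wraps the page list in the MCP tool-result envelope: a `content` array whose single entry is pretty-printed JSON. A small sketch of that wrapping, separated out for clarity (the type name `TextContent` is ours, not the SDK's):

```typescript
// MCP tool results carry a content array; here the pages are
// serialized as indented JSON so agents can read them as text.
type TextContent = { type: 'text'; text: string };

function toToolResult(pages: unknown[]): { content: TextContent[] } {
  return {
    content: [{ type: 'text', text: JSON.stringify(pages, null, 2) }],
  };
}

const result = toToolResult([{ name: 'A', links: ['B'] }]);
console.log(result.content[0].type); // 'text'
```

Because the entire page list is serialized into one text blob, response size grows linearly with the graph; there is no pagination.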
  • Zod schema for validating input parameters (folder filter) of the list_pages tool
    const ListPagesSchema = z.object({
  folder: z.enum(['pages', 'journals']).optional().describe('Folder filter: pages or journals'),
    });
  • src/index.ts:111-120 (registration)
    Registration of the list_pages tool in the MCP tools list, including name, description, and JSON input schema
    {
      name: 'list_pages',
  description: 'List all pages in the graph, returning metadata for each page (path, name, tags, links, backlinks)',
      inputSchema: {
        type: 'object' as const,
        properties: {
      folder: { type: 'string', enum: ['pages', 'journals'], description: 'Folder filter: pages or journals' },
        },
      },
    },
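With the tool registered, an MCP client invokes it through a JSON-RPC `tools/call` request. The shape below is illustrative; in practice the MCP SDK handles the framing:

```typescript
// Illustrative JSON-RPC request an MCP client would send to call
// list_pages with the journals filter. Shown only for shape; the
// SDK constructs and transports this for you.
const callRequest = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: {
    name: 'list_pages',
    arguments: { folder: 'journals' as const },
  },
};

console.log(JSON.stringify(callRequest));
```

Omitting `arguments.folder` (or sending an empty `arguments` object) lists every page, journals included.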
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states the tool returns metadata but doesn't disclose important behavioral traits: whether it's read-only (implied but not stated), pagination or rate limits, authentication requirements, error conditions, or what happens with large graphs. For a listing tool with zero annotation coverage, this leaves significant gaps in understanding its operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
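One way to close this gap is MCP tool annotations (`readOnlyHint`, `destructiveHint`, `idempotentHint`, `openWorldHint` from the MCP specification) declared alongside the tool. This is a sketch of what the registration could look like, not the server's actual code:

```typescript
// Hypothetical annotated registration: the hints tell agents the tool
// is a safe, repeatable read before they ever call it.
const listPagesTool = {
  name: 'list_pages',
  description:
    'List all pages in the graph with metadata (path, name, tags, links, backlinks). Read-only; no pagination.',
  annotations: {
    readOnlyHint: true,    // never mutates the graph
    destructiveHint: false,
    idempotentHint: true,  // repeated calls return the same result
    openWorldHint: false,  // operates only on the local graph
  },
};

console.log(listPagesTool.annotations.readOnlyHint); // true
```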

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: just two short sentences that efficiently communicate the core functionality and return values. Every word earns its place with no redundant information. It's appropriately sized for a simple listing tool and front-loads the essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (single optional parameter) and 100% schema coverage, the description is reasonably complete for basic understanding. However, with no output schema and no annotations, it should ideally provide more context about the return format structure, pagination, or limitations. The description covers the 'what' but leaves gaps in the 'how' and constraints.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'folder' fully documented in the schema with enum values and description. The description doesn't add any parameter information beyond what the schema provides. According to scoring rules, when schema coverage is high (>80%), the baseline is 3 even with no param info in description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (list retrieval) and resource (all pages in the graph), and specifies the returned metadata fields. It distinguishes itself from siblings like 'search_pages' (filtered search) and 'read_page' (single page content) by emphasizing comprehensive listing of all pages with metadata. However, it doesn't explicitly contrast with 'get_graph', which might also provide page information.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when needing a complete list of pages with metadata rather than filtered results (contrasting with 'search_pages') or single-page operations. However, it doesn't explicitly state when to use this tool versus alternatives like 'get_graph' or provide any exclusion criteria. The context is clear but lacks explicit guidance on tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
