
save_prompt

Save learning questions or notes from AI sessions to track progress and facilitate metacognition. Automatically captures or manually stores prompts for later review and analysis.

Instructions

Saves learning-related questions or notes. How to use:

  1. Automatic saving: the AI recognizes a user's learning-related question and saves it automatically

  2. Manual saving: saves when the user asks with a phrase such as "save this" or "save '<question text>'". Examples: "What is an N+1 query? Save it" or "Save my last question"

Input Schema

Name      Required  Description                        Default
prompt    Yes       Question or note content to save   (none)

Implementation Reference

  • The savePrompt function implements the core logic for saving a prompt, including classifying its type and writing to a JSON file.
    function savePrompt(date: string, prompt: string): QuestionType {
      ensureDirectories();               // create the storage directory if missing
      const prompts = loadPrompts(date); // read prompts already saved for this date
      const type = classifyQuestion(prompt);
      prompts.push({
        prompt,
        timestamp: new Date().toISOString(),
        type,
      });
      // Rewrites the entire per-date JSON file on every save
      fs.writeFileSync(getPromptFilePath(date), JSON.stringify(prompts, null, 2));
      return type;
    }
  • src/server.ts:120-138 (registration)
    The 'save_prompt' tool is registered on the McpServer using the server.tool method.
      server.tool(
        "save_prompt",
        `학습 관련 질문이나 메모를 저장합니다.
    사용 방법:
    1. 자동 저장: AI가 사용자의 학습 관련 질문을 인식하여 자동으로 저장
    2. 수동 저장: 사용자가 "이거 저장해줘", "'질문내용' 저장해줘"라고 요청하면 저장
    예시: "N+1 쿼리가 뭐야? 저장해줘" 또는 "방금 질문 저장해줘"`,
        {
          prompt: z.string().describe("저장할 질문 또는 메모 내용"),
        },
        async ({ prompt }) => {
          const today = getToday();
          const type = savePrompt(today, prompt);
          const typeDesc = TYPE_DESCRIPTIONS[type];
          return {
            content: [{ type: "text", text: `저장됨 (${typeDesc})` }],
          };
        }
      );
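The helpers that savePrompt relies on (ensureDirectories, loadPrompts, classifyQuestion, getPromptFilePath) are not shown in this excerpt. Below is a minimal sketch of what a keyword-based classifyQuestion might look like; the category names and markers are hypothetical, since the real QuestionType values are not visible here.

```typescript
type QuestionType = "concept" | "debugging" | "note";

// Hypothetical keyword-based classifier. The real classifyQuestion
// implementation and QuestionType values are not shown in this excerpt.
function classifyQuestion(prompt: string): QuestionType {
  const lower = prompt.toLowerCase();
  const debugMarkers = ["error", "exception", "fails", "에러", "버그"];
  const conceptMarkers = ["what is", "why", "how does", "뭐야", "무엇"];
  if (debugMarkers.some((m) => lower.includes(m))) return "debugging";
  if (conceptMarkers.some((m) => lower.includes(m))) return "concept";
  return "note"; // default bucket for everything else
}
```

A classifier like this runs before the write, so the returned type can be echoed back to the user in the tool's confirmation message.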
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes the saving action and usage modes but lacks details on behavioral traits such as permissions needed, whether saves are permanent or reversible, rate limits, or error handling. For a mutation tool with zero annotation coverage, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
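One way to close this gap is to declare behavioral hints alongside the tool. Here is a sketch of an annotations object following the MCP ToolAnnotations shape, with values inferred from the savePrompt implementation above; how such an object is attached depends on the SDK version in use.

```typescript
// Hypothetical annotations following the MCP ToolAnnotations shape.
// Values are inferred from the savePrompt implementation, not declared
// by the actual server.
const savePromptAnnotations = {
  title: "Save Prompt",
  readOnlyHint: false,    // writes a JSON file on disk
  destructiveHint: false, // appends entries; never deletes existing data
  idempotentHint: false,  // each call appends a new timestamped record
  openWorldHint: false,   // touches only local files, no external services
};
```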

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: it opens with the core purpose, then gives usage instructions and examples, and each sentence adds value with no redundancy. The purpose and usage sections could be separated a little more clearly, but the text remains efficient and easy to follow.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (a mutation tool with no annotations and no output schema), the description is partially complete. It covers purpose and usage well but lacks behavioral details and output information. Sibling retrieval tools address some of the surrounding context, but more completeness is needed for safe, effective use by an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
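The missing output information noted above could be addressed with a documented result shape. The sketch below is purely illustrative: the field names are hypothetical and not part of the actual server, which returns only a text content block.

```typescript
// Hypothetical structured result; the actual tool returns only a text
// content block of the form `저장됨 (${typeDesc})`.
interface SaveResult {
  saved: boolean;
  type: string;    // classification assigned by classifyQuestion
  savedAt: string; // ISO-8601 timestamp of the save
}

function buildSaveResult(type: string): SaveResult {
  return { saved: true, type, savedAt: new Date().toISOString() };
}
```

Documenting even this small a shape would tell an agent what to expect back without a trial call.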

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the parameter 'prompt' documented as '저장할 질문 또는 메모 내용' (question or note content to save). The description doesn't add meaning beyond this, as it focuses on usage rather than parameter details. With high schema coverage, the baseline score of 3 is appropriate, as the schema adequately handles parameter semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
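The schema currently accepts any string. A tightened schema (for example, zod's z.string().min(1).max(2000)) would make valid value ranges explicit; the check below sketches that validation as a plain function. The 2000-character cap is an assumed value, not one the server defines.

```typescript
// Hypothetical bounds check mirroring a tightened schema such as
// z.string().min(1).max(2000); the 2000-character limit is an assumption.
function isValidPrompt(prompt: string): boolean {
  const trimmed = prompt.trim();
  return trimmed.length > 0 && trimmed.length <= 2000;
}
```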

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: '학습 관련 질문이나 메모를 저장합니다' (saves learning-related questions or notes). It specifies the resource (learning-related content) and verb (save), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like get_prompts_by_date, which might retrieve saved prompts, leaving some ambiguity about its unique role in the toolset.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines with two modes: automatic saving (when the AI recognizes learning-related questions) and manual saving (when users request it with phrases like '이거 저장해줘', "save this"). It includes examples ('N+1 쿼리가 뭐야? 저장해줘', "What is an N+1 query? Save it") and specifies the context (learning-related content), giving clear instructions on when and how to use the tool effectively.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/YUJAEYUN/learnlog-mcp'
