Glama

get_learning_stats

Retrieve daily learning statistics to analyze question type distribution and calculate learning depth scores for metacognitive progress tracking.

Instructions

Retrieves learning statistics for a specific date. You can check the distribution of questions by type and the learning depth score.

Input Schema

| Name | Required | Description                                        | Default |
| ---- | -------- | -------------------------------------------------- | ------- |
| date | No       | Date to query (YYYY-MM-DD format; default: today)  | today   |

Implementation Reference

  • The handler implementation for the get_learning_stats tool, which retrieves prompts for a given date and calculates a daily summary (total questions, depth score, and type distribution).

    async ({ date }) => {
      // Default to today's date when no argument is given.
      const targetDate = date || getToday();
      const prompts = loadPrompts(targetDate);

      if (prompts.length === 0) {
        return {
          // "No questions saved for {targetDate}."
          content: [{ type: "text", text: `${targetDate}에 저장된 질문이 없습니다.` }],
        };
      }

      const summary = calculateDailySummary(prompts);

      // One line per question type that actually occurred.
      const typeList = Object.entries(summary.byType)
        .filter(([_, count]) => count > 0)
        .map(([type, count]) => {
          const desc = TYPE_DESCRIPTIONS[type as QuestionType];
          return `  - ${desc}: ${count}개`;
        })
        .join("\n");

      // Report (in Korean): total question count, learning depth score,
      // per-type distribution, and the weighting legend (fact/definition: 1,
      // comparison/application: 2, principle/connection: 3 points).
      const result = `📊 학습 통계 (${targetDate})

총 질문 수: ${summary.total}개
학습 깊이 점수: ${summary.depthScore}점

질문 유형별 분포:
${typeList}

💡 학습 깊이 점수는 질문 유형별 가중치를 기반으로 계산됩니다.
   - 사실/정의 질문: 1점
   - 비교/적용 질문: 2점
   - 원리/연결 질문: 3점`;

      return {
        content: [{ type: "text", text: result }],
      };
    }
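calculateDailySummary itself is not excerpted on this page. A minimal sketch consistent with the weight legend in the tool's output (1 point for fact/definition, 2 for comparison/application, 3 for principle/connection); the QuestionType values and the Prompt shape are assumptions, not the server's actual definitions:

```typescript
// Hypothetical sketch: the real calculateDailySummary lives in the server
// source. Question type names and the Prompt shape are assumed here.
type QuestionType = "fact" | "comparison" | "principle";

interface Prompt {
  type: QuestionType;
  text: string;
}

// Weights match the legend the tool prints: 1 / 2 / 3 points.
const WEIGHTS: Record<QuestionType, number> = {
  fact: 1,        // fact/definition questions
  comparison: 2,  // comparison/application questions
  principle: 3,   // principle/connection questions
};

function calculateDailySummary(prompts: Prompt[]) {
  const byType: Record<QuestionType, number> = { fact: 0, comparison: 0, principle: 0 };
  let depthScore = 0;
  for (const p of prompts) {
    byType[p.type] += 1;          // count per question type
    depthScore += WEIGHTS[p.type]; // weighted depth score
  }
  return { total: prompts.length, depthScore, byType };
}
```

Under this scheme, a day with one fact question and one principle question would score 1 + 3 = 4 points.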
  • src/server.ts:235-240 (registration)
    The tool registration for 'get_learning_stats' using the server.tool method.

    server.tool(
      "get_learning_stats",
      // "Retrieves learning statistics for a specific date. You can check the
      // distribution by question type and the learning depth score."
      "특정 날짜의 학습 통계를 조회합니다. 질문 타입별 분포, 학습 깊이 점수를 확인할 수 있습니다.",
      {
        // "Date to query (YYYY-MM-DD format; default: today)"
        date: z.string().optional().describe("조회할 날짜 (YYYY-MM-DD 형식, 기본값: 오늘)"),
      },
      // ...followed by the async handler excerpted above
    );
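The getToday and loadPrompts helpers used by the handler are likewise not excerpted. A hedged sketch, assuming prompts are persisted as one JSON file per date under a data directory (the directory path and environment variable are invented for illustration):

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Hypothetical helpers: the real getToday/loadPrompts are in the server
// source. This sketch assumes one JSON file per date under a data directory.
const DATA_DIR = process.env.LEARNLOG_DATA_DIR ?? "./data";

function getToday(): string {
  // UTC date in YYYY-MM-DD form (the real helper may use local time).
  return new Date().toISOString().slice(0, 10);
}

function loadPrompts(date: string): unknown[] {
  const file = path.join(DATA_DIR, `${date}.json`);
  // No file for that date means no questions were saved.
  if (!fs.existsSync(file)) return [];
  return JSON.parse(fs.readFileSync(file, "utf8"));
}
```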
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool retrieves statistics, implying a read-only operation, but does not address authentication requirements, rate limits, error handling, or response format. It also leaves '학습 깊이 점수' (learning depth score) undefined and says nothing about how the returned data is structured, so the tool's behavioral traits remain unclear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded: two sentences efficiently convey the tool's purpose and key data points, with no wasted language or redundancy. It could be slightly more structured by explicitly separating purpose from data details, but it remains highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (retrieving statistical data) and the absence of annotations and an output schema, the description is incomplete. It mentions what data is available but does not explain the return values, data formats, or any behavioral constraints. For a tool with no structured output information, the description should provide more context on what to expect from the response.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'date' parameter well-documented in the schema. The description adds no additional parameter semantics beyond implying date-based filtering. Since schema coverage is high, the baseline score is 3, as the description does not compensate with extra details but doesn't detract from the schema's clarity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
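For instance, the YYYY-MM-DD constraint currently lives only in the description text; it could instead be enforced in code, either via zod's z.string().regex(...) in the schema or with a plain check in the handler. A minimal sketch of the latter (normalizeDate is a hypothetical helper, not part of the server):

```typescript
// Sketch: validate the YYYY-MM-DD constraint up front, so a malformed date
// fails fast instead of silently returning "no questions saved for that date".
const DATE_RE = /^\d{4}-\d{2}-\d{2}$/;

function normalizeDate(date: string | undefined, today: string): string {
  if (date === undefined) return today; // documented default: today
  if (!DATE_RE.test(date)) {
    throw new Error(`Invalid date "${date}": expected YYYY-MM-DD`);
  }
  return date;
}
```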

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: '특정 날짜의 학습 통계를 조회합니다' (retrieves learning statistics for a specific date), specifying the verb (조회/retrieve) and resource (학습 통계/learning statistics). It distinguishes from siblings by focusing on statistics rather than prompts, though it doesn't explicitly contrast with them. The additional detail about '질문 타입별 분포, 학습 깊이 점수' (distribution by question type, learning depth score) adds specificity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions what data is retrieved but does not indicate scenarios for its use, prerequisites, or comparisons to sibling tools like get_prompts_by_date or get_today_prompts. Usage is implied through the action described, but explicit context or exclusions are absent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/YUJAEYUN/learnlog-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.