
get_optimization_plan

Retrieves real-time analysis of test flakiness, MCP usage patterns, and AI-generated test coverage from the optimization plan after each test run.

Instructions

Synthesizes history/ snapshots, telemetry tool-usage records, and modules detected by analyze_url into a three-layer self-improvement analysis: (1) Test suite quality: each test's outcomes are compiled into a string (e.g. PFPFP) → flake_score; failure error signatures are fingerprinted, and 3 consecutive identical signatures escalate a test to broken; a duration regression beyond 1.5x is flagged slow_regression; otherwise the test is stable_passing. (2) MCP usage patterns: top tools, repeated args, error rates, and common call chains (A→B co-occurrence). (3) AI-generated test effectiveness: whether tests written by generate_test appear in the next run, and whether modules detected by analyze_url map to test files (adoption rate vs. coverage gaps). Returns structured JSON and also writes the report to PROJECT_ROOT/optimization-plan.md. A run is triggered automatically after each run_tests completes, so this tool is for reading the latest results in real time.
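The layer (1) classification rules can be sketched as follows. This is an illustrative reconstruction, not the server's actual code: the exact flake_score formula, the 0.3 flakiness threshold, and the function names are assumptions; only the PFPFP-style outcomes string, the 3-identical-signature broken rule, and the 1.5x slow_regression factor come from the description above.

```python
def flake_score(outcomes: str) -> float:
    """Fraction of pass/fail flips between consecutive runs.

    "PFPFP" flips on every transition -> 1.0; "PPPP" never flips -> 0.0.
    (Assumed formula; the description only says outcomes -> flake_score.)
    """
    if len(outcomes) < 2:
        return 0.0
    flips = sum(a != b for a, b in zip(outcomes, outcomes[1:]))
    return flips / (len(outcomes) - 1)


def classify(outcomes: str, fail_signatures: list[str], durations: list[float]) -> str:
    # broken: the last 3 failures share an identical error-signature fingerprint
    if len(fail_signatures) >= 3 and len(set(fail_signatures[-3:])) == 1:
        return "broken"
    # flaky: outcomes alternate too often (the 0.3 cutoff is an assumption)
    if flake_score(outcomes) > 0.3:
        return "flaky"
    # slow_regression: latest duration degraded past 1.5x the historical mean
    if len(durations) >= 2:
        baseline = sum(durations[:-1]) / (len(durations) - 1)
        if durations[-1] > 1.5 * baseline:
            return "slow_regression"
    return "stable_passing"
```

For example, `classify("PFFF", ["AssertionError@login"] * 3, [1.0] * 4)` returns "broken", since the broken check takes priority over the flake check.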

Input Schema

history_limit (optional, default 10) — Suite-quality analysis reads the most recent N history snapshots. Range 1-100. A flake score needs at least 5 runs to stabilize; 30+ is recommended for deep analysis.

telemetry_limit (optional, default 500) — MCP usage-pattern analysis reads the most recent N telemetry tool calls. Range 10-5000. Use 2000+ for long-term pattern analysis; 100-200 suffices when troubleshooting recent issues.
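Under the MCP tools/call convention, an invocation tuned for deep analysis might look like this; the argument values are illustrative, and only the parameter names come from the schema above:

```json
{
  "method": "tools/call",
  "params": {
    "name": "get_optimization_plan",
    "arguments": {
      "history_limit": 30,
      "telemetry_limit": 2000
    }
  }
}
```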
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description thoroughly explains the computational logic for each analysis layer, including flake-score calculation, broken detection, and pattern identification. It also discloses the side effect of writing to a file. No annotations are provided, so the description carries the full burden and meets it well.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is detailed but well-structured with a clear introduction of the three layers. While not extremely concise, every sentence contributes to understanding the tool's functionality, making it appropriate for the complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers the tool's purpose, inputs, computational logic, and outputs (both JSON and file). However, it lacks an explicit output schema, which would be helpful given the complex structured result. Still, it provides enough information for an agent to use the tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% for both parameters, but the description adds no meaning beyond the schema: the default values and ranges are already declared there, so the description contributes no additional semantic value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it produces a three-layer self-improvement analysis report, specifying the data sources and the output format (structured JSON and file write). It also distinguishes itself as a read tool since it is automatically triggered after run_tests.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description notes that the tool is triggered automatically and serves to read real-time results, but it does not explicitly contrast with alternatives or state when not to use it. In the context of its related tools, this is adequate guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
