Alpha (Mossland)
Server Details
Korean crypto × AI media MCP — channel stance, daily briefs, RAG, canonical store, AI personas.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: MosslandOpenDevs/alpha-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.5/5 across 12 of 12 tools scored. Lowest: 1.8/5.
Each tool targets a distinct resource or action: ask_alpha for Q&A, get_ for specific data retrievals, list_ for catalogs, search_alpha for cross-entity search. No two tools have overlapping purposes; even list_ vs get_ are clearly separated by generality versus specificity.
Tool names follow a consistent pattern: get_ for detail queries, list_ for enumerations, ask_alpha and search_alpha as special search/q&a verbs. All use snake_case with descriptive nouns, making the scheme predictable and readable.
With 12 tools, the server covers a broad but focused domain (crypto/macro data and analysis). Each tool serves a clear need without redundancy; the count feels well-scoped for a data retrieval and analysis server.
The server provides a complete set of retrieval operations: search, list, get details for entities/events/topics, price signals, macro snapshot, daily brief, and Q&A. No obvious gaps given its read-only, information-providing purpose.
Available Tools
12 tools

ask_alpha (Grade A)

Natural-language question → RAG answer over Alpha's data, with citations. Answers are ≤ 300 characters. Responds honestly when the answer is not in the context.
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Natural-language question in Korean or English (5-500 characters) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description provides key behavioral details: answer length ≤300 characters, RAG approach, and honest response when context is missing. This adequately informs agents about limitations and behavior beyond a simple fetch.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with no filler. Each sentence is essential: first defines core function, second adds constraints. Highly efficient and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given one parameter and no output schema, the description covers input format, output length, and fallback behavior. It could optionally mention citation format, but overall complete for a simple Q&A tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description adds no extra meaning beyond the schema's 'natural-language question in Korean or English (5-500 characters)'. The purpose is clear, but the description doesn't elaborate on parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it takes a natural language question and returns a RAG answer with citations from Alpha's data. It distinguishes itself from sibling tools like search_alpha by specifying the Q&A nature and the answer format.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for natural language queries to Alpha's data, but does not explicitly exclude other scenarios or compare to alternatives like get_entity or search_alpha. The context is clear but lacks explicit when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
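The constraints above (question length 5-500 characters, answers capped at 300) can be enforced client-side before the call. A minimal sketch, assuming the standard MCP JSON-RPC 2.0 `tools/call` envelope; the helper name is hypothetical, not part of this server:

```python
import json

def build_ask_alpha_request(question: str, request_id: int = 1) -> str:
    """Serialize a JSON-RPC 2.0 tools/call payload for ask_alpha.

    Mirrors the documented 5-500 character question constraint
    client-side before the request is sent.
    """
    if not 5 <= len(question) <= 500:
        raise ValueError("question must be 5-500 characters")
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "ask_alpha",
            "arguments": {"question": question},
        },
    }
    # ensure_ascii=False keeps Korean questions readable on the wire
    return json.dumps(payload, ensure_ascii=False)
```

Rejecting out-of-range questions locally avoids a round trip the server would refuse anyway.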
get_active_pulses (Grade A)

Active price signals within the last N hours (BTC/ETH etc.; ≥1% move within a 5-minute window).
| Name | Required | Description | Default |
|---|---|---|---|
| hours | No | Lookback window in hours (default 24) | 24 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. While it discloses the time window and volatility threshold, it does not mention behavioral traits such as rate limits, error handling, or the fact that it returns a list of signals. The description indicates a filter but lacks depth.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence in Korean with no extraneous words. It efficiently conveys the purpose and key constraints, exemplifying front-loaded content in which every sentence earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with one parameter and no output schema, the description is adequate but incomplete. It lacks information about the return format (e.g., list of signals, fields). Given the low complexity, a score of 3 reflects that while the core purpose is clear, the agent might need more detail on what is returned.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The sole parameter 'hours' has a description in the schema ('lookback window (default 24)'), covering 100% of parameters. The tool description restates the window concept without adding new meaning, so the value added is marginal. Baseline 3 for high schema coverage.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: retrieving active price signals within a recent time window for BTC/ETH etc. with a 5-minute window and ≥1% change. This verb-resource combination is specific and distinguishes it from sibling tools like 'get_connections' or 'get_entity'.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for monitoring recent price signals but provides no explicit guidance on when to use this tool versus alternatives like 'search_alpha' or 'get_today_brief'. No exclusions or prerequisites are mentioned, leaving the agent to infer context.
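The single optional parameter behaves as a default-24 lookback; one way to respect the server-side default is to omit the argument entirely when the caller does not set it. A sketch (helper name hypothetical; JSON-RPC envelope per the MCP spec):

```python
from typing import Optional

def build_get_active_pulses_request(hours: Optional[int] = None,
                                    request_id: int = 1) -> dict:
    """tools/call payload for get_active_pulses.

    Omits `hours` entirely when unset, so the server applies its
    documented default of 24.
    """
    if hours is not None and hours <= 0:
        raise ValueError("hours must be positive")
    arguments = {} if hours is None else {"hours": hours}
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "get_active_pulses", "arguments": arguments},
    }
```

Sending an empty `arguments` object rather than an explicit 24 keeps the client correct even if the server's default ever changes.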
get_connections (Grade D)

Causal hypotheses linking an entity to other entities (AI-synthesized; phrased as 'possibly related'). Returns 8 items, sorted.
| Name | Required | Description | Default |
|---|---|---|---|
| entity_id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description does not disclose any behavioral traits (e.g., read-only, error handling, rate limits). The agent has no insight into side effects or limitations.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is very short but unclear and poorly structured. The single sentence in Korean does not effectively communicate tool function.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has one parameter, no output schema, and no annotations, the description is critically incomplete. It omits details on return format, sorting, and error conditions.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% and the description adds no meaning to the single parameter entity_id. It does not specify expected format or constraints.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states that the tool retrieves an entity's causal hypotheses/connections, but it is vague due to the terse Korean phrasing and unclear terms like 'AI synthesis' and '8 items, sorted'. It does not clearly differentiate the tool from siblings like get_entity or get_topic.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs alternatives. Lacks context about prerequisites or exclusions.
get_entity (Grade A)

Entity detail: stance distribution + AI-synthesized card + 5 related videos. Find the ID via search_alpha first.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | entity ID (e.g., 'bitcoin', 'lee-jae-myung') | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden. It mentions only the returned content (stance distribution, AI card, videos) and does not disclose behavioral traits such as read-only nature, side effects, rate limits, or permissions. Minimal behavioral context.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise single sentence in Korean that conveys purpose, content, and usage instruction. No unnecessary words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one required parameter and no output schema, description is relatively complete. It specifies return components (stance, AI card, videos) and usage prerequisite. Lacks detail on return format or pagination, but adequate for the tool's apparent simplicity.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with a single 'id' parameter and description provides examples. The description adds meaning by stating the ID must come from search_alpha, going beyond schema.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool retrieves entity details including stance distribution, AI synthesized card, and related videos. It distinguishes from sibling tools like search_alpha by specifying that search_alpha is used to find the ID first.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly instructs to find ID via search_alpha first, providing clear usage context and prerequisite. This helps the agent decide when to use this tool (after search_alpha).
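The "find the ID with search_alpha first" prerequisite implies a two-step call sequence, sketched below as two payloads. The generic `tool_call` helper is illustrative, and the example id 'bitcoin' is taken from the schema's examples:

```python
def tool_call(name: str, arguments: dict, request_id: int) -> dict:
    """Generic MCP JSON-RPC 2.0 tools/call envelope."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Step 1: resolve a human query to a canonical entity ID.
search_req = tool_call("search_alpha", {"query": "bitcoin"}, request_id=1)

# Step 2: once an id (e.g. "bitcoin") has been extracted from the
# search result, fetch the entity detail.
entity_req = tool_call("get_entity", {"id": "bitcoin"}, request_id=2)
```

The same search-then-get pattern would apply to get_event and get_topic, whose `id` parameters are otherwise undocumented.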
get_event (Grade C)

Event detail: summary of the incident + AI synthesis + connected entities.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description does not disclose behavioral traits like read-only status, side effects, authorization needs, or rate limits. With no annotations provided, the agent must infer safety from the description, which only covers output content, not side effects.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence that is concise but lacks a structured breakdown. While it efficiently conveys the purpose, it omits key details like parameter usage and output format.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the single parameter, no output schema, and no annotations, the description is insufficient for an agent to use the tool reliably. It does not explain error handling, return format, or prerequisites, leaving significant gaps.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The only parameter 'id' has a schema description coverage of 0% and the description adds no explanation of its format, source, or default behavior. The agent cannot determine how to obtain or format the id.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns event details including summary, AI synthesis, and connected entities. It specifies the resource (event) and the action (get details), though it does not differentiate from sibling tools like get_entity or get_topic.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives such as list_events or search_alpha. The description does not specify prerequisites or contexts where this tool is preferred.
get_macro_snapshot (Grade A)

Current US + KR macro data (Fed Funds rate, BoK base rate, US 10Y yield, Korean 3-year treasury, the won-dollar exchange rate, etc.).
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must fully disclose behavior. It only states the data returned, without mentioning read-only nature, refresh frequency, data source, or any side effects. This is insufficient for a tool with no other behavioral markers.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with key information, no unnecessary words. Perfectly concise for a parameterless tool.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no params, no output schema), the description covers the basic data set. However, it lacks information about data freshness, units, or any limitations, which would be helpful for agent decision-making.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters, so baseline is 4. The description adds no parameter info but correctly implies the tool requires no input. Schema coverage is 100% trivially.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly specifies the tool's function: retrieving current US and KR macro data including specific instruments (Fed Funds, BoK rate, US 10Y, Korean 3Y, USD/KRW). This is a specific verb+resource (implied 'get' from name) and distinguishes from siblings like get_entity or get_event.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives such as ask_alpha or get_today_brief. The description merely lists contents without context for selection.
get_today_brief (Grade A)

Daily brief on the Korean market for yesterday or a specific date (AI-synthesized: oneLine + why + 5 points + quotes).
| Name | Required | Description | Default |
|---|---|---|---|
| date | No | YYYY-MM-DD format; defaults to yesterday if omitted. | yesterday |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description notes the brief is AI-synthesized, indicating it is generated, but does not disclose other behavioral traits such as whether the operation is read-only, whether there are rate limits, or whether data is cached. Given no annotations, the description carries the full burden and falls short of comprehensive disclosure.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that efficiently conveys the tool's purpose and structure. Every word is functional, with no redundancy or unnecessary detail.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with one optional parameter and no output schema or annotations, the description covers the essential purpose, date scope, and content components. It lacks details on error handling or prerequisites, but is reasonably complete for a simple data retrieval tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has one parameter, 'date', with 100% description coverage. The description adds value by explicitly stating the default behavior (defaults to yesterday if omitted) and the expected format, which enhances the schema's explanation.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it provides a daily brief for the Korean market, explicitly listing the components (oneLine, why, 5 points, quotes). It specifies the date scope (yesterday or specific date), differentiating it from sibling tools like ask_alpha or get_macro_snapshot which serve different purposes.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving Korean market briefs, but does not provide explicit guidance on when to use this tool versus alternatives such as ask_alpha or get_macro_snapshot. No exclusions or prerequisites are mentioned.
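The optional `date` parameter's format and default can likewise be checked before the call. A sketch (helper name hypothetical) that validates YYYY-MM-DD and omits the argument so the server falls back to yesterday:

```python
import re
from typing import Optional

_DATE_RE = re.compile(r"^\d{4}-\d{2}-\d{2}$")

def get_today_brief_arguments(date_str: Optional[str] = None) -> dict:
    """Build the arguments object for get_today_brief.

    Returns {} when no date is given, so the server uses its
    documented default (yesterday's brief).
    """
    if date_str is None:
        return {}
    if not _DATE_RE.match(date_str):
        raise ValueError("date must be in YYYY-MM-DD format")
    return {"date": date_str}
```

Note the regex only checks the shape, not calendar validity; a stricter client could parse with `datetime.date.fromisoformat` instead.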
get_topic (Grade A)

Topic detail: description + AI synthesis + stance distribution + related videos.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description bears the full burden of disclosing behavioral traits. It lists the return components (description, AI synthesis, stance distribution, related videos), implying a read-only operation. However, it omits details such as authentication requirements, rate limits, or potential side effects. The description is adequate but not exhaustive.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: a single well-structured line listing the key return components. It is front-loaded with the tool's purpose ('Topic detail') and wastes no words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no output schema), the description provides sufficient context about the return content. It covers the main components an agent would need to know. However, it could be improved by clarifying the scope of 'AI synthesis' or the format of 'stance distribution,' but overall it is complete enough for a straightforward detail-retrieval tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has one required parameter, 'id', with 0% description coverage. The description does not elaborate on the parameter beyond implying it is a topic identifier (via 'Topic detail'). Without additional semantic guidance (e.g., expected format, source of the ID), the agent may struggle to populate this parameter correctly.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool retrieves detailed information about a topic, including description, AI synthesis, stance distribution, and related videos. It uses a specific verb ('get details') and resource ('topic'), and it distinguishes itself from sibling tools like list_topics, which lists topics rather than retrieving a single topic's details.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus its siblings (e.g., list_topics, get_entity) or any preconditions. It merely states what the tool does, leaving the agent to infer usage context from the name and sibling list.
list_events (Grade A)

List of all of Alpha's canonical events.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description only says 'list all'. It does not disclose whether the tool is read-only, safe to call repeatedly, or if there are any side effects. This is insufficient for a parameterless tool.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single, front-loaded sentence with no redundancy. Every word adds value, making it efficient for an agent to parse.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the description is sufficient for a simple listing tool, it lacks details about output format, ordering, or limitations. For a list tool, additional context (e.g., 'returns event IDs and names') would improve completeness.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters, and schema coverage is 100%. The description confirms the tool's purpose without needing parameter details, meeting the baseline for zero parameters.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it lists all canonical events of Alpha, using a specific verb ('list') and resource ('canonical event'). It distinguishes from siblings like get_event (singular) and search_alpha (search vs list all).
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives (e.g., get_event or search_alpha). The description implies a batch listing use case but lacks when-not-to-use or alternative references.
list_personas (Grade A)

Catalog of Alpha's 8 AI personas (active in the community). Synthetic characters; not 1:1 imitations of real people.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that personas are synthetic and not real, and that they are active in the community. However, it does not mention any authorization needs or side effects.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise with two short sentences, front-loading key information. Every word earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter list tool without output schema, the description provides adequate context: number of personas, their nature, and community activity. It could mention the output format, but is generally complete.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the description does not need to explain parameter semantics. Baseline of 4 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists 8 AI personas from Alpha, which are synthetic characters used in community activities. It distinguishes from sibling tools like ask_alpha or get_entity.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is for exploring available personas, but does not explicitly state when to use it versus alternatives. No exclusions or conditions are provided.
list_topics (Grade B)

List of all of Alpha's canonical topics.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description only states that it lists all canonical topics. No behavioral traits like pagination, limits, or side effects are disclosed.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence that front-loads the purpose with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema and no behavioral details, the description lacks completeness. It does not describe the return format or any constraints, which is insufficient for a list tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters, so the description does not need to add meaning beyond the schema. Baseline score of 4 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'list', the resource 'canonical topics', and the scope 'Alpha', distinguishing it from siblings like 'get_topic' and 'search_alpha'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives, such as 'get_topic' for a single topic or 'search_alpha' for queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_alpha (A)
Searches Alpha's entities, topics, events, and creators. Search with keywords related to Korean crypto, macro, politics, and international affairs.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results (default 10) | |
| query | Yes | Search term (Korean or English) | |
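As a sketch of how an agent might invoke this tool, the payload below follows the standard MCP `tools/call` JSON-RPC 2.0 framing. The parameter names (`query`, `limit`) come from the table above; the request id and the example keyword are illustrative assumptions, not values from the server.

```python
import json

# Hypothetical tools/call request for search_alpha, assuming standard
# MCP JSON-RPC 2.0 framing. Parameter names match the table above;
# the query string and request id are illustrative only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_alpha",
        "arguments": {
            "query": "bitcoin ETF",  # Korean or English keywords both work
            "limit": 5,              # optional; the server defaults to 10
        },
    },
}

print(json.dumps(request, ensure_ascii=False, indent=2))
```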
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears the full burden. It describes the search scope but does not disclose behavior such as its read-only nature, result format, pagination, or any side effects. The lack of annotations and the limited description leave agents uncertain about safety and response structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The two sentences are lean and front-loaded with the core purpose. Each adds distinct value: the first specifies what is searched, the second restricts the relevant domains. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema and no annotations, the description should explain return values or behavior. It does not mention that results include multiple object types or how they are formatted. For a search tool, this omission limits an agent's ability to handle responses.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with both parameters described. The description adds domain-specific context to the 'query' parameter (Korean crypto, macro, politics, etc.), going beyond the schema. This helps agents choose appropriate keywords, earning a score above the baseline of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb '검색' (search) and the resource 'Alpha's entities, topics, events, and creators', clearly distinguishing it from sibling tools like get_entity or list_topics, which target single types. It also specifies domain keywords (Korean crypto, macro, politics, international affairs) for precise scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for cross-type searches across entities, topics, events, and creators, but does not explicitly state when to prefer this tool over siblings like ask_alpha or get_active_pulses. No when-not-to-use conditions or alternative tools are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
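Before publishing, a quick local sanity check can catch a malformed file. The sketch below assumes only the structure shown above (a non-empty `maintainers` list of objects with an `email` key); it is not an official validator, and Glama's own verification may apply additional rules.

```python
import json

def looks_like_valid_glama_json(text: str) -> bool:
    """Minimal structural check for a /.well-known/glama.json file.

    Only verifies the shape shown in the example above; Glama's own
    verification may apply additional rules.
    """
    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        return False
    maintainers = data.get("maintainers")
    return (
        isinstance(maintainers, list)
        and len(maintainers) > 0
        and all(isinstance(m, dict) and "@" in m.get("email", "")
                for m in maintainers)
    )

sample = json.dumps({
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
})
```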
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.