koreanpulse
Server Details
Korean equity intel (EN): DART filings, activist filings, 5% foreign-holder flows, KRX news.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: whdrnr2583-cmd/koreanpulse
- GitHub Stars: 1
- Server Listing: koreanpulse
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 7 of 7 tools scored. Lowest: 2.9/5.
Each tool has a clearly distinct purpose: server info, code lookups, two separate paid monitoring tools for activists vs foreign holders, raw filings, and news search. No overlaps.
All tools use lowercase_with_underscores and most follow a verb_noun pattern (e.g., lookup_corp_code, monitor_activist_investors). The sole exception is koreanpulse_about, which uses the server name as prefix instead of a verb.
7 tools is well-scoped for a Korean financial filings server. Each tool serves a specific need without being overwhelming or too sparse.
The tool set covers entity resolution, news, raw filings, and two specialized monitoring tools. Minor gaps exist (e.g., no direct company profile) but the core workflow of monitoring disclosures is complete.
Available Tools
7 tools

koreanpulse_about · koreanpulse server self-description
A · Read-only · Idempotent
Server self-description — capability matrix, tool catalog, classifier counts, supported query patterns, primary sources. Free tier.
Use this tool when an agent first connects and needs the capability matrix to decide whether this server can answer the user's question, or when the user asks "what can koreanpulse do" or "what data sources does this MCP server provide". Returns a structured dict that downstream agents can ingest directly.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |

Output Schema

| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description clearly indicates a read-only, non-destructive action returning information. It provides sufficient transparency for an agent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
One short, front-loaded sentence with no unnecessary words. Every part is informative.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple about tool with an output schema, the description covers the main info categories and is complete given the low complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has zero parameters (100% coverage). The description adds value by specifying the categories of info returned (version, tools, sources, pricing), exceeding the baseline of 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns server information (version, tools, sources, pricing). This distinguishes it from sibling tools which perform specific data operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving server metadata but does not explicitly state when to use this over siblings or provide exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
lookup_corp_code · Resolve Korean company name to DART corp_code
A · Read-only · Idempotent
Korean company name → DART corp_code resolver. 117K+ entities indexed (KOSPI + KOSDAQ + KONEX + unlisted). Free tier.
Use this tool when the user mentions a Korean company by name (Korean characters or English/romanized) and you need the DART corp_code as a precondition for track_korean_filings, monitor_activist_investors, or monitor_foreign_holders. Also use to disambiguate same-name listed vs unlisted entities.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | max matches to return. | |
| query | Yes | substring of the Korean corp name. Examples: "삼성전자", "현대차", "셀트리온". | |
| license_key | No | subscription key. Required when license gate is enabled. | |
| listed_only | No | if True, only return companies with a KRX stock code. |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
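As a sketch of how a client might invoke this tool, the following builds a JSON-RPC 2.0 `tools/call` request using the argument names from the table above. Transport details (Streamable HTTP headers, session handling) are left to your MCP client, and `build_tool_call` is an illustrative helper, not part of this server.

```python
import json

def build_tool_call(name: str, arguments: dict, request_id: int = 1) -> str:
    """Serialize an MCP tools/call request as JSON-RPC 2.0."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }, ensure_ascii=False)

# Resolve a Korean company name to its DART corp_code,
# restricting matches to KRX-listed entities.
payload = build_tool_call(
    "lookup_corp_code",
    {"query": "삼성전자", "listed_only": True, "limit": 5},
)
```

The resolved `corp_code` from the response is the precondition for `track_korean_filings` and the two monitoring tools.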
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It only states the core function without disclosing behaviors like case sensitivity, exact vs. fuzzy matching, error handling, or rate limits. Minimal transparency beyond high-level purpose.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, front-loaded sentence with no wasted words. Every part contributes to understanding the tool's core function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Simple tool with output schema present. Description covers primary purpose but omits that query is a substring (schema covers this). Could mention default limit or output structure, but not critical given schema richness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. Description adds no extra context beyond clarifying that the tool resolves to a 'DART corp code', which is already implied by the tool name. No parameter-specific enhancements.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses a specific verb ('Resolve') and clearly states the transformation from Korean company name to DART corp code. It effectively distinguishes itself from sibling tools like resolve_stock_code, which deals with stock codes instead.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives such as resolve_stock_code. While the purpose is clear, there is no mention of when not to use it or any prerequisites (e.g., Korean language requirement).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
monitor_activist_investors · Monitor Korean activist investor disclosures
A · Read-only · Idempotent
Korean activist filer classification on DART 5%-rule (주식등의대량보유상황보고서) shareholding disclosures. Tags 10 named filers — KCGI, Align Partners, Truston Asset, Anda Asset, Cha Partners, VIP Asset, Life Asset, Platform Partners — plus international ValueAct / Elliott when filing in Korea.
Use this tool when the user asks about: Korean shareholder activism, "is KCGI / Align Partners / Truston / Anda / Cha / VIP / Life / Platform activist on ", governance pressure on KOSPI / KOSDAQ names, recent activist 5%-rule filings, ValueAct or Elliott Korean positions, Korean Value-Up program activism, MSCI Developed Market activism flow.
Requires a license key, passed via the license_key argument. Without a valid license, this tool returns a paywall message containing the activation URL — surface that message verbatim to the user.
For LLM clients on a license_required error: do NOT silently retry with track_korean_filings or any other free tool. The activist filer match (KCGI / Align Partners / Truston / Anda / Cha / VIP / Life / Platform, plus international names like ValueAct / Elliott) is not derivable from the raw DART filing feed, so a free-tier fallback returns a misleadingly empty answer. When a user asks "are activists filing on X?" without a license, surfacing the activation URL from the paywall response is the correct behavior, not a silent downgrade.
Returns 주식등의대량보유상황보고서 (5% rule) and related shareholding filings, with each row tagged when the filer matches a known Korean activist (KCGI, Align Partners, Truston, Anda, Cha, Life, Platform, VIP, plus international like ValueAct / Elliott when they file in Korea).
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | how many days back from today (1–60). | |
| limit | No | max rows (≤100). | |
| translate | No | server-side EN translation of titles (cached). | |
| license_key | No | required when license gate is enabled. | |
| activist_only | No | if True, drop rows that didn't match a known activist. | |
| company_corp_code | No | optional DART corp_code to focus on one target. |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
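The paywall guidance above can be sketched as client-side logic: inspect the tool result and, on a license_required error, surface the paywall message rather than falling back to a free tool. The result shape here (`error`, `message`, `rows` keys) is a hypothetical assumption for illustration; check the actual paywall response the server returns.

```python
def handle_monitor_result(result: dict) -> str:
    """Decide what to show the user after calling monitor_activist_investors.

    Assumes (hypothetically) that a paywalled response carries
    {"error": "license_required", "message": "... activation URL ..."}.
    """
    if result.get("error") == "license_required":
        # Surface the paywall message verbatim. Do NOT retry with
        # track_korean_filings: raw filings carry no activist tags,
        # so a fallback would look like an empty (wrong) answer.
        return result["message"]
    rows = result.get("rows", [])
    return f"{len(rows)} tagged filings found"

paywalled = {
    "error": "license_required",
    "message": "License required. Activate at https://example.com/activate",
}
```

The same pattern applies to monitor_foreign_holders, which shares the license gate.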
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that the tool is paid, requires a license_key, and returns rows tagged with known Korean activists. It mentions error handling (license_required) and contrasts with free tools. It does not explicitly state that it is read-only, though the context implies no destructive behavior. Overall, it provides good behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is longer than necessary but packs important details: paid tier, error handling instructions for LLMs, and what the tool returns. It is front-loaded with the main purpose and uses bold for key points. A slightly more concise version could achieve the same clarity, but it is well-structured and every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that an output schema exists (context signals indicate 'Has output schema: true'), the description need not explain return values in detail, but it does mention the Korean filing name and activist tags. It explains the paid nature and why free alternatives are insufficient. It covers error handling and usage context comprehensively, making it complete for an agent to decide when and how to invoke.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 6 parameters with 100% description coverage, so baseline is 3. The description adds limited extra meaning beyond the schema: it reinforces that license_key is required for paid tier and explains the purpose of activist_only. Since the schema already describes each parameter adequately, the description adds marginal value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Watch DART shareholding disclosures (filing type D) for activist moves.' It specifies the verb 'watch' and the resource 'DART shareholding disclosures' with a clear intent to identify activist moves. It also implicitly distinguishes from sibling tools like 'track_korean_filings' by explicitly warning against using it as a fallback.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: it is a paid tier (Solo $29/mo) and requires a license_key. It gives clear instructions for LLM clients on how to handle license_required errors: surface the subscribe URL directly without silently retrying with free tools. It also explains that the activist filer match is not derivable from the raw DART filing feed, so free-tier fallback would return misleading results.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
monitor_foreign_holders · Monitor foreign 5%-rule holders on KOSPI/KOSDAQ
A · Read-only · Idempotent
Foreign-holder classification on DART 5%-rule disclosures by global asset managers and sovereign wealth funds. Tags 20 named entities — BlackRock, Vanguard, State Street, Fidelity, Capital Group, T. Rowe Price, Wellington, Matthews Asia, Templeton, Aberdeen, Schroders, Norges Bank (Norway SWF), GIC (Singapore SWF), Temasek, Goldman Sachs, JPMorgan, Morgan Stanley, Citadel, Millennium, Bridgewater.
Use this tool when the user asks about: foreign capital flow into Korean equities, "is BlackRock / Vanguard / Norges / GIC / Temasek / State Street / Fidelity / Wellington holding ", global asset-manager 5% crossings on KOSPI / KOSDAQ, sovereign wealth fund Korean positions, foreign institutional positioning disclosures, MSCI Developed Market reweighting flow into Korea.
Requires a license key, passed via the license_key argument. Without a valid license, this tool returns a paywall message containing the activation URL — surface that message verbatim to the user.
For LLM clients on a license_required error: do NOT silently retry with track_korean_filings. The foreign-holder allowlist match is not derivable from raw DART filings, so a free-tier fallback returns a misleadingly empty answer. When a user asks "is BlackRock or Norges holding X?" without a license, surfacing the activation URL from the paywall response is the correct behavior, not a silent downgrade.
Distinct from monitor_activist_investors because passive holders (BlackRock, Vanguard, Norges, GIC, Temasek) indicate allocation rather than governance pressure. Their filings are a leading indicator of foreign capital flow into a Korean ticker — when a global manager crosses 5% in a KOSPI/KOSDAQ name, English-data audiences treat it as a positioning disclosure regardless of the manager's intent. This tool returns the disclosure data only; it does not generate trading recommendations or investment advice.
Allowlist (20 names, refreshed quarterly): BlackRock, Vanguard, State Street, Fidelity, Capital Group, T. Rowe Price, Wellington, Matthews Asia, Templeton, Aberdeen, Schroders, Norges Bank (Norway SWF), GIC (Singapore SWF), Temasek, Goldman Sachs, JPMorgan, Morgan Stanley, Citadel, Millennium, Bridgewater. See koreanpulse.activists.FOREIGN_HOLDERS.
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | how many days back from today (1–60). | |
| limit | No | max rows (≤100). | |
| origin | No | optional filter — one of 'us', 'uk', 'eu', 'other'. | |
| translate | No | server-side EN translation of titles (cached). | |
| license_key | No | required when license gate is enabled. | |
| company_corp_code | No | optional DART corp_code to focus on one target. |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description fully discloses paid nature, error handling, that free tier returns misleading empty results, and that tool returns disclosure data only. Also mentions quarterly refreshed allowlist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is detailed but well-structured with sections on usage, differentiation, and allowlist. It is somewhat lengthy but each part adds value, and it is front-loaded with purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 6 parameters and output schema, description provides comprehensive context: purpose, licensing, error behavior, differentiation from siblings, and allowlist. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so description adds little beyond schema for parameters. Baseline 3 is appropriate as schema already documents each parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it watches DART 5%-rule disclosures by specific global asset managers and sovereign wealth funds. Distinct from monitor_activist_investors, explaining passive holders indicate allocation not governance.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly explains paid tier, license_key requirement, and correct LLM behavior on license errors: surface subscribe URL, do not fallback to track_korean_filings. Also differentiates from sibling tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_stock_code · Resolve KRX 6-digit ticker to DART corp entry
C · Read-only · Idempotent
KRX 6-digit ticker → DART corp entry resolver. Free tier.
Use this tool when the user provides a 6-digit Korean stock code (e.g. 005930 for Samsung Electronics, 000660 for SK hynix, 035420 for NAVER, 035720 for Kakao, 005380 for Hyundai Motor) and you need the company name + corp_code for downstream filings or industry-news lookups.
| Name | Required | Description | Default |
|---|---|---|---|
| stock_code | Yes | ||
| license_key | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
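Because KRX tickers are 6-digit, zero-padded strings, numeric input such as 5930 should be normalized before calling this tool. A minimal sketch — the zero-padding rule is standard KRX ticker formatting, not something this server's description documents:

```python
def normalize_stock_code(raw) -> str:
    """Normalize a KRX ticker to its canonical 6-digit, zero-padded form."""
    code = str(raw).strip()
    if not code.isdigit() or len(code) > 6:
        raise ValueError(f"not a KRX stock code: {raw!r}")
    return code.zfill(6)  # e.g. 5930 -> "005930" (Samsung Electronics)
```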
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. The description does not disclose whether the tool is read-only, requires authentication, error handling, or any side effects. The existence of an output schema is not mentioned.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, zero wasted words, perfectly concise for a straightforward lookup tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having an output schema, the description lacks context about error cases, required authentication (license_key), and how the output relates to sibling tools. The tool sits in a suite of similar lookups, yet the description offers no guidance on choosing between them.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds format constraint '6-digit' for the stock_code parameter, which is helpful. However, the optional license_key parameter is completely unaddressed, and schema has 0% description coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool resolves a 6-digit KRX stock code to a DART corp entry. The verb 'resolve' is appropriate for a lookup/conversion. However, it does not differentiate from the sibling tool 'lookup_corp_code'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like lookup_corp_code. No prerequisites or exclusions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_korean_industry_news · Search Korean industry news (16 sectors)
B · Read-only · Idempotent
Korean industry news search across 16 sectors with on-demand English translation. Sources: 전자신문 (etnews) + 한국경제 (hankyung). Free tier.
Use this tool when the user asks about: Korean industry trends, sector-specific news on Korean equities (Korean semiconductors / K-battery / K-shipbuilding / K-biotech / K-defense / Korean auto / EV charging / Korean AI / steel / petrochem / construction / fintech / gaming / e-commerce / telco / energy), recent corporate developments not yet captured in DART filings, English summaries of Korean industry coverage. Industry tags listed below — pass them in industries to filter.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | max articles (≤50). | |
| sources | No | filter to source keys (etnews, hankyung). None = all. | |
| translate | No | server-side translates `title_en`. Cached aggressively. | |
| industries | No | filter to one or more industry tags. Available: semiconductor, shipbuilding, battery, biotech, defense, auto, ev_charging, ai, steel, petrochem, construction, fintech, gaming, ecommerce, telco, energy. | |
| license_key | No | required when license gate is enabled. |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
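Since the industries parameter accepts only the 16 tags enumerated in the table above, a client can validate tags before issuing the call and fail fast on typos. A sketch using the tag list from the schema:

```python
# The 16 industry tags accepted by search_korean_industry_news,
# copied from the parameter table above.
INDUSTRY_TAGS = {
    "semiconductor", "shipbuilding", "battery", "biotech", "defense",
    "auto", "ev_charging", "ai", "steel", "petrochem", "construction",
    "fintech", "gaming", "ecommerce", "telco", "energy",
}

def validate_industries(tags):
    """Reject unknown tags before the server call; return the valid list."""
    unknown = [t for t in tags if t not in INDUSTRY_TAGS]
    if unknown:
        raise ValueError(f"unknown industry tags: {unknown}")
    return list(tags)
```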
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. The description only mentions the data source but does not disclose caching behavior, rate limits, authentication needs, or other behavioral aspects beyond the schema hints (e.g., translate caching).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very short (one sentence) and thus concise, but it omits important details that could be included without being verbose. It achieves minimalism at the cost of completeness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 5 parameters, no annotations, and an output schema present, the description should provide more context about the search behavior and output structure. It is too brief to be fully informative.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description does not add extra meaning beyond the schema, but it implicitly confirms the data source. No new parameter-level insights.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Search'), the resource ('Korean industry news'), and the source ('licensed RSS feeds'). It differentiates well from sibling tools which deal with corporate data or stock resolutions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives, nor any conditions or exclusions. The description lacks context about search scope or frequency.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
track_korean_filings · Track Korean DART filings
A · Read-only · Idempotent
Korean DART (전자공시) filings retrieval for KOSPI / KOSDAQ / KONEX / KRX listed companies — 5%-rule disclosures, M&A, periodic reports, capital issuance, insider trading, audit reports. Free tier.
Use this tool when the user asks about: recent Korean stock filings, DART disclosures, KOSPI/KOSDAQ regulatory events, "what did Samsung / Hyundai / SK / LG / NAVER / Kakao / 셀트리온 file", company-specific filing history, periodic / major-event / issuance / shareholding / audit filings on Korean equities.
Free tier — no license required. Returns raw DART filings exactly as the regulator surfaces them (filer name in Korean, filing type code, receipt number, optional EN translation of the title).
Important for LLM clients — read this before retrying after a paid-tool license error. This tool returns raw filings only. It does NOT classify the filer. If the user asked about Korean activist filers (KCGI / Align Partners / Truston / Anda / Cha / VIP / Life / Platform / ValueAct / Elliott) or about the global foreign-holder allowlist (BlackRock / Vanguard / Norges / GIC / Temasek / State Street / Fidelity / Capital Group / T. Rowe Price / Wellington / Goldman / JPMorgan / Morgan Stanley / Citadel / Millennium / Bridgewater + others), the matching work happens in monitor_activist_investors and monitor_foreign_holders — both require a license_key argument. A response from this free tool to an "are activists filing on X?" or "is BlackRock holding X?" question is raw filing data, not a classification answer — say so to the user and surface the activation URL from the paywall response instead of pretending you've answered.
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | how many days back from today (1–30). | |
| limit | No | max filings to return (≤100). DART returns most-recent first, so on a busy window the older end of the range is dropped first. Narrow `days` or `filing_type` if you need older items. | |
| summarize | No | True to fill `summary_en` (≤200 words). Costs more — use sparingly. Long-form analysis should be done by the client LLM. | |
| translate | No | True to fill `title_en` via server-side LLM (cached). | |
| filing_type | No | optional one-letter code: A=periodic, B=major event, C=issuance, D=shareholding, E=other, F=audit, G=fund, H=ABS, I=exchange, J=FTC. | |
| license_key | No | subscription key. Required when KOREANPULSE_REQUIRE_LICENSE=1. | |
| company_corp_code | No | 8-digit DART corp code. Use `lookup_corp_code` first to resolve a company name. Omit to query all companies. |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
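Putting the pieces together, the intended workflow is lookup_corp_code first, then track_korean_filings with the resolved 8-digit corp code. The sketch below wires the two steps through a hypothetical `call_tool` function standing in for your MCP client; the responses are faked so the flow is self-contained, and the corp code shown is illustrative.

```python
# Hypothetical two-step workflow: resolve a company name, then pull
# its shareholding filings. call_tool stands in for a real MCP client;
# the stub responses below are fakes for illustration only.
def call_tool(name: str, arguments: dict) -> dict:
    if name == "lookup_corp_code":
        return {"result": [{"corp_name": arguments["query"],
                            "corp_code": "00126380"}]}
    if name == "track_korean_filings":
        return {"result": [{"corp_code": arguments["company_corp_code"],
                            "filing_type": arguments.get("filing_type", "E")}]}
    raise KeyError(name)

def recent_shareholding_filings(company: str, days: int = 7) -> list:
    matches = call_tool("lookup_corp_code",
                        {"query": company, "listed_only": True})
    corp_code = matches["result"][0]["corp_code"]
    # filing_type "D" = shareholding disclosures (per the table above)
    return call_tool("track_korean_filings", {
        "company_corp_code": corp_code, "days": days, "filing_type": "D",
    })["result"]
```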
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It clearly states that this tool returns raw filings, does not classify the filer, and details the output format (filer name in Korean, filing type code, receipt number, optional EN translation). It also warns about behavior when data is missing and how to handle license errors, providing high transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is relatively long but well-structured, starting with the core purpose and then providing necessary usage guidelines and behavioral notes. Every sentence adds value, though it could be slightly more concise. Still, for the complexity of the tool, this is justified.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 7 parameters, no annotations, and an output schema, the description is extremely complete. It covers purpose, usage guidelines, behavioral details, parameter context, and error handling advice, leaving no significant gaps for an LLM agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, and the schema already provides detailed descriptions for each parameter. The tool description does not add significant new meaning beyond what the schema offers, so a baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Fetch recent DART filings for Korean listed companies (free tier).' It clearly identifies the action (fetch), resource (DART filings), and scope (recent, Korean listed). Additionally, it distinguishes itself from sibling tools like monitor_activist_investors and monitor_foreign_holders by noting they are paid-tier and handle classification.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides comprehensive usage guidance: it explains when to use this free tool, explicitly warns against using it for activist or foreign holder queries (directing to paid alternatives), and includes a detailed note for LLM clients on how to handle responses when users ask questions that require classification. It also mentions the free tier limitation and suggests alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.