Korean News Hub
Server Details
Korean news aggregator - Naver, Google News, and Daum trends in real time
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: SongT-50/korean-news-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.7/5 across 6 of 6 tools scored. Lowest: 2.6/5.
Each tool has a clearly distinct purpose: daily_briefing combines multiple sources, korean_news focuses on Korean news by category, news_search is a general keyword search, read_article extracts content from a URL, tech_news covers global AI/tech topics, and trending provides current headlines. No two tools have overlapping functionality.
Tool names follow a mix of patterns: verb_noun (read_article, daily_briefing), noun_noun (tech_news, korean_news, news_search), and gerund (trending). While each name is descriptive, the lack of a consistent convention (e.g., all verb_noun) reduces predictability.
With 6 tools, the server is well-scoped for a news hub. Each tool covers a distinct need (browsing, searching, reading, trending) without being too few or too many. The count feels appropriate for the domain.
The tool set covers core news consumption tasks: browsing by category, searching, reading full articles, and getting trending topics. Minor gaps like filtering by date or user preferences exist, but the current surface is functional and unlikely to cause agent failures.
Available Tools
6 tools

daily_briefing (quality grade: A)
Generate a comprehensive daily news briefing. Combines Korean headlines + AI/tech news + Claude/Anthropic news.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description is straightforward about its aggregating behavior. It doesn't disclose details like freshness or update frequency, but for a simple briefing tool, the purpose is transparent. No contradictions with annotations since none exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, immediately stating the tool's purpose and components. Every word contributes meaning; no filler or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no parameters and a simple purpose, the description is sufficient. It names the three key content areas. An output schema exists, so return values need not be explained.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are zero parameters, so schema coverage is trivially 100%. Per the scoring guideline, the baseline is 4. The description adds no parameter information, but none is needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it generates a comprehensive daily news briefing and specifies the components: Korean headlines, AI/tech news, and Claude/Anthropic news. This distinguishes it from sibling tools like korean_news, tech_news, etc.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for a combined briefing but does not explicitly state when to use it versus alternatives like reading individual news sources or searching. No when-not or direct comparison is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
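Since the server speaks MCP over Streamable HTTP, tool invocations are ordinary JSON-RPC 2.0 `tools/call` requests. A minimal sketch of the payload for the parameterless daily_briefing tool; the `build_tool_call` helper is illustrative, not part of the server:

```python
import json

def build_tool_call(name: str, arguments: dict, req_id: int = 1) -> str:
    """Build the JSON-RPC 2.0 envelope an MCP client sends for tools/call."""
    request = {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }
    return json.dumps(request, ensure_ascii=False)

# daily_briefing takes no parameters, so arguments is an empty object.
payload = build_tool_call("daily_briefing", {})
print(payload)
```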
korean_news (quality grade: A)
Get Korean news by category.
Args:
category: News category. Options: 속보 (breaking), 정치 (politics), 경제 (economy), 사회 (society), IT, 세계 (world), 연예 (entertainment), 스포츠 (sports)
count: Number of articles (default 10, max 20)

| Name | Required | Description | Default |
|---|---|---|---|
| count | No | ||
| category | No | | 속보 |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description only states 'Get,' implying a read-only operation. It does not disclose behavioral traits like caching, freshness, auth needs, or rate limits. Standard behavior is assumed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise (5 lines), front-loaded with the main purpose, and every sentence provides necessary information without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple two-parameter tool and presence of an output schema, the description covers all necessary information (category options and count limits) completely.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description compensates well by listing category options in Korean and specifying count defaults and max. This adds significant meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves Korean news by category, using a specific verb ('Get') and resource. It effectively distinguishes from siblings like 'news_search' (search) and 'read_article' (specific article).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains parameters and their options, but lacks explicit when-to-use or when-not-to-use guidance compared to sibling tools. Usage is implied but not contrasted.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
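For reference, the documented category values with English glosses. The server expects the Korean labels themselves; the mapping is for readers only, and only its keys come from the tool description:

```python
# korean_news category values as documented, glossed in English.
CATEGORIES = {
    "속보": "breaking",
    "정치": "politics",
    "경제": "economy",
    "사회": "society",
    "IT": "IT",
    "세계": "world",
    "연예": "entertainment",
    "스포츠": "sports",
}

# The parameter table lists 속보 (breaking) as the default category.
assert "속보" in CATEGORIES
```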
news_search (quality grade: A)
Search news by keyword.
Args:
query: Search keyword (e.g. "Samsung AI", "Claude Code", "MCP server")
language: "ko" (Korean) or "en" (English)
count: Number of articles (default 10, max 20)

| Name | Required | Description | Default |
|---|---|---|---|
| count | No | ||
| query | Yes | ||
| language | No | | ko |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description provides key behavioral details like allowed language values ('ko' or 'en'), default count (10), and max count (20). It also includes example queries, adding context beyond the schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with a clear Args list, no wasted words, and front-loaded purpose. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While an output schema exists (not shown), the description covers inputs well. It could mention output format briefly, but overall adequate for a simple search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description fully compensates. It explains 'query' as a search keyword with examples, 'language' with allowed values, and 'count' with default and max constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Search news by keyword' which is a specific verb+resource. It distinguishes from siblings like 'daily_briefing' or 'korean_news' by focusing on keyword search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lists arguments but does not explicitly state when to use this tool versus alternatives. There is no exclusion or comparison with siblings, leaving the agent to infer usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
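The count bounds (default 10, max 20) and the two allowed language codes are documented only in prose, so a cautious client can enforce them before sending the request. A sketch with the limits taken from the description above; the helper name is hypothetical:

```python
def search_arguments(query: str, language: str = "ko", count: int = 10) -> dict:
    """Build news_search arguments, clamping count to the documented 1..20 range."""
    if language not in {"ko", "en"}:
        raise ValueError(f"language must be 'ko' or 'en', got {language!r}")
    return {
        "query": query,
        "language": language,
        "count": max(1, min(count, 20)),
    }

args = search_arguments("Samsung AI", language="en", count=50)
print(args)  # count is clamped to the documented maximum of 20
```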
read_article (quality grade: C)
Read and extract article content from a URL.
Args:
url: The article URL to read

| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully convey behavior. It only states 'read and extract' without mentioning read-only nature, authentication needs, or what happens with errors. Minimal disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very short and to the point, but it lacks structure like a summary line or key details. Acceptable conciseness, but missing important information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and an output schema exists, the description should explain what 'extract article content' means (e.g., returns text, metadata). It does not, leaving the agent uninformed about return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, so the description must add meaning. The description only repeats the parameter name and type ('url: The article URL to read'), providing no added value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states it reads and extracts article content from a URL, which is clear. However, it does not differentiate from sibling tools like news_search or trending, which might also read articles.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool vs alternatives such as news_search or korean_news. It does not specify context or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
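Since read_article documents neither URL constraints nor error behavior, a client may want a cheap precheck before calling it. A sketch using only the standard library; the example URL is illustrative:

```python
from urllib.parse import urlparse

def is_http_url(url: str) -> bool:
    """Accept only absolute http(s) URLs before handing them to read_article."""
    parts = urlparse(url)
    return parts.scheme in {"http", "https"} and bool(parts.netloc)

print(is_http_url("https://news.naver.com/some-article"))  # True
print(is_http_url("not a url"))                            # False
```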
tech_news (quality grade: A)
Get global AI/tech news by topic.
Args:
topic: Tech topic. Options: AI, Claude, OpenAI, MCP, OpenClaw, 스타트업 (startups), 개발 (development), 클라우드 (cloud)
count: Number of articles (default 10, max 20)

| Name | Required | Description | Default |
|---|---|---|---|
| count | No | ||
| topic | No | | AI |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden but only states it 'gets' news, implying a read-only operation. It does not disclose any behavioral traits such as rate limits, authentication needs, or side effects beyond the default/max count hint.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded with the purpose, followed by a clean 'Args' listing. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (2 simple params) and the presence of an output schema, the description adequately covers the parameters and general purpose. A minor gap is the lack of mentions of time range (e.g., 'recent news') or comparison to siblings, but overall it is sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, but the description compensates well by listing topic options (AI, Claude, etc.) and specifying count default (10) and max (20). This adds clear meaning beyond the schema, which only provides type and default.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'global AI/tech news by topic,' using a specific verb and resource. It distinguishes itself from sibling tools like 'daily_briefing' (which likely covers broader daily news) and 'korean_news' (by language).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for fetching news on a specific topic but provides no comparison to sibling tools like 'news_search' or 'trending.' It lacks explicit when-to-use or when-not-to-use guidance, making it minimally adequate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
trending (quality grade: B)
Get current trending/headline news.
Args:
scope: "korea" (Korean headlines) or "tech" (global tech trends)

| Name | Required | Description | Default |
|---|---|---|---|
| scope | No | | korea |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, and the description offers no behavioral traits beyond the basic read operation (e.g., no mention of read-only nature, authentication needs, rate limits, or side effects). The description carries the full burden but fails to add transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise, using only two sentences including the argument list. Every word adds value, and the key information is front-loaded in the first sentence. No unnecessary text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (single optional parameter with default) and the presence of an output schema (which covers return format), the description is largely complete. It lacks details on update frequency or behavior when scope is unsupported, but these are minor gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With no schema descriptions (0% coverage), the description explains the single parameter 'scope' by listing the two allowed values and their meanings ('korea' for Korean headlines, 'tech' for global tech trends). This adds value beyond the schema, though it does not mention case sensitivity or validation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves current trending/headline news, with specifics on scope options (Korea or tech). However, it does not explicitly differentiate from sibling tools like korean_news and tech_news, which cover overlapping domains, missing a chance to clarify when to use this tool over those.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives (e.g., daily_briefing, news_search). The scope parameter hints at usage contexts, but lacks direct comparisons or 'when not to use' advice, making it less helpful for an AI agent deciding between tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
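Because scope accepts only the two values listed above, validating it client-side avoids a failed round trip when the description's constraints are missed. A minimal sketch; the helper name is illustrative:

```python
VALID_SCOPES = {"korea", "tech"}

def trending_arguments(scope: str = "korea") -> dict:
    """Build trending arguments, rejecting anything but the two documented scopes."""
    if scope not in VALID_SCOPES:
        raise ValueError(f"scope must be one of {sorted(VALID_SCOPES)}, got {scope!r}")
    return {"scope": scope}

print(trending_arguments())        # falls back to the documented default
print(trending_arguments("tech"))
```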
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
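Before publishing, a server owner can sanity-check the file's shape with a few lines of code. This checks only the two fields shown in the example above, not the full schema:

```python
import json

manifest_text = """
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
"""

def has_maintainer_emails(doc: dict) -> bool:
    """True if maintainers is a non-empty list of objects that each carry an email."""
    maintainers = doc.get("maintainers")
    return (
        isinstance(maintainers, list)
        and len(maintainers) > 0
        and all(isinstance(m, dict) and "email" in m for m in maintainers)
    )

doc = json.loads(manifest_text)
print(has_maintainer_emails(doc))  # True
```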
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
A connector's status is marked unhealthy when Glama is unable to connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!