Korean Stock Market Data
Server Details
Korean stock market data - prices, dividends, short selling, financial disclosures
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: SongT-50/korean-stock-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score: 4/5 across all 7 tools scored. Lowest-scoring tool: 3.4/5.
Most tools have distinct purposes (compare, price, trend, dividend, index, search, popular list). However, get_stock_price and get_price_trend both provide price information, and compare_stocks also compares prices, leading to slight overlap. Descriptions help differentiate but some ambiguity remains.
All tools follow a consistent verb_noun pattern in snake_case (e.g., get_stock_price, compare_stocks, search_stock). No mixing of conventions, making it predictable for agents.
7 tools is well-scoped for a stock market data server. It covers essential operations without being overwhelming. Each tool serves a clear function.
The tool set covers core stock data needs: price, trend, comparison, dividends, indices, and search. Minor gaps include limited historical depth (max 30 days for trend) and no sector or foreign data, but overall complete for typical use.
Available Tools
7 tools

compare_stocks (B)

Compares price quotes for multiple stocks.
Args:
stock_names: Names of the stocks to compare (comma-separated, e.g. "삼성전자,SK하이닉스,NAVER")
date: Query date (YYYY-MM-DD). An empty string means the most recent business day.
Returns:
Per-stock price comparison (closing price, change rate, trading volume, market capitalization)

Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| date | No | | |
| stock_names | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
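The docstring above implies a simple argument shape: a single comma-separated string of names plus an optional date. A minimal sketch of building that payload client-side (the helper name `build_compare_stocks_args` is hypothetical, not part of the server; the date check mirrors only the documented YYYY-MM-DD format):

```python
from datetime import datetime

def build_compare_stocks_args(stock_names: list[str], date: str = "") -> dict:
    """Build an arguments payload for compare_stocks.

    stock_names is joined into the comma-separated string the tool expects;
    an empty date means "most recent business day" per the docstring.
    """
    if not stock_names:
        raise ValueError("at least one stock name is required")
    if date:  # empty string is valid and means the latest business day
        datetime.strptime(date, "%Y-%m-%d")  # raises ValueError on bad format
    return {"stock_names": ",".join(stock_names), "date": date}
```

The resulting dict is what an MCP client would pass as the tool-call arguments.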
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It describes a read-like operation (comparison) but does not explicitly state it is read-only, nor does it mention any behavioral traits such as rate limits, data freshness, or the impact of requesting many stocks. The lack of side effects is implied but not confirmed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a docstring with Args and Returns sections, front-loading the purpose. It is concise, with each part adding value. However, the Returns section may be partially redundant given the output schema exists, but it still provides helpful summaries.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 2 parameters and an output schema, the description covers purpose, parameter semantics, and return values. It lacks error handling or performance notes, but overall it provides sufficient context for an AI agent to use the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, so the description must compensate. It adds meaning: stock_names expects comma-separated values with an example ('삼성전자,SK하이닉스,NAVER'), and date uses YYYY-MM-DD format with empty string default meaning recent business day. This provides crucial context not present in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that it compares price quotes across multiple stocks, using the verb '비교' (compare) and naming the resource ('여러 종목의 시세', multiple stocks' quotes). It distinguishes itself from siblings like get_stock_price (single stock) and get_price_trend (trend) through the concept of multiple stocks, but could be more explicit.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like get_stock_price for single stock or search_stock for finding symbols. There is no mention of prerequisites, when not to use, or typical use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_dividend_info (A)

Retrieves stock dividend information.
Args:
company_name: Company name (e.g. "삼성전자", "SK하이닉스")
year: Fiscal year (e.g. "2025"). An empty string means the most recent year.
num_results: Number of results (default 20, max 100)
Returns:
Dividend yield, dividend amount, record date, payment date

Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| year | No | | |
| num_results | No | | |
| company_name | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
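All three parameters are optional with documented defaults. A minimal sketch of enforcing those defaults and bounds before calling the tool (the helper `build_dividend_args` is hypothetical; the checks mirror only what the docstring states):

```python
def build_dividend_args(company_name: str = "", year: str = "",
                        num_results: int = 20) -> dict:
    """Build an arguments payload for get_dividend_info.

    Defaults mirror the docstring: an empty year means the most recent
    fiscal year; num_results defaults to 20 and is capped at 100.
    """
    if not 1 <= num_results <= 100:
        raise ValueError("num_results must be between 1 and 100")
    if year and not (year.isdigit() and len(year) == 4):
        raise ValueError('year must be a 4-digit string like "2025", or empty')
    return {"company_name": company_name, "year": year,
            "num_results": num_results}
```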
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description does not explicitly state that the tool is read-only or safe, though '조회' implies retrieval. It lists output fields but lacks detail on side effects or prerequisites.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, front-loads the purpose, and uses a clear docstring format without unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple three-parameter, no-required-parameter nature and the presence of an output schema, the description covers purpose, parameters, and return fields comprehensively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by providing examples, defaults, and constraints for all three parameters, adding significant semantic value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves stock dividend information, differentiating it from sibling tools like get_stock_price or get_price_trend.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives, leaving the agent to infer context from the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_market_index (A)

Retrieves major market indices such as KOSPI and KOSDAQ.
Args:
index_name: Index name (e.g. "코스피", "코스닥"). An empty string returns all major indices.
date: Query date (YYYY-MM-DD). An empty string means the most recent business day.
num_results: Number of results (default 20, max 100)
Returns:
Index close, change rate, trading volume, trading value, listed market capitalization

Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| date | No | | |
| index_name | No | | |
| num_results | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
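Since every parameter is optional and an empty index_name broadens the query to all major indices, a zero-argument call is the simplest valid use. A minimal sketch (the helper `build_index_args` is hypothetical; it encodes only the defaults and bounds documented above):

```python
from datetime import datetime

def build_index_args(index_name: str = "", date: str = "",
                     num_results: int = 20) -> dict:
    """Build an arguments payload for get_market_index.

    An empty index_name asks for all major indices; an empty date means
    the most recent business day, both per the docstring above.
    """
    if date:
        datetime.strptime(date, "%Y-%m-%d")  # ValueError if not YYYY-MM-DD
    if not 1 <= num_results <= 100:
        raise ValueError("num_results must be between 1 and 100")
    return {"index_name": index_name, "date": date, "num_results": num_results}
```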
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It states returns but does not disclose behavioral traits such as permissions, side effects, or rate limits. It is a read operation, but that is inferred rather than stated.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is structured with Args and Returns sections, clear and front-loaded. It is concise but includes necessary examples, earning its sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema exists, the description covers all necessary parameter details and return fields. It provides a complete picture for an index query tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds extensive meaning beyond the input schema: example values for index_name, date format, and bounds for num_results. Schema coverage is 0%, so the description fully compensates.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool queries major market indices like KOSPI and KOSDAQ, using a specific verb '조회' (query). It distinguishes from sibling stock tools, which focus on individual stocks.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for index data but does not explicitly state when to use this tool versus alternatives. Given siblings are all stock-related, the context is clear but lacks explicit guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_popular_stocks (A)

Retrieves a list of codes for major popular stocks. Refer to this when you do not know a stock code.
Returns:
List of major stock names and their 6-digit codes

Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It only states that it returns a list of popular stock codes and names. No behavioral details (e.g., data freshness, rate limits) are mentioned, but for a simple list retrieval, this is acceptable.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: two short sentences plus a returns line. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (zero parameters, single output), the description fully explains purpose and usage. The presence of an output schema fills any structural gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters, so this dimension receives its baseline score of 4. The description does not need to add any parameter information.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves a list of popular stock codes, and explicitly says to use it when you don't know the code, distinguishing it from sibling tools like compare_stocks or get_stock_price.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear guidance: 'Refer when you don't know the stock code.' This implies use before other tools. No explicit exclusions or alternatives are given, but the straightforward nature of the tool makes this adequate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_price_trend (A)

Retrieves a stock's price trend over the most recent N days.
Args:
stock_name: Stock name (e.g. "삼성전자", "NAVER")
stock_code: 6-digit stock short code (e.g. "005930"). Provide either this or stock_name.
days: Query period (default 7 days, max 30 days)
Returns:
Daily closing price, change rate, and volume trend, plus the period return

Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | | |
| stock_code | No | | |
| stock_name | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
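The docstring says to supply stock_name or stock_code ("둘 중 하나", one of the two); a client can enforce that before calling. A minimal sketch assuming exactly one of the two is expected (the helper `build_trend_args` is hypothetical, not part of the server):

```python
def build_trend_args(stock_name: str = "", stock_code: str = "",
                     days: int = 7) -> dict:
    """Build an arguments payload for get_price_trend.

    Assumes exactly one of stock_name / stock_code must be given, and
    mirrors the documented bounds: days defaults to 7, max 30.
    """
    if bool(stock_name) == bool(stock_code):
        raise ValueError("provide exactly one of stock_name or stock_code")
    if stock_code and not (stock_code.isdigit() and len(stock_code) == 6):
        raise ValueError('stock_code must be 6 digits, e.g. "005930"')
    if not 1 <= days <= 30:
        raise ValueError("days must be between 1 and 30 (default 7)")
    return {"stock_name": stock_name, "stock_code": stock_code, "days": days}
```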
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully bears the burden. It correctly characterizes the tool as a read operation and details the return format. However, it lacks mention of authentication, rate limits, or behavior on missing data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured, with clear sections for args and returns, and front-loaded with the purpose. Minor redundancy slightly reduces conciseness, but it is still efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema, the description does not need to detail returns but does so anyway. It covers parameters, defaults, and return fields. Missing error handling or edge cases slightly reduces completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Since schema description coverage is 0%, the description's parameter descriptions add significant value: explaining stock_name/stock_code are alternatives (either one) and days default/max. This compensates for the schema's lack of descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves recent N-day stock price trends for a given stock, and specifies return fields (daily close, change rate, volume, period return). It distinguishes from siblings like get_stock_price (single price) and compare_stocks (comparison).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for trend analysis but does not explicitly state when to use this tool versus alternatives like get_stock_price. No 'when not to use' or exclusion criteria are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_stock_price (A)

Retrieves Korean stock price quotes. You can search by stock name or stock code. Data is based on the previous day's close (not same-day real-time).
Args:
stock_name: Stock name (e.g. "삼성전자", "NAVER", "카카오")
stock_code: 6-digit stock short code (e.g. "005930"). Enter only one of this or stock_name.
date: Query date (YYYY-MM-DD). An empty string means the most recent business day.
market: Market segment ("KOSPI", "KOSDAQ", "KONEX"). An empty string means all markets.
num_results: Number of results (default 20, max 100)
Returns:
Closing price, open, high, low, volume, change rate, market capitalization, etc.

Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| date | No | | |
| market | No | | |
| stock_code | No | | |
| stock_name | No | | |
| num_results | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
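This tool has the most documented constraints: mutual exclusivity of stock_name and stock_code, a fixed market enum, a date format, and a bounded result count. A minimal sketch of checking all of them client-side (the helper `build_price_args` is hypothetical; it mirrors only the docstring):

```python
from datetime import datetime

VALID_MARKETS = {"", "KOSPI", "KOSDAQ", "KONEX"}  # "" means all markets

def build_price_args(stock_name: str = "", stock_code: str = "",
                     date: str = "", market: str = "",
                     num_results: int = 20) -> dict:
    """Build an arguments payload for get_stock_price.

    Enforces the documented constraints: stock_name and stock_code are
    mutually exclusive, market is a fixed enum, and date is YYYY-MM-DD.
    """
    if stock_name and stock_code:
        raise ValueError("enter only one of stock_name or stock_code")
    if market not in VALID_MARKETS:
        raise ValueError('market must be "KOSPI", "KOSDAQ", "KONEX", or empty')
    if date:  # empty string means the most recent business day
        datetime.strptime(date, "%Y-%m-%d")
    if not 1 <= num_results <= 100:
        raise ValueError("num_results must be between 1 and 100")
    return {"stock_name": stock_name, "stock_code": stock_code, "date": date,
            "market": market, "num_results": num_results}
```

Remember the returned data reflects the previous day's close, so a same-day date will not yield real-time quotes.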
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description explicitly discloses that data is based on the previous trading day's close ('당일 실시간 아님', not same-day real-time), which is a key behavioral trait. It also lists the return fields (closing price, open, high, low, volume, change rate, market capitalization). Given no annotations, the description carries the full burden and does so well, though it omits details on error handling and edge cases.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and well-structured: a brief purpose sentence, followed by a flagged note on data freshness, then a clean Args list, and finally Returns. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 5 optional parameters, no annotations, and an output schema exists, the description provides sufficient context: it covers the main use case, input constraints, data latency, and return fields. No critical gaps are apparent for a query tool of this nature.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by explaining each parameter: stock_name (with examples), stock_code (with example and mutual exclusivity), date (format YYYY-MM-DD), market (enum values KOSPI/KOSDAQ/KONEX), and num_results (default 20, max 100). This adds crucial meaning beyond the schema's titles and defaults.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool queries Korean stock prices (한국 주식 시세를 조회합니다) and specifies that search can be by stock name or code. It immediately distinguishes itself from related tools like compare_stocks or get_price_trend by focusing on retrieving current price data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage context: it explains the tool's purpose and how to use parameters (e.g., mutual exclusivity of stock_name and stock_code). However, it does not explicitly state when not to use this tool or mention alternatives like compare_stocks for comparison tasks.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_stock (A)

Searches KRX-listed stocks by stock-name keyword.
Args:
keyword: Search keyword (e.g. "삼성", "바이오", "에너지")
market: Market segment ("KOSPI", "KOSDAQ", "KONEX"). An empty string means all markets.
num_results: Number of results (default 20, max 100)
Returns:
Stock code, stock name, market segment, corporate name, corporate registration number

Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| market | No | | |
| keyword | Yes | | |
| num_results | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
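Unlike the other tools, keyword is required here. A minimal sketch of the payload (the helper `build_search_args` is hypothetical; the enum and bounds come from the docstring above):

```python
def build_search_args(keyword: str, market: str = "",
                      num_results: int = 20) -> dict:
    """Build an arguments payload for search_stock; keyword is required."""
    if not keyword:
        raise ValueError("keyword is required")
    if market not in {"", "KOSPI", "KOSDAQ", "KONEX"}:
        raise ValueError('market must be "KOSPI", "KOSDAQ", "KONEX", or empty')
    if not 1 <= num_results <= 100:
        raise ValueError("num_results must be between 1 and 100")
    return {"keyword": keyword, "market": market, "num_results": num_results}
```

A typical flow would be to search here first, then feed the returned 6-digit code into get_stock_price or get_price_trend.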
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It does not disclose any behavioral traits such as data modification, authentication needs, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is reasonably concise and front-loaded with purpose, though the Args/Returns sections add moderate length. It is well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema exists, the return value explanation is sufficient. The description covers all three parameters adequately, but lacks annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description adds significant value by detailing each parameter with examples and defaults, clarifying their meaning and default values.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (검색/search) and resource (KRX 상장종목), distinguishing it from siblings like compare_stocks or get_stock_price.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for keyword-based stock search but does not explicitly state when to use or avoid it, nor does it mention alternatives or context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
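Before publishing, the file's shape can be sanity-checked locally. A minimal sketch (`check_glama_json` is a hypothetical helper, not an official Glama validator; it mirrors only the two fields shown above):

```python
import json

def check_glama_json(text: str) -> list[str]:
    """Return a list of problems with a glama.json payload (empty list = OK)."""
    try:
        doc = json.loads(text)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    problems = []
    if doc.get("$schema") != "https://glama.ai/mcp/schemas/connector.json":
        problems.append("missing or wrong $schema")
    maintainers = doc.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        problems.append("maintainers must be a non-empty list")
    elif not all(isinstance(m, dict) and "@" in str(m.get("email", ""))
                 for m in maintainers):
        problems.append("every maintainer entry needs an email")
    return problems
```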
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.