e-Stat MCP
Server Details
A service for retrieving statistical data and metadata from Japanese government statistics (e-Stat) through its API.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
3 tools

e-stat-get-data-catalog – e-Stat Data Catalog Info (read-only)

Retrieves catalog information (statistics name, description, availability, related links, etc.) for the statistical databases registered in e-Stat. Use it to get an overview of the available statistics or to display listings.
| Name | Required | Description | Default |
|---|---|---|---|
| lang | No | Language (J: Japanese, E: English) | |
| limit | No | Number of records to retrieve (default 20, maximum 50) | |
| dataType | No | Data format to search for (comma-separated: XLS, CSV, PDF, XML, DB, etc.) | |
| catalogId | No | Catalog ID | |
| openYears | No | Publication date (format: yyyy for a single year, yyyymm for a single month, yyyymm-yyyymm for a range) | |
| statsCode | No | Government statistics code: Cabinet Secretariat=00000, Cabinet Legislation Bureau=00010, National Personnel Authority=00020, Cabinet Office=00100, Imperial Household Agency=00110, Japan Fair Trade Commission=00120, National Public Safety Commission / National Police Agency=00130, Defense Agency=00140, Defense Facilities Administration Agency=00141, Financial Services Agency=00150, Consumer Affairs Agency=00160, Children and Families Agency=00170, Digital Agency=00180, Personal Information Protection Commission=00190, Ministry of Internal Affairs and Communications=00200, Environmental Dispute Coordination Commission=00201, Fire and Disaster Management Agency=00202, Ministry of Justice=00250, Public Security Intelligence Agency=00251, Ministry of Foreign Affairs=00300, Ministry of Finance=00350, National Tax Agency=00351, Ministry of Education, Culture, Sports, Science and Technology=00400, Agency for Cultural Affairs=00401, Japan Sports Agency=00402, Ministry of Health, Labour and Welfare=00450, Social Insurance Agency=00451, Central Labour Relations Commission=00452, Ministry of Agriculture, Forestry and Fisheries=00500, Forestry Agency=00501, Fisheries Agency=00502, Ministry of Economy, Trade and Industry=00550, Agency for Natural Resources and Energy=00551, Japan Patent Office=00552, Small and Medium Enterprise Agency=00553, Ministry of Land, Infrastructure, Transport and Tourism=00600, Japan Tourism Agency=00601, Japan Meteorological Agency=00602, Japan Transport Safety Board=00603, Japan Coast Guard=00604, Ministry of the Environment=00650, Ministry of Defense=00700 | |
| resourceId | No | Catalog resource ID | |
| searchWord | No | Search keywords (AND, OR, NOT operators supported) | |
| statsField | No | Statistics field. Major categories: Land & Weather=01, Population & Households=02, Labor & Wages=03, Agriculture, Forestry & Fisheries=04, Mining & Manufacturing=05, Commerce & Services=06, Business, Household & Economy=07, Housing, Land & Construction=08, Energy & Water=09, Transport & Tourism=10, ICT & Science/Technology=11, Education, Culture, Sports & Daily Life=12, Public Administration & Finance=13, Justice, Safety & Environment=14, Social Security & Health=15, International=16, Other=99. Minor categories: Land=0101, Weather=0102, Population=0201, Households=0202, Vital Statistics=0203, Migration=0204, Labor Force=0301, Wages & Working Conditions=0302, Employment=0303, Labor Relations=0304, Industrial Accidents=0305, Agriculture=0401, Livestock=0402, Forestry=0403, Fisheries=0404, Mining=0501, Manufacturing=0502, Commerce=0601, Supply, Demand & Distribution=0602, Services=0603, Business Activities=0701, Finance, Insurance & Currency=0702, Prices=0703, Household Economy=0704, National Accounts=0705, Business Conditions=0706, Housing & Land=0801, Construction=0802, Electricity=0901, Gas=0902, Energy Supply & Demand=0903, Water=0904, Transport=1001, Warehousing=1002, Tourism=1003, ICT & Broadcasting=1101, Science & Technology=1102, Intellectual Property=1103, School Education=1201, Social Education=1202, Culture, Sports & Daily Life=1203, Administration=1301, Public Finance=1302, Civil Servants=1303, Elections=1304, Justice=1401, Crime=1402, Disasters=1403, Accidents=1404, Environment=1405, Social Security=1501, Social Insurance=1502, Social Welfare=1503, Public Health=1504, Medical Care=1505, Trade & Balance of Payments=1601, International Cooperation=1602, Other=9999 | |
| collectArea | No | Aggregation area level (1: national, 2: prefecture, 3: municipality) | |
| surveyYears | No | Survey date (format: yyyy for a single year, yyyymm for a single month, yyyymm-yyyymm for a range) | |
| updatedDate | No | Last-updated date | |
| startPosition | No | Start position for retrieval (1-based row number) | |
| explanationGetFlg | No | Whether to include explanatory information | |
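As a sketch of how an agent might assemble arguments for this tool before invoking it, the following is illustrative only: the helper name and validation logic are made up, while the parameter names and constraints (limit 1-50, default 20, the yyyy/yyyymm date formats) come from the table above.

```python
import re

# Hypothetical argument builder for e-stat-get-data-catalog.
# Accepts yyyy, yyyymm, or yyyymm-yyyymm for date-style parameters.
YEAR_MONTH = re.compile(r"^\d{4}(\d{2})?(-\d{6})?$")

def build_catalog_args(search_word=None, stats_code=None, survey_years=None,
                       limit=20, lang="J"):
    if not 1 <= limit <= 50:
        raise ValueError("limit must be between 1 and 50 (default 20)")
    if survey_years and not YEAR_MONTH.match(survey_years):
        raise ValueError("surveyYears must be yyyy, yyyymm, or yyyymm-yyyymm")
    args = {"lang": lang, "limit": limit}
    if search_word:
        args["searchWord"] = search_word   # supports AND / OR / NOT operators
    if stats_code:
        args["statsCode"] = stats_code     # e.g. "00200" = Ministry of Internal Affairs and Communications
    if survey_years:
        args["surveyYears"] = survey_years
    return args

# Example: census-related catalogs from MIC, surveyed during 2020
print(build_catalog_args(search_word="国勢調査", stats_code="00200",
                         survey_years="2020"))
```

The dict returned here would then be passed as the tool-call arguments by whatever MCP client is in use.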
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, openWorldHint=true, and destructiveHint=false, establishing this as a safe, non-destructive read operation against an external data source. The description adds valuable context about the tool's purpose being for 'overview and listing' rather than detailed data retrieval, which helps the agent understand appropriate use cases beyond what annotations provide. No contradictions with annotations exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each serve distinct purposes: the first defines what the tool retrieves, the second specifies its intended use cases. There's no wasted language, repetition, or unnecessary elaboration, making it highly efficient for agent comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (14 parameters, no output schema) and rich annotations, the description provides adequate context about the tool's purpose and usage. However, without an output schema, some additional guidance about the return format or result structure would be helpful; openWorldHint at least signals that results come from an external, changing data source.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already provides comprehensive documentation for all 14 parameters including detailed enum descriptions. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline for high schema coverage without compensating value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (retrieve) and resource (statistical database catalog information), including what information is retrieved (statistics names, descriptions, availability, related links). It explicitly distinguishes the purpose from data retrieval by stating this is for overview understanding and listing purposes rather than actual data access.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool (for understanding the overall picture of statistical data and for listing purposes). However, it doesn't explicitly mention when NOT to use it or name specific alternatives among the sibling tools (e-stat-get-meta-info, e-stat-get-stats-list), though the purpose differentiation implies usage scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
e-stat-get-meta-info – e-Stat Statistics Table Meta Info (read-only)

Retrieves meta information (structural information such as classification items, regional divisions, time axes, and tabulation items) for a statistics table ID obtained with the e-stat-get-stats-list tool. Use it to understand a table's item structure and the condition values available for data retrieval.
| Name | Required | Description | Default |
|---|---|---|---|
| lang | No | Language (J: Japanese, E: English) | |
| statsDataId | Yes | Statistics table ID obtained from the statistics table list tool (e-stat-get-stats-list) | |
| explanationGetFlg | No | Whether to include explanatory information | |
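The dependency on e-stat-get-stats-list suggests a simple two-step workflow: find a table, then inspect its structure. The sketch below is hypothetical: `call_tool` stands in for however your MCP client invokes server tools, and the shape of the stats-list response (a list of entries with a `statsDataId` field) is an assumption, since no output schema is published.

```python
# Hypothetical two-step workflow: find a statistics table by keyword,
# then fetch its structural meta information.

def discover_table_structure(call_tool, keyword: str):
    # Step 1: search for a matching table (assumed response shape).
    tables = call_tool("e-stat-get-stats-list",
                       {"searchWord": keyword, "limit": 1})
    if not tables:
        raise LookupError(f"no statistics table matched {keyword!r}")
    stats_data_id = tables[0]["statsDataId"]
    # Step 2: pass the ID straight into the meta-info tool.
    return call_tool("e-stat-get-meta-info", {"statsDataId": stats_data_id})
```

In practice the meta-info result would then drive the choice of condition values for an actual data query.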
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, and openWorldHint=true, indicating a safe read operation against an external data source. The description adds valuable context about what kind of meta information is retrieved (structural information such as classification items, regional divisions, time axes, and tabulation items), which helps the agent understand the nature of the returned data beyond just knowing it's read-only.
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly concise with two sentences: the first states what the tool does and its input dependency, the second explains its purpose. Every word earns its place with zero redundancy. The description is front-loaded with the core functionality.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only tool with good annotations and full schema coverage, the description provides excellent context about when to use it and what information it returns. The main gap is the lack of output schema, but the description gives a good sense of what meta information to expect. It could be slightly more specific about the format/structure of the returned meta information.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already fully documents all three parameters. The description doesn't add any parameter-specific information beyond what's in the schema. The baseline score of 3 is appropriate since the schema does all the parameter documentation work.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (retrieve) and resource (meta information), with explicit scope (based on a statistics table ID). It distinguishes itself from sibling tools by specifying that it is for meta information after using e-stat-get-stats-list, unlike e-stat-get-data-catalog, which presumably retrieves catalog data.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use it (based on a statistics table ID obtained from the e-stat-get-stats-list tool) and its purpose (to understand the item structure and available condition values of statistical data). It clearly positions this tool as a follow-up to e-stat-get-stats-list.
e-stat-get-stats-list – e-Stat Statistics Table List (read-only)

Retrieves a list of the statistics tables registered in e-Stat, with basic information (statistics table ID, statistics name, survey date, publication date, etc.), filtered by the specified conditions. Actual statistical values are not included; use it to search for and identify statistical data.
| Name | Required | Description | Default |
|---|---|---|---|
| lang | No | Language (J: Japanese, E: English) | |
| limit | No | Number of records to retrieve (default 20, maximum 50) | |
| openYears | No | Publication date (format: yyyy for a single year, yyyymm for a single month, yyyymm-yyyymm for a range) | |
| statsCode | No | Government statistics code: Cabinet Secretariat=00000, Cabinet Legislation Bureau=00010, National Personnel Authority=00020, Cabinet Office=00100, Imperial Household Agency=00110, Japan Fair Trade Commission=00120, National Public Safety Commission / National Police Agency=00130, Defense Agency=00140, Defense Facilities Administration Agency=00141, Financial Services Agency=00150, Consumer Affairs Agency=00160, Children and Families Agency=00170, Digital Agency=00180, Personal Information Protection Commission=00190, Ministry of Internal Affairs and Communications=00200, Environmental Dispute Coordination Commission=00201, Fire and Disaster Management Agency=00202, Ministry of Justice=00250, Public Security Intelligence Agency=00251, Ministry of Foreign Affairs=00300, Ministry of Finance=00350, National Tax Agency=00351, Ministry of Education, Culture, Sports, Science and Technology=00400, Agency for Cultural Affairs=00401, Japan Sports Agency=00402, Ministry of Health, Labour and Welfare=00450, Social Insurance Agency=00451, Central Labour Relations Commission=00452, Ministry of Agriculture, Forestry and Fisheries=00500, Forestry Agency=00501, Fisheries Agency=00502, Ministry of Economy, Trade and Industry=00550, Agency for Natural Resources and Energy=00551, Japan Patent Office=00552, Small and Medium Enterprise Agency=00553, Ministry of Land, Infrastructure, Transport and Tourism=00600, Japan Tourism Agency=00601, Japan Meteorological Agency=00602, Japan Transport Safety Board=00603, Japan Coast Guard=00604, Ministry of the Environment=00650, Ministry of Defense=00700 | |
| searchKind | No | Search data type (1: statistical information, 2: small area / regional mesh) | |
| searchWord | No | Search keywords (AND, OR, NOT operators supported) | |
| statsField | No | Statistics field. Major categories: Land & Weather=01, Population & Households=02, Labor & Wages=03, Agriculture, Forestry & Fisheries=04, Mining & Manufacturing=05, Commerce & Services=06, Business, Household & Economy=07, Housing, Land & Construction=08, Energy & Water=09, Transport & Tourism=10, ICT & Science/Technology=11, Education, Culture, Sports & Daily Life=12, Public Administration & Finance=13, Justice, Safety & Environment=14, Social Security & Health=15, International=16, Other=99. Minor categories: Land=0101, Weather=0102, Population=0201, Households=0202, Vital Statistics=0203, Migration=0204, Labor Force=0301, Wages & Working Conditions=0302, Employment=0303, Labor Relations=0304, Industrial Accidents=0305, Agriculture=0401, Livestock=0402, Forestry=0403, Fisheries=0404, Mining=0501, Manufacturing=0502, Commerce=0601, Supply, Demand & Distribution=0602, Services=0603, Business Activities=0701, Finance, Insurance & Currency=0702, Prices=0703, Household Economy=0704, National Accounts=0705, Business Conditions=0706, Housing & Land=0801, Construction=0802, Electricity=0901, Gas=0902, Energy Supply & Demand=0903, Water=0904, Transport=1001, Warehousing=1002, Tourism=1003, ICT & Broadcasting=1101, Science & Technology=1102, Intellectual Property=1103, School Education=1201, Social Education=1202, Culture, Sports & Daily Life=1203, Administration=1301, Public Finance=1302, Civil Servants=1303, Elections=1304, Justice=1401, Crime=1402, Disasters=1403, Accidents=1404, Environment=1405, Social Security=1501, Social Insurance=1502, Social Welfare=1503, Public Health=1504, Medical Care=1505, Trade & Balance of Payments=1601, International Cooperation=1602, Other=9999 | |
| collectArea | No | Aggregation area level (1: national, 2: prefecture, 3: municipality) | |
| surveyYears | No | Survey date (format: yyyy for a single year, yyyymm for a single month, yyyymm-yyyymm for a range) | |
| updatedDate | No | Last-updated date (format: yyyy for a single year, yyyymm for a single month, yyyymmdd for a single day, yyyymmdd-yyyymmdd for a range) | |
| startPosition | No | Start position for retrieval (1-based row number) | |
| explanationGetFlg | No | Whether to include explanatory information | |
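Since searchWord accepts AND / OR / NOT, query strings can be composed programmatically. A small illustrative sketch follows; the helper names are made up, and only the operator syntax comes from the parameter table above.

```python
# Hypothetical helpers for composing an e-Stat searchWord expression.
# The API accepts AND / OR / NOT operators inside the keyword string.

def and_query(*terms):
    """Join terms so that all of them must match."""
    return " AND ".join(terms)

def exclude(query, term):
    """Exclude results matching `term` from an existing query."""
    return f"{query} NOT {term}"

q = exclude(and_query("人口", "東京"), "速報")
print(q)  # → 人口 AND 東京 NOT 速報
```

The resulting string would be passed as the searchWord argument of e-stat-get-stats-list.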
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, openWorldHint=true, and destructiveHint=false, covering safety and data scope. The description adds useful context: it clarifies that the output contains metadata (not actual statistical values) and is for search/identification purposes. However, it does not disclose additional behavioral traits like rate limits, authentication needs, pagination behavior (implied by 'startPosition' but not explained), or error handling.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, dense sentence that efficiently conveys purpose, scope, inclusions/exclusions, and usage context. It is front-loaded with the core action (retrieve) and avoids redundancy. Every part earns its place by adding value beyond the tool name/title.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (12 parameters, no output schema) and rich annotations, the description is mostly complete. It clarifies the tool's role in a workflow (search/identification) and output nature (metadata only). However, without an output schema, the description could better hint at the response structure or pagination behavior, leaving a minor gap for a fully informed agent.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with detailed descriptions for all 12 parameters including enums and formats. The description does not add any parameter-specific semantics beyond what the schema provides (e.g., it doesn't explain how parameters interact or provide usage examples). Baseline is 3 when schema coverage is high, as the schema carries the full burden.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving a list of statistical tables and basic information from e-Stat. It specifies what is included (statistics table ID, statistics name, survey date, publication date) and explicitly excludes actual statistical values, distinguishing it from siblings like e-stat-get-data-catalog.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for usage (searching for and identifying statistical data). It implies this is for discovery/metadata purposes, not for retrieving actual data values. However, it does not explicitly state when to use this tool versus its siblings (e-stat-get-data-catalog, e-stat-get-meta-info), which would be needed for a score of 5.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
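A quick local sanity check of the file before publishing might look like the following sketch. The `$schema` URL and field names are taken from the snippet above; the helper name and the loose email regex are illustrative and not Glama's actual validation.

```python
import json
import re

# Hypothetical local validator for a glama.json payload, to be hosted
# at /.well-known/glama.json. Field names follow the example above.

def check_glama_json(text):
    data = json.loads(text)
    assert data.get("$schema") == "https://glama.ai/mcp/schemas/connector.json"
    maintainers = data.get("maintainers", [])
    assert maintainers, "at least one maintainer is required"
    for m in maintainers:
        # Loose structural check only; real validation happens server-side.
        assert re.match(r"[^@\s]+@[^@\s]+\.[^@\s]+", m.get("email", "")), \
            "maintainer email looks malformed"
    return data

check_glama_json('{"$schema": "https://glama.ai/mcp/schemas/connector.json", '
                 '"maintainers": [{"email": "owner@example.com"}]}')
```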
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.