Entra News MCP Server
Server Quality Checklist
- This repository includes a README.md file.
- Add a LICENSE file by following GitHub's guide. MCP servers without a LICENSE cannot be installed.
- Latest release: v0.1.3
- No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value. Tip: use the "Try in Browser" feature on the server page to seed initial usage.
- Add a glama.json file to provide metadata about your server.
- This server provides 4 tools.
- No known security issues or vulnerabilities reported.
- Add related servers to improve discoverability.
Tool Scores
- Behavior: 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses return format ('complete text of the newsletter with section headings preserved'), which compensates for missing output schema. Does not mention rate limits or auth, but return structure is the critical behavioral trait for this read operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. First sentence covers purpose and identification methods; second covers return value. Front-loaded with action verb and efficiently structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 2 parameters with 100% schema coverage and no output schema, the description appropriately explains return values ('complete text... section headings preserved'). No complex nested objects or annotations require additional explanation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description adds value by framing the parameters as alternatives ('issue number or publication date'), clarifying the OR relationship that the schema (with 0 required fields) doesn't explicitly convey.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Retrieve' with resource 'Entra.news issue' and scope 'full content'. The word 'specific' effectively distinguishes this from sibling tools list_issues (plural/listing) and search_entra_news (searching).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage by stating you retrieve 'by issue number or publication date', suggesting identifier-based lookup. However, lacks explicit when-to-use guidance contrasting with siblings (e.g., 'use this instead of search_entra_news when you need complete text').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior: 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It compensates by disclosing the return structure ('Returns a list of issues with title, date, and URL') which is critical without an output schema. It notes the optional filtering behavior. Does not explicitly state read-only nature, though implied by 'Browse'.
- Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: (1) states purpose and filtering, (2) discloses return values, (3) provides workflow guidance. Efficiently front-loaded and appropriately sized for the tool's complexity.
- Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 optional parameters with full schema coverage and no output schema, the description adequately covers what is returned. The workflow context with sibling tools completes the picture. Minor gap: could explicitly mention pagination behavior, though offset/limit parameters are self-documenting.
- Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing detailed descriptions for year, month, limit, and offset. The description mentions 'optional year/month filtering' which reinforces the schema but does not add syntax details beyond what the schema already provides. Baseline 3 is appropriate given comprehensive schema coverage.
- Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Browse') and resource ('Entra.news archive') and explicitly distinguishes from siblings by stating this is for discovering what issues exist before using get_issue or search_entra_news.
- Usage Guidelines: 5/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit workflow guidance: 'Use this to discover what issues exist before using get_issue or search_entra_news.' This clearly sequences the tool's use relative to alternatives and defines its role in the discovery phase.
- Behavior: 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Effectively discloses return structure ('Returns tool names, descriptions, GitHub URLs, and the issue context') compensating for missing output schema. Explains optional filtering behavior. Does not mention rate limits or auth requirements, but 'Find' implies safe read-only operation.
- Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-structured sentences. First establishes purpose and return values; second explains optional filtering. Zero redundancy—every clause provides actionable information about scope or behavior. Appropriately front-loaded with the core action.
- Completeness: 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a low-complexity tool (2 params, no nested objects, 100% schema coverage) lacking output schema, the description is complete. It explicitly documents the return payload structure (tool names, URLs, context) that would normally appear in an output schema, eliminating the need for additional behavioral description.
- Parameters: 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description adds value by connecting the filtering concept to semantic use cases ('find tools related to a specific technology or capability'), helping agents understand the query parameter's purpose beyond syntax. Could explicitly name the 'limit' parameter, but schema handles this adequately.
- Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Find' plus precise resource scope 'community tools, GitHub projects, and open-source resources mentioned in Entra.news'. Clearly distinguishes from siblings (get_issue, list_issues, search_entra_news) by focusing exclusively on tool mentions rather than general newsletter content.
- Usage Guidelines: 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear contextual differentiation by specifying the tool searches for 'community tools, GitHub projects, and open-source resources', implicitly guiding users to select this when seeking tooling recommendations rather than general news. Lacks explicit 'use X instead for general searches' exclusion, but the scope is distinct enough to guide selection.
- Behavior: 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden and succeeds well. It explicitly states the return format ('sourced excerpts from past issues with issue number, date, and URL'), explains the hybrid search capability, and warns about the OPENAI_API_KEY dependency for semantic mode. Missing only minor details like rate limits or empty-result behavior.
- Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero waste: sentence 1 states purpose, sentence 2 describes output, sentence 3 explains search modes with constraint, sentence 4 defines scope. Front-loaded with the action verb and contains no redundant or filler text.
- Completeness: 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a three-parameter search tool without output schema, the description is comprehensive. It covers purpose, return structure, search methodology, authentication requirements, and temporal boundaries. The agent has sufficient information to invoke the tool correctly and interpret results despite lacking formal output schema.
- Parameters: 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds value by elaborating that 'semantic' mode specifically requires an OPENAI_API_KEY (critical context for the mode parameter) and clarifying that the query accepts 'natural language or keywords' (reinforcing the query parameter's dual purpose).
- Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Search') and clear resource ('full Entra.news archive'), establishing exactly what the tool does. It distinguishes itself from siblings like get_issue and list_issues by emphasizing 'full archive' and 'natural language or keywords' rather than specific issue retrieval or enumeration.
- Usage Guidelines: 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides critical usage constraints including the temporal scope ('mid-2023 to present') and the API key requirement for semantic search mode. While it doesn't explicitly name sibling tools as alternatives, the 'full archive' scope clearly signals when to use this versus specific-issue retrieval tools.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool receives a Tool Definition Quality Score (TDQS) of 1–5, a weighted average of six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
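The arithmetic above can be sketched in a few lines of Python. This is a minimal illustration assuming only the weights and tier cutoffs published here; the function and dictionary names are ours, not Glama's actual implementation.

```python
# Per-tool dimension weights as published (must sum to 1.0).
DIMENSION_WEIGHTS = {
    "purpose_clarity": 0.25,
    "usage_guidelines": 0.20,
    "behavioral_transparency": 0.20,
    "parameter_semantics": 0.15,
    "conciseness_structure": 0.10,
    "contextual_completeness": 0.10,
}

def tool_tdqs(scores: dict) -> float:
    """Weighted 1-5 Tool Definition Quality Score for a single tool."""
    return sum(scores[dim] * w for dim, w in DIMENSION_WEIGHTS.items())

def server_definition_quality(per_tool_scores: list) -> float:
    """60% mean TDQS + 40% minimum TDQS, so one weak tool drags the score down."""
    tdqs = [tool_tdqs(s) for s in per_tool_scores]
    return 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)

def overall_score(per_tool_scores: list, coherence: float) -> float:
    """70% Tool Definition Quality + 30% Server Coherence."""
    return 0.7 * server_definition_quality(per_tool_scores) + 0.3 * coherence

def tier(score: float) -> str:
    """Map an overall score onto the published letter tiers."""
    for cutoff, letter in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return letter
    return "F"
```

For example, a server whose every tool scores 5 on all six dimensions and whose coherence is 5.0 comes out at 5.0 overall, tier A.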
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/darrenjrobinson/EntraNewsMCPServer'
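The same request can be made programmatically. Below is a minimal Python sketch using only the endpoint shown in the curl command; the helper names are illustrative, and no response fields are assumed beyond the body being JSON.

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE_URL = "https://glama.ai/api/mcp/v1/servers"

def server_url(owner: str, repo: str) -> str:
    """Build the directory API URL for a server identified by GitHub owner/repo."""
    return f"{BASE_URL}/{owner}/{repo}"

def fetch_server(owner: str, repo: str) -> dict:
    """GET the server record and decode the JSON response body."""
    with urllib.request.urlopen(server_url(owner, repo)) as resp:
        return json.load(resp)
```

Calling `fetch_server("darrenjrobinson", "EntraNewsMCPServer")` should return the same JSON document the curl command prints.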
If you have feedback or need assistance with the MCP directory API, please join our Discord server.