HN Pulse
Server Quality Checklist
This repository includes a README.md file.
Add a LICENSE file by following GitHub's guide.
MCP servers without a LICENSE cannot be installed.
Latest release: v1.0.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
- This server provides 8 tools.
No known security issues or vulnerabilities reported.
Add related servers to improve discoverability.
Tool Scores
Tool: search stories and comments (Algolia full-text search)
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide only a title, so the description carries the full burden. It successfully discloses the external Algolia dependency and the scope (stories AND comments), but omits the safety profile (read-only vs. destructive), rate limits, and pagination details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence front-loaded with the action verb. Every word earns its place—'Algolia' signals external service, 'full-text' clarifies search type, and 'stories and comments' defines scope with zero redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate given the presence of an output schema and complete input schema documentation. However, with minimal annotations (no hints), the description should ideally disclose auth requirements or rate limits for the Algolia integration to be complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing baseline 3. Description adds context that this is 'full-text search' (clarifying query behavior) and mentions 'comments' (relating to tags parameter), but does not elaborate on specific parameter syntax beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
Description provides specific verb ('Search'), clear resource ('Hacker News stories and comments'), and distinguishes from siblings by specifying 'Algolia full-text search'—indicating this is a query-based tool versus the 'Get' siblings that fetch specific feeds.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this versus the sibling fetch tools (e.g., GetTopStories, GetNewStories). The 'Search' naming provides implicit contrast to 'Get', but no explicit 'when to use' or 'alternatives' guidance is present.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Tool: fetch Ask HN posts
- Behavior 2/5
With minimal annotations provided (only a title), the description carries the full burden of behavioral disclosure. It mentions 'recent', implying a temporal filter, but fails to specify the time window, rate limits, authentication requirements, default behavior when the optional 'count' parameter is omitted, or confirmation that this is a read-only operation.
- Conciseness 5/5
Single sentence, front-loaded with the action verb. The em-dash efficiently clarifies the domain without redundancy. Every word earns its place with zero waste.
- Completeness 4/5
Given the presence of an output schema (documenting return values) and 100% input schema coverage, the description provides sufficient context for this simple retrieval tool. It adequately identifies the resource scope, though it could benefit from clarifying the optional nature of the parameter.
- Parameters 3/5
Schema description coverage is 100%, documenting the 'count' parameter with its valid range (1-20). The description adds no parameter-specific context, but with complete schema coverage, the baseline score of 3 is appropriate.
- Purpose 5/5
The description uses the specific verb 'Fetch' with the clear resource 'Ask HN posts' and clarifies the domain with 'questions posed to the Hacker News community.' This effectively distinguishes the tool from siblings like GetShowHn, GetJobListings, and GetTopStories by explicitly naming the post type and explaining what 'Ask HN' means.
- Usage Guidelines 2/5
The description states what the tool does but provides no guidance on when to use it versus alternatives like GetNewStories or SearchStories. It does not indicate prerequisites, filtering limitations, or recommend this tool over other HN content retrieval options.
Tool: fetch new stories
- Behavior 3/5
No behavioral annotations are provided (empty annotations object), so the description carries the full disclosure burden. The verb 'Fetch' implies a read-only operation, but the description does not confirm idempotency or mention rate limiting, authentication requirements, or caching behavior. It does not contradict any annotations, but adds minimal behavioral context beyond the operation name.
- Conciseness 5/5
The description is a single, front-loaded sentence that communicates the core function without redundant phrases or repetition of the tool name. Every word serves the purpose of defining the scope (recently submitted stories) and resource (Hacker News).
- Completeness 4/5
Given the tool's low complexity (1 optional parameter, 100% schema coverage, output schema present), the description is minimally sufficient. It does not need to explain return values due to the output schema. However, with seven sibling tools available, the description could be more complete by clarifying the 'newness' concept versus other listings.
- Parameters 3/5
The input schema has 100% description coverage for its single parameter (count). The description does not mention the parameter or provide usage examples, but given the high schema coverage, the baseline score of 3 is appropriate: the schema sufficiently documents the parameter semantics without requiring redundant description text.
- Purpose 4/5
The description clearly states that the tool fetches (reads) the most recently submitted stories from Hacker News. The phrase 'most recently submitted' effectively distinguishes this from the sibling GetTopStories (which retrieves top-ranked stories) and from filtered endpoints like GetAskHn or GetJobListings. However, it does not explicitly differentiate from SearchStories or provide explicit comparative guidance.
- Usage Guidelines 2/5
The description provides no guidance on when to use this tool versus its siblings (e.g., when to choose new stories over top stories, or when to use SearchStories instead). There are no stated prerequisites, exclusions, or conditions for use.
Tool: get story details by ID
- Behavior 3/5
With minimal annotations (only a title), the description carries the burden of behavioral disclosure. It reveals the hierarchical comment structure (top comments vs. replies), which hints at the nested data returned, but omits safety hints (read-only), error behavior (404 for an invalid ID), and rate limiting.
- Conciseness 5/5
A single efficient sentence (12 words), front-loaded with the verb. No redundancy or filler. Every clause earns its place by conveying scope (full details) and key data categories.
- Completeness 4/5
Appropriate for a fetch-by-ID tool with an output schema present. Covers the primary return fields. Minor gap: it doesn't hint at error cases (story not found, private/deleted stories) or authentication requirements implied by the API.
- Parameters 3/5
Schema coverage is 100%, so the baseline is 3. The description mentions 'top comments', which reinforces the max_comments parameter semantics, but adds no syntax guidance (e.g., integer format) or clarifications beyond what the schema already provides.
- Purpose 4/5
Clear verb (Get) and resource (Hacker News story), with specific fields enumerated (title, URL, score, comments). Uses the singular 'story', implying lookup by ID versus siblings that return lists, though explicit differentiation from list-fetching siblings (GetTopStories, GetNewStories) is absent.
- Usage Guidelines 2/5
No guidance is provided on when to use this vs. sibling tools. Missing crucial workflow context: this tool requires a story_id likely obtained from GetTopStories/GetNewStories/HnPulse_SearchStories, and there is no mention of required parameter constraints beyond the schema.
Tool: fetch top stories
- Behavior 3/5
Annotations provide only a redundant title with no safety hints, so the description carries the burden. It discloses the ranking algorithm (score and recency) but omits read-only safety, rate limits, and cache behavior. 'Fetch' implies a safe read operation, but explicit confirmation would improve this.
- Conciseness 5/5
A single sentence of 12 words. Every element earns its place: action (Fetch), subject (current top stories), source (Hacker News), and behavior (ranked by score and recency). No redundancy or waste.
- Completeness 4/5
Adequate for a simple read operation with one optional parameter and an existing output schema. Captures the essential differentiator (the ranking algorithm). Could be enhanced by noting that this retrieves front-page/voted content specifically, but covers the necessary context.
- Parameters 3/5
Schema coverage is 100% ('Number of top stories to return (1-30)'), fully documenting the optional count parameter. The description adds no parameter-specific semantics beyond the schema, which is appropriate given the high coverage baseline.
- Purpose 4/5
States a specific verb (Fetch), resource (stories from Hacker News), and scope (ranked by score and recency). The ranking criteria implicitly distinguish it from siblings like GetNewStories (chronological) and GetAskHn (category-specific), though it doesn't explicitly name alternatives.
- Usage Guidelines 3/5
Provides implied guidance through the ranking criteria ('score and recency' vs. 'new'), but lacks explicit instructions on when to choose this over GetNewStories, GetAskHn, or SearchStories. No prerequisites or error conditions are mentioned.
Tool: fetch job listings
- Behavior 3/5
With only a title annotation provided, the description carries the full behavioral burden. It adds valuable content context ('YC companies and community job posts') indicating data characteristics, but omits operational details like rate limits, caching behavior, and authentication requirements.
- Conciseness 5/5
A single efficient sentence with no waste. Front-loaded with the action verb and a clear scope. The parenthetical detail earns its place by clarifying the content source without verbosity.
- Completeness 4/5
Appropriate for a simple read-only tool with one optional parameter and an existing output schema. The description adequately covers intent and data domain without needing to explain return values or complex nested structures.
- Parameters 3/5
Schema coverage is 100% for the single 'count' parameter, which is fully documented in the schema. The description provides no additional parameter semantics, warranting the baseline score of 3 for high-coverage scenarios.
- Purpose 5/5
Employs the specific verb 'Fetch' with the clear resource 'job postings' from 'Hacker News'. The parenthetical '(YC companies and community job posts)' clarifies the data scope and effectively distinguishes this from story-focused siblings like GetTopStories or GetAskHn.
- Usage Guidelines 3/5
The resource type (jobs vs. stories/users) provides implied usage context that distinguishes it from siblings, but it lacks explicit when-to-use guidance or comparisons to alternatives like SearchStories that might also return job-related content.
Tool: fetch Show HN posts
- Behavior 3/5
Annotations provide only a title, so the description carries the full disclosure burden. It adds valuable semantic context that Show HN contains projects/tools, but omits operational details: no mention of what 'recent' means (time window), rate limits, caching behavior, or pagination. No contradiction with annotations.
- Conciseness 5/5
A single sentence with zero waste. Front-loaded with the action ('Fetch recent Show HN posts') followed by a clarifying em-dash explaining the domain semantics. Every word earns its place.
- Completeness 4/5
Appropriate for the tool's complexity: an output schema exists (covering return values), there is only 1 parameter with 100% schema coverage, and the content type is clearly identified. Minor gap: the 'HN' abbreviation assumes familiarity with Hacker News; explicitly writing 'Hacker News' would improve completeness for agents without that context.
- Parameters 3/5
Schema coverage is 100% (the count parameter is fully documented with range 1-20). The description does not reference the count parameter or add syntax/format examples, so the baseline 3 applies per rubric guidelines for high-coverage schemas.
- Purpose 5/5
Excellent specificity: uses the concrete verb 'Fetch' with the resource 'Show HN posts' and distinguishes from siblings (Ask HN, Job Listings, Top Stories) by defining the content type as 'projects and tools shared by the HN community.' The em-dash construction efficiently differentiates this curated category from other HN content types.
- Usage Guidelines 3/5
Provides implied usage through domain terminology ('Show HN' = projects/tools), helping distinguish it from Ask HN (questions) or Job Listings. However, it lacks explicit when-to-use guidance or named alternatives (e.g., 'use GetNewStories for chronological content').
Tool: get user profile
- Behavior 3/5
Annotations provide only a title with no behavioral hints. The description adds valuable context by listing the specific data fields returned (karma, about text, creation date), but omits other behavioral traits like error handling for non-existent users, rate limits, and caching behavior.
- Conciseness 5/5
A single sentence with no waste. Front-loaded with the action verb and resource, followed by a colon-delimited enumeration of return fields. Every word earns its place.
- Completeness 4/5
Given the presence of an output schema and a simple 2-parameter input, the description is appropriately complete. It identifies the key return fields, which helps agent selection, though it could mention the optional submissions parameter behavior.
- Parameters 3/5
With 100% schema description coverage, the schema fully documents both parameters (username case-sensitivity and include_recent_submissions). The description focuses on return fields rather than parameter semantics, maintaining the baseline score appropriate for high-coverage schemas.
- Purpose 5/5
The description provides a specific verb ('Get'), resource ('Hacker News user's profile'), and distinguishes from siblings by targeting user data rather than stories. It enumerates the specific fields returned (karma, about text, creation date), making the scope crystal clear.
- Usage Guidelines 3/5
While the description clearly targets user profiles (an implied distinction from story-focused siblings like GetTopStories), it lacks explicit guidance on when to use this vs. alternatives. No 'when-not' or prerequisite conditions are stated.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
"$schema": "https://glama.ai/mcp/schemas/server.json",
"maintainers": [
"your-github-username"
]
}
Then, authenticate using GitHub.
Browse examples.
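Before committing, it can help to sanity-check the file locally. The sketch below is a hypothetical helper, not an official validator; the only shape it checks (a non-empty "maintainers" list of strings) is inferred from the example above.

```python
import json

def check_glama_json(text: str) -> list[str]:
    """Return a list of problems found in a glama.json payload (empty if OK)."""
    problems = []
    data = json.loads(text)  # raises ValueError on malformed JSON
    maintainers = data.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        problems.append("'maintainers' must be a non-empty list")
    elif not all(isinstance(m, str) for m in maintainers):
        problems.append("every maintainer must be a GitHub username string")
    return problems

# The example from this page: one maintainer, schema reference included.
example = """
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": ["your-github-username"]
}
"""
```

Running check_glama_json(example) returns an empty list for the example above, while a file with an empty maintainers array would be flagged.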
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
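The arithmetic above can be sketched in a few lines. The weights and tier cutoffs come from this section; the dimension keys and example inputs are illustrative.

```python
# Dimension weights for the Tool Definition Quality Score (TDQS), per the text above.
WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tdqs(scores: dict) -> float:
    """Weighted 1-5 score for a single tool."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

def overall_score(tool_scores: list, coherence: float) -> float:
    """Combine per-tool TDQS values with the Server Coherence score."""
    per_tool = [tdqs(s) for s in tool_scores]
    # 60% mean + 40% minimum, so one poorly described tool drags the server down.
    tdq = 0.6 * (sum(per_tool) / len(per_tool)) + 0.4 * min(per_tool)
    return 0.7 * tdq + 0.3 * coherence

def tier(score: float) -> str:
    """Map an overall score to its letter tier."""
    for cutoff, label in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return label
    return "F"
```

For example, the search tool's scores above (Purpose 4, Usage Guidelines 2, Behavior 3, Parameters 3, Conciseness 5, Completeness 3) give a TDQS of 3.25; with a hypothetical coherence of 4.0 and that one tool alone, the overall score would be 0.7 × 3.25 + 0.3 × 4.0 = 3.475, tier B.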
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/AnkamAndy/hn-pulse'
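The same request can be made from Python. This sketch assumes only the endpoint URL shown in the curl example; the structure of the JSON response is not asserted here.

```python
import json
import urllib.request

# Endpoint for this server's entry in the MCP directory API (from the curl example).
URL = "https://glama.ai/api/mcp/v1/servers/AnkamAndy/hn-pulse"

def fetch_server_record(url: str = URL) -> dict:
    """GET the server record and parse the response body as JSON."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)
```

Calling fetch_server_record() returns the parsed directory record, mirroring the curl call above.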
If you have feedback or need assistance with the MCP directory API, please join our Discord server.