AniList
Server Details
AniList MCP — wraps the AniList GraphQL API (free, no auth)
| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | pipeworx-io/mcp-anilist |
| GitHub Stars | 0 |
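The listing describes this server as a wrapper around the AniList GraphQL API with no authentication. For context, here is a minimal sketch of the kind of raw call the server presumably issues: the https://graphql.anilist.co endpoint and the Media query shape are AniList's public API, while the exact field selection is an assumption modeled on what get_anime reports below.

```python
# Hedged sketch: a direct AniList GraphQL request, approximating what the
# MCP server likely does behind get_anime. The field selection is assumed.
import json
import urllib.request

QUERY = """
query ($id: Int) {
  Media(id: $id, type: ANIME) {
    id
    title { romaji english }
    episodes
    status
    averageScore
    genres
  }
}
"""

def fetch_anime(media_id: int) -> dict:
    payload = json.dumps({"query": QUERY, "variables": {"id": media_id}}).encode()
    req = urllib.request.Request(
        "https://graphql.anilist.co",  # AniList's public GraphQL endpoint
        data=payload,
        headers={"Content-Type": "application/json", "Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]["Media"]

if __name__ == "__main__":
    print(fetch_anime(1))  # ID 1 is Cowboy Bebop, per the get_anime example below
```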
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 8 of 8 tools scored. Lowest: 3.3/5.
Most tools have distinct purposes (e.g., ask_pipeworx vs. search_anime), but ask_pipeworx and discover_tools both relate to selecting tools, and ask_pipeworx's description suggests it can also answer queries that might overlap with search_anime or get_anime, causing minor ambiguity.
The naming convention is inconsistent: some tools use underscored compounds (ask_pipeworx, discover_tools, search_anime, trending_anime) while others are bare verbs (forget, recall, remember), so no single verb-noun pattern holds across the set.
With 8 tools, the count is reasonable. The set covers both memory operations and anime queries, but ask_pipeworx's broad scope overlaps the specialized tools, blurring how much distinct surface the set actually offers.
The anime-related tools (search, get, trending) cover basic retrieval but lack create/update/delete operations for anime entries (if such operations exist upstream). The memory tools form a complete CRUD-like pattern; the anime side has no tool to list or update entries.
Available Tools
8 tools

ask_pipeworx (Grade A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
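A hypothetical client-side call, assuming the official MCP Python SDK (the `mcp` package) and its Streamable HTTP transport, which matches the transport listed above. SERVER_URL is a placeholder, since the URL field in this listing is blank.

```python
# Hedged sketch: calling ask_pipeworx through the MCP Python SDK.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://example.com/mcp"  # placeholder, not the real endpoint

async def main() -> None:
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "ask_pipeworx",
                {"question": "Look up adverse events for ozempic"},
            )
            for block in result.content:
                print(getattr(block, "text", block))  # text blocks carry the answer

asyncio.run(main())
```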
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses that it picks the right tool and fills arguments, abstracting away tool selection and schema learning. With no annotations provided, the description adequately conveys behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is concise (4 sentences), front-loaded with purpose, and includes examples. One sentence on behavior could be slightly tightened.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the single parameter and no output schema, the description is sufficient to understand tool functionality. Provides clear examples for common use cases.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the description elaborates on the 'question' parameter, explaining it should be a natural language request. Adds value beyond schema with examples.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool accepts natural language questions and returns answers by selecting the best data source. Distinguishes itself from siblings by its role as a universal query tool, not limited to anime or memory operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states to 'just describe what you need' and provides examples. However, does not specify when NOT to use this tool or mention alternatives for specific domains.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (max 50) | 20 |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
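A sketch of the discover-then-call pattern the description recommends, reusing the initialized session from the previous example. The follow-up call is illustrative: a real agent would pick a tool name from the candidates discover_tools returns.

```python
# Hedged sketch: discover candidate tools first, then call one directly.
async def find_and_run(session: ClientSession, task: str):
    candidates = await session.call_tool(
        "discover_tools",
        {"query": task, "limit": 5},  # limit defaults to 20, capped at 50
    )
    print(candidates.content)  # names and descriptions of matching tools

    # Illustrative follow-up: a real agent would choose from the candidates.
    return await session.call_tool("search_anime", {"query": "Cowboy Bebop"})
```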
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must cover behavioral traits. It states the tool returns 'the most relevant tools with names and descriptions,' which is adequate. However, it does not disclose if the tool has side effects, requires authentication, or any rate limits. A score of 3 is appropriate as it gives basic behavioral info but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, each serving a purpose: stating the action, describing the return, and providing usage guidance. It is front-loaded and efficient with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (simple search with 2 params, no output schema), the description covers the core functionality and usage context. However, it could mention whether the search is fuzzy or exact, or if results are ranked, but the existing description is sufficient for an agent to use it effectively. Score 4 for being mostly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage: both parameters are described, including the query format ('Natural language description...') and the limit default and max. The tool description itself adds little parameter meaning beyond what the schema already states, so the score is 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Search the Pipeworx tool catalog by describing what you need.' It uses a specific verb ('search') and resource ('tool catalog'), and distinguishes itself from siblings by being a discovery tool for a large catalog, unlike specific data tools like search_anime or get_anime.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly advises: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This provides clear when-to-use guidance and implies when not to use (when you already know the tool), setting it apart from sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade B)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full behavioral burden. It states the action 'delete' which implies mutability, but doesn't disclose any side effects, authorization needs, or whether deletion is permanent. A score of 3 is given as it's minimally adequate but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence with no wasted words. It is front-loaded with the action and resource, meeting the need for conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool is simple (1 param, no output schema, no annotations), the description is adequate but not thorough. It covers the basic purpose but lacks guidance on usage and behavioral details. Could be improved by mentioning that deletion is permanent or that the key must exist.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds minimal value beyond the schema; it essentially restates 'memory key to delete'. No additional context on key format or constraints is provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a clear verb 'Delete' and identifies the resource 'stored memory' and the parameter 'key'. It distinguishes itself from sibling tools like 'remember' (create) and 'recall' (retrieve) by specifying the delete action.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is given on when to use this tool versus alternatives. For example, it doesn't mention that deletion is irreversible or that it should be used only when the key is known. With siblings like 'recall' and 'remember', explicit usage context would be beneficial.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_anime (Grade A)
Get full anime details by ID. Returns title, synopsis, episodes, duration, status, score, genres, studios, and season information.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | AniList media ID (e.g. 21 for One Piece, 1 for Cowboy Bebop) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses the tool is a read operation that returns specific fields, but lacks details on error behavior, rate limits, or whether ID must be valid.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is concise with two sentences. First sentence states action and input, second lists output. Efficient but could be slightly more structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has only one parameter and no output schema, the description provides enough information for basic use. It lists returned fields but could mention error states (e.g., an unknown ID).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description does not add significant meaning beyond the schema. The schema already describes 'id' as 'AniList media ID' with examples. Description lists returned fields but not parameter details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool retrieves full details for an anime by its AniList ID. The verb 'Get' and specific resource 'anime' combined with the list of returned fields (title, synopsis, etc.) make the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies usage when you need detailed info for a known ID, but does not explicitly mention when to use alternatives like 'search_anime' or 'trending_anime'. No guidance on when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Describes dual behavior (by key vs list all) and session persistence. Could mention idempotency or side effects, but sufficient for a read operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with action and conditions. No redundancy, every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given simplicity (1 param, no output schema, no nested objects), description covers purpose and usage. Could mention return format or error handling, but adequate for a straightforward recall tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (single parameter 'key' described). Description adds context: omit to list all. Baseline 3 appropriate as schema already explains parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it retrieves a memory by key or lists all if key omitted. Specific verb ('Retrieve') and resource ('memory') with distinct behaviors.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says when to use (to retrieve context saved earlier) and how to list all (omit key). Does not explicitly mention when not to use or alternative tools, but context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
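Taken together, remember, recall, and forget form the memory lifecycle the quality summary calls CRUD-like. A round-trip sketch, again assuming an initialized ClientSession; the key and value are illustrative.

```python
# Hedged sketch: the memory round trip across remember, recall, and forget.
async def memory_roundtrip(session: ClientSession) -> None:
    await session.call_tool(
        "remember", {"key": "favorite_anime", "value": "Cowboy Bebop"}
    )
    one = await session.call_tool("recall", {"key": "favorite_anime"})
    print(one.content)  # the value stored above

    everything = await session.call_tool("recall", {})  # omit key to list all keys
    print(everything.content)

    await session.call_tool("forget", {"key": "favorite_anime"})  # delete (assumed permanent)
```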
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description discloses key behavioral traits: persistent memory for authenticated users and 24-hour expiration for anonymous sessions. It also implies data storage but does not mention any limitations on storage size or potential overwrite behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, front-loads the core action, and provides additional context without redundancy. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and simple input, the description adequately explains the tool's purpose, persistence behavior, and example uses. It lacks mention of return value or confirmation, but this is minor for a store operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description adds meaning by explaining the purpose of key-value pairs (e.g., 'subject_property', 'target_ticker') and that value is any text. The examples in the schema further clarify usage, so the description complements it well.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'store' and resource 'key-value pair in your session memory'. It distinguishes from siblings like 'forget' and 'recall' by specifying that it stores data for later retrieval, but does not explicitly contrast with them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a general usage context ('save intermediate findings, user preferences, or context across tool calls'), but does not specify when not to use it or explicitly compare with alternatives like 'recall' for retrieval.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_anime (Grade B)
Search for anime by title. Returns title, episode count, status, score, genres, and synopsis. Use get_anime with the ID for full details.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results to return (1–25) | 10 |
| query | Yes | Anime title to search for, e.g. "Attack on Titan" or "Cowboy Bebop" | |
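The description points agents from search_anime to get_anime for full details. A sketch of that chain follows; because the listing shows no output schema, the JSON shape of the search result (a text block containing an "id" field) is an assumption.

```python
# Hedged sketch: search by title, then fetch full details with the first hit's ID.
import json

async def full_details_for(session: ClientSession, title: str):
    hits = await session.call_tool("search_anime", {"query": title, "limit": 1})
    # Assumption: results arrive as JSON text with an "id" field per entry.
    payload = json.loads(hits.content[0].text)
    first = payload[0] if isinstance(payload, list) else payload
    return await session.call_tool("get_anime", {"id": first["id"]})
```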
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It states it returns specific fields (title, episodes, etc.) but does not disclose rate limits, data freshness, or behavior on no results. The return format is implicit but not detailed (no output schema).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short sentences: the first front-loads purpose, the second lists returned fields, and the third points to get_anime for full details. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple search tool with 2 params and no output schema, the description covers the basics. Pagination behavior and error cases are missing, but that is acceptable at this complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for both params (query and limit). Description adds no extra parameter meaning beyond schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'search' and resource 'anime by title'. It distinguishes from siblings 'get_anime' (single result by ID) and 'trending_anime' (trending vs. search). However, it doesn't explicitly contrast with these siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies use when needing to find anime by title. No explicit when-not-to-use or alternatives given, though siblings exist. Context signals show only one required param (query) with a default limit, so usage is straightforward.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
trending_anime (Grade A)
Get currently trending anime ranked by popularity. Returns title, status, score, episode count, and genres. Use get_anime with the ID for full details.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results to return (1–25) | 10 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It states the returned fields (title, status, score, episode count, genres) and the ordering (ranked by popularity). Does not mention rate limits, pagination, or caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences: the first states purpose and ranking, the second lists returned fields, the third points to get_anime for full details. No redundancy or wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, description adequately covers purpose and return fields. No explicit mention of default limit or range, but schema handles that. Overall sufficient for a simple list tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so description need not add parameter details. The description does not mention the limit parameter, but schema already documents it adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Get currently trending anime ranked by popularity', specifying the verb 'Get', the resource 'currently trending anime', and the ranking basis. Distinct from sibling tools like search_anime (search) and get_anime (single anime).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly mentions what the tool does (trending) and what it returns. No explicit when-not-to-use or alternatives, but sibling differentiation is clear from context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
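Before waiting on automatic verification, a quick self-check can confirm the file is reachable and carries the fields shown above. A minimal sketch; the domain is a placeholder.

```python
# Hedged sketch: fetch your published glama.json and sanity-check its fields.
import json
import urllib.request

URL = "https://your-server-domain.example/.well-known/glama.json"  # placeholder

with urllib.request.urlopen(URL) as resp:
    doc = json.load(resp)

assert doc.get("$schema") == "https://glama.ai/mcp/schemas/connector.json"
assert any("email" in m for m in doc.get("maintainers", [])), "maintainer email missing"
print("glama.json looks well-formed")
```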
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.