
MCP Simple OpenAI Assistant

by andybrandt

Server Quality Checklist

Profile completion: 67%

A complete profile improves this server's visibility in search results.
  • Latest release: v1.0.0

  • Disambiguation: 5/5

    Each tool has a clearly distinct purpose targeting specific resources and actions in the OpenAI Assistant domain. For example, ask_assistant_in_thread handles messaging, create_assistant/update_assistant manage assistants, and create_new_assistant_thread/delete_thread/update_thread manage threads. There is no overlap or ambiguity between tools.

    Naming Consistency: 5/5

    All tools follow a consistent verb_noun naming pattern with snake_case throughout. Examples include list_assistants, create_assistant, update_thread, and delete_thread. This predictable naming makes it easy to understand each tool's function at a glance.

    Tool Count: 5/5

    With 9 tools, this server is well-scoped for managing OpenAI Assistants and threads. It covers core operations like creating, listing, updating, and deleting both assistants and threads, plus the essential ask_assistant_in_thread for interactions. Each tool earns its place without being excessive.

    Completeness: 5/5

    The tool set provides complete CRUD/lifecycle coverage for both assistants and threads. It includes create, retrieve, list, update, and delete operations for both resources, plus the ask_assistant_in_thread tool for core functionality. There are no obvious gaps that would hinder agent workflows.

  • Average 4.1/5 across 9 of 9 tools scored.

    See the Tool Scores section below for per-tool breakdowns.

  • This repository is licensed under MIT License.

  • This repository includes a README.md file.

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • If you are the author, simply claim the server.

    If the server belongs to an organization, first add glama.json to the root of your repository:

    {
      "$schema": "https://glama.ai/mcp/schemas/server.json",
      "maintainers": [
        "your-github-username"
      ]
    }

    Then claim the server. Browse examples.

  • Add related servers to improve discoverability.

How do I sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
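To make the arithmetic concrete, here is a minimal sketch of that weighting scheme in Python. The dimension weights, the 60/40 mean/minimum blend, the 70/30 overall split, and the tier cutoffs are taken from the description above; the function names and data layout are illustrative assumptions, not Glama's actual implementation.

# Dimension weights for Tool Definition Quality, per the breakdown above.
DIMENSION_WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tool_tdqs(scores: dict[str, float]) -> float:
    """Weighted 1-5 score for a single tool (scores keyed by dimension)."""
    return sum(DIMENSION_WEIGHTS[dim] * value for dim, value in scores.items())

def overall_score(per_tool_scores: list[dict[str, float]], coherence: float) -> tuple[float, str]:
    """Combine per-tool TDQS with Server Coherence (mean of its four dimensions, 1-5)."""
    tdqs = [tool_tdqs(scores) for scores in per_tool_scores]
    # 60% mean + 40% minimum: one poorly described tool drags the whole component down.
    definition_quality = 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)
    overall = 0.7 * definition_quality + 0.3 * coherence
    for tier, cutoff in (("A", 3.5), ("B", 3.0), ("C", 2.0), ("D", 1.0)):
        if overall >= cutoff:
            return overall, tier
    return overall, "F"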

Tool Scores

  • create_assistant

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations indicate readOnlyHint=false, which aligns with the 'Create' action, so no contradiction. The description adds useful context: it creates a new assistant with configurable instructions and model, and recommends checking existing ones first. However, it doesn't disclose behavioral traits like rate limits, authentication needs, or what happens on duplicate names, which would be valuable beyond the annotations.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is appropriately sized with three sentences: purpose, capabilities, and usage recommendation. It's front-loaded with the main action and wastes no words. However, the second sentence could be slightly more concise by combining the instruction and model points.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's moderate complexity (3 parameters, 2 required), annotations covering read-only status, and an output schema (which handles return values), the description is fairly complete. It covers purpose, key parameters, and usage context. However, it could improve by addressing potential errors or behavioral nuances, making it slightly incomplete for full agent guidance.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 0%, so the description carries full burden. It mentions 'instructions' and 'model' but doesn't explain the 'name' parameter or provide details like format constraints or default values (e.g., model defaults to 'gpt-4o' per schema). The description adds some meaning but doesn't fully compensate for the low schema coverage, leaving gaps in parameter understanding. A fully documented schema might look like the sketch shown after this tool's scores.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool's purpose: 'Create a new OpenAI assistant' with the ability to specify instructions and model. It distinguishes from siblings like list_assistants and update_assistant by focusing on creation rather than listing or updating. However, it doesn't explicitly contrast with create_new_assistant_thread, which might cause some confusion.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides clear guidance: 'It is recommended to check existing assistants with list_assistants before creating a new one.' This helps avoid duplicates and suggests when to use this tool versus list_assistants. However, it doesn't explicitly mention when NOT to use it (e.g., vs. update_assistant) or provide alternatives for similar actions.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
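
    As referenced in the Parameters note above, a schema with non-zero description coverage would document each field inline. Below is a hypothetical create_assistant input schema sketched as a Python dict: the parameter names and the gpt-4o default come from the analysis above, while the description strings and the choice of required fields are invented examples of the kind of documentation the score rewards.

    create_assistant_input_schema = {
        "type": "object",
        "properties": {
            "name": {
                "type": "string",
                "description": "Display name for the new assistant. Check "
                               "list_assistants first to avoid creating duplicates.",
            },
            "instructions": {
                "type": "string",
                "description": "System instructions defining the assistant's behavior and tone.",
            },
            "model": {
                "type": "string",
                "default": "gpt-4o",  # default noted in the analysis above
                "description": "OpenAI model ID to use, e.g. 'gpt-4o'.",
            },
        },
        # The analysis says 3 parameters with 2 required; which two is an assumption here.
        "required": ["name", "instructions"],
    }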

  • retrieve_assistant

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations already declare readOnlyHint=true, indicating this is a safe read operation. The description adds value by specifying that it retrieves 'detailed information' and references list_assistants for ID sourcing, but doesn't disclose additional behavioral traits like error handling, rate limits, or response format details beyond what annotations cover.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is two sentences, front-loaded with the core purpose and followed by a helpful usage note. Every sentence earns its place with no wasted words, making it appropriately sized and efficient.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's low complexity (one parameter), annotations covering safety, and an output schema existing, the description is mostly complete. It explains the purpose and ID sourcing, but could benefit from mentioning what 'detailed information' includes or any prerequisites, though the output schema reduces the need for return value details.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With schema description coverage at 0% and only one parameter (assistant_id), the description adds some meaning by explaining that the ID can be retrieved from list_assistants. However, it doesn't provide format details or constraints for the assistant_id, so it partially compensates but doesn't fully document the parameter beyond the schema's basic type.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool's purpose with 'Get detailed information about a specific assistant,' which is a specific verb+resource combination. However, it doesn't explicitly differentiate from siblings like 'list_assistants' beyond implying this is for single-assistant retrieval versus listing multiple assistants.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides clear context by stating 'The ID required can be retrieved from the list_assistants tool,' which guides when to use this tool (after obtaining an ID from list_assistants). It doesn't explicitly mention when not to use it or name alternatives, but the context is sufficient for basic usage.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • update_assistant

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations indicate readOnlyHint=false (mutation), which aligns with 'Modify' in the description. The description adds behavioral context: at least one optional parameter must be provided to avoid errors, and ID sourcing from list_assistants. However, it lacks details on permissions, rate limits, or mutation effects beyond what annotations provide.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is front-loaded with the core purpose, followed by critical constraints in two concise sentences. Every sentence adds essential information (modification scope, parameter requirement, ID sourcing) with zero waste.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given a mutation tool with annotations (readOnlyHint=false), 4 parameters (0% schema coverage), and an output schema (reducing need to describe returns), the description covers purpose, constraints, and ID sourcing adequately. It could improve by detailing parameter semantics or error cases more fully.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 0%, so the description must compensate. It mentions parameters (name, instructions, model) and their optional nature, but doesn't explain semantics like format constraints or model options. It adds some value over the bare schema but doesn't fully address the coverage gap.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Modify') and resource ('existing assistant'), specifying the updatable fields (name, instructions, model). It distinguishes from siblings like create_assistant (creation vs. modification) and retrieve_assistant (retrieval vs. update), though not explicitly naming alternatives.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides clear context: use when modifying an assistant's attributes, with the ID obtainable from list_assistants. It implies usage vs. create_assistant (modify existing vs. create new) but doesn't explicitly state when not to use or compare to all siblings like update_thread.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • delete_thread

    Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations provide readOnlyHint=false, indicating a mutation, but the description adds crucial behavioral context: it specifies the deletion scope (both OpenAI's servers and the local database) and explicitly warns that the action is irreversible. This goes beyond annotations by detailing the destructive nature and permanence of the operation.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is two sentences, front-loaded with the core action and followed by a critical warning. Every word earns its place, with no redundancy or unnecessary elaboration, making it highly efficient and easy to parse.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's complexity (destructive, irreversible operation) and the presence of an output schema (which handles return values), the description is mostly complete. It covers the action, scope, and key behavioral warning. However, it could benefit from more explicit usage guidelines or error handling context to reach a perfect score.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 0% description coverage, but the description does not add any parameter-specific information beyond what is implied by the tool name and action. With one parameter (thread_id) and no schema details, the baseline is 3, as the description does not compensate for the lack of schema coverage but doesn't contradict it either.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action ('Deletes') and resource ('a conversation thread'), with explicit scope ('from both OpenAI's servers and the local database'). It distinguishes from sibling tools like 'list_threads' or 'update_thread' by specifying a destructive deletion operation.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description implies usage context through the irreversible warning, suggesting this should be used only when permanent removal is intended. However, it does not explicitly state when to use this tool versus alternatives (e.g., archiving vs. deletion) or mention prerequisites like thread existence, leaving room for improvement.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • update_thread

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations indicate readOnlyHint=false, confirming this is a mutation tool, which aligns with the description's 'Updates' action. The description adds useful context about dual updates to 'both the local database and the OpenAI thread object,' which goes beyond annotations. However, it lacks details on permissions, error handling, or rate limits, leaving some behavioral aspects unclear.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is front-loaded with the core purpose in the first sentence, followed by additional context in two concise sentences. Every sentence adds value: the first defines the action and scope, the second explains the dual update effect, and the third provides a usage prerequisite. No wasted words.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given that the tool has an output schema, the description need not detail return values. It covers the mutation purpose, scope, and a usage hint adequately. However, with 0% schema coverage and no annotations beyond readOnlyHint, it could benefit from more parameter guidance or behavioral details to be fully complete.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 0%, so the description carries full burden. It mentions 'name and/or description' as updatable fields, which maps to two parameters, and notes 'thread_id' as required for retrieval from 'list_threads.' However, it does not explain the optional nature of 'name' and 'description' (nullable with defaults) or provide format examples, leaving gaps in parameter understanding.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action ('Updates'), the target resource ('name and/or description of a locally saved conversation thread'), and the scope of the update ('Both the local database and the OpenAI thread object'). It distinguishes this from sibling tools like 'delete_thread' or 'list_threads' by focusing on modification rather than deletion or listing.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides clear context for when to use this tool: to modify existing threads, with a prerequisite noted ('The thread ID can be retrieved from the list_threads tool'). However, it does not explicitly state when not to use it or name alternatives (e.g., vs. 'create_new_assistant_thread' for new threads), which prevents a perfect score.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • create_new_assistant_thread

    Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations provide readOnlyHint=false, indicating a write operation, which aligns with 'creates.' The description adds valuable behavioral context beyond annotations: it discloses persistence ('stored in OpenAI's servers,' 'not deleted unless the user deletes them'), storage details ('stored in the local database'), and reusability. No contradictions with annotations.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is appropriately sized and front-loaded with key information (creation, persistence, storage). The final sentence ('Think how you can utilize threads...') is somewhat vague and could be omitted for better conciseness, but overall, most sentences earn their place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's moderate complexity (creation with persistence), annotations cover safety, and an output schema exists (so return values need not be explained), the description is mostly complete. It covers purpose, behavior, and storage, but could improve by addressing prerequisites or error cases.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 0%, so the description must compensate. It mentions 'user-defined name and description,' which maps to the two parameters, but does not add meaning beyond what the schema names imply (e.g., no format, length, or content guidelines). The description provides basic semantics but lacks depth, resulting in a baseline score.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the verb 'creates' and the resource 'new, persistent conversation thread' with specific attributes (user-defined name and description). It distinguishes from siblings like 'list_threads' (which lists) and 'delete_thread' (which removes). The purpose is specific and unambiguous.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides clear context for usage ('for easy identification and reuse,' 'can re-use them for future conversations'), but does not explicitly state when to use this tool versus alternatives like 'update_thread' or 'ask_assistant_in_thread.' It implies usage for starting new threads but lacks explicit exclusions or comparisons.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • list_assistants

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations already declare readOnlyHint=true, so the agent knows this is a safe read operation. The description adds useful context beyond annotations by specifying the return format ('list of assistants with their IDs, names, and configurations') and implying it's a listing operation with no destructive effects. However, it doesn't mention behavioral aspects like pagination, rate limits, or authentication requirements, which could be relevant for an API tool.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is efficiently structured in two sentences: the first states the purpose and return value, the second provides usage guidance. Every sentence adds value without redundancy, and it's front-loaded with the core functionality. No extraneous information is included.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's low complexity (1 optional parameter), rich annotations (readOnlyHint), and the presence of an output schema (which handles return values), the description is complete enough. It covers purpose, usage context, and return format, addressing the key needs for a list operation without over-explaining what's already structured.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 1 parameter with 0% description coverage (no schema descriptions), so the description carries the full burden. It doesn't explicitly mention the 'limit' parameter or its semantics, which is a gap. However, with only 1 parameter and a default value provided in the schema, the baseline is high. The description compensates somewhat by implying listing behavior, but doesn't fully explain parameter usage.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action ('List all available OpenAI assistants') and resource ('associated with the API key configured by the user'), distinguishing it from siblings like 'retrieve_assistant' (which gets one) and 'create_assistant' (which makes a new one). It explicitly mentions what information is returned ('IDs, names, and configurations'), making the purpose unambiguous.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides explicit guidance on when to use this tool: 'This can be used to select an assistant to use in the ask_assistant_in_thread tool instead of creating a new one.' It names a specific alternative ('ask_assistant_in_thread') and clarifies the context (selection vs. creation), offering clear usage differentiation from siblings.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • list_threads

    Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations already declare readOnlyHint=true, indicating a safe read operation. The description adds value by specifying that it returns a list with specific fields (ID, name, description, last used time) and mentions the thread ID's use in 'ask_assistant_in_thread', providing useful context beyond annotations. It does not disclose behavioral traits like pagination or sorting, but with annotations covering safety, this is acceptable.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is front-loaded with the core purpose in the first sentence, followed by details on return values and usage in two additional sentences. Every sentence adds value without waste, making it efficient and well-structured.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's simplicity (0 parameters, read-only operation), annotations provide safety context, and an output schema exists (implied by context signals), the description is complete. It explains what the tool does, what it returns, and how to use the output, covering all necessary aspects without redundancy.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately does not discuss parameters, focusing on output and usage. Baseline for 0 parameters is 4, as it avoids unnecessary details.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the verb 'lists' and the resource 'locally saved conversation threads from the database', specifying it returns a list with ID, name, description, and last used time. It distinguishes from siblings like 'list_assistants' by focusing on threads rather than assistants, and from 'delete_thread' or 'update_thread' by being a read operation.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description explicitly mentions using the thread ID with 'ask_assistant_in_thread' to continue a thread, providing clear context for when to use this tool. However, it does not specify when not to use it or compare it to alternatives like 'list_assistants', leaving some guidance gaps.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • ask_assistant_in_thread

    Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    The description adds valuable behavioral context beyond annotations: it explains the streaming nature ('provides progress updates and the final message in a single call'), clarifies that threads aren't inherently linked to assistants, and mentions the conversational context. While annotations only provide readOnlyHint=false and title, the description meaningfully expands on operational behavior without contradicting annotations.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is efficiently structured with a clear purpose statement first, followed by usage guidance. Every sentence adds value: the first explains the core functionality, the second provides context for use, and the third clarifies parameter relationships. No redundant or unnecessary information is included.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's conversational nature, 3 parameters with 0% schema coverage, readOnlyHint=false annotation, and presence of an output schema, the description is complete. It covers purpose, usage context, parameter semantics, and behavioral aspects like streaming. The output schema handles return values, so the description appropriately focuses on operational guidance.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage, the description must compensate for the lack of parameter documentation in the schema. It effectively explains the purpose and source of all three required parameters (thread_id, assistant_id, message) by describing their roles in the operation and how to obtain them, though it doesn't provide format or validation details.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action ('sends a message to an assistant within a specific thread and streams the response'), identifies the resource (assistant, thread), and distinguishes from siblings by emphasizing streaming and thread/assistant combination. It explicitly mentions this is for continuing conversations, which differentiates it from thread creation tools.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides explicit guidance on when to use this tool ('to continue a conversation with an assistant in a specific thread'), how to obtain required parameters (thread_id from list_threads, assistant_id from list_assistants), and clarifies the relationship between threads and assistants. It effectively distinguishes this from sibling tools like create_new_assistant_thread.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
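
    Taken together, the scores above describe a natural workflow: discover assistants and threads, then converse in a chosen thread. A minimal client-side sketch using the MCP Python SDK follows; the server launch command and the placeholder IDs are assumptions to adapt to your setup.

    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    # Launch command is an assumption; use however you run this server locally.
    server = StdioServerParameters(command="mcp-simple-openai-assistant")

    async def main() -> None:
        async with stdio_client(server) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                # Prefer existing resources, as the tool descriptions recommend.
                assistants = await session.call_tool("list_assistants", arguments={})
                threads = await session.call_tool("list_threads", arguments={})
                # The IDs below are placeholders; pull real ones from the results above.
                reply = await session.call_tool("ask_assistant_in_thread", arguments={
                    "thread_id": "thread_...",
                    "assistant_id": "asst_...",
                    "message": "Please summarize where we left off.",
                })
                print(reply.content)

    asyncio.run(main())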

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

[Card badge image: "mcp-simple-openai-assistant MCP server". Copy the embed snippet from the server page into your README.md.]

Score Badge

[Score badge image: "mcp-simple-openai-assistant MCP server". Copy the embed snippet from the server page into your README.md.]


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/andybrandt/mcp-simple-openai-assistant'
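
The same request in Python, as a minimal sketch: it assumes the endpoint returns JSON and that the requests package is available.

import requests

# Fetch this server's directory entry from the Glama MCP API.
resp = requests.get(
    "https://glama.ai/api/mcp/v1/servers/andybrandt/mcp-simple-openai-assistant",
    timeout=30,
)
resp.raise_for_status()
server_info = resp.json()  # payload shape is assumed; inspect keys before relying on them
print(server_info)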

If you have feedback or need assistance with the MCP directory API, please join our Discord server.