
sense

Get personalized module recommendations for learning goals by analyzing task descriptions and matching against module metadata. Returns ranked suggestions grouped by relevance to help users find appropriate study materials.

Instructions

Get personalized module recommendations based on a goal or task description. Uses keyword matching and category ranking (no AI inference). Faster than forage for broad exploration. Use when the user describes what they want to achieve and needs guidance on which modules to study. Behavior: analyzes the goal text, matches against module metadata, returns ranked suggestions grouped by relevance. Example: sense("I want to deploy a microservices app on Kubernetes with monitoring").

Input Schema

Name: goal
Required: Yes
Default: (none)
Description: Describe what you want to accomplish in natural language. Be descriptive for better recommendations. Example: "build a real-time chat app with WebSocket and React"
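For illustration, here is a minimal sketch of the JSON-RPC payload an MCP client would send to invoke sense with its single required goal argument. It assumes the standard MCP tools/call shape; the goal text simply reuses the example from the schema above.

# Minimal sketch: JSON-RPC payload for calling the "sense" tool over MCP.
# Assumes the standard MCP tools/call request shape; the goal text is the
# example from the input schema above.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "sense",
        "arguments": {
            "goal": "build a real-time chat app with WebSocket and React",
        },
    },
}

print(json.dumps(request, indent=2))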
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior: 'analyzes the goal text, matches against module metadata, returns ranked suggestions grouped by relevance.' It clarifies the method ('keyword matching and category ranking, no AI inference') and performance ('faster than forage'). However, it lacks details on potential limitations, error handling, or output format specifics, which would be beneficial for a tool with no output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and concise, with each sentence adding value. It front-loads the purpose, follows with usage guidelines and behavioral details, and ends with a practical example. There is no redundant or unnecessary information, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (1 parameter, no output schema, no annotations), the description is largely complete. It covers purpose, usage, behavior, and includes an example. However, without an output schema, it could benefit from more detail on the return format (e.g., structure of 'ranked suggestions grouped by relevance') to fully guide the agent. The absence of annotations means the description must compensate, which it does adequately but not exhaustively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'goal' parameter well-documented. The description adds minimal semantic value beyond the schema, mentioning 'goal text' and providing an example usage. It does not explain parameter constraints or interactions beyond what the schema already states, so it meets the baseline for high schema coverage without significant enhancement.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get personalized module recommendations based on a goal or task description.' It specifies the verb ('Get'), resource ('personalized module recommendations'), and mechanism ('keyword matching and category ranking'). It distinguishes from sibling 'forage' by noting it's 'faster than forage for broad exploration,' establishing a clear differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'Use when the user describes what they want to achieve and needs guidance on which modules to study.' It also specifies an alternative ('faster than forage for broad exploration'), giving clear context for selection among siblings. The example further illustrates appropriate usage scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/terrizoaguimor/celiums-memory'
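
The same lookup can be done from a script. Below is a minimal Python sketch that fetches the endpoint shown above; it assumes the endpoint returns a JSON document, whose exact structure is not documented on this page.

# Minimal sketch: fetch the MCP directory entry for this server.
# Assumes the endpoint returns JSON; since the response structure is not
# documented here, the result is just pretty-printed.
import json
import urllib.request

url = "https://glama.ai/api/mcp/v1/servers/terrizoaguimor/celiums-memory"
with urllib.request.urlopen(url) as resp:
    data = json.loads(resp.read().decode("utf-8"))

print(json.dumps(data, indent=2))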

If you have feedback or need assistance with the MCP directory API, please join our Discord server.