Soluble(s) - Solutions journalism podcast
Server Details
Solutions journalism podcast (FR): ecology, climate, society. Search, transcripts, junior versions.
| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | petitsolu/soluble-mcp |
| GitHub Stars | 0 |
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging: every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: see which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.5/5 across all 7 tools scored. Lowest: 2.2/5.
Some tools overlap in functionality (search_across_apis vs. search_solutions_concretes, and find_solutions_for_need vs. search_solutions_concretes), but the descriptions attempt to differentiate them by scope and input format, reducing confusion.
Tool names follow a consistent verb_noun pattern (e.g., find_junior_versions, get_concrete_actions). The only minor deviation is find_solutions_for_need which includes a preposition, but overall the naming is predictable.
With 7 tools, the server covers key functionality: search, discovery, recommendations, and action extraction. This is well scoped for a podcast-focused server, with neither too few tools nor too many.
The tool set covers searching, filtering, recommending, and extracting actions. However, there are no tools for retrieving full episode details or fetching an episode directly by ID, minor gaps that agents may need to work around.
Available Tools
7 tools

find_junior_versions · Find Junior versions (for children) · Grade A · Read-only
Finds the Soluble(s) episodes that exist in a Junior version, i.e., one adapted to explain the topic to children. Useful for parents, teachers, youth workers, or for making a subject accessible to a young audience. If the LLM is asked by a parent who wants to explain a topic to their child, or if educational content suited to ages 6-12 is needed, this is the tool to use. IMPORTANT: extract 1 to 3 short keywords (e.g., 'climat', 'biodiversite', 'violences'). Leave query empty to list all available Junior versions.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | No | 1 to 3 short keywords (e.g., 'climat', 'oceans', 'biodiversite'). Optional: leave empty to list all available Junior versions. | |
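For orientation, here is a minimal sketch of calling this tool over Streamable HTTP with the MCP TypeScript SDK. The endpoint URL is a placeholder (the listing's URL field above is blank), and the client name and version are illustrative assumptions; the snippets for the other tools below reuse this `client`.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: the listing does not publish the server URL.
const transport = new StreamableHTTPClientTransport(new URL("https://example.com/mcp"));

// Illustrative client identity; any name/version pair works.
const client = new Client({ name: "soluble-demo", version: "1.0.0" });
await client.connect(transport);

// Per the description: 1-3 short keywords, or omit `query` entirely
// to list every available Junior version.
const result = await client.callTool({
  name: "find_junior_versions",
  arguments: { query: "climat", limit: 3 },
});
console.log(result.content);
```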
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already mark it as readOnly. The description adds behavioral context: it returns a list of episodes that exist in Junior version, and instructs on query format (1-3 keywords). No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single paragraph that efficiently covers purpose, usage, and parameter instructions. It is front-loaded with the main action, though slightly verbose with repeated emphasis on context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple search tool with no output schema, the description covers when to use and how to query. It implies returning a list of episodes but does not explicitly state response format, which is a minor gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 50% (only query has description in schema). The description elaborates on query ('1-3 mots-clés courts', can leave empty) but does not add to limit beyond default. Partially compensates for missing schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool finds 'Soluble' episodes with a Junior version for children. It distinguishes from sibling tools like 'find_solutions_for_need' or 'search_solutions_concretes' which target different content.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says to use this tool when a parent or teacher wants to explain a topic to a child, or for pedagogical content. It does not mention alternatives or when not to use, but provides strong contextual guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
find_solutions_for_need · Find solutions by need · Grade A · Read-only
Searches for solutions based on a user need. IMPORTANT: extract 1 to 3 short keywords from the question (e.g., 'manger local', 'climat adaptation', 'violences enfants'); never send a full sentence.
| Name | Required | Description | Default |
|---|---|---|---|
| besoin_or_question | Yes | 1 to 3 short keywords summarizing the need (e.g., 'climat adaptation', 'manger local', 'violences enfants') | |
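Reusing the `client` from the earlier sketch, a call that respects the keyword-only constraint might look like this:

```typescript
// Keywords only; the description forbids sending a full sentence.
await client.callTool({
  name: "find_solutions_for_need",
  arguments: { besoin_or_question: "manger local" },
});
```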
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and destructiveHint=false, so the description is not required to restate safety. However, it adds valuable behavioral guidance by explicitly requiring keyword extraction and prohibiting full sentences, which is beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long, with no redundant information. The first sentence states the purpose, and the second sentence provides critical usage guidance. Every word is necessary, and the structure is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has no output schema, so the description should hint at what the return value contains (e.g., list of solution names). It does not describe the output format or structure, leaving a gap for agents that need to interpret the result.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the parameter description in the schema already includes examples. The tool description reinforces the instruction to use short keywords, adding contextual value that helps the agent understand how to form the query correctly.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that it searches for solutions based on a user need, which is a specific verb+resource combination. The title and description differentiate it from sibling tools like 'find_junior_versions' and 'recommend_solutions' by focusing on need-based search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a clear instruction on how to extract keywords (1 to 3 short keywords), but it does not specify when to use this tool over its siblings, such as when to prefer it over 'search_solutions_concretes' or 'recommend_solutions'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_concrete_actions · Extract concrete actions · Grade B · Read-only
Extracts only the list of concrete actions related to a topic, in checklist format. Ideal for building practical guides.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | Yes | 1 to 3 short keywords (e.g., 'biodiversité', 'violences', 'mobilité') | |
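As a sketch, with the same `client` as above:

```typescript
// Expect a checklist of concrete actions for the topic;
// `limit` is undocumented, so treat it as a best-effort cap.
await client.callTool({
  name: "get_concrete_actions",
  arguments: { query: "biodiversité", limit: 5 },
});
```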
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the agent knows it's a safe read. The description adds the output format (checklist), but no other behavioral traits like rate limits or pagination.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences, front-loaded with purpose and output format. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool is simple with two parameters and no output schema. The description gives purpose and format, but lacks clarity on how results are structured (e.g., level of detail in actions) and differentiation from similar sibling tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description does not mention any parameters. Schema coverage is 50%; the query parameter has a useful description, but limit is undocumented. The description should at least hint at the limit parameter or required query format.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it extracts a list of concrete actions related to a subject in checklist format. It has a specific verb and resource, but does not differentiate from sibling tools like search_solutions_concretes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions it's ideal for creating practical guides, giving a use case context. However, no explicit guidance on when not to use or alternatives, leaving some ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_latest_solutions · Latest published solutions · Grade C · Read-only
Displays the most recent Soluble(s) episodes.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
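A minimal call sketch; per the assessment below, `limit` reportedly defaults to 5:

```typescript
// Omitting `limit` should return the default number of episodes
// (reportedly 5); pass a value to override.
await client.callTool({
  name: "get_latest_solutions",
  arguments: { limit: 3 },
});
```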
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate it's read-only and non-destructive. The description adds no behavioral details beyond stating it 'displays' episodes, which aligns with readOnlyHint. No mention of ordering, pagination, or other behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single short sentence, concise but too minimal. It earns its place as a brief statement, but lacks substance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description does not specify what is returned (e.g., format, fields) or how 'episodes' relate to solutions. Given no output schema and a single parameter, the description remains incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has one parameter 'limit' with a default of 5, but the description does not mention or explain this parameter. With 0% schema description coverage, the description fails to compensate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states it displays the most recent episodes of 'Soluble(s)', but the tool name is 'get_latest_solutions' and it's unclear if 'episodes' refers to solutions. The purpose is somewhat clear (retrieve latest items) but confusing due to terminology mismatch.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs. siblings like 'search_solutions_concretes' or 'recommend_solutions'. No context on prerequisites or exclusions provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recommend_solutions · Recommend solutions · Grade A · Read-only
Recommends Soluble(s) episodes based on a user context or profile (e.g., parent, teacher, activist, business).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| context | Yes | 1 to 3 short keywords summarizing the context (e.g., 'enseignant', 'entreprise RSE', 'parent climat') | |
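A sketch of a context-based call, reusing the earlier `client`:

```typescript
// `context` is a short profile, not a question: e.g. 'enseignant',
// 'entreprise RSE', 'parent climat'.
await client.callTool({
  name: "recommend_solutions",
  arguments: { context: "parent climat" },
});
```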
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, and the description is consistent with a read-only recommendation operation. However, the description adds no additional behavioral details beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence that is efficient and front-loaded, containing no unnecessary words. Every part of the sentence is informative.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity and the presence of annotations, the description adequately covers the purpose and input. However, since there is no output schema, the description could briefly mention that the tool returns a list of episodes.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds value by specifying that 'context' should be 1 to 3 short keywords, providing clarification beyond the schema's description. However, 'limit' remains undocumented in both schema and description, leaving a gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool recommends Soluble episodes based on user context or profile, using a specific verb and resource. It distinguishes from siblings like search_solutions_concretes, which likely search rather than recommend.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for context-based recommendations but does not explicitly specify when to use or when to avoid. It provides no exclusion criteria or alternative tool names, though siblings are listed for context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_across_apis · Global search · Grade B · Read-only
Global search across all Soluble(s) APIs; combines titles, transcriptions, actions, and needs.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | Yes | 1 to 3 short keywords (e.g., 'sans-abrisme', 'ecologie', 'mobilité') | |
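A call sketch for the broadest search, same pattern as before:

```typescript
// Searches titles, transcriptions, actions, and needs in one call.
await client.callTool({
  name: "search_across_apis",
  arguments: { query: "mobilité", limit: 5 },
});
```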
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the agent knows it's a safe read operation. The description adds that the search covers multiple content types across APIs, but lacks details on pagination, rate limits, or response structure. The added value is moderate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence that front-loads the core purpose. While it is not verbose, it avoids unnecessary padding. Small improvements could be made without breaking conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a cross-API search tool with no output schema, the description should clarify what the results contain or how they are structured. It mentions content types but omits practical details like result ordering or maximum results. The high-level context is present but incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is only 50% (query has inline description, limit does not). The tool description does not explain either parameter; it merely hints at search content. The description fails to compensate for the schema gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs a global search across all Soluble APIs, combining titles, transcriptions, actions, and needs. This verb+resource combination is specific and distinguishes it from more focused siblings like find_solutions_for_need or get_concrete_actions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus its siblings. There is no mention of scenarios where a more specific search tool would be preferred, nor any exclusions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_solutions_concretes · Keyword search · Grade A · Read-only
Global search in titles and transcriptions. IMPORTANT: use 1 to 3 short keywords (e.g., 'coraux', 'zero dechet', 'harcèlement scolaire'); never send a full sentence.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | 1 to 3 short keywords (e.g., 'coraux', 'zero dechet', 'harcèlement') | |
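And the narrower title/transcription search, again reusing the earlier `client`:

```typescript
// Title and transcription search only; keep `query` to 1-3 keywords.
await client.callTool({
  name: "search_solutions_concretes",
  arguments: { query: "zero dechet" },
});
```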
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds that it searches in titles and transcriptions, which is useful but does not disclose other behavioral traits like pagination or result limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose, followed by a critical usage instruction. No superfluous content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple search tool with one parameter and annotations, the description is sufficient. It covers what and how to search. No output schema, but not necessary for this type.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the description repeats the same information about 1-3 keywords. No additional semantic value beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The title and description clearly state it's a keyword search in titles and transcriptions. However, it does not explicitly differentiate from sibling tools like search_across_apis, which could cause ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit instruction on keyword length (1-3 short keywords) and warns against full sentences. However, it lacks guidance on when to use this tool vs. siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
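Once the file is live, a quick reachability check can confirm it is served correctly; this sketch assumes Node 18+ (built-in fetch) and a placeholder domain:

```typescript
// Placeholder domain: substitute the domain that serves your MCP server.
const res = await fetch("https://example.com/.well-known/glama.json");
if (!res.ok) throw new Error(`expected 200, got ${res.status}`);
const manifest = await res.json();
console.log(manifest.maintainers); // should list your Glama account email
```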
Claiming the connector lets you:

- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.