
Soluble(s) — Solutions Journalism Podcast

Server Details

Solutions journalism podcast (FR): ecology, climate, society. Search, transcripts, junior versions.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: petitsolu/soluble-mcp
GitHub Stars: 0
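The server is exposed over the Streamable HTTP transport, meaning clients POST JSON-RPC 2.0 messages to a single endpoint. A minimal sketch of building a `tools/list` request is below; the endpoint URL is a placeholder, since the listing above does not show the real one, and no network call is made:

```python
import json
import urllib.request

# Placeholder endpoint: the real server URL is not shown in the listing.
SERVER_URL = "https://example.com/mcp"

# A JSON-RPC 2.0 request asking the server to enumerate its tools.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}

req = urllib.request.Request(
    SERVER_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        # Streamable HTTP servers may answer with plain JSON or an SSE stream.
        "Content-Type": "application/json",
        "Accept": "application/json, text/event-stream",
    },
    method="POST",
)
```

Sending `req` (e.g. via `urllib.request.urlopen`) would return the server's tool list, including the seven tools reviewed below.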

Tool Descriptions (Grade: B)

Average 3.5/5 across 7 of 7 tools scored. Lowest: 2.2/5.

Server Coherence (Grade: A)
Disambiguation: 3/5

Some tools overlap in functionality (search_across_apis vs. search_solutions_concretes, and find_solutions_for_need vs. search_solutions_concretes). However, the descriptions attempt to differentiate them by scope and input format, reducing confusion.

Naming Consistency: 4/5

Tool names follow a consistent verb_noun pattern (e.g., find_junior_versions, get_concrete_actions). The only minor deviation is find_solutions_for_need, which includes a preposition, but overall the naming is predictable.

Tool Count: 5/5

With 7 tools, the server covers key functionalities like search, discovery, recommendations, and actions extraction. This is well-scoped for a podcast-focused server: neither too few nor too many.

Completeness: 4/5

The tool set covers searching, filtering, recommending, and extracting actions. However, missing tools for retrieving full episode details or direct episode-by-ID access represent minor gaps that agents may need to work around.
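The disambiguation and completeness notes above can be condensed into a routing table. This mapping is inferred from the tool descriptions reviewed below; it is illustrative and not defined by the server itself:

```python
# Inferred intent-to-tool routing for the 7 tools, based on the
# descriptions reviewed on this page (illustrative, not authoritative).
TOOL_ROUTING = {
    "content for children (ages 6-12)": "find_junior_versions",
    "user states a need": "find_solutions_for_need",
    "checklist of practical actions": "get_concrete_actions",
    "most recent episodes": "get_latest_solutions",
    "recommendation from a user profile": "recommend_solutions",
    "broad search over all content types": "search_across_apis",
    "keyword search in titles and transcripts": "search_solutions_concretes",
}
```

An agent (or a tool description author) could use a table like this to decide between overlapping search tools before issuing a call.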

Available Tools

7 tools
find_junior_versions: Find Junior versions (for children). Grade: A
Read-only

Finds Soluble(s) episodes that exist in a Junior version, i.e. adapted to explain the topic to children. Useful for parents, teachers, youth workers, or for popularizing a topic for a young audience. If the LLM is asked by a parent who wants to explain a topic to their child, or if educational content suited to ages 6-12 is needed, this is the tool to use. IMPORTANT: extract 1 to 3 short keywords (e.g. 'climat', 'biodiversite', 'violences'). Leave query empty to list all available Junior versions.

Parameters (JSON Schema):
- limit (optional)
- query (optional): 1 to 3 short keywords (e.g. 'climat', 'oceans', 'biodiversite'). Leave empty to list all available Junior versions.
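A call to this tool over MCP would be a `tools/call` request; the sketch below shows the shape such a message might take, with the short-keyword query the description insists on. The argument values are examples, not output from the real server:

```python
# Example tools/call request for find_junior_versions. Note the short
# keyword query, as the description instructs; never a full sentence.
call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "find_junior_versions",
        "arguments": {
            "query": "climat",  # 1-3 short keywords; omit to list all Junior versions
            "limit": 5,         # optional; carries no description in the schema
        },
    },
}
```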
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already mark it as readOnly. The description adds behavioral context: it returns a list of episodes that exist in Junior version, and instructs on query format (1-3 keywords). No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single paragraph that efficiently covers purpose, usage, and parameter instructions. It is front-loaded with the main action, though slightly verbose with repeated emphasis on context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple search tool with no output schema, the description covers when to use and how to query. It implies returning a list of episodes but does not explicitly state response format, which is a minor gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 50% (only query has description in schema). The description elaborates on query ('1-3 mots-clés courts', can leave empty) but does not add to limit beyond default. Partially compensates for missing schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool finds 'Soluble' episodes with a Junior version for children. It distinguishes from sibling tools like 'find_solutions_for_need' or 'search_solutions_concretes' which target different content.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says to use this tool when a parent or teacher wants to explain a topic to a child, or for pedagogical content. It does not mention alternatives or when not to use, but provides strong contextual guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

find_solutions_for_need: Find solutions by need. Grade: A
Read-only

Searches for solutions based on a user need. IMPORTANT: extract 1 to 3 short keywords from the question (e.g. 'manger local', 'climat adaptation', 'violences enfants'); never send a full sentence.

Parameters (JSON Schema):
- besoin_or_question (required): 1 to 3 short keywords summarizing the need (e.g. 'climat adaptation', 'manger local', 'violences enfants')
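The "extract 1 to 3 short keywords, never a full sentence" rule could be sketched as a small preprocessing step. The stopword list and heuristic below are invented for illustration; the server does not define any extraction logic:

```python
# Naive keyword extraction for besoin_or_question: drop common French
# function words, keep at most 3 content words. Purely illustrative.
FRENCH_STOPWORDS = {
    "je", "veux", "comment", "pour", "des", "les", "un", "une",
    "de", "du", "la", "le", "que", "qui", "est", "mon", "ma", "mes",
    "à", "a", "en", "et", "aider",
}

def extract_keywords(question: str, max_keywords: int = 3) -> str:
    words = [w.strip("?.!,").lower() for w in question.split()]
    content = [w for w in words if w and w not in FRENCH_STOPWORDS]
    return " ".join(content[:max_keywords])

# e.g. "Comment manger local pour aider le climat ?" -> "manger local climat"
```

In practice the calling LLM would do this extraction itself, but the sketch shows why "manger local" is a valid argument while the full question is not.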
Behavior: 4/5

Annotations already indicate readOnlyHint=true and destructiveHint=false, so the description is not required to restate safety. However, it adds valuable behavioral guidance by explicitly requiring keyword extraction and prohibiting full sentences, which is beyond what annotations provide.

Conciseness: 5/5

The description is two sentences long, with no redundant information. The first sentence states the purpose, and the second sentence provides critical usage guidance. Every word is necessary, and the structure is front-loaded.

Completeness: 3/5

The tool has no output schema, so the description should hint at what the return value contains (e.g., list of solution names). It does not describe the output format or structure, leaving a gap for agents that need to interpret the result.

Parameters: 4/5

Schema coverage is 100%, and the parameter description in the schema already includes examples. The tool description reinforces the instruction to use short keywords, adding contextual value that helps the agent understand how to form the query correctly.

Purpose: 5/5

The description clearly states 'Recherche des solutions basées sur un besoin utilisateur', which is a specific verb+resource combination. The title and description differentiate it from sibling tools like 'find_junior_versions' and 'recommend_solutions' by focusing on need-based search.

Usage Guidelines: 3/5

The description provides a clear instruction on how to extract keywords (1 to 3 short keywords), but it does not specify when to use this tool over its siblings, such as when to prefer it over 'search_solutions_concretes' or 'recommend_solutions'.

get_concrete_actions: Extract concrete actions. Grade: B
Read-only

Extracts only the list of concrete actions related to a topic, in checklist format. Ideal for building practical guides.

Parameters (JSON Schema):
- limit (optional)
- query (required): 1 to 3 short keywords (e.g. 'biodiversité', 'violences', 'mobilité')
Behavior: 3/5

Annotations already declare readOnlyHint=true and destructiveHint=false, so the agent knows it's a safe read. The description adds the output format (checklist), but no other behavioral traits like rate limits or pagination.

Conciseness: 5/5

Two efficient sentences in French, front-loaded with purpose and output format. No unnecessary words.

Completeness: 3/5

The tool is simple with two parameters and no output schema. The description gives purpose and format, but lacks clarity on how results are structured (e.g., level of detail in actions) and differentiation from similar sibling tools.

Parameters: 2/5

The description does not mention any parameters. Schema coverage is 50%; the query parameter has a useful description, but limit is undocumented. The description should at least hint at the limit parameter or required query format.
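The 50% schema coverage flagged here could be closed by giving every parameter a description. The schema below is a suggested improvement, not the server's actual schema:

```python
# A suggested input schema for get_concrete_actions in which every
# parameter carries a description (the real schema documents only query).
SUGGESTED_SCHEMA = {
    "type": "object",
    "properties": {
        "query": {
            "type": "string",
            "description": "1 to 3 short keywords (e.g. 'biodiversité', 'mobilité')",
        },
        "limit": {
            "type": "integer",
            "description": "Maximum number of actions to return",
            "minimum": 1,
        },
    },
    "required": ["query"],
}

# Coverage check: every declared property should have a description.
coverage = sum(
    1 for p in SUGGESTED_SCHEMA["properties"].values() if "description" in p
) / len(SUGGESTED_SCHEMA["properties"])
```

With both properties described, the coverage metric used in this review would reach 100%.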

Purpose: 4/5

The description clearly states it extracts a list of concrete actions related to a subject in checklist format. It has a specific verb and resource, but does not differentiate from sibling tools like search_solutions_concretes.

Usage Guidelines: 3/5

The description mentions it's ideal for creating practical guides, giving a use case context. However, no explicit guidance on when not to use or alternatives, leaving some ambiguity.

get_latest_solutions: Latest published solutions. Grade: C
Read-only

Displays the most recent Soluble(s) episodes.

Parameters (JSON Schema):
- limit (optional)
Behavior: 2/5

Annotations already indicate it's read-only and non-destructive. The description adds no behavioral details beyond stating it 'displays' episodes, which aligns with readOnlyHint. No mention of ordering, pagination, or other behavior.

Conciseness: 3/5

The description is a single sentence of 7 words, which is concise but too minimal. It earns its place as a brief statement, but lacks substance.

Completeness: 2/5

The description does not specify what is returned (e.g., format, fields) or how 'episodes' relate to solutions. Given no output schema and a single parameter, the description remains incomplete.

Parameters: 1/5

The input schema has one parameter 'limit' with a default of 5, but the description does not mention or explain this parameter. With 0% schema description coverage, the description fails to compensate.
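The undocumented limit parameter (default 5, per the analysis above) could be surfaced directly in the schema. This is a suggested fix, not the server's actual schema:

```python
# Suggested schema for get_latest_solutions: the review notes that limit
# defaults to 5 but carries no description in the real schema.
LATEST_SCHEMA = {
    "type": "object",
    "properties": {
        "limit": {
            "type": "integer",
            "default": 5,
            "description": "Number of most recent episodes to return (default 5)",
        },
    },
}
```

JSON Schema's `default` keyword is annotation-only, so stating the default in the description as well keeps it visible to agents that only read descriptions.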

Purpose: 3/5

The description states it displays the most recent episodes of 'Soluble(s)', but the tool name is 'get_latest_solutions' and it's unclear if 'episodes' refers to solutions. The purpose is somewhat clear (retrieve latest items) but confusing due to terminology mismatch.

Usage Guidelines: 2/5

No guidance on when to use this tool vs. siblings like 'search_solutions_concretes' or 'recommend_solutions'. No context on prerequisites or exclusions provided.

recommend_solutions: Recommend solutions. Grade: A
Read-only

Recommends Soluble(s) episodes based on a user context or profile (e.g. parent, teacher, activist, company).

Parameters (JSON Schema):
- limit (optional)
- context (required): 1 to 3 short keywords summarizing the context (e.g. 'enseignant', 'entreprise RSE', 'parent climat')
Behavior: 3/5

Annotations already declare readOnlyHint=true and destructiveHint=false, and the description is consistent with a read-only recommendation operation. However, the description adds no additional behavioral details beyond what annotations provide.

Conciseness: 5/5

A single sentence that is efficient and front-loaded, containing no unnecessary words. Every part of the sentence is informative.

Completeness: 4/5

Given the tool's low complexity and the presence of annotations, the description adequately covers the purpose and input. However, since there is no output schema, the description could briefly mention that the tool returns a list of episodes.

Parameters: 4/5

The description adds value by specifying that 'context' should be 1 to 3 short keywords, providing clarification beyond the schema's description. However, 'limit' remains undocumented in both schema and description, leaving a gap.

Purpose: 5/5

The description clearly states the tool recommends Soluble(s) episodes based on user context or profile, using a specific verb and resource. It distinguishes from siblings like search_solutions_concretes, which likely search rather than recommend.

Usage Guidelines: 3/5

The description implies usage for context-based recommendations but does not explicitly specify when to use or when to avoid. It provides no exclusion criteria or alternative tool names, though siblings are listed for context.

search_across_apis: Global search. Grade: B
Read-only

Global search across all Soluble(s) APIs: combines titles, transcripts, actions, and needs.

Parameters (JSON Schema):
- limit (optional)
- query (required): 1 to 3 short keywords (e.g. 'sans-abrisme', 'ecologie', 'mobilité')
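A search that combines titles, transcripts, actions, and needs presumably merges per-source result lists and deduplicates episodes. The sketch below illustrates that general pattern; the function, field names, and episode data are all invented for illustration and say nothing about the server's real implementation:

```python
# Illustrative merge of results from several content sources, deduplicated
# by episode id, keeping the best (lowest) rank seen for each episode.
def merge_results(*sources: list[dict]) -> list[dict]:
    best: dict[str, dict] = {}
    for source in sources:
        for rank, hit in enumerate(source):
            ep = hit["episode_id"]
            if ep not in best or rank < best[ep]["rank"]:
                best[ep] = {"rank": rank, **hit}
    return sorted(best.values(), key=lambda h: h["rank"])

# Invented sample data: one episode found via its title, two via transcripts.
titles = [{"episode_id": "e1", "field": "title"}]
transcripts = [
    {"episode_id": "e2", "field": "transcript"},
    {"episode_id": "e1", "field": "transcript"},
]
merged = merge_results(titles, transcripts)
```

Whatever the real merge logic is, documenting result ordering would address the completeness gap noted below.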
Behavior: 3/5

Annotations already declare readOnlyHint=true and destructiveHint=false, so the agent knows it's a safe read operation. The description adds that the search covers multiple content types across APIs, but lacks details on pagination, rate limits, or response structure. The added value is moderate.

Conciseness: 4/5

The description is a single concise sentence that front-loads the core purpose. While it is not verbose, it avoids unnecessary padding. Small improvements could be made without breaking conciseness.

Completeness: 3/5

For a cross-API search tool with no output schema, the description should clarify what the results contain or how they are structured. It mentions content types but omits practical details like result ordering or maximum results. The high-level context is present but incomplete.

Parameters: 2/5

Schema description coverage is only 50% (query has an inline description, limit does not). The tool description does not explain either parameter; it merely hints at search content. The description fails to compensate for the schema gaps.

Purpose: 5/5

The description clearly states the tool performs a global search across all Soluble(s) APIs, combining titles, transcripts, actions, and needs. This verb+resource combination is specific and distinguishes it from more focused siblings like find_solutions_for_need or get_concrete_actions.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus its siblings. There is no mention of scenarios where a more specific search tool would be preferred, nor any exclusions or prerequisites.

search_solutions_concretes: Keyword search. Grade: A
Read-only

Global search in titles and transcripts. IMPORTANT: use 1 to 3 short keywords (e.g. 'coraux', 'zero dechet', 'harcèlement scolaire'); never send a full sentence.

Parameters (JSON Schema):
- query (required): 1 to 3 short keywords (e.g. 'coraux', 'zero dechet', 'harcèlement')
Behavior: 3/5

Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds that it searches in titles and transcripts, which is useful but does not disclose other behavioral traits like pagination or result limits.

Conciseness: 5/5

Two sentences, front-loaded with purpose, followed by a critical usage instruction. No superfluous content.

Completeness: 4/5

For a simple search tool with one parameter and annotations, the description is sufficient. It covers what and how to search. No output schema, but not necessary for this type.

Parameters: 3/5

Schema coverage is 100% and the description repeats the same information about 1-3 keywords. No additional semantic value beyond the schema.

Purpose: 4/5

The title and description clearly state it's a keyword search in titles and transcripts. However, it does not explicitly differentiate from sibling tools like search_across_apis, which could cause ambiguity.

Usage Guidelines: 3/5

Provides explicit instruction on keyword length (1-3 short keywords) and warns against full sentences. However, it lacks guidance on when to use this tool vs. siblings.
