Spain Legal by Legal Fournier
Server Details
Spain legal MCP for visas, Beckham, NIE/TIE, residency, nationality, and EU family routes.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging: Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
6 tools

check_beckham_eligibility (Check Beckham Eligibility) · Read-only · Idempotent
Screen Spain's Beckham regime using qualitative gatechecks, returning the rule trace, review level, and canonical MCP resources for follow-up.
| Name | Required | Description | Default |
|---|---|---|---|
| move_reason | Yes | Main reason for relocating to Spain. | |
| ownership_band | No | Optional ownership context for director-style cases. | |
| employment_type | Yes | Employment structure that will support the move. | |
| years_since_last_spanish_residency | Yes | Number of years since the applicant was last a Spanish tax resident. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| status | Yes | High-level Beckham eligibility outcome. |
| reasons | Yes | Positive signals supporting the result. |
| summary | Yes | One-line explanation of the result. |
| next_steps | Yes | Suggested next steps. |
| references | Yes | Secondary Legal Fournier references. |
| review_level | Yes | How much human review is still advisable before treating the result as filing-ready. |
| decision_trace | Yes | Structured trace of the main Beckham screening factors. |
| blocking_issues | Yes | Blocking or weakening issues. |
| qualifying_paths | Yes | Potential qualifying paths. |
| key_rules_applied | Yes | Stable rules applied by the tool. |
| related_resource_uris | Yes | Canonical MCP resources an agent can read next without leaving the server. |
| official_legal_sources | Yes | Official legal sources anchoring the Beckham analysis. |
| suggested_follow_up_tools | Yes | Tool calls that are likely to advance the analysis. |
| current_verification_flags | Yes | Live-verification warnings for fact-sensitive or time-sensitive Beckham points. |
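A minimal agent-side sketch of the arguments this tool expects, based on the parameter table above. The field names are real; the string values are illustrative placeholders, since the listing does not publish the underlying enum values.

```python
# Illustrative arguments for check_beckham_eligibility.
# Field names come from the input schema above; the string values are
# placeholders, not documented enum members.
arguments = {
    "move_reason": "employment",              # required
    "employment_type": "employee",            # required
    "years_since_last_spanish_residency": 6,  # required
    # "ownership_band" is optional and omitted here.
}

# Guard against forgetting a required field before calling the tool.
REQUIRED = {"move_reason", "employment_type", "years_since_last_spanish_residency"}
missing = REQUIRED - arguments.keys()
assert not missing, f"missing required fields: {sorted(missing)}"
```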
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnly/idempotent/destructive hints, so the description adds valuable behavioral context beyond safety: it specifies 'qualitative gatechecks' as the method and details the return structure (rule trace, review level, canonical resources). This helps the agent understand what kind of analysis is performed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single dense sentence with zero waste. Front-loaded with action (Screen), target (Beckham regime), method (qualitative gatechecks), and output (rule trace, review level, resources). Every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema (not shown but indicated), rich input schema with 100% coverage, and comprehensive annotations, the description appropriately focuses on high-level behavior rather than enumerating parameters or return fields. Sufficient for a specialized tax screening tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with well-documented enums (move_reason, employment_type, ownership_band). The description provides no additional parameter semantics, but baseline 3 is appropriate when the schema carries full documentation burden.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Screen' with clear resource 'Spain's Beckham regime' and distinguishes from siblings like compare_tax_regimes by focusing specifically on Beckham eligibility rather than general tax comparison. It also clarifies the return values (rule trace, review level, MCP resources).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the specificity to 'Beckham regime' implies usage context, there is no explicit guidance on when to use this versus compare_tax_regimes or get_visa_options. The mention of 'canonical MCP resources for follow-up' hints at workflow but lacks explicit when/when-not directives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_tax_regimes (Compare Tax Regimes) · Read-only · Idempotent
Compare Beckham versus standard Spanish resident taxation conceptually, returning reasoning, review level, and canonical MCP resources instead of rate tables.
| Name | Required | Description | Default |
|---|---|---|---|
| employment_type | No | Employment structure to test against the conceptual tax comparison. | |
| has_foreign_income | No | Whether foreign-source income is material to the profile. | |
| prefers_predictability | No | Whether the applicant values a simpler, more predictable regime structure. | |
| has_significant_foreign_assets | No | Whether foreign assets are materially relevant to planning. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| caveats | Yes | Caveats that limit the comparison. |
| summary | Yes | Short explanation of the recommendation. |
| comparison | Yes | Topic-by-topic conceptual comparison. |
| references | Yes | Secondary Legal Fournier references. |
| next_actions | Yes | Next actions that sharpen the tax answer. |
| review_level | Yes | How much human review is still advisable before treating the result as filing-ready. |
| decision_trace | Yes | Structured trace of the main tax-comparison factors. |
| recommendation | Yes | Conceptual starting recommendation. |
| likely_fit_notes | Yes | Why the profile leans toward a given regime. |
| key_rules_applied | Yes | Stable rules applied by the tool. |
| related_resource_uris | Yes | Canonical MCP resources an agent can read next without leaving the server. |
| official_legal_sources | Yes | Official tax and mobility sources anchoring the comparison. |
| suggested_follow_up_tools | Yes | Tool calls that are likely to advance the analysis. |
| current_verification_flags | Yes | Live-verification warnings for entry-path, timing, or filing issues. |
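Since every compare_tax_regimes parameter is optional, a call can carry only the profile flags that are known. The sketch below also gates on the documented review_level output field; the labels "advisory" and "none" are assumed values, as the listing does not enumerate review levels.

```python
# All four input parameters are optional; pass only what is known.
arguments = {
    "has_foreign_income": True,
    "has_significant_foreign_assets": True,
    "prefers_predictability": False,
    # "employment_type" omitted.
}

# Mocked slice of the documented output; "advisory" is a placeholder label.
response = {"recommendation": "beckham", "review_level": "advisory", "caveats": []}

# Treat anything above an assumed "none" level as needing human review.
needs_human_review = response["review_level"] != "none"
```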
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and idempotentHint=true, confirming safe analytical behavior. The description adds valuable context beyond these annotations by disclosing the output format (conceptual reasoning vs. data tables) and mentioning 'canonical MCP resources,' which hints at tool relationships without contradicting the safety annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficiently structured sentence that front-loads the core action ('Compare Beckham versus standard Spanish resident taxation conceptually') and immediately follows with output specifications. There is zero waste; every clause delivers necessary scoping information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema and 100% input schema coverage, the description appropriately focuses on conceptual scope and output characterization rather than repeating structural details. It successfully conveys the analytical nature of the comparison and resource-linking behavior, though it could strengthen workflow context by mentioning the eligibility check prerequisite.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, all four parameters (employment_type, has_foreign_income, prefers_predictability, has_significant_foreign_assets) are already well-documented in the schema. The description does not add additional parameter-specific guidance or interaction logic, meeting the baseline expectation for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a specific verb ('Compare'), clear resources ('Beckham versus standard Spanish resident taxation'), and scope ('conceptually'). It effectively distinguishes from sibling 'check_beckham_eligibility' by positioning this as a conceptual comparison tool rather than an eligibility validator.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage context by specifying what it returns ('reasoning, review level, and canonical MCP resources') and explicitly excluding rate tables ('instead of rate tables'), guiding the agent away from using this for numerical calculations. However, it does not explicitly state the workflow relationship with 'check_beckham_eligibility' (e.g., whether to check eligibility first).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
explain_nie_process (Explain NIE Process) · Read-only · Idempotent
Return the stable NIE and TIE workflow, the key procedural distinctions, and the canonical process resource for agents.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| forms | Yes | Relevant forms and administrative references. |
| steps | Yes | Ordered process steps. |
| summary | Yes | Short overview of the NIE/TIE process. |
| references | Yes | Secondary Legal Fournier references. |
| next_actions | Yes | Next actions to progress the procedure. |
| review_level | Yes | How much human review is still advisable before treating the result as filing-ready. |
| decision_trace | Yes | Structured trace of the procedural distinctions that matter. |
| common_mistakes | Yes | Common mistakes in NIE/TIE processing. |
| key_distinctions | Yes | Key distinctions that agents should preserve. |
| key_rules_applied | Yes | Stable procedural rules applied by the tool. |
| related_resource_uris | Yes | Canonical MCP resources an agent can read next without leaving the server. |
| official_legal_sources | Yes | Official legal and administrative sources anchoring the procedure. |
| suggested_follow_up_tools | Yes | Tool calls that are likely to advance the analysis. |
| current_verification_flags | Yes | Live-verification warnings for office-level or fee-level volatility. |
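Because explain_nie_process takes no parameters, the interesting part is consuming its output. A sketch over mocked content in the documented output fields (steps, current_verification_flags); the step wording is illustrative, though EX-15 is the standard NIE application form.

```python
# Mocked fragment of the documented output schema; real content comes
# from the server. Step wording here is illustrative.
response = {
    "summary": "NIE is the foreigner ID number; the TIE is the physical card.",
    "steps": ["Book a cita previa", "File form EX-15", "Pay the fee", "Collect the NIE"],
    "current_verification_flags": ["Office-level fees and appointment rules vary"],
}

# Surface the ordered steps, then append any volatility warnings.
plan = [f"{i}. {step}" for i, step in enumerate(response["steps"], start=1)]
plan += [f"Check live: {flag}" for flag in response["current_verification_flags"]]
```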
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already establish read-only, idempotent, non-destructive behavior. The description adds value by specifying the content is 'stable' and 'canonical', implying reliability and official status, but does not disclose additional behavioral traits like caching policies or response freshness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, information-dense sentence with no filler. Key qualifiers ('stable', 'canonical', 'for agents') each serve distinct purposes in setting expectations about content quality and intended audience.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has zero parameters and an output schema exists, the description appropriately summarizes the output content (workflow, distinctions, resource) without needing to elaborate on return values. A brief mention that this requires no filters or prerequisites would strengthen completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, which establishes a baseline score of 4. The description appropriately implies no user input is required by using the transitive verb 'Return' without referencing any parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Return') and identifies the resource clearly ('NIE and TIE workflow', 'procedural distinctions', 'canonical process resource'). However, it does not explicitly differentiate from siblings like 'get_residency_path' which might also provide process information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions the target audience ('for agents') but provides no explicit guidance on when to use this tool versus alternatives like 'get_residency_path' or 'route_to_legal_fournier_help'. There are no stated prerequisites or exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_residency_path (Get Residency Path) · Read-only · Idempotent
Explain the next permanent-residency or nationality milestone from current status and time in Spain, with explicit caution flags for counting issues.
| Name | Required | Description | Default |
|---|---|---|---|
| current_status | Yes | Current Spanish immigration or nationality status. | |
| years_in_spain | Yes | Years already spent in Spain under the relevant stay or residence history. | |
| nationality_track | No | Optional nationality timeline group for a more specific nationality answer. | |
| has_absence_concerns | No | Whether absences or continuity problems may weaken the residence or nationality clock. | |
| special_nationality_basis | No | Optional basis for the one-year nationality track when that exception is being claimed. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| summary | Yes | Short explanation of where the person sits on the path. |
| milestones | Yes | Key milestones on the path. |
| next_steps | Yes | Immediate next steps. |
| references | Yes | Secondary Legal Fournier references. |
| review_level | Yes | How much human review is still advisable before treating the result as filing-ready. |
| caution_notes | Yes | Important cautions. |
| decision_trace | Yes | Structured trace of the main timing factors. |
| key_rules_applied | Yes | Stable rules applied by the tool. |
| nationality_status | Yes | Nationality stage given the provided track information. |
| related_resource_uris | Yes | Canonical MCP resources an agent can read next without leaving the server. |
| official_legal_sources | Yes | Official legal sources anchoring the residence and nationality timeline analysis. |
| suggested_follow_up_tools | Yes | Tool calls that are likely to advance the analysis. |
| current_verification_flags | Yes | Live-verification warnings for route, continuity, or timing issues. |
| permanent_residency_status | Yes | Long-term residence stage. |
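A sketch of arguments for this tool. current_status and years_in_spain are the two required fields per the table above; the status label used here is a placeholder, not a documented enum value.

```python
# Illustrative arguments for get_residency_path; "temporary_residence"
# is a placeholder status label, not a documented enum member.
arguments = {
    "current_status": "temporary_residence",  # required, placeholder value
    "years_in_spain": 4,                      # required
    "has_absence_concerns": True,             # optional continuity flag
    # nationality_track and special_nationality_basis omitted.
}

REQUIRED = {"current_status", "years_in_spain"}
assert REQUIRED <= arguments.keys(), "both required fields must be present"
```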
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations confirm read-only/idempotent safety, while the description adds valuable behavioral context that the tool provides 'explicit caution flags for counting issues.' This discloses a key output characteristic (warning generation for residence clock problems) beyond what annotations provide, though it doesn't mention rate limits or caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is front-loaded with the action ('Explain'), specifies the inputs, and adds the unique value proposition ('explicit caution flags') without redundancy. Every clause earns its place in guiding tool selection.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema and comprehensive input schema with enums, the description appropriately focuses on purpose and behavioral traits rather than enumerating parameters or return values. It successfully signals the tool's complexity-handling (multiple nationality tracks) through the 'counting issues' mention, though it could briefly acknowledge the optional nationality track variations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the parameters are well-documented in the schema itself. The description references the two required parameters ('current status and time') and alludes to the absence concern parameter ('counting issues'), but does not add syntax details or semantic clarifications beyond the schema definitions, warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Explain') and resources ('permanent-residency or nationality milestone') and scopes the function to users already in Spain ('from current status and time in Spain'). It clearly distinguishes from sibling tools like get_visa_options (entry) and explain_nie_process (identification documents) by focusing on status progression.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage prerequisites by referencing 'current status and time in Spain,' indicating when the tool is applicable. However, it lacks explicit guidance on when NOT to use it or direct comparisons to alternatives like route_to_legal_fournier_help for complex cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_visa_options (Get Visa Options) · Read-only · Idempotent
Rank Spain residence routes using evergreen logic and return decision traces, next actions, and canonical MCP resources for the leading branches.
| Name | Required | Description | Default |
|---|---|---|---|
| intent | Yes | Main relocation intent. | |
| nationality | Yes | Applicant nationality as a country name or ISO-style country code. | |
| income_source | Yes | Main source of income for the move. | |
| employer_location | No | Where the main employer or client base is located, if known. | |
| has_eu_family_link | No | Whether an EU family-member route may need separate review. | |
| investment_profile | No | Whether the investment plan is passive or tied to an operating business. | |
| has_spanish_job_offer | No | Whether the applicant already has a Spanish job offer. | |
| eu_family_relationship | No | Optional relationship label when an EU-family route may be relevant. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| references | Yes | Secondary Legal Fournier references, demoted behind MCP-native context. |
| next_actions | Yes | Next actions to progress the analysis. |
| review_level | Yes | How much human review is still advisable before treating the result as filing-ready. |
| general_notes | Yes | General notes that apply across the route list. |
| ranked_routes | Yes | Ranked visa or residence routes. |
| decision_trace | Yes | Structured trace of the main route-selection factors. |
| profile_summary | Yes | One-line summary of the screened profile. |
| ruled_out_routes | Yes | Common routes ruled out by stable legal logic. |
| key_rules_applied | Yes | Stable rules that drove the recommendation. |
| related_resource_uris | Yes | Canonical MCP resources an agent can read next without leaving the server. |
| official_legal_sources | Yes | Official legal sources that anchor the recommendation. |
| suggested_follow_up_tools | Yes | Tool calls that are likely to advance the analysis. |
| current_verification_flags | Yes | Live-verification warnings for volatile or fact-sensitive points. |
| nationality_classification | Yes | High-level nationality bucket used by the route logic. |
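A hedged sketch of a get_visa_options call. intent, nationality, and income_source are required; the enum-style values below are illustrative, though the schema does state that nationality accepts a country name or ISO-style code.

```python
# Illustrative arguments for get_visa_options; string values other than
# the nationality code format are placeholders, not documented enums.
arguments = {
    "intent": "remote_work",              # required, placeholder value
    "nationality": "GB",                  # country name or ISO-style code per the schema
    "income_source": "foreign_employer",  # required, placeholder value
    "has_eu_family_link": False,          # optional
}

REQUIRED = {"intent", "nationality", "income_source"}
missing = REQUIRED - arguments.keys()
assert not missing, f"missing required fields: {sorted(missing)}"
```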
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations establish read-only/idempotent safety, but the description adds valuable behavioral context: it discloses the ranking methodology ('evergreen logic'), explains the three-component output structure (traces, actions, canonical resources), and clarifies that it returns MCP resource references. It does not mention rate limits or auth requirements, but these may not apply.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, dense sentence front-loaded with the core action ('Rank Spain residence routes'). Every clause delivers distinct value: methodology ('evergreen logic'), output artifacts ('decision traces, next actions'), and integration pattern ('canonical MCP resources'). Zero redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema, the description appropriately summarizes the return value structure without duplicating it. For a complex 8-parameter decision tool, it adequately covers the ranking behavior and output composition. Could optionally mention that it evaluates against multiple Spanish visa programs, but 'residence routes' implies sufficient breadth.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage across all 8 parameters (including enum values and types), the schema carries the semantic burden. The description does not add supplementary parameter guidance (e.g., specific nationality code formats), but this is unnecessary given the comprehensive schema documentation. Baseline score applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Rank') + resource ('Spain residence routes') and clearly distinguishes from sibling 'get_residency_path' by emphasizing comparative ranking rather than singular retrieval. It specifies the methodology ('evergreen logic') and deliverables ('decision traces, next actions, and canonical MCP resources'), making the scope distinct from specialized tools like 'check_beckham_eligibility'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description implies usage scenarios through the mention of ranking logic and decision traces, it lacks explicit guidance on when to use this broad ranking tool versus siblings like 'get_residency_path' (singular path retrieval) or 'check_beckham_eligibility' (specific regime check). No 'when-not' or prerequisite guidance is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
route_to_legal_fournier_help (Route To Legal Fournier Help) · Read-only · Idempotent
Decide whether a Spain legal matter should be escalated to Legal Fournier and return a service match, preparation checklist, and ready-to-send handoff message.
| Name | Required | Description | Default |
|---|---|---|---|
| area | Yes | Area of Spain legal help that needs human escalation. | |
| urgency | No | How quickly the user needs human help. | |
| blockers | No | Known blockers that make a self-serve answer less reliable. | |
| already_filed | No | Whether the user already has a live filing, notice, denial, or active procedure. | |
| preferred_language | No | Preferred language for the human handoff. |
Output Schema
| Name | Required | Description |
|---|---|---|
| summary | Yes | Short explanation of the handoff recommendation. |
| urgency | Yes | Urgency level used for the handoff recommendation. |
| why_now | Yes | Reasons supporting escalation. |
| references | Yes | Secondary Legal Fournier references for the handoff. |
| booking_url | Yes | Preferred consultation-booking URL when the user wants direct legal advice now. |
| intake_fields | Yes | Structured intake payload an agent can map into a contact form, CRM, or booking handoff. |
| should_escalate | Yes | Whether human escalation is recommended from the supplied facts. |
| what_to_prepare | Yes | What the agent should gather for the handoff. |
| recommended_service | Yes | Recommended Legal Fournier service for the matter. |
| agent_handoff_message | Yes | Ready-to-send summary an agent can reuse when escalating to Legal Fournier. |
| related_resource_uris | Yes | Canonical MCP resources an agent can read next without leaving the server. |
| representation_notice | Yes | Legal notice explaining that contact or booking does not itself create representation. |
| suggested_follow_up_tools | Yes | Tool calls that are likely to advance the analysis. |
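On the consuming side, an agent would branch on should_escalate and reuse the prepared handoff fields. A sketch over a mocked response; the message text and URL are placeholders.

```python
# Mocked slice of the documented output; all values are placeholders.
response = {
    "should_escalate": True,
    "urgency": "soon",
    "agent_handoff_message": "Client has a pending TIE renewal and a travel deadline.",
    "booking_url": "https://example.invalid/book",  # placeholder URL
    "representation_notice": "Contacting or booking does not create representation.",
}

# Reuse the ready-to-send message only when escalation is recommended,
# and carry the representation notice alongside it.
handoff = None
if response["should_escalate"]:
    handoff = (response["agent_handoff_message"], response["representation_notice"])
```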
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnly/idempotent safety, so the description appropriately focuses on adding context about what gets returned (service match, preparation checklist, handoff message). It also clarifies the decision-making nature of the tool beyond simple data retrieval, adding valuable behavioral expectations without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, efficiently structured sentence front-loaded with the core action. Every clause earns its place: the decision logic ('Decide whether... escalated'), the destination ('Legal Fournier'), and the three deliverables ('service match, preparation checklist, and ready-to-send handoff message'). Zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the existence of an output schema and rich input schema (100% coverage, enums), the description provides appropriate completeness by summarizing the three key deliverables and the escalation decision logic. It appropriately delegates parameter details to the schema while conveying the tool's scope in the legal domain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage and well-defined enums, the schema carries the full burden of parameter documentation. The description neither repeats parameter details nor adds semantic relationships between them (e.g., how blockers interact with area), earning the baseline score for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs (decide, escalate, return) and clearly identifies the resource (Spain legal matter → Legal Fournier). It effectively distinguishes this tool from self-serve siblings (check_beckham_eligibility, compare_tax_regimes, etc.) by emphasizing human escalation and decision-making.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context that this tool is for assessing escalation to human legal help ('Decide whether... should be escalated'). While it doesn't explicitly name alternatives or exclusions, the 'whether' clause and focus on Legal Fournier routing implicitly contrast with the informational sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.