Spain Legal by Legal Fournier

Server Details

Spain legal MCP for visas, Beckham, NIE/TIE, residency, nationality, and EU family routes.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

6 tools
check_beckham_eligibility: Check Beckham Eligibility (A)
Read-only · Idempotent

Screen Spain's Beckham regime using qualitative gatechecks, returning the rule trace, review level, and canonical MCP resources for follow-up.

Parameters (JSON Schema)

Name | Required | Description
move_reason | Yes | Main reason for relocating to Spain.
ownership_band | No | Optional ownership context for director-style cases.
employment_type | Yes | Employment structure that will support the move.
years_since_last_spanish_residency | Yes | Number of years since the applicant was last a Spanish tax resident.
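
Assembled from the parameter table above, a minimal MCP `tools/call` request could look like the following sketch. The argument values are placeholders invented for illustration; the server's actual enum values are not shown on this page.

```python
import json

# Hypothetical JSON-RPC payload for check_beckham_eligibility.
# Parameter names come from the schema above; the string values are
# placeholders, not the server's documented enum values.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "check_beckham_eligibility",
        "arguments": {
            "move_reason": "employment",               # required
            "employment_type": "foreign-employer",     # required
            "years_since_last_spanish_residency": 6,   # required
            # "ownership_band" is optional and omitted here
        },
    },
}
print(json.dumps(request))
```
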

Output Schema (JSON Schema)

Name | Required | Description
status | Yes | High-level Beckham eligibility outcome.
reasons | Yes | Positive signals supporting the result.
summary | Yes | One-line explanation of the result.
next_steps | Yes | Suggested next steps.
references | Yes | Secondary Legal Fournier references.
review_level | Yes | How much human review is still advisable before treating the result as filing-ready.
decision_trace | Yes | Structured trace of the main Beckham screening factors.
blocking_issues | Yes | Blocking or weakening issues.
qualifying_paths | Yes | Potential qualifying paths.
key_rules_applied | Yes | Stable rules applied by the tool.
related_resource_uris | Yes | Canonical MCP resources an agent can read next without leaving the server.
official_legal_sources | Yes | Official legal sources anchoring the Beckham analysis.
suggested_follow_up_tools | Yes | Tool calls that are likely to advance the analysis.
current_verification_flags | Yes | Live-verification warnings for fact-sensitive or time-sensitive Beckham points.
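
Because every result carries review_level, blocking_issues, and current_verification_flags, a calling agent can triage results mechanically. A hypothetical sketch follows; the field names come from the output schema above, but the result dict itself is fabricated for illustration.

```python
# Fabricated example result using field names from the output schema above.
result = {
    "status": "likely_eligible",
    "summary": "Profile passes the main Beckham gatechecks.",
    "review_level": "standard",
    "blocking_issues": [],
    "current_verification_flags": ["deadline rules may have changed"],
    "related_resource_uris": ["legal://beckham/overview"],
}

def needs_human_review(result: dict) -> bool:
    """Escalate when anything blocks the result or a live-verification flag exists."""
    return bool(result["blocking_issues"]) or bool(result["current_verification_flags"])

print(needs_human_review(result))  # the fabricated profile still carries a live flag
```
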
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnly/idempotent/destructive hints, and the description adds behavioral context beyond safety: it specifies 'qualitative gatechecks' as the method and details the return structure (rule trace, review level, canonical resources). This helps the agent understand what kind of analysis is performed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single dense sentence with zero waste. Front-loaded with action (Screen), target (Beckham regime), method (qualitative gatechecks), and output (rule trace, review level, resources). Every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema (not shown but indicated), rich input schema with 100% coverage, and comprehensive annotations, the description appropriately focuses on high-level behavior rather than enumerating parameters or return fields. Sufficient for a specialized tax screening tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with well-documented enums (move_reason, employment_type, ownership_band). The description provides no additional parameter semantics, but the baseline score of 3 is appropriate when the schema carries the full documentation burden.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Screen' with clear resource 'Spain's Beckham regime' and distinguishes from siblings like compare_tax_regimes by focusing specifically on Beckham eligibility rather than general tax comparison. It also clarifies the return values (rule trace, review level, MCP resources).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the specificity to 'Beckham regime' implies usage context, there is no explicit guidance on when to use this versus compare_tax_regimes or get_visa_options. The mention of 'canonical MCP resources for follow-up' hints at workflow but lacks explicit when/when-not directives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_tax_regimes: Compare Tax Regimes (A)
Read-only · Idempotent

Compare Beckham versus standard Spanish resident taxation conceptually, returning reasoning, review level, and canonical MCP resources instead of rate tables.

Parameters (JSON Schema)

Name | Required | Description
employment_type | No | Employment structure to test against the conceptual tax comparison.
has_foreign_income | No | Whether foreign-source income is material to the profile.
prefers_predictability | No | Whether the applicant values a simpler, more predictable regime structure.
has_significant_foreign_assets | No | Whether foreign assets are materially relevant to planning.
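
Since all four parameters are optional, a hypothetical call can pass only the facts that are known. A sketch, with the same caveat as above (values are illustrative, not documented enum values):

```python
import json

# Hypothetical tools/call payload for compare_tax_regimes.
# All four parameters are optional; only two are supplied here.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "compare_tax_regimes",
        "arguments": {
            "has_foreign_income": True,
            "prefers_predictability": True,
            # employment_type and has_significant_foreign_assets omitted
        },
    },
}
print(json.dumps(request))
```
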

Output Schema (JSON Schema)

Name | Required | Description
caveats | Yes | Caveats that limit the comparison.
summary | Yes | Short explanation of the recommendation.
comparison | Yes | Topic-by-topic conceptual comparison.
references | Yes | Secondary Legal Fournier references.
next_actions | Yes | Next actions that sharpen the tax answer.
review_level | Yes | How much human review is still advisable before treating the result as filing-ready.
decision_trace | Yes | Structured trace of the main tax-comparison factors.
recommendation | Yes | Conceptual starting recommendation.
likely_fit_notes | Yes | Why the profile leans toward a given regime.
key_rules_applied | Yes | Stable rules applied by the tool.
related_resource_uris | Yes | Canonical MCP resources an agent can read next without leaving the server.
official_legal_sources | Yes | Official tax and mobility sources anchoring the comparison.
suggested_follow_up_tools | Yes | Tool calls that are likely to advance the analysis.
current_verification_flags | Yes | Live-verification warnings for entry-path, timing, or filing issues.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true and idempotentHint=true, confirming safe analytical behavior. The description adds valuable context beyond these annotations by disclosing the output format (conceptual reasoning vs. data tables) and mentioning 'canonical MCP resources,' which hints at tool relationships without contradicting the safety annotations.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficiently structured sentence that front-loads the core action ('Compare Beckham versus standard Spanish resident taxation conceptually') and immediately follows with output specifications. There is zero waste; every clause delivers necessary scoping information.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema and 100% input schema coverage, the description appropriately focuses on conceptual scope and output characterization rather than repeating structural details. It successfully conveys the analytical nature of the comparison and resource-linking behavior, though it could strengthen workflow context by mentioning the eligibility check prerequisite.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, all four parameters (employment_type, has_foreign_income, prefers_predictability, has_significant_foreign_assets) are already well-documented in the schema. The description does not add additional parameter-specific guidance or interaction logic, meeting the baseline expectation for high-coverage schemas.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a specific verb ('Compare'), clear resources ('Beckham versus standard Spanish resident taxation'), and scope ('conceptually'). It effectively distinguishes from sibling 'check_beckham_eligibility' by positioning this as a conceptual comparison tool rather than an eligibility validator.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear usage context by specifying what it returns ('reasoning, review level, and canonical MCP resources') and explicitly excluding rate tables ('instead of rate tables'), guiding the agent away from using this for numerical calculations. However, it does not explicitly state the workflow relationship with 'check_beckham_eligibility' (e.g., whether to check eligibility first).

explain_nie_process: Explain NIE Process (A)
Read-only · Idempotent

Return the stable NIE and TIE workflow, the key procedural distinctions, and the canonical process resource for agents.

Parameters (JSON Schema)

No parameters
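
Since the tool takes no input, a hypothetical call simply passes an empty arguments object:

```python
# Hypothetical tools/call payload for explain_nie_process.
# The tool declares no parameters, so "arguments" is an empty object.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {"name": "explain_nie_process", "arguments": {}},
}
```
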

Output Schema (JSON Schema)

Name | Required | Description
forms | Yes | Relevant forms and administrative references.
steps | Yes | Ordered process steps.
summary | Yes | Short overview of the NIE/TIE process.
references | Yes | Secondary Legal Fournier references.
next_actions | Yes | Next actions to progress the procedure.
review_level | Yes | How much human review is still advisable before treating the result as filing-ready.
decision_trace | Yes | Structured trace of the procedural distinctions that matter.
common_mistakes | Yes | Common mistakes in NIE/TIE processing.
key_distinctions | Yes | Key distinctions that agents should preserve.
key_rules_applied | Yes | Stable procedural rules applied by the tool.
related_resource_uris | Yes | Canonical MCP resources an agent can read next without leaving the server.
official_legal_sources | Yes | Official legal and administrative sources anchoring the procedure.
suggested_follow_up_tools | Yes | Tool calls that are likely to advance the analysis.
current_verification_flags | Yes | Live-verification warnings for office-level or fee-level volatility.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already establish read-only, idempotent, non-destructive behavior. The description adds value by specifying the content is 'stable' and 'canonical', implying reliability and official status, but does not disclose additional behavioral traits like caching policies or response freshness.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, information-dense sentence with no filler. Key qualifiers ('stable', 'canonical', 'for agents') each serve distinct purposes in setting expectations about content quality and intended audience.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has zero parameters and an output schema exists, the description appropriately summarizes the output content (workflow, distinctions, resource) without needing to elaborate on return values. A brief mention that this requires no filters or prerequisites would strengthen completeness.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters, which establishes a baseline score of 4. The description appropriately implies no user input is required by using the transitive verb 'Return' without referencing any parameters.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Return') and identifies the resource clearly ('NIE and TIE workflow', 'procedural distinctions', 'canonical process resource'). However, it does not explicitly differentiate from siblings like 'get_residency_path', which might also provide process information.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions the target audience ('for agents') but provides no explicit guidance on when to use this tool versus alternatives like 'get_residency_path' or 'route_to_legal_fournier_help'. There are no stated prerequisites or exclusion criteria.

get_residency_path: Get Residency Path (A)
Read-only · Idempotent

Explain the next permanent-residency or nationality milestone from current status and time in Spain, with explicit caution flags for counting issues.

Parameters (JSON Schema)

Name | Required | Description
current_status | Yes | Current Spanish immigration or nationality status.
years_in_spain | Yes | Years already spent in Spain under the relevant stay or residence history.
nationality_track | No | Optional nationality timeline group for a more specific nationality answer.
has_absence_concerns | No | Whether absences or continuity problems may weaken the residence or nationality clock.
special_nationality_basis | No | Optional basis for the one-year nationality track when that exception is being claimed.
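
A hypothetical call built from the table above might look like this; the current_status value is a placeholder, since the schema's enum values are not listed on this page.

```python
# Hypothetical tools/call payload for get_residency_path.
# Parameter names come from the schema above; values are placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {
        "name": "get_residency_path",
        "arguments": {
            "current_status": "work-residence-permit",  # required; placeholder value
            "years_in_spain": 4,                        # required
            "has_absence_concerns": True,               # optional
        },
    },
}
```
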

Output Schema (JSON Schema)

Name | Required | Description
summary | Yes | Short explanation of where the person sits on the path.
milestones | Yes | Key milestones on the path.
next_steps | Yes | Immediate next steps.
references | Yes | Secondary Legal Fournier references.
review_level | Yes | How much human review is still advisable before treating the result as filing-ready.
caution_notes | Yes | Important cautions.
decision_trace | Yes | Structured trace of the main timing factors.
key_rules_applied | Yes | Stable rules applied by the tool.
nationality_status | Yes | Nationality stage given the provided track information.
related_resource_uris | Yes | Canonical MCP resources an agent can read next without leaving the server.
official_legal_sources | Yes | Official legal sources anchoring the residence and nationality timeline analysis.
suggested_follow_up_tools | Yes | Tool calls that are likely to advance the analysis.
current_verification_flags | Yes | Live-verification warnings for route, continuity, or timing issues.
permanent_residency_status | Yes | Long-term residence stage.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations confirm read-only/idempotent safety, while the description adds valuable behavioral context that the tool provides 'explicit caution flags for counting issues.' This discloses a key output characteristic (warning generation for residence clock problems) beyond what annotations provide, though it doesn't mention rate limits or caching behavior.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is front-loaded with the action ('Explain'), specifies the inputs, and adds the unique value proposition ('explicit caution flags') without redundancy. Every clause earns its place in guiding tool selection.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema and comprehensive input schema with enums, the description appropriately focuses on purpose and behavioral traits rather than enumerating parameters or return values. It successfully signals the tool's complexity-handling (multiple nationality tracks) through the 'counting issues' mention, though it could briefly acknowledge the optional nationality track variations.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the parameters are well-documented in the schema itself. The description references the two required parameters ('current status and time') and alludes to the absence concern parameter ('counting issues'), but does not add syntax details or semantic clarifications beyond the schema definitions, warranting the baseline score.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Explain') and resources ('permanent-residency or nationality milestone') and scopes the function to users already in Spain ('from current status and time in Spain'). It clearly distinguishes itself from sibling tools like get_visa_options (entry) and explain_nie_process (identification documents) by focusing on status progression.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage prerequisites by referencing 'current status and time in Spain,' indicating when the tool is applicable. However, it lacks explicit guidance on when NOT to use it or direct comparisons to alternatives like route_to_legal_fournier_help for complex cases.

get_visa_options: Get Visa Options (A)
Read-only · Idempotent

Rank Spain residence routes using evergreen logic and return decision traces, next actions, and canonical MCP resources for the leading branches.

Parameters (JSON Schema)

Name | Required | Description
intent | Yes | Main relocation intent.
nationality | Yes | Applicant nationality as a country name or ISO-style country code.
income_source | Yes | Main source of income for the move.
employer_location | No | Where the main employer or client base is located, if known.
has_eu_family_link | No | Whether an EU family-member route may need separate review.
investment_profile | No | Whether the investment plan is passive or tied to an operating business.
has_spanish_job_offer | No | Whether the applicant already has a Spanish job offer.
eu_family_relationship | No | Optional relationship label when an EU-family route may be relevant.
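
With three required and five optional parameters, a hypothetical call can stay minimal. In this sketch the intent and income_source values are invented placeholders; only the nationality format (country name or ISO-style code) is documented above.

```python
# Hypothetical tools/call payload for get_visa_options.
# intent and income_source values are placeholders for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 5,
    "method": "tools/call",
    "params": {
        "name": "get_visa_options",
        "arguments": {
            "intent": "remote-work",                # required; placeholder
            "nationality": "US",                    # required; country name or ISO-style code
            "income_source": "foreign-employer",    # required; placeholder
            "has_spanish_job_offer": False,         # optional
        },
    },
}
```
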

Output Schema (JSON Schema)

Name | Required | Description
references | Yes | Secondary Legal Fournier references, demoted behind MCP-native context.
next_actions | Yes | Next actions to progress the analysis.
review_level | Yes | How much human review is still advisable before treating the result as filing-ready.
general_notes | Yes | General notes that apply across the route list.
ranked_routes | Yes | Ranked visa or residence routes.
decision_trace | Yes | Structured trace of the main route-selection factors.
profile_summary | Yes | One-line summary of the screened profile.
ruled_out_routes | Yes | Common routes ruled out by stable legal logic.
key_rules_applied | Yes | Stable rules that drove the recommendation.
related_resource_uris | Yes | Canonical MCP resources an agent can read next without leaving the server.
official_legal_sources | Yes | Official legal sources that anchor the recommendation.
suggested_follow_up_tools | Yes | Tool calls that are likely to advance the analysis.
current_verification_flags | Yes | Live-verification warnings for volatile or fact-sensitive points.
nationality_classification | Yes | High-level nationality bucket used by the route logic.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish read-only/idempotent safety, but the description adds valuable behavioral context: it discloses the ranking methodology ('evergreen logic'), explains the three-component output structure (traces, actions, canonical resources), and clarifies that it returns MCP resource references. It does not mention rate limits or auth requirements, but these may not apply.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, dense sentence front-loaded with the core action ('Rank Spain residence routes'). Every clause delivers distinct value: methodology ('evergreen logic'), output artifacts ('decision traces, next actions'), and integration pattern ('canonical MCP resources'). Zero redundancy or filler.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema, the description appropriately summarizes the return value structure without duplicating it. For a complex 8-parameter decision tool, it adequately covers the ranking behavior and output composition. Could optionally mention that it evaluates against multiple Spanish visa programs, but 'residence routes' implies sufficient breadth.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage across all 8 parameters (including enum values and types), the schema carries the semantic burden. The description does not add supplementary parameter guidance (e.g., specific nationality code formats), but this is unnecessary given the comprehensive schema documentation. Baseline score applies.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Rank') + resource ('Spain residence routes') and clearly distinguishes from sibling 'get_residency_path' by emphasizing comparative ranking rather than singular retrieval. It specifies the methodology ('evergreen logic') and deliverables ('decision traces, next actions, and canonical MCP resources'), making the scope distinct from specialized tools like 'check_beckham_eligibility'.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description implies usage scenarios through the mention of ranking logic and decision traces, it lacks explicit guidance on when to use this broad ranking tool versus siblings like 'get_residency_path' (singular path retrieval) or 'check_beckham_eligibility' (specific regime check). No 'when-not' or prerequisite guidance is provided.
