
Velvoite — EU Financial Regulatory Compliance

Server Details

EU financial regulatory monitoring: DORA, MiCA, MiFID II, AML, Solvency II and more.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Tool Definition Quality


Available Tools

16 tools
audit_taxonomy
Read-only
Audit the actor role taxonomy: compare model-defined roles vs deployed roles in the database.

Returns per-regulation analysis showing:
- model_only: roles the enrichment model can produce but aren't in the DB yet (gap)
- deployed_only: roles in the DB but not in the model (unexpected — data quality issue)
- role_counts: each deployed role with obligation count
- known_issues: overlaps, naming issues, investigation items

Use this for QA validation of the actor role taxonomy.
Requires admin API key.

No parameters needed — returns full corpus audit.
Parameters (JSON Schema)

No parameters

Output Schema (JSON Schema)

result (required)
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true; description adds critical behavioral context including authentication requirements ('Requires admin API key') and detailed output semantics (explaining that 'model_only' indicates gaps while 'deployed_only' indicates data quality issues). Does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Efficiently structured with purpose front-loaded, followed by detailed return value documentation using bullet points for scanability, then usage context and requirements. Every sentence provides unique information without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 0 parameters and an output schema exists, the description appropriately focuses on explaining the audit methodology, return value semantics, and access control rather than parameter mechanics. Covers all necessary context for a specialized QA tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 0 parameters. Description explicitly confirms 'No parameters needed — returns full corpus audit,' which meets the baseline expectation for parameterless tools and prevents confusion about missing arguments.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Opens with specific verb+resource: 'Audit the actor role taxonomy' and immediately clarifies the scope ('compare model-defined roles vs deployed roles'). Distinct from sibling 'get_actor_roles' by focusing on comparative QA analysis rather than simple retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: 'Use this for QA validation of the actor role taxonomy.' Includes prerequisite restriction ('Requires admin API key') acting as an implicit exclusion. Lacks explicit naming of alternatives (e.g., 'use get_actor_roles for standard retrieval'), but the admin requirement and QA focus provide clear contextual boundaries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_actor_roles
Read-only
Get available actor roles per regulation with obligation counts.

Actor roles define WHO an obligation applies to within a specific regulation
(e.g. 'credit_institution', 'significant_institution' for CRD/CRR, or
'ai_provider', 'high_risk_deployer' for AI Act).

Use this to discover which roles exist before filtering obligations with
get_obligations(actor_role=...).

Returns roles grouped by regulation, sorted by obligation count.
Each role includes a human-readable label and description.

Args:
    regulation: Filter to a specific regulation code (e.g. 'dora', 'ai_act'). If omitted, returns roles for all regulations.
Parameters (JSON Schema)

regulation (optional)

Output Schema (JSON Schema)

result (required)
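As a sketch of how an agent might invoke this tool, the request below uses the standard MCP `tools/call` JSON-RPC envelope; the exact client API depends on your MCP SDK, so treat the helper name as illustrative:

```python
import json

def make_tool_call(name, arguments, call_id=1):
    """Build a JSON-RPC 2.0 tools/call request for an MCP server."""
    return {
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Discover which roles exist for DORA before filtering obligations.
request = make_tool_call("get_actor_roles", {"regulation": "dora"})
print(json.dumps(request, indent=2))
```

Omitting the `regulation` argument would, per the docstring above, return roles for all regulations.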
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds valuable behavioral context beyond readOnlyHint annotation: specifies that returns are 'grouped by regulation, sorted by obligation count' and include 'human-readable label and description'. Minor gap regarding pagination or rate limiting prevents a 5.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Efficiently structured across four logical blocks: operation summary, domain concept explanation with examples, usage guidance, and parameter documentation. No redundant or wasted sentences.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema, the description adequately covers the necessary context: domain terminology, return structure (grouping/sorting), and filtering semantics. Sufficient for correct agent invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the Args section fully compensates by explaining that the regulation parameter filters results to a specific code, providing concrete examples ('dora', 'ai_act'), and documenting the default behavior when omitted.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Opens with specific verb-resource combination ('Get available actor roles per regulation') and includes scope detail ('with obligation counts'). Distinguishes from sibling get_obligations by defining actor roles conceptually and explaining their relationship to obligations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly prescribes workflow: 'Use this to discover which roles exist before filtering obligations with get_obligations(actor_role=...)'. Names the sibling tool directly and clarifies the sequencing of calls.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_canonical_obligations
Read-only
Get deduplicated canonical obligations with enforcement intelligence.

Returns one obligation per unique legal requirement per actor role.
Each includes compliance difficulty, guidance, and enforcement metrics.

Use this instead of get_obligations when you want a clean, deduplicated
view of what a regulated entity must comply with, enriched with
enforcement risk data.

Args:
    regulation: Filter by regulation code (e.g. 'dora', 'mica', 'aml').
    actor_role: Comma-separated actor roles (e.g. 'credit_institution,significant_institution').
    entity_type: Filter by entity type code (e.g. 'credit_institution').
    compliance_difficulty: Filter by difficulty: 'low', 'medium', 'high', 'critical'.
    min_enforcement_count: Only return obligations with at least this many enforcement actions.
    sort: Sort order. Options: 'enforcement_count_desc' (default), 'compliance_difficulty_desc', 'regulation', 'actor_role'.
    page: Page number (default 1).
    per_page: Results per page (default 20, max 100).
Parameters (JSON Schema)

page (optional)
sort (optional, default: enforcement_count_desc)
per_page (optional)
actor_role (optional)
regulation (optional)
entity_type (optional)
compliance_difficulty (optional)
min_enforcement_count (optional)

Output Schema (JSON Schema)

result (required)
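A minimal sketch of assembling arguments for this tool, enforcing the constraints the Args section documents (difficulty and sort enums, per_page max 100, comma-separated actor roles); the helper function itself is hypothetical:

```python
VALID_DIFFICULTY = {"low", "medium", "high", "critical"}
VALID_SORT = {"enforcement_count_desc", "compliance_difficulty_desc",
              "regulation", "actor_role"}

def canonical_obligation_args(regulation=None, actor_roles=None,
                              compliance_difficulty=None,
                              sort="enforcement_count_desc",
                              page=1, per_page=20):
    """Assemble get_canonical_obligations arguments, checking documented constraints."""
    if compliance_difficulty is not None and compliance_difficulty not in VALID_DIFFICULTY:
        raise ValueError(f"compliance_difficulty must be one of {sorted(VALID_DIFFICULTY)}")
    if sort not in VALID_SORT:
        raise ValueError(f"sort must be one of {sorted(VALID_SORT)}")
    if not 1 <= per_page <= 100:
        raise ValueError("per_page must be between 1 and 100")
    args = {"sort": sort, "page": page, "per_page": per_page}
    if regulation:
        args["regulation"] = regulation
    if actor_roles:
        args["actor_role"] = ",".join(actor_roles)  # comma-separated, per the Args docs
    if compliance_difficulty:
        args["compliance_difficulty"] = compliance_difficulty
    return args

args = canonical_obligation_args(
    regulation="dora",
    actor_roles=["credit_institution", "significant_institution"],
    compliance_difficulty="high",
)
print(args["actor_role"])  # credit_institution,significant_institution
```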
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true; description adds valuable behavioral context about deduplication logic ('Returns one obligation per unique legal requirement per actor role') and output enrichment ('compliance difficulty, guidance, and enforcement metrics'). Does not disclose rate limits or auth requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with purpose upfront, followed by behavioral details, explicit usage guidelines, and Args section. Every sentence serves a distinct function; no redundancy with schema titles.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has an output schema, the description appropriately hints at return contents without over-explaining. Covers the critical deduplication logic distinguishing it from siblings. Could explicitly mention total count availability or pagination limits beyond parameter defaults.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage (only titles), but the Args section fully compensates with detailed descriptions for all 8 parameters including format examples ('dora', 'mica', 'aml'), syntax notes ('Comma-separated'), and valid enum values for sort options.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with specific verb+resource ('Get deduplicated canonical obligations with enforcement intelligence') and immediately distinguishes from sibling get_obligations by emphasizing 'deduplicated' and 'enforcement intelligence' capabilities.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states 'Use this instead of get_obligations when...' providing clear sibling differentiation and when-to-use context ('clean, deduplicated view' vs presumably raw obligations).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_company_profile
Read-only
Get the calling company's regulatory posture — saved entity types,
actor roles per regulation, and active conditions.

The profile defines WHICH regulations and roles apply to this company.
Use the actor_roles to filter obligations with get_obligations(actor_role=...).

The profile is the baseline — you can extend beyond it using get_actor_roles()
to discover additional roles if your analysis suggests they may be relevant.

If the profile is empty (profile_complete=false), the company hasn't completed
onboarding yet. Guide them to set up their profile at app.velvoite.eu/account.

No parameters needed — the profile is determined by the API key.

Returns:
    company_name: Company name
    jurisdictions: Active jurisdictions (always includes 'eu')
    profile:
        entity_types: List of entity type codes (e.g. ['credit_institution'])
        actor_roles: Dict of regulation_code -> list of role codes
            (e.g. {'dora': ['financial_entity'], 'ai_act': ['ai_deployer']})
        conditions: Dict of regulation_code -> list of active conditions
            (e.g. {'dora': ['uses_ict_third_party'], 'ai_act': ['always']})
    profile_complete: Whether the company has selected at least one actor role
Parameters (JSON Schema)

No parameters

Output Schema (JSON Schema)

result (required)
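The workflow above (profile roles feeding get_obligations filters, with the empty-profile edge case) can be sketched as follows; the sample result dict is hypothetical data shaped per the Returns section:

```python
# Hypothetical get_company_profile result, shaped per the Returns section above.
profile_result = {
    "company_name": "Example Bank",
    "jurisdictions": ["eu"],
    "profile": {
        "entity_types": ["credit_institution"],
        "actor_roles": {"dora": ["financial_entity"], "ai_act": ["ai_deployer"]},
        "conditions": {"dora": ["uses_ict_third_party"], "ai_act": ["always"]},
    },
    "profile_complete": True,
}

def obligations_filter_for(result, regulation):
    """Turn profile actor_roles into get_obligations(actor_role=...) arguments."""
    if not result["profile_complete"]:
        raise RuntimeError("Profile incomplete: finish onboarding at app.velvoite.eu/account")
    roles = result["profile"]["actor_roles"].get(regulation, [])
    return {"regulation": regulation, "actor_role": ",".join(roles)}

print(obligations_filter_for(profile_result, "dora"))
# {'regulation': 'dora', 'actor_role': 'financial_entity'}
```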
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations declare readOnlyHint, the description adds critical behavioral context: the profile is determined by API key (not parameters), explains the onboarding state edge case, and defines profile_complete semantics. Could mention caching or error states for a 5.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Efficiently structured with purpose first, then usage patterns, edge case handling, and return value documentation. No redundant text; every sentence provides actionable guidance or structural clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite output schema existing, description provides comprehensive return value documentation with examples (e.g., {'dora': ['financial_entity']}) and semantic notes ('always includes eu'). Thoroughly covers the nested profile structure and completion status.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters present (baseline 4). Description adds value by explaining why no parameters are needed ('determined by the API key'), which clarifies scoping behavior beyond the empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Opens with specific verb 'Get' and clear resource 'calling company's regulatory posture'. Distinguishes from siblings by explicitly referencing get_obligations and get_actor_roles with specific usage patterns ('Use the actor_roles to filter...', 'extend beyond it using get_actor_roles()').

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit workflow guidance: use actor_roles to filter obligations, use get_actor_roles() to discover additional roles beyond the profile. Includes clear conditional logic for empty profiles (profile_complete=false) with specific remediation action (guide to app.velvoite.eu/account).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_deadlines
Read-only
Get upcoming regulatory deadlines for compliance calendar tracking.

Returns obligations with deadlines in chronological order, annotated
with days remaining or days overdue. Essential for compliance planning.

Args:
    entity_type: Filter by entity type code (e.g. 'credit_institution').
    regulation: Filter by regulation code (e.g. 'dora').
    days_ahead: How many days ahead to look (default 90, max 730).
    include_overdue: Include past-due obligations (default true).
Parameters (JSON Schema)

days_ahead (optional)
regulation (optional)
entity_type (optional)
include_overdue (optional)

Output Schema (JSON Schema)

result (required)
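The "days remaining or days overdue" annotation described above amounts to simple date arithmetic; a sketch (the function name and return shape are assumptions, not the tool's actual output format):

```python
from datetime import date

def annotate_deadline(deadline_iso, today=None):
    """Compute a days-remaining / days-overdue annotation for an ISO deadline."""
    today = today or date.today()
    delta = (date.fromisoformat(deadline_iso) - today).days
    return {"days_remaining": delta} if delta >= 0 else {"days_overdue": -delta}

print(annotate_deadline("2025-03-31", today=date(2025, 3, 1)))  # {'days_remaining': 30}
print(annotate_deadline("2025-01-15", today=date(2025, 3, 1)))  # {'days_overdue': 45}
```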
get_document
Read-only
Get full details of a specific regulatory document by its ID.

Returns the document metadata, AI summary, all classification tags,
inline obligations (up to 50 with total count), and a link to the original source.
Use the URL to access the full text on the official regulatory website (EUR-Lex, EBA, ESMA, FIN-FSA).
Get the document_id from search_regulations or list_documents results.

Args:
    document_id: The Velvoite document ID (integer from search/list results).
Parameters (JSON Schema)

document_id (required)

Output Schema (JSON Schema)

result (required)
get_enforcement_decisions
Read-only
Get enforcement decisions with structured penalty data.

Returns enforcement actions (fines, warnings, license withdrawals) imposed
by regulators. Each action includes penalty amount, sanctioned entity,
violation categories, and appeal status.

Use this to answer questions like:
- "What fines has FIN-FSA given to credit institutions?"
- "What are the largest penalties for AML violations?"
- "Has anyone been fined for ICT risk management failures?"
- "What's the total penalty exposure for my entity type?"

Combine with get_company_profile to find enforcement actions relevant
to the caller's entity type and regulations.

Args:
    regulation: Filter by regulation code (e.g. 'aml', 'dora', 'mifid2', 'gdpr', 'crd_crr').
    entity_type: Filter by sanctioned entity type (e.g. 'credit_institution', 'investment_firm', 'crypto_service').
    authority: Filter by sanction authority (e.g. 'FIN-FSA', 'ECB', 'Data Protection Ombudsman').
    penalty_min: Minimum penalty amount in EUR (e.g. 1000000 for fines >= EUR 1M).
    violation_category: Filter by violation type (e.g. 'aml_cdd', 'ict_risk', 'sca', 'governance', 'conduct').
    page: Page number (default 1).
    per_page: Results per page (default 20, max 100).
Parameters (JSON Schema)

page (optional)
per_page (optional)
authority (optional)
regulation (optional)
entity_type (optional)
penalty_min (optional)
violation_category (optional)

Output Schema (JSON Schema)

result (required)
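Questions like "what's the total penalty exposure per violation category" reduce to an aggregation over returned actions; a sketch over hypothetical results (the field names `penalty_eur` and `violation_category` are assumptions about the output shape):

```python
# Hypothetical page of get_enforcement_decisions results (field names assumed).
decisions = [
    {"entity": "Bank A", "penalty_eur": 2_000_000, "violation_category": "aml_cdd"},
    {"entity": "Firm B", "penalty_eur": 350_000, "violation_category": "ict_risk"},
    {"entity": "Bank C", "penalty_eur": 5_100_000, "violation_category": "aml_cdd"},
]

def total_by_category(items):
    """Sum penalty amounts per violation category."""
    totals = {}
    for d in items:
        totals[d["violation_category"]] = totals.get(d["violation_category"], 0) + d["penalty_eur"]
    return totals

print(total_by_category(decisions))  # {'aml_cdd': 7100000, 'ict_risk': 350000}
```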
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true, confirming safe read operations. The description adds valuable behavioral context about return content (penalty amount, sanctioned entity, violation categories, appeal status) that annotations don't cover. Does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with logical flow: purpose → return data → usage examples → cross-tool reference → parameters. Front-loaded with clear action statement. Minor verbosity in four example questions, but they serve legitimate guideline purposes.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the output schema exists (not shown but indicated in context), the description appropriately focuses on query capabilities and data highlights rather than return structure. Coverage is complete for a 7-parameter filtering tool with optional cross-tool integration.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage (only titles present), the Args section comprehensively compensates by documenting all 7 parameters with rich examples (e.g., regulation codes like 'aml', 'dora', 'mifid2'; entity types like 'credit_institution'). Essential semantics for filtering are fully provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with 'Get enforcement decisions with structured penalty data' and specifies the exact resource (enforcement actions including fines, warnings, license withdrawals). It distinguishes from sibling get_enforcement_intelligence by focusing on structured penalty data and specific decision attributes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit usage patterns through four concrete example questions (e.g., 'What fines has FIN-FSA given to credit institutions?'). Explicitly names cross-tool workflow: 'Combine with get_company_profile to find enforcement actions relevant to the caller's entity type and regulations.'

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_enforcement_intelligence
Read-only
Get top enforced canonical obligations. Returns obligations ranked by
enforcement activity for risk prioritization.

This is a focused view of canonical obligations filtered to only those
with at least one enforcement action. Use this to identify which
obligations regulators are actively enforcing.

Args:
    regulation: Filter by regulation code (e.g. 'dora', 'mica', 'aml').
    actor_role: Comma-separated actor roles (e.g. 'credit_institution,significant_institution').
    entity_type: Filter by entity type code (e.g. 'credit_institution').
    compliance_difficulty: Filter by difficulty: 'low', 'medium', 'high', 'critical'.
    min_enforcement_count: Minimum enforcement actions (default 1 — only enforced obligations).
    sort: Sort order (default 'enforcement_count_desc').
    page: Page number (default 1).
    per_page: Results per page (default 20, max 100).
Parameters (JSON Schema)

page (optional)
sort (optional, default: enforcement_count_desc)
per_page (optional)
actor_role (optional)
regulation (optional)
entity_type (optional)
compliance_difficulty (optional)
min_enforcement_count (optional)

Output Schema (JSON Schema)

result (required)
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare read-only safety, allowing the description to focus on domain logic. Adds valuable context about ranking methodology ('ranked by enforcement activity'), default filtering behavior ('only enforced obligations'), and pagination constraints ('max 100').

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Efficiently structured with purpose statement first, differentiation second, use case third, followed by comprehensive Args documentation. No redundant text; every sentence provides actionable information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema, the description appropriately focuses on input parameters and filtering logic. Covers all 8 optional parameters with business context (e.g., explaining that min_enforcement_count defaults to 1 to enforce the 'enforced only' constraint).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by documenting all 8 parameters with examples (e.g., 'dora', 'mica', 'aml' for regulation) and valid value ranges ('low', 'medium', 'high', 'critical' for compliance_difficulty). Essential for agent to construct valid calls.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action ('Get top enforced canonical obligations'), resource type, and business purpose ('risk prioritization'). Clearly distinguishes from sibling 'get_canonical_obligations' by noting this is a 'focused view' filtered to obligations 'with at least one enforcement action.'

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context for when to use ('identify which obligations regulators are actively enforcing') and implicitly distinguishes from the broader 'get_canonical_obligations' by emphasizing the enforcement filter. Lacks explicit 'when not to use' or named alternative tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_feedback
Read-only

Get user feedback for QA review. Requires admin API key.

Filters: status (new/reviewed/resolved/dismissed), feedback_type (data_quality/bug/feature_request/other),
context_type (document/obligation/general).
Parameters (JSON Schema)

limit (optional)
status (optional)
context_type (optional)
feedback_type (optional)

Output Schema (JSON Schema)

result (required)
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds critical auth requirement not in annotations. Enumerates valid filter values (new/reviewed/resolved/dismissed, etc.) that constrain input behavior, supplementing the annotations (readOnlyHint/openWorldHint) with domain-specific constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient two-line structure: first line establishes purpose and auth, second line lists filters with values. No redundant text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema and safety annotations, the description adequately covers the essentials: purpose, authentication, and filter semantics. Only minor gap is omission of 'limit' parameter description.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description compensates well by documenting three of four parameters (status, feedback_type, context_type) including their valid enum values. Only omits the 'limit' parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action ('Get') and resource ('user feedback') with context ('for QA review'). Clearly distinct from sibling retrieval tools like get_document or get_obligations, though it could explicitly contrast with them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides authentication prerequisite ('Requires admin API key'), but lacks explicit guidance on when to use this versus other retrieval tools or what triggers a need for QA review.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_obligations
Read-only
Get regulatory obligations: specific requirements extracted from regulations.

Each obligation includes the requirement text, applicable article reference,
deadline, which entity types it applies to, actor roles, and current status.
Results are paginated (max 50 per page).

Supports keyword search via the query parameter (trigram + ILIKE matching on obligation text).
Combine with regulation, entity_type, and actor_role filters for precise results.

Set canonical=True to get deduplicated canonical obligations with enforcement
intelligence instead. Canonical obligations return one entry per unique legal
requirement per actor role, with compliance difficulty and enforcement metrics.

Use get_actor_roles first to discover available actor roles per regulation.

Args:
    entity_type: Filter by entity type code (e.g. 'credit_institution', 'payment_institution').
    regulation: Filter by regulation code (e.g. 'dora', 'mica', 'aml').
    status: Filter by status: 'upcoming', 'active', 'overdue', or 'expired'.
    query: Keyword search on obligation text (e.g. 'ICT risk', 'strong customer authentication').
    actor_role: Comma-separated actor roles to filter by (e.g. 'credit_institution,significant_institution'). Use get_actor_roles to see available roles.
    canonical: If True, return deduplicated canonical obligations with enforcement intelligence instead of raw obligations.
    page: Page number (default 1).
    per_page: Results per page (default 20, max 50).
Parameters (JSON Schema): page, query, status, per_page, canonical, actor_role, regulation, entity_type (all optional).

Output Schema: result (required).
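As an illustration, an MCP client invokes this tool through a JSON-RPC `tools/call` request. The sketch below builds such a payload in Python; the argument names come from the Args list above, while the request id and transport wiring are assumptions:

```python
import json

# Hypothetical JSON-RPC 2.0 payload for calling get_obligations over MCP.
# Argument names mirror the documented Args; values are example filters.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_obligations",
        "arguments": {
            "regulation": "dora",
            "entity_type": "credit_institution",
            "query": "ICT risk",
            "canonical": True,   # deduplicated canonical obligations
            "per_page": 50,      # documented maximum page size
        },
    },
}
print(json.dumps(request, indent=2))
```

The actual transport (Streamable HTTP here) is handled by the MCP client library; only the `arguments` object is specific to this tool.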
get_obligation_summary
Read-only
Get obligation counts grouped by regulation_code.

Returns total count and per-regulation breakdown with status counts
(active, upcoming, overdue, expired) plus verified and with_deadline counts.
No full obligation text — just counts for a quick overview.

Args:
    entity_type: Filter to obligations applying to this entity type (e.g. 'credit_institution', 'payment_institution').
    actor_role: Comma-separated actor roles to filter by (e.g. 'financial_entity,credit_institution').
        Use get_company_profile to see the company's roles, or get_actor_roles to browse all available roles.
Parameters (JSON Schema): actor_role, entity_type (all optional).

Output Schema: result (required).
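Note that actor_role takes a comma-separated string rather than a list. A minimal sketch of assembling the arguments, using the role names from the example above:

```python
# Join multiple actor roles into the comma-separated form the tool expects.
roles = ["financial_entity", "credit_institution"]
arguments = {
    "entity_type": "credit_institution",
    "actor_role": ",".join(roles),  # "financial_entity,credit_institution"
}
```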
get_recent_changes
Read-only
Get recently published or updated regulatory documents.

Shortcut for 'what is new this week' - returns documents from the last N days,
sorted by publication date (newest first). Useful for weekly regulatory briefings.

Args:
    days: Look back N days (default 7).
    entity_type: Filter by entity type code.
    regulation: Filter by regulation family code.
    urgency_max: Maximum urgency level (1=critical, 2=high, 3=medium, 4=low, 5=informational). E.g. 2 returns only critical and high urgency items.
Parameters (JSON Schema): days, regulation, entity_type, urgency_max (all optional).

Output Schema: result (required).
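For a weekly briefing the default lookback already covers the last seven days; a hedged sketch of tightening the call to critical and high urgency items only (the helper name is hypothetical, the argument names come from the Args list):

```python
def briefing_arguments(days: int = 7, urgency_max: int = 2) -> dict:
    """Build get_recent_changes arguments for a weekly briefing.

    urgency_max=2 keeps only critical (1) and high (2) urgency items.
    """
    return {"days": days, "urgency_max": urgency_max}

args = briefing_arguments()
```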
get_stats
Read-only

Get an overview of the Velvoite regulatory corpus.

Returns document counts by source, regulation family, entity type, urgency distribution, obligation summary, and date range.

Call this FIRST to orient yourself before running queries. No parameters needed.

Parameters (JSON Schema): none.

Output Schema: result (required).
get_verification_stats
Read-only

Get verification progress for obligations across all regulations.

Returns total, verified, unverified counts overall and per regulation, with percentage verified. Use this to track human review progress. No parameters needed.

Parameters (JSON Schema): none.

Output Schema: result (required).
list_documents
Read-only
Browse regulatory documents with filters and pagination.

Returns a paginated list of documents with summaries, tags, doc_purpose
(regulation_text, enforcement, reference, irrelevant), and doc_jurisdictions
(e.g. ['eu'], ['fi'], ['de']).
Use this for filtered browsing (e.g. all DORA documents from the last 30 days).
Use search_regulations instead when you have specific keywords to search for.

Args:
    source: Filter by data source code: eur_lex, eba, esma, eiopa, finfsa, bafin.
    regulation: Filter by regulation family code: dora, mica, aml, mifid2, crd_crr, psd, csrd, sfdr, ai_act, emir, solvency, idd, gdpr.
    entity_type: Filter by entity type: credit_institution, payment_institution, e_money, investment_firm, fund_manager, aifm, insurance, pension, crypto_service, crowdfunding, credit_servicer.
    urgency_max: Max urgency level (1=critical, 2=high, 3=medium, 4=low, 5=informational). E.g. 2 returns only critical and high urgency items.
    days: Only return documents from the last N days (1-365).
    page: Page number (default 1).
    per_page: Results per page (default 20, max 100).
Parameters (JSON Schema): days, page, source, per_page, regulation, entity_type, urgency_max (all optional).

Output Schema: result (required).
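Because results are paginated (max 100 per page), a client typically loops until an empty page comes back. The sketch below stubs the tools/call round-trip with a local function; fetch_page is a stand-in for the real call, not part of the server:

```python
def fetch_page(page: int) -> list:
    # Stand-in for calling list_documents with page=<page>, per_page=100.
    canned = {1: ["doc_a", "doc_b"], 2: ["doc_c"]}
    return canned.get(page, [])

documents = []
page = 1
while True:
    batch = fetch_page(page)
    if not batch:  # an empty page means we are past the last one
        break
    documents.extend(batch)
    page += 1
```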
search_regulations
Read-only
Search the regulatory corpus using keyword / trigram matching.

Uses PostgreSQL trigram similarity on document titles and summaries.
Returns documents ranked by relevance with summaries and classification tags.

Prefer list_documents with filters (regulation, entity_type, source) first.
Only use this for free-text keyword search when structured filters aren't sufficient.

Args:
    query: Search terms (e.g. 'strong customer authentication', 'ICT risk', 'AML reporting').
    per_page: Number of results (default 20, max 100).
Parameters (JSON Schema): query (required); per_page (optional).

Output Schema: result (required).
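The guidance above (prefer list_documents with structured filters, fall back to keyword search) can be sketched as a small dispatch rule; the function is illustrative, not part of the server:

```python
def choose_tool(filters: dict, keywords: list) -> tuple:
    # Prefer structured filtering via list_documents; only fall back to
    # search_regulations when no structured filter applies.
    if filters:
        return ("list_documents", filters)
    return ("search_regulations", {"query": " ".join(keywords)})

tool, args = choose_tool({}, ["strong", "customer", "authentication"])
```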
