Skip to main content
Glama

nexus-mcp — jpi-guard & PII Guard

Ownership verified

Server Details

Two LLM security APIs for Japanese applications. (1) jpi-guard — Prompt Injection Detection Detects and blocks prompt injection attacks before they reach your LLM. Specialized for Japanese: full-width character bypass (全角バイパス), polite-language disguise (丁寧語擬装), indirect injection, Base64 obfuscation. (2) PII Guard — Japanese PII Detection & Masking Scans text for 10 PII categories.

Status
Healthy
Last Tested
Transport
Streamable HTTP
URL

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client
Glama
MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool DescriptionsA

Average 4.3/5 across 6 of 6 tools scored.

Server CoherenceA
Disambiguation4/5

Most tools have distinct purposes, but there is some overlap between check_injection, sanitize_content, and validate_rag_input, which all involve injection detection and content safety. The descriptions help differentiate them, but an agent might need to carefully choose between these three for similar tasks.

Naming Consistency5/5

All tool names follow a consistent snake_case pattern with clear verb_noun structures, such as check_injection, get_pii_guard_key, pii_scan, sanitize_content, and validate_rag_input. This uniformity makes the set predictable and easy to navigate.

Tool Count5/5

With 6 tools, the server is well-scoped for its purpose of security and PII handling in Japanese applications. Each tool serves a specific function, such as detection, scanning, sanitization, and key management, without unnecessary redundancy or gaps.

Completeness4/5

The tool set covers key areas like injection detection, PII scanning, content sanitization, and API key management, with minor gaps such as lacking tools for updating or revoking keys. However, the core workflows for securing LLM inputs in Japanese contexts are well-supported.

Available Tools

6 tools
check_injectionAInspect

Detect prompt injection attacks in user-supplied text before passing to an LLM. Specialized for Japanese RAG applications. Detects: role impersonation, full-width character bypass (全角バイパス), polite-language disguise (丁寧語擬装), indirect injection, Base64 obfuscation. Returns is_injection, risk_level, and detection_reason.

ParametersJSON Schema
NameRequiredDescriptionDefault
inputYesThe user-supplied text to check for prompt injection (max 10,000 chars).
api_keyYesYour jpi-guard API key. Get a free trial at https://jpi-guard.nexus-api-lab.workers.dev/
languageNoPrimary language. Use 'ja' for Japanese-language LLM apps. Default: auto.
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes what the tool detects and what it returns, but does not cover important behavioral aspects like rate limits, authentication requirements beyond the api_key parameter, error handling, or performance characteristics. The disclosure is adequate but lacks depth for a security-focused tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states purpose and specialization, the second lists detection capabilities and return values. Every element earns its place with no wasted words, and key information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a security detection tool with 3 parameters, no annotations, and no output schema, the description provides adequate purpose and detection scope but lacks important context. It doesn't explain the return structure in detail (what risk_level values mean, format of detection_reason), doesn't mention error cases or limitations, and doesn't provide integration guidance beyond the basic use case.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents all three parameters. The description does not add any meaningful parameter semantics beyond what's in the schema - it mentions the input parameter indirectly but doesn't provide additional context about parameter usage, constraints, or interactions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('detect prompt injection attacks') and resources ('user-supplied text'), explicitly listing the types of attacks it detects. It distinguishes from siblings by specifying specialization for Japanese RAG applications, unlike generic scanning tools like pii_scan or sanitize_content.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('before passing to an LLM' and 'for Japanese RAG applications'), but does not explicitly state when not to use it or name specific alternatives among the sibling tools. It implies usage scenarios without detailed exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_pii_guard_keyAInspect

Register an email address and get a free PII Guard API key. Returns a key with 10,000 requests/month (permanent free tier, no credit card required). The key can be used immediately with pii_scan. If the email already has a key, the existing key is returned (new_key: false).

ParametersJSON Schema
NameRequiredDescriptionDefault
emailYesEmail address to register. Used only for key delivery and quota management.
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well: it discloses key behavioral traits like the free tier quota (10,000 requests/month), permanence, no credit card requirement, immediate usability with pii_scan, and idempotent behavior (returns existing key if email already registered). It lacks details on rate limits or error handling, but covers core operational context adequately.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by key details (quota, terms, behavior). Every sentence earns its place: first states action and outcome, second specifies quota and terms, third links to usage, fourth covers idempotency. No wasted words, appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description does well to cover purpose, usage, behavior, and parameters. It explains the return scenario (existing key detection) but doesn't detail output format or error cases. For a single-parameter tool with straightforward operation, it's nearly complete but could mention response structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so baseline is 3. The description adds value by explaining the parameter's purpose beyond the schema: 'Used only for key delivery and quota management' in the schema is expanded with context about registration, existing key checks, and quota allocation, providing richer semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Register an email address and get a free PII Guard API key') and distinguishes it from siblings like 'get_trial_key' (free vs trial) and 'pii_scan' (key acquisition vs usage). It explicitly mentions the resource (API key) and its purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: for registering an email to obtain a free API key with specific quota (10,000 requests/month). It distinguishes from 'get_trial_key' by specifying 'permanent free tier, no credit card required' and mentions the alternative 'pii_scan' for key usage. It also covers the 'when-not' scenario: if email already has a key, it returns the existing one.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_trial_keyAInspect

Get a free trial API key for the nexus-api-lab cleanse API. Returns a key with 2,000 requests, valid for 30 days. No credit card or signup required. The key can be used immediately with sanitize_content.

ParametersJSON Schema
NameRequiredDescriptionDefault

No parameters

Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: the tool returns a key with specific limits (2,000 requests, 30-day validity), no authentication requirements ('No credit card or signup required'), and immediate usability ('The key can be used immediately with sanitize_content'). It doesn't mention rate limits or error conditions, but covers the essential operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with three sentences that each add distinct value: purpose statement, key specifications, and usage context. There's zero wasted language, and information is front-loaded with the core purpose immediately clear.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter tool with no annotations and no output schema, the description provides strong context about what the tool does, the key's specifications, and how to use it. It could be more complete by explicitly describing the return format (though 'Returns a key' implies a string) and mentioning any limitations beyond the 2,000 requests, but it covers the essential information needed to use this trial key acquisition tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters with 100% schema coverage, so the baseline is 4. The description appropriately doesn't discuss parameters since none exist, and instead focuses on the tool's purpose and output characteristics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get a free trial API key') and identifies the target resource ('for the nexus-api-lab cleanse API'). It distinguishes this tool from siblings like 'get_pii_guard_key' by specifying the particular API and trial nature.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool: when needing a trial key for the cleanse API with 2,000 requests valid for 30 days. It mentions 'No credit card or signup required' as a usage condition. However, it doesn't explicitly state when NOT to use it or name alternatives like 'get_pii_guard_key' for different API keys.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pii_scanAInspect

Scan Japanese text for personally identifiable information (PII) and return findings with masked output. Runs on regex + checksum validation + keyword proximity scoring only — no LLM involved, fully deterministic. Detects 10 categories: My Number / マイナンバー (mod-11 checksum), credit card (Luhn-validated), bank account, passport, phone, email, postal address, date of birth, driver's license, and person name. Full-width character normalization included. Returns findings[] with type/score/position, has_high_risk flag for high-severity categories, and masked_text with [NAME][PHONE][CARD] placeholders ready for downstream LLM pipelines. Free — 10,000 requests/month.

ParametersJSON Schema
NameRequiredDescriptionDefault
maskNoIf true, return masked_text with PII replaced by [TYPE] placeholders. Default: true.
textYesThe Japanese text to scan for PII (max 100 KB).
api_keyYesYour PII Guard API key (format: pii_<32hex>). Get a free key with get_pii_guard_key or at https://www.nexus-api-lab.com/pii-guard.html
Behavior5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and excels by disclosing key behavioral traits: it specifies the deterministic methods used (regex + checksum + keyword scoring, no LLM), includes full-width character normalization, details the 10 PII categories with validation rules (e.g., My Number mod-11, credit card Luhn), describes the output structure (findings[], has_high_risk, masked_text), and mentions rate limits (10,000 requests/month) and cost (free). This covers safety, functionality, and operational constraints comprehensively.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the core purpose and key features. Every sentence adds value, such as detailing methods, categories, and output. It could be slightly more structured (e.g., bullet points for categories), but it avoids waste and remains efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (PII scanning with multiple categories and methods), no annotations, and no output schema, the description is highly complete. It explains the detection logic, categories, output format (findings[], has_high_risk, masked_text), and operational details (rate limits, cost). This compensates well for the lack of structured fields, making it sufficient for an agent to understand and use the tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents all parameters (text, api_key, mask) thoroughly. The description does not add meaning beyond the schema, such as explaining how 'mask' interacts with the output or providing examples. Baseline 3 is appropriate as the schema handles parameter documentation adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Scan Japanese text for personally identifiable information') and resource ('PII'), distinguishing it from siblings like 'sanitize_content' or 'validate_rag_input' by focusing on detection rather than modification or validation. It explicitly lists the 10 categories detected, making the purpose highly specific and unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: for scanning Japanese text for PII using deterministic methods (regex, checksum, etc.), with a free tier of 10,000 requests/month. However, it does not explicitly state when not to use it or name alternatives among siblings (e.g., 'sanitize_content' might be for cleaning after scanning), leaving some guidance implicit rather than explicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sanitize_contentAInspect

Sanitize external content before passing it to an LLM. Detects and removes prompt injection payloads: hidden HTML instructions, zero-width character attacks, fullwidth Unicode bypasses, semantic overrides ("Ignore all previous instructions"), and encoding evasion. Specialized for Japanese-language content. Returns cleaned_content safe to pass to the model.

ParametersJSON Schema
NameRequiredDescriptionDefault
api_keyYesYour nexus-api-lab API key (Bearer token). Get a free trial key at https://www.nexus-api-lab.com
contentYesThe external text content to sanitize (max 512 KB).
languageNoPrimary language of the content. Use 'ja' for Japanese-language LLM apps. Default: auto.
on_timeoutNoBehavior on timeout. fail_open: return original content unmodified (availability-first). fail_close: return error (security-first). Default: fail_open.
source_urlNoURL where content was fetched from (optional, used for audit trail).
strictnessNoDetection sensitivity. low=regex only (fastest), medium=regex+semantic, high=all stages (most thorough). Default: medium.
content_typeNoFormat of the input content. Default: plaintext.
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by disclosing key behaviors: what gets detected (prompt injection payloads like hidden HTML, zero-width characters), the specialized focus (Japanese-language content), and what it returns (cleaned_content safe for the model). It doesn't mention rate limits, auth details beyond the API key parameter, or error handling beyond timeout behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states purpose and detection scope, the second adds specialization and return value. Every sentence adds critical information with zero waste, making it easy to parse and front-loaded with key details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex security tool with 7 parameters and no annotations or output schema, the description provides good context on behavior and purpose. However, it could better address error cases beyond timeout, explain the 'cleaned_content' output format, or detail performance characteristics. It's mostly complete but has minor gaps given the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents all 7 parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema (e.g., it doesn't explain 'strictness' levels or 'on_timeout' choices further). Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('sanitize external content'), the resource ('content'), and the purpose ('before passing it to an LLM'). It distinguishes from siblings by specifying it's for prompt injection detection and specialized for Japanese-language content, unlike tools like 'pii_scan' or 'validate_rag_input'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('before passing external content to an LLM', 'specialized for Japanese-language content'), but doesn't explicitly mention when not to use it or directly compare to alternatives like 'check_injection' or 'validate_rag_input'. The guidance is helpful but lacks explicit exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

validate_rag_inputAInspect

Validate user input is safe before sending to your RAG pipeline. Combines prompt injection detection and content safety check. Returns safe: true if input can proceed to LLM, or safe: false with block_reason if injection detected. Use this as a gate in your RAG handler.

ParametersJSON Schema
NameRequiredDescriptionDefault
inputYesThe user query to validate before passing to the RAG/LLM pipeline.
api_keyYesYour jpi-guard API key. Get a free trial at https://jpi-guard.nexus-api-lab.workers.dev/
fail_openNoIf true, return safe=true on API timeout (availability-first). Default: false (security-first).
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well: it explains the return behavior ('Returns safe: true if input can proceed to LLM, or safe: false with block_reason'), mentions it's a validation gate, and implies it's a security tool. It doesn't mention rate limits, auth requirements beyond the api_key parameter, or error handling details, keeping it from a perfect score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: first states purpose, second explains return behavior, third provides usage guidance. Each sentence earns its place by adding distinct value, and the description is appropriately sized for a validation tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a validation tool with 3 parameters, 100% schema coverage, and no output schema, the description is quite complete: it explains purpose, behavior, and usage context. The main gap is lack of output format details (what block_reason contains, response structure), but given the schema coverage and clear behavioral description, it's mostly adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description doesn't add any parameter-specific information beyond what's in the schema (e.g., doesn't explain input format expectations or api_key usage details), but it does provide context about the overall validation purpose that helps understand parameter roles.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('validate user input is safe') and resources ('RAG pipeline'), and distinguishes it from siblings by mentioning 'prompt injection detection and content safety check' - unlike check_injection (only injection) or pii_scan/sanitize_content (different safety aspects).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit guidance is provided: 'Use this as a gate in your RAG handler' tells when to use it, and the description distinguishes it from alternatives by mentioning it combines multiple safety checks (injection + content safety) rather than just one aspect like check_injection or pii_scan.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

Discussions

No comments yet. Be the first to start the discussion!

Try in Browser

Your Connectors

Sign in to create a connector for this server.

Resources