NLP Tools - Toxicity, Sentiment, NER, PII, Language Detection

Server Details

NLP tools: toxicity, sentiment, NER, PII detection, language ID. CPU-optimized ONNX.

Status: Healthy
Transport: Streamable HTTP
Repository: fasuizu-br/speech-ai-examples
GitHub Stars: 0

Available Tools

6 tools
analyze_sentiment

Analyze text sentiment.

Returns positive/negative classification with confidence scores. DistilBERT-based with sub-10ms latency. Multiple domain-specific model variants available.

Args:
- text: Text to analyze for sentiment (positive/negative).
- model: Model variant: 'general' (default), 'financial', 'twitter'.

Returns: dict with keys:
- label (str): 'positive' or 'negative'
- score (float 0-1): Confidence score for the predicted label
- scores (dict): All label scores (positive, negative)

Parameters
- text (required): Text to analyze for sentiment (positive/negative)
- model (optional, default: general): Model variant: 'general', 'financial', 'twitter'
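A caller might consume the documented return shape like this; the sample `response` dict below is hypothetical, not actual server output, and the `min_confidence` floor is an illustrative choice, not a documented default:

```python
# Hypothetical response matching the documented analyze_sentiment shape.
response = {
    "label": "positive",
    "score": 0.97,
    "scores": {"positive": 0.97, "negative": 0.03},
}

def dominant_label(result: dict, min_confidence: float = 0.6):
    """Return the predicted label, or None when confidence is below the floor."""
    return result["label"] if result["score"] >= min_confidence else None
```

Gating on `score` rather than trusting `label` alone lets low-confidence predictions fall through to a human or a stronger model.
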
analyze_toxicity

Analyze text for toxic content.

Returns scores for 6 categories: toxic, severe_toxic, obscene, threat, insult, identity_hate. Each score is 0.0-1.0. BERT-based classifier with sub-15ms latency on GPU.

Args:
- text: Text to analyze for toxicity (hate speech, insults, threats).

Returns: dict with keys:
- toxic (float 0-1): Overall toxicity score
- severe_toxic (float 0-1): Severe toxicity score
- obscene (float 0-1): Obscenity score
- threat (float 0-1): Threat score
- insult (float 0-1): Insult score
- identity_hate (float 0-1): Identity-based hate score
- is_toxic (bool): Whether the text exceeds the toxicity threshold

Parameters
- text (required): Text to analyze for toxicity (hate speech, insults, threats)
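The `is_toxic` flag can be recomputed locally from the six category scores. The server's actual threshold is not documented; 0.5 below is an assumption for the sketch:

```python
# The six documented toxicity categories.
CATEGORIES = ("toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate")

def flag_toxic(result: dict, threshold: float = 0.5) -> bool:
    """Mirror the documented is_toxic flag: any category at or above threshold.

    Missing categories are treated as 0.0.
    """
    return any(result.get(c, 0.0) >= threshold for c in CATEGORIES)
```

Recomputing the flag client-side lets you tighten or loosen the cutoff per use case without changing the server.
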
check_nlp_service

Check health status of NLP API services and loaded models.

Returns: dict with keys:
- status (str): 'healthy' or error state
- models (dict): Loaded model status per capability
- version (str): API version

Parameters: none
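A client could gate requests on this health payload. The exact shape of the per-capability `models` values is not documented; the sketch assumes they are truthy when a model is loaded:

```python
def is_ready(status: dict) -> bool:
    """Return True only if the service reports healthy and every model is loaded.

    Assumes each value in status["models"] is truthy when that model is ready.
    """
    models = status.get("models", {})
    return status.get("status") == "healthy" and bool(models) and all(models.values())
```
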

detect_language

Detect the language of text.

Supports 176 languages using fastText. Sub-1ms inference latency. Returns ISO 639-1 codes with confidence scores.

Args:
- text: Text to identify the language of.
- top_k: Number of top language predictions to return (default: 3).

Returns: dict with keys:
- language (str): Top predicted language ISO 639-1 code
- confidence (float 0-1): Confidence for top prediction
- predictions (list): Top-k predictions, each with:
  - language (str): ISO 639-1 code
  - confidence (float 0-1): Prediction confidence

Parameters
- text (required): Text to identify the language of
- top_k (optional, default: 3): Number of top language predictions to return
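The top-k predictions list is useful for filtering out noise on short or mixed-language inputs. The `response` dict below is a hypothetical example of the documented shape, and the 0.1 confidence floor is an illustrative choice:

```python
# Hypothetical detect_language response for a short English snippet.
response = {
    "language": "en",
    "confidence": 0.92,
    "predictions": [
        {"language": "en", "confidence": 0.92},
        {"language": "de", "confidence": 0.05},
        {"language": "nl", "confidence": 0.02},
    ],
}

def plausible_languages(result: dict, floor: float = 0.1) -> list:
    """Keep only ISO 639-1 codes whose confidence clears the floor."""
    return [p["language"] for p in result["predictions"] if p["confidence"] >= floor]
```
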
detect_pii

Detect personally identifiable information (PII) in text.

Finds emails, phone numbers, SSNs, credit cards, IP addresses, and person names. Optionally returns redacted text with PII replaced by type labels (e.g. [EMAIL], [PHONE]). BERT-NER + regex ensemble.

Args:
- text: Text to scan for personally identifiable information.
- redact: If true, return redacted text with PII replaced by [TYPE].

Returns: dict with keys:
- pii_found (list): Detected PII items, each containing:
  - text (str): The PII value found
  - type (str): PII type (EMAIL, PHONE, SSN, CREDIT_CARD, IP, PERSON)
  - start (int): Character offset start
  - end (int): Character offset end
  - score (float 0-1): Detection confidence
- count (int): Total PII items found
- redacted_text (str|null): Text with PII replaced (when redact=true)
- has_pii (bool): Whether any PII was detected

Parameters
- text (required): Text to scan for personally identifiable information
- redact (optional): If true, return redacted text with PII replaced by [TYPE]
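A rough local approximation of the regex half of the ensemble shows what the `[TYPE]` replacement looks like. The patterns here are deliberately simplified assumptions; the server's regex + BERT-NER ensemble covers far more formats:

```python
import re

# Simplified illustrative patterns -- not the server's actual rules.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each match with its [TYPE] label, as the redact flag does."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Regex alone misses person names, which is why the tool pairs it with BERT-NER.
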
extract_entities

Extract named entities (NER) from text.

Identifies persons, organizations, locations, and miscellaneous entities with span offsets and confidence scores. BERT-NER based with sub-50ms latency.

Args:
- text: Text to extract named entities from.

Returns: dict with keys:
- entities (list): Detected entities, each containing:
  - text (str): Entity text
  - label (str): Entity type (PER, ORG, LOC, MISC)
  - start (int): Character offset start
  - end (int): Character offset end
  - score (float 0-1): Confidence score
- count (int): Total number of entities found

Parameters
- text (required): Text to extract named entities from (persons, organizations, locations)
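The character offsets let callers highlight or slice entities out of the original text. The `sentence` and `entities` values below are a hypothetical example of the documented shape:

```python
def entity_spans(text: str, entities: list) -> list:
    """Cross-check each entity against its character offsets by slicing the text."""
    return [(e["label"], text[e["start"]:e["end"]]) for e in entities]

# Hypothetical extract_entities output for the sentence below.
sentence = "Ada Lovelace worked in London"
entities = [
    {"text": "Ada Lovelace", "label": "PER", "start": 0, "end": 12, "score": 0.99},
    {"text": "London", "label": "LOC", "start": 23, "end": 29, "score": 0.98},
]
```

Slicing by offsets rather than searching for the entity text avoids ambiguity when the same string appears more than once.
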
