Server Details
NLP tools: toxicity, sentiment, NER, PII detection, language ID. CPU-optimized ONNX.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: fasuizu-br/speech-ai-examples
- GitHub Stars: 0
Available Tools
6 tools

analyze_sentiment
Analyze text sentiment.
Returns positive/negative classification with confidence scores. DistilBERT-based with sub-10ms latency. Multiple domain-specific model variants available.
Args:
- text: Text to analyze for sentiment (positive/negative).
- model: Model variant -- 'general' (default), 'financial', 'twitter'.

Returns: dict with keys:
- label (str): 'positive' or 'negative'
- score (float 0-1): Confidence score for the predicted label
- scores (dict): All label scores (positive, negative)
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | Text to analyze for sentiment (positive/negative) | |
| model | No | Model variant: 'general' (default), 'financial', 'twitter' | general |
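The documented return value can be sanity-checked locally. The sketch below uses a made-up response payload shaped like the schema above (the scores are illustrative, not real server output) and shows that `label` is simply the highest-scoring entry in `scores`.

```python
# Illustrative analyze_sentiment response; values are invented for the example.
example = {
    "label": "positive",
    "score": 0.97,
    "scores": {"positive": 0.97, "negative": 0.03},
}

def top_label(scores: dict) -> str:
    """Return the label with the highest confidence score."""
    return max(scores, key=scores.get)

# The top-level label/score mirror the best entry in `scores`.
assert top_label(example["scores"]) == example["label"]
assert example["score"] == example["scores"][example["label"]]
```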
analyze_toxicity
Analyze text for toxic content.
Returns scores for 6 categories: toxic, severe_toxic, obscene, threat, insult, identity_hate. Each score is 0.0-1.0. BERT-based classifier with sub-15ms latency on GPU.
Args:
- text: Text to analyze for toxicity (hate speech, insults, threats).

Returns: dict with keys:
- toxic (float 0-1): Overall toxicity score
- severe_toxic (float 0-1): Severe toxicity score
- obscene (float 0-1): Obscenity score
- threat (float 0-1): Threat score
- insult (float 0-1): Insult score
- identity_hate (float 0-1): Identity-based hate score
- is_toxic (bool): Whether text exceeds toxicity threshold
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | Text to analyze for toxicity (hate speech, insults, threats) | |
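The `is_toxic` flag can in principle be derived from the six category scores. The sketch below assumes a 0.5 cutoff applied per category; the server's actual threshold and aggregation rule are not documented, so treat both as assumptions.

```python
# The six documented toxicity categories.
CATEGORIES = ("toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate")

def is_toxic(scores: dict, threshold: float = 0.5) -> bool:
    """Flag text as toxic if any category meets the threshold.

    The 0.5 threshold is an assumption; the server's cutoff is undocumented.
    """
    return any(scores.get(c, 0.0) >= threshold for c in CATEGORIES)

# Invented example scores: high 'toxic' and 'insult', low everything else.
resp = {"toxic": 0.91, "severe_toxic": 0.10, "obscene": 0.42,
        "threat": 0.02, "insult": 0.77, "identity_hate": 0.01}
assert is_toxic(resp) is True
assert is_toxic({c: 0.05 for c in CATEGORIES}) is False
```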
check_nlp_service
Check health status of NLP API services and loaded models.
Returns: dict with keys:
- status (str): 'healthy' or error state
- models (dict): Loaded model status per capability
- version (str): API version
No parameters.
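A caller would typically gate traffic on this health payload. The sketch below interprets a stand-in response (the capability names and version string are invented for illustration, not taken from the real API) and checks that the service is healthy and every model is loaded.

```python
def all_models_ready(status: dict) -> bool:
    """True only if the service reports healthy and every model is loaded."""
    models = status.get("models", {})
    return status.get("status") == "healthy" and bool(models) and all(models.values())

# Stand-in payload shaped like the documented check_nlp_service return value.
stub = {
    "status": "healthy",
    "models": {"sentiment": True, "toxicity": True, "ner": True,
               "pii": True, "language_id": True},
    "version": "1.0.0",  # hypothetical version string
}
assert all_models_ready(stub) is True
assert all_models_ready({"status": "error", "models": {}}) is False
```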
detect_language
Detect the language of text.
Supports 176 languages using fastText. Sub-1ms inference latency. Returns ISO 639-1 codes with confidence scores.
Args:
- text: Text to identify the language of.
- top_k: Number of top language predictions to return (default: 3).

Returns: dict with keys:
- language (str): Top predicted language ISO 639-1 code
- confidence (float 0-1): Confidence for top prediction
- predictions (list): Top-k predictions, each with:
  - language (str): ISO 639-1 code
  - confidence (float 0-1): Prediction confidence
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | Text to identify the language of | |
| top_k | No | Number of top language predictions to return | 3 |
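The relationship between the top-level fields and the `predictions` list follows from the schema: `language`/`confidence` mirror the best entry, and `top_k` truncates the list. The sketch below demonstrates that with invented confidence values.

```python
# Invented fastText-style predictions, sorted by confidence.
predictions = [
    {"language": "en", "confidence": 0.93},
    {"language": "de", "confidence": 0.04},
    {"language": "nl", "confidence": 0.02},
    {"language": "fr", "confidence": 0.01},
]

def build_response(predictions: list, top_k: int = 3) -> dict:
    """Assemble a detect_language-shaped response from ranked predictions."""
    ranked = sorted(predictions, key=lambda p: p["confidence"], reverse=True)
    top = ranked[0]
    return {"language": top["language"],
            "confidence": top["confidence"],
            "predictions": ranked[:top_k]}

resp = build_response(predictions)          # top_k defaults to 3
assert resp["language"] == "en"
assert len(resp["predictions"]) == 3
```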
detect_pii
Detect personally identifiable information (PII) in text.
Finds emails, phone numbers, SSNs, credit cards, IP addresses, and person names. Optionally returns redacted text with PII replaced by type labels (e.g. [EMAIL], [PHONE]). BERT-NER + regex ensemble.
Args:
- text: Text to scan for personally identifiable information.
- redact: If true, return redacted text with PII replaced by [TYPE].

Returns: dict with keys:
- pii_found (list): Detected PII items, each containing:
  - text (str): The PII value found
  - type (str): PII type (EMAIL, PHONE, SSN, CREDIT_CARD, IP, PERSON)
  - start (int): Character offset start
  - end (int): Character offset end
  - score (float 0-1): Detection confidence
- count (int): Total PII items found
- redacted_text (str|null): Text with PII replaced (when redact=true)
- has_pii (bool): Whether any PII was detected
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | Text to scan for personally identifiable information | |
| redact | No | If true, return redacted text with PII replaced by [TYPE] | |
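A minimal, regex-only sketch of the `redact=true` behavior: matched PII is replaced by its `[TYPE]` label. The real server combines BERT-NER with regexes and also covers SSNs, credit cards, IPs, and person names; the two patterns here are deliberate simplifications for illustration.

```python
import re

# Simplified patterns; the server's real detectors are broader.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched PII span with its bracketed type label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

out = redact("Mail jane@example.com or call 555-123-4567.")
assert out == "Mail [EMAIL] or call [PHONE]."
```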
extract_entities
Extract named entities (NER) from text.
Identifies persons, organizations, locations, and miscellaneous entities with span offsets and confidence scores. BERT-NER based with sub-50ms latency.
Args:
- text: Text to extract named entities from.

Returns: dict with keys:
- entities (list): Detected entities, each containing:
  - text (str): Entity text
  - label (str): Entity type (PER, ORG, LOC, MISC)
  - start (int): Character offset start
  - end (int): Character offset end
  - score (float 0-1): Confidence score
- count (int): Total number of entities found
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | Text to extract named entities from (persons, organizations, locations) | |
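The span offsets index into the original input, so each entity's `text` can always be recovered by slicing. The sketch below checks that invariant against a hand-built example response (the sentence, entities, and scores are invented for illustration).

```python
# Invented input and a matching extract_entities-shaped response.
text = "Ada Lovelace worked with Charles Babbage in London."
entities = [
    {"text": "Ada Lovelace",    "label": "PER", "start": 0,  "end": 12, "score": 0.99},
    {"text": "Charles Babbage", "label": "PER", "start": 25, "end": 40, "score": 0.99},
    {"text": "London",          "label": "LOC", "start": 44, "end": 50, "score": 0.98},
]

# Invariant: slicing the input by [start:end] reproduces the entity text.
for ent in entities:
    assert text[ent["start"]:ent["end"]] == ent["text"]

count = len(entities)  # mirrors the documented `count` field
assert count == 3
```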
To claim this server, publish a /.well-known/glama.json file on your server's domain in the required structure. The email address in the file must match the email associated with your Glama account. Once verified, the server will appear as claimed by you.
Claiming lets you:
- Control your server's listing on Glama, including description and metadata
- Receive usage reports showing how your server is being used
- Get monitoring and health status updates for your server
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.