glin-profanity-mcp
Part of the glin-profanity project - MCP server for AI assistants
MCP (Model Context Protocol) server for glin-profanity - enables AI assistants like Claude Desktop, Cursor, Windsurf, and other MCP-compatible tools to use profanity detection and content moderation as native tools.
What is MCP?
The Model Context Protocol (MCP) is an open standard developed by Anthropic that allows AI assistants to securely access external tools and data sources. This package turns glin-profanity into an MCP server that AI assistants can use for content moderation.
Features
12 Powerful Tools for comprehensive content moderation
4 Workflow Prompts for guided AI interactions
5 Reference Resources for configuration and best practices
24 Language Support - Arabic, Chinese, English, French, German, Spanish, and more
Context-Aware Analysis - Domain-specific whitelists reduce false positives
Obfuscation Detection - Catches leetspeak (`f4ck`) and Unicode tricks
Batch Processing - Check multiple texts efficiently
Content Scoring - Get safety scores for moderation decisions
Installation
For Claude Desktop
Add to your Claude Desktop configuration (~/Library/Application Support/Claude/claude_desktop_config.json on macOS):
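A minimal sketch, assuming the package is published to npm as `glin-profanity-mcp` and runnable via `npx` (adjust the command to match your actual install):

```json
{
  "mcpServers": {
    "glin-profanity": {
      "command": "npx",
      "args": ["-y", "glin-profanity-mcp"]
    }
  }
}
```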
For Cursor
Add to your Cursor MCP settings (.cursor/mcp.json in your project or global config):
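Cursor's `.cursor/mcp.json` uses the same `mcpServers` shape; again assuming an `npx`-runnable package:

```json
{
  "mcpServers": {
    "glin-profanity": {
      "command": "npx",
      "args": ["-y", "glin-profanity-mcp"]
    }
  }
}
```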
For Windsurf / Other MCP Clients
Local Installation
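A minimal sketch, assuming the package is published to npm under this name:

```bash
# Install globally, then point your MCP client's "command" at the installed binary
npm install -g glin-profanity-mcp
```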
Available Tools (12)
Core Detection Tools
1. check_profanity
Check text for profanity with detailed results.
Parameters:
- `text` (required): Text to check
- `languages`: Array of languages (default: all)
- `detectLeetspeak`: Detect `f4ck`, `sh1t` patterns
- `normalizeUnicode`: Detect Unicode tricks
- `customWords`: Additional words to flag
- `ignoreWords`: Words to whitelist
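For example, an assistant might call the tool with arguments like these (the sample text and values are illustrative only):

```json
{
  "text": "you f4cking rock",
  "languages": ["english"],
  "detectLeetspeak": true,
  "ignoreWords": ["rock"]
}
```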
2. censor_text
Censor profanity by replacing with asterisks or custom characters.
Parameters:
- `text` (required): Text to censor
- `replaceWith`: Replacement character (default: `*`)
- `preserveFirstLetter`: Keep first letter (`f***` instead of `****`)
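Illustrative arguments (sample values only):

```json
{
  "text": "what the f4ck",
  "replaceWith": "*",
  "preserveFirstLetter": true
}
```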
3. analyze_context
Context-aware analysis with domain-specific whitelists.
Parameters:
- `text` (required): Text to analyze
- `domain`: `medical`, `gaming`, `technical`, `educational`, `general`
- `contextWindow`: Words to consider around matches (1-10)
- `confidenceThreshold`: Minimum confidence to flag (0-1)
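Illustrative arguments for a medical-domain check (sample values only):

```json
{
  "text": "The patient needs a breast examination",
  "domain": "medical",
  "contextWindow": 3,
  "confidenceThreshold": 0.7
}
```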
4. batch_check
Check multiple texts in one operation (up to 100).
Parameters:
- `texts` (required): Array of texts (max 100)
- `returnOnlyFlagged`: Only return texts with profanity
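Illustrative arguments (sample values only):

```json
{
  "texts": ["first comment", "second comment", "third comment"],
  "returnOnlyFlagged": true
}
```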
5. validate_content
Comprehensive content validation with safety scoring (0-100).
Parameters:
- `text` (required): Content to validate
- `strictness`: `low`, `medium`, `high`
- `context`: Description of content type
Returns: Safety score, action recommendation (approve, review, edit, reject)
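Illustrative arguments (sample values only):

```json
{
  "text": "Draft blog post body goes here",
  "strictness": "medium",
  "context": "public blog comment"
}
```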
6. detect_obfuscation
Detect text obfuscation techniques.
Detects: Leetspeak, Unicode homoglyphs, zero-width characters, spaced characters
7. get_supported_languages
Get list of all 24 supported languages.
Advanced Analysis Tools
8. explain_match
Explain why a word was flagged with detailed reasoning.
Returns:
Detection method (direct, leetspeak, Unicode)
Detailed reasoning
Suggestions for handling
9. suggest_alternatives
Suggest clean alternatives for profane content.
Parameters:
- `text` (required): Text with profanity
- `tone`: `formal`, `casual`, `humorous`, `professional`
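Illustrative arguments (sample values only):

```json
{
  "text": "this release is damn broken",
  "tone": "professional"
}
```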
10. analyze_corpus
Analyze a collection of texts for profanity statistics (up to 500 texts).
Returns:
Profanity rate statistics
Top profane words frequency
Severity distribution
Recommendations
11. compare_strictness
Compare detection results across different strictness levels.
Returns: Detection results at minimal, low, medium, high, and paranoid levels with recommendation.
12. create_regex_pattern
Generate regex patterns for custom profanity detection.
Parameters:
- `word` (required): Base word
- `includeVariants`: `basic`, `moderate`, `aggressive`
Returns: Ready-to-use regex patterns for JavaScript and Python.
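Illustrative arguments (sample values only):

```json
{
  "word": "badword",
  "includeVariants": "moderate"
}
```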
Available Prompts (4)
MCP Prompts provide guided workflows for common tasks.
1. content_moderation
Step-by-step content moderation workflow.
2. content_cleanup
Clean up content containing profanity for safe publishing.
3. audit_report
Generate a comprehensive moderation audit report.
4. filter_tuning
Tune profanity filter settings for your specific use case.
Available Resources (5)
Resources provide reference data accessible to AI assistants.
| Resource | URI | Description |
| --- | --- | --- |
| Languages | | All 24 supported languages with regional groupings |
| Config Examples | | Ready-to-use configuration templates |
| Severity Levels | | Explanation of severity scoring |
| Domain Whitelists | | Domain-specific whitelist references |
| Detection Guide | | Guide to detection techniques and recommended configs |
Example Prompts for AI Assistants
Basic Usage
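An illustrative prompt (wording is an example, not a required format):

```
Check this comment for profanity and censor any matches: "That update was sh1t"
```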
Advanced Analysis
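An illustrative prompt:

```
Explain why my message was flagged and suggest professional alternatives.
```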
Batch Operations
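An illustrative prompt:

```
Run a batch check on these 20 chat messages and return only the flagged ones.
```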
Context-Aware
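An illustrative prompt:

```
Analyze this medical FAQ with the medical domain whitelist so clinical terms aren't flagged.
```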
Workflow Automation
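An illustrative prompt:

```
Use the content_moderation workflow to review this forum thread and produce an audit report.
```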
Use Cases
| Use Case | Recommended Tools |
| --- | --- |
| Chat moderation | `check_profanity`, `batch_check` |
| Content publishing | `validate_content`, `censor_text` |
| Medical/Educational | `analyze_context` |
| Moderation dashboards | `analyze_corpus` |
| Filter tuning | `compare_strictness` |
| Custom rules | `create_regex_pattern` |
| Understanding flags | `explain_match` |
Development
Running Locally
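A minimal sketch, assuming a standard npm build setup with the compiled entry point at `dist/index.js` (adjust to the repository's actual scripts):

```bash
npm install         # install dependencies
npm run build       # compile the server
node dist/index.js  # run the MCP server over stdio
```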
Testing with MCP Inspector
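The official MCP Inspector can exercise the server's tools, prompts, and resources interactively; again assuming the built entry point is `dist/index.js`:

```bash
npx @modelcontextprotocol/inspector node dist/index.js
```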
Supported Languages
| Region | Languages |
| --- | --- |
| European | English, French, German, Spanish, Italian, Dutch, Portuguese, Polish, Czech, Danish, Finnish, Hungarian, Norwegian, Swedish, Esperanto |
| Asian | Chinese, Japanese, Korean, Thai, Hindi |
| Middle Eastern | Arabic, Persian, Turkish |
| Other | Russian |
License
MIT - See LICENSE for details.