# blackmount-nlp-mcp

NLP for MCP — zero heavy dependencies. Built by Blackmount.

45 text analysis tools as a FastMCP server. No NLTK. No spaCy. No transformers. One dependency (`mcp[cli]`), under 50 KB of NLP code, ready in seconds. Requires Python 3.10+.

## Why this exists
| | blackmount-nlp-mcp | NLTK | spaCy | transformers |
| --- | --- | --- | --- | --- |
| Wheel size | 42 KB | 1.5 MB | 6 MB+ (+ models) | 10 MB+ (+ models) |
| Direct dependencies | 1 | many | many | many |
| Tokenization | ✅ | ✅ | ✅ | ✅ |
| Sentiment analysis | ✅ | ✅ | ❌ | ✅ |
| Readability scores | ✅ | ❌ | ❌ | ❌ |
| Keyword extraction | ✅ | ✅ | ❌ | ❌ |
| Text similarity | ✅ | ✅ | ✅ | ✅ |
| Language detection | ✅ (18 langs) | ❌ | ❌ | ❌ |
Everything is implemented from scratch in pure Python — Porter stemmer, TF-IDF, RAKE, Levenshtein, VADER-style sentiment, Flesch / Gunning Fog / Coleman-Liau / ARI / SMOG readability, extractive summarization, language detection — plus a built-in 2000+ word sentiment lexicon and 500+ stopword list, all baked into the package.
## Quick start

```shell
pip install blackmount-nlp-mcp
```

### Claude Desktop

Add to your config file:

- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
- Linux: `~/.config/Claude/claude_desktop_config.json`

```json
{
  "mcpServers": {
    "nlp": {
      "command": "blackmount-nlp-mcp"
    }
  }
}
```

### Cursor

Add to `.cursor/mcp.json` in your project root:

```json
{
  "mcpServers": {
    "nlp": {
      "command": "blackmount-nlp-mcp"
    }
  }
}
```

### Any MCP client

The server runs over stdio. Point your client at the `blackmount-nlp-mcp` command:

```shell
blackmount-nlp-mcp
```

Restart your editor. All 45 NLP tools are now available — just ask in natural language.
## Tool catalog

### Tokenization (4 tools)

| Tool | Description | Try asking |
| --- | --- | --- |
| | Split text into words, handling contractions and punctuation | "Tokenize this paragraph into words" |
| | Split into sentences, handling common abbreviations | "Break this text into individual sentences" |
| | Generate word-level n-grams from a token list | "Generate bigrams from these tokens" |
| | Generate character-level n-grams | "Get character trigrams for this word" |
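Conceptually, both n-gram tools reduce to a sliding window over a sequence. A minimal sketch of the idea (an illustration, not the package's actual implementation):

```python
def word_ngrams(tokens, n):
    """Slide a window of size n over a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def char_ngrams(word, n):
    """Character-level n-grams of a single string."""
    return [word[i:i + n] for i in range(len(word) - n + 1)]
```

For example, `word_ngrams(["the", "quick", "fox"], 2)` yields the bigrams `[("the", "quick"), ("quick", "fox")]`.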
### Readability (8 tools)

| Tool | Description | Try asking |
| --- | --- | --- |
| | 0–100 ease score (higher = easier) | "Calculate the Flesch Reading Ease score" |
| | US grade level estimate | "What grade level is this written at?" |
| | Fog index based on complex word ratio | "Calculate the Fog index for this text" |
| | Coleman-Liau grade-level index | "Get the Coleman-Liau score" |
| | ARI grade-level index | "What's the ARI for this document?" |
| | SMOG grade (recommended for healthcare text) | "Calculate the SMOG grade for this document" |
| | Syllable count estimation for any word | "How many syllables in 'extraordinary'?" |
| | All readability scores in one call with a plain-English label | "Give me a full readability report for this text" |
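The standard Flesch Reading Ease formula is `206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words)`. A rough self-contained sketch with a naive vowel-group syllable counter — the package's own heuristics may well differ:

```python
import re

def count_syllables(word):
    """Crude estimate: count vowel groups, subtracting a trailing silent 'e'."""
    word = word.lower()
    groups = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and groups > 1:
        groups -= 1
    return max(groups, 1)

def flesch_reading_ease(text):
    """206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(len(words), 1)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)
```

Short words and short sentences push the score up; polysyllabic, long-sentence prose drives it down (even below zero, which the formula permits).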
### Sentiment Analysis (4 tools)

| Tool | Description | Try asking |
| --- | --- | --- |
| | Compound sentiment score from −1.0 to +1.0 | "What's the sentiment of this customer review?" |
| | Returns a positive, negative, or neutral label | "Is this feedback positive or negative?" |
| | Per-sentence sentiment breakdown | "Show me the sentiment of each sentence" |
| | Sentiment scoped to specific topics | "What's the sentiment around 'pricing' in these reviews?" |
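A VADER-style lexicon scorer sums per-word valences, flips them after negators, and squashes the total into [−1, 1]. A toy sketch of the approach — the lexicon here is a handful of words purely for illustration, whereas the package ships a 2000+ word lexicon, and its actual scoring rules may differ:

```python
import math

# Tiny illustrative lexicon; NOT the package's shipped 2000+ word lexicon.
LEXICON = {"amazing": 3.0, "excellent": 2.5, "good": 1.5,
           "bad": -1.5, "terrible": -2.5, "awful": -3.0}
NEGATORS = {"not", "never", "no"}

def sentiment_score(text):
    """Sum lexicon valences, flip after a negator, squash to [-1, 1]."""
    tokens = [t.strip(".,!?") for t in text.lower().split()]
    total = 0.0
    for i, word in enumerate(tokens):
        score = LEXICON.get(word, 0.0)
        if score and i > 0 and tokens[i - 1] in NEGATORS:
            score = -score
        total += score
    return total / math.sqrt(total * total + 15)  # VADER-style normalization
```

The `x / sqrt(x² + 15)` normalization is the standard VADER trick: it maps any raw sum onto (−1, 1) while staying roughly linear near zero.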
### Keyword Extraction (4 tools)

| Tool | Description | Try asking |
| --- | --- | --- |
| | TF-IDF keyword ranking across a corpus | "What are the key terms across these docs?" |
| | RAKE algorithm — phrase-level keyword extraction | "Extract the key phrases from this article" |
| | Top words by frequency, stopwords excluded | "What are the most common words in this text?" |
| | Top n-gram phrases by frequency | "What two-word phrases appear most often?" |
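TF-IDF ranks a term highly when it is frequent in one document but rare across the corpus. A bare-bones sketch of the computation (illustrative only; the package's tokenization, stopword handling, and weighting may differ):

```python
import math
from collections import Counter

def tfidf_keywords(docs, top_n=5):
    """Rank each document's terms by term frequency times inverse document frequency."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()                       # document frequency per term
    for tokens in tokenized:
        df.update(set(tokens))
    n_docs = len(docs)
    results = []
    for tokens in tokenized:
        tf = Counter(tokens)
        scored = {t: (c / len(tokens)) * math.log(n_docs / df[t])
                  for t, c in tf.items()}
        results.append(sorted(scored, key=scored.get, reverse=True)[:top_n])
    return results
```

Note that a term appearing in every document gets `log(1) = 0` and sinks to the bottom — exactly the behavior that filters out corpus-wide filler words.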
### Text Similarity (5 tools)

| Tool | Description | Try asking |
| --- | --- | --- |
| | Word-set overlap, 0–1 | "How similar are these two paragraphs?" |
| | Bag-of-words cosine similarity, 0–1 | "Calculate cosine similarity between these texts" |
| | Levenshtein edit distance | "How many edits to turn 'kitten' into 'sitting'?" |
| | Edit distance normalized to 0–1 | "How different are these two strings?" |
| | LCS length between two strings | "What's the LCS length of these two strings?" |
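Levenshtein distance counts the minimum number of single-character insertions, deletions, and substitutions needed to turn one string into another. A standard two-row dynamic-programming sketch (not necessarily the package's implementation):

```python
def levenshtein(a, b):
    """Edit distance via dynamic programming, keeping only two rows."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution (free on match)
        prev = curr
    return prev[-1]
```

The catalog's own example holds: `levenshtein("kitten", "sitting")` is 3 (substitute k→s, substitute e→i, insert g).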
### Text Cleaning (10 tools)

| Tool | Description | Try asking |
| --- | --- | --- |
| | Strip 500+ English stopwords | "Remove stopwords from this text" |
| | Remove all punctuation | "Strip the punctuation" |
| | Remove numeric tokens | "Remove all numbers from this" |
| | Strip URLs | "Clean out the URLs" |
| | Strip email addresses | "Remove email addresses from this text" |
| | Strip HTML tags | "Strip the HTML from this content" |
| | Collapse and trim whitespace | "Normalize the whitespace" |
| | Lowercase the text | "Convert this to lowercase" |
| | Porter stemmer (pure Python, no NLTK) | "Stem the word 'running'" |
| | Configurable multi-step cleaning in one call | "Clean this text: remove HTML, URLs, and stopwords" |
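A multi-step cleaner of this kind is usually just a chain of regex substitutions applied in a fixed order. A minimal sketch covering three of the steps above (the function name, flags, and patterns here are illustrative assumptions, not the package's API):

```python
import re

def clean_text(text, strip_html=True, strip_urls=True, lowercase=True):
    """Configurable cleanup: HTML tags, then URLs, then whitespace and case."""
    if strip_html:
        text = re.sub(r"<[^>]+>", " ", text)       # drop tags, keep inner text
    if strip_urls:
        text = re.sub(r"https?://\S+", " ", text)  # drop http(s) URLs
    text = re.sub(r"\s+", " ", text).strip()       # collapse runs of whitespace
    return text.lower() if lowercase else text
```

Order matters: stripping tags before URLs avoids leaving `href` fragments behind, and whitespace normalization runs last so every removal's leftover gaps get collapsed.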
### Detection (8 tools)

| Tool | Description | Try asking |
| --- | --- | --- |
| | Identify language from 18 supported languages | "What language is this text written in?" |
| | Detect script: ASCII, Latin, Cyrillic, CJK, Arabic | "What script does this text use?" |
| | English confidence score, 0–1 | "Is this text in English?" |
| | Word count | "How many words are in this?" |
| | Sentence count | "Count the sentences" |
| | Paragraph count | "How many paragraphs?" |
| | Mean word length in characters | "What's the average word length?" |
| | Mean sentence length in words | "How long are the sentences on average?" |
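Script detection can be done without any model by checking which Unicode block the characters fall into. A simplified sketch for the five scripts listed above (the package's actual ranges and tie-breaking rules may differ):

```python
def detect_script(text):
    """Classify by the first character in a distinctive Unicode block."""
    for ch in text:
        cp = ord(ch)
        if 0x0400 <= cp <= 0x04FF:
            return "Cyrillic"
        if 0x4E00 <= cp <= 0x9FFF:   # CJK Unified Ideographs
            return "CJK"
        if 0x0600 <= cp <= 0x06FF:
            return "Arabic"
    # No distinctive block found: plain ASCII vs. accented Latin.
    return "ASCII" if text.isascii() else "Latin"
```

This coarse signal is also a cheap first stage for language detection: Cyrillic narrows the candidates long before any stopword statistics are needed.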
### Summarization (2 tools)

| Tool | Description | Try asking |
| --- | --- | --- |
| | Select the N highest-scoring sentences from a document | "Summarize this article in 3 sentences" |
| | Full document stats: words, readability, language, reading time | "Give me a statistical profile of this text" |
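Extractive summarization in its simplest form scores each sentence by the frequency of its words across the whole document, then keeps the top N in their original order. A sketch of that baseline (the package's scoring function is likely more sophisticated):

```python
import re
from collections import Counter

def summarize(text, n=2):
    """Frequency-based extractive summary: top-n sentences, original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(s):
        return sum(freq[w] for w in re.findall(r"[a-z']+", s.lower()))

    ranked = sorted(range(len(sentences)),
                    key=lambda i: score(sentences[i]), reverse=True)[:n]
    return " ".join(sentences[i] for i in sorted(ranked))
```

Re-sorting the selected indices before joining is what keeps the summary readable: sentences appear in document order, not score order.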
## Use as a library

The submodules are importable directly — no MCP server required:

```python
from blackmount_nlp_mcp.sentiment import sentiment_score, sentiment_label
from blackmount_nlp_mcp.readability import reading_level
from blackmount_nlp_mcp.keywords import rake_keywords

text = "This product is absolutely amazing! The quality is excellent."

print(sentiment_score(text))
# 0.9285

print(sentiment_label(text))
# 'positive'

print(reading_level(text))
# {'grade_level': 12.39, 'label': 'college',
#  'flesch_reading_ease': 14.27, 'flesch_kincaid_grade': 12.39,
#  'gunning_fog': 19.58, 'coleman_liau': 10.94,
#  'automated_readability': 7.51, 'smog_grade': 11.21}

print(rake_keywords(text))
# [{'phrase': 'absolutely amazing', 'score': 4.0},
#  {'phrase': 'product', 'score': 1.0},
#  {'phrase': 'quality', 'score': 1.0},
#  {'phrase': 'excellent', 'score': 1.0}]
```

## Development
```shell
git clone https://github.com/BlackMount-ai/blackmount-nlp-mcp
cd blackmount-nlp-mcp
pip install -e .
pytest tests/ -v
```

## Blackmount ecosystem
blackmount-nlp-mcp is built by Blackmount — tools for people who work with AI.
- **blackmount-mcp** — Browser memory, AI chat search, and session analytics as an MCP server. Pair it with blackmount-nlp-mcp to analyze your saved conversations: extract keywords from chat history, score readability of AI responses, detect sentiment trends across sessions.
- **app.blackmount.ai** — The full Blackmount platform. Search, organize, and analyze everything your AI tools produce.
## License

MIT