# MCP Character Tools

The last thing you'll need for your LLM to work with individual characters or count the number of r's in a word. This is an MCP server providing 14+ comprehensive (and pretty) character and text analysis tools that help LLMs handle individual characters - something they struggle with due to tokenization.
## See the Difference
<div align="center">

| Without MCP (Wrong) | With MCP (Correct) |
|:----------------------:|:---------------------:|
| <img src="repo_assets/req_without_mcp.png" width="700" alt="Without MCP - incorrectly claims 2 r's in garlic"> | <img src="repo_assets/req_with_mcp.png" width="775" alt="With MCP - correctly identifies 1 r in garlic"> |
| *Claims there are 2 r's in "garlic"* | *Correctly identifies 1 r in "garlic"* |

</div>
<div align="center">
Yes, your agent will be able to tell how many r's are in Strawberry/Garlic :)
</div>
## Why This Exists
First of all, why not? Second, Large Language Models tokenize text into subwords, not individual characters. For example, "strawberry" might become tokens like `["straw", "berry"]`, so the model never truly "sees" individual letters. This MCP server gives LLMs "character-level vision" through a suite of tools.
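In plain code, counting characters is trivial - which is exactly why it makes sense to hand the job to a deterministic tool. The sketch below is purely illustrative (the function name and logic are not taken from this server's source; see the tools reference below for what the server actually exposes):
```typescript
// Illustrative only: roughly what a character-counting tool does internally.
// The real tool names, schemas, and implementation live in this repo's source.
function countLetter(text: string, letter: string): number {
  const target = letter.toLowerCase();
  // Operate on actual characters, so subword tokenization never gets in the way.
  return [...text.toLowerCase()].filter((ch) => ch === target).length;
}

console.log(countLetter("strawberry", "r")); // 3
console.log(countLetter("garlic", "r"));     // 1
```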
## Installation
### Via npx (recommended)
```bash
npx mcp-character-tools
```
### Via npm (global install)
```bash
npm install -g mcp-character-tools
mcp-character-tools
```
### From source
```bash
git clone https://github.com/Aaryan-Kapoor/mcp-character-tools
cd mcp-character-tools
npm install
npm run build
npm start
```
## Usage with Claude Desktop
Add to your Claude Desktop configuration (`claude_desktop_config.json`):
```json
{
  "mcpServers": {
    "char-tools": {
      "command": "npx",
      "args": ["mcp-character-tools"]
    }
  }
}
```
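To sanity-check the server outside Claude Desktop, you can also point the MCP Inspector at it (the Inspector is a separate tool, not part of this package):
```bash
npx @modelcontextprotocol/inspector npx mcp-character-tools
```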
## All Tools Reference
> **See [sample_outputs.md](sample_outputs.md) for complete examples with inputs and outputs for all 14+ tools.**
| Tool | Description |
|------|-------------|
| `count_letter` | Count a specific letter |
| `count_letters` | Count multiple letters at once |
| `count_substring` | Count substring occurrences |
| `letter_frequency` | Get frequency distribution |
| `spell_word` | Break into characters |
| `char_at` | Get character at index |
| `nth_character` | Get nth character (1-based) |
| `word_length` | Get exact length |
| `reverse_text` | Reverse text, detect palindromes |
| `compare_texts` | Compare two texts |
| `analyze_sentence` | Word-by-word breakdown |
| `batch_count` | Count across multiple words |
| `get_tricky_words` | List commonly miscounted words |
| `check_tricky_word` | Check if word is tricky |
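If you want to call these tools programmatically rather than through a chat client, something like the following should work with the official MCP TypeScript SDK. Treat it as a sketch: the client-side code is not part of this repo, and the `count_letter` argument names (`text`, `letter`) are a guess - check [sample_outputs.md](sample_outputs.md) for the real schemas.
```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Launch the server over stdio, the same way Claude Desktop does.
  const transport = new StdioClientTransport({
    command: "npx",
    args: ["mcp-character-tools"],
  });

  const client = new Client(
    { name: "char-tools-demo", version: "1.0.0" },
    { capabilities: {} }
  );
  await client.connect(transport);

  // Discover what the server exposes.
  const { tools } = await client.listTools();
  console.log(tools.map((t) => t.name));

  // Call one tool. Argument names here are assumptions, not the documented schema.
  const result = await client.callTool({
    name: "count_letter",
    arguments: { text: "strawberry", letter: "r" },
  });
  console.log(JSON.stringify(result, null, 2));

  await client.close();
}

main().catch(console.error);
```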
## Development
```bash
# Install dependencies
npm install
# Build
npm run build
# Run tests
npm test
# Run tests with coverage
npm run test:coverage
# Development mode with auto-rebuild
npm run dev
```
## Testing
The project includes comprehensive tests for all tools:
```bash
npm test
```
Test files:
- `tests/counting.test.ts` - Counting tools tests
- `tests/spelling.test.ts` - Spelling tools tests
- `tests/analysis.test.ts` - Analysis tools tests
- `tests/tricky-words.test.ts` - Tricky words resource tests
- `tests/visualization.test.ts` - Visualization utility tests
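If you add a test of your own, the existing files are the best reference. Purely as a sketch, this assumes a Vitest-style runner and a hypothetical `countLetter` export with a made-up import path - neither is confirmed by this README, so mirror whatever `tests/counting.test.ts` actually does:
```typescript
import { describe, it, expect } from "vitest"; // swap for the project's actual test runner
import { countLetter } from "../src/tools/counting"; // hypothetical import path

describe("countLetter", () => {
  it("counts every occurrence, regardless of tokenization quirks", () => {
    expect(countLetter("strawberry", "r")).toBe(3);
    expect(countLetter("garlic", "r")).toBe(1);
  });
});
```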
## License
MIT