CoRT MCP Server
This is a Chain-of-Recursive-Thoughts (CoRT) MCP server. The original project is linked below; many thanks to its author for the original work.
Original: PhialsBasement/Chain-of-Recursive-Thoughts: I made my AI think harder by making it argue with itself repeatedly. It works stupidly well.
https://github.com/PhialsBasement/Chain-of-Recursive-Thoughts
Release note
- 0.2.0: LLM list updated
- 0.1.0: Initial release
Features
- The CoRT method, available via an MCP server, makes AI think harder by making it argue with itself repeatedly. It works stupidly well.
Verified to work with
Roo Code / Cline
MCP Host Configuration
A 300-second timeout is recommended (requests may sometimes take longer than expected). OPENROUTER_API_KEY is required: https://openrouter.ai/
Example: Logging Disabled
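A minimal config sketch, assuming a `uvx` launcher and the server key `cort`; adapt both to how your MCP host registers servers:

```json
{
  "mcpServers": {
    "cort": {
      "command": "uvx",
      "args": ["cort-mcp", "--log=off"],
      "env": {
        "OPENROUTER_API_KEY": "your-openrouter-api-key"
      }
    }
  }
}
```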
Example: Logging Enabled (absolute log file path required)
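The same sketch with logging enabled (launcher and server key are assumptions, as above); note the absolute `--logfile` path:

```json
{
  "mcpServers": {
    "cort": {
      "command": "uvx",
      "args": ["cort-mcp", "--log=on", "--logfile=/absolute/path/to/logfile.log"],
      "env": {
        "OPENROUTER_API_KEY": "your-openrouter-api-key"
      }
    }
  }
}
```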
- `--log=off`: Disable all logging (no logs are written).
- `--log=on --logfile=/absolute/path/to/logfile.log`: Enable logging and write logs to the specified absolute file path.
- Both arguments are required when logging is enabled. The server will exit with an error if either is missing, the path is not absolute, or invalid values are given.
Note:
- When logging is enabled, logs are written only to the specified absolute file path. Relative paths or omission of `--logfile` will cause an error.
- When logging is disabled, no logs are output.
- If the required arguments are missing or invalid, the server will not start and will print an error message.
- The log file must be accessible and writable by the MCP Server process.
- If you have trouble running this server, it may be due to a cached older version of cort-mcp. Try running the latest version (set `x.y.z` to the latest release) with the setting below.
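A version-pinned variant of the config sketch above (launcher and server key remain assumptions; `x.y.z` is the placeholder from the note above):

```json
{
  "mcpServers": {
    "cort": {
      "command": "uvx",
      "args": ["cort-mcp==x.y.z", "--log=off"],
      "env": {
        "OPENROUTER_API_KEY": "your-openrouter-api-key"
      }
    }
  }
}
```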
Available tools
- `{toolname}.simple`: No details; outputs only the final selected alternative.
- `{toolname}.details`: Includes the full LLM response history.
- `{toolname}.mixed.llm`: Multi-LLM inference.
- `{toolname}.neweval`: New evaluation prompt.
See the details below.
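For illustration, an MCP host invokes a tool with a JSON-RPC `tools/call` request. The `provider` and `model` arguments follow the parameter description later in this README; the `prompt` argument name and the concrete tool name are assumptions:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "{toolname}.details",
    "arguments": {
      "prompt": "How should I design a rate limiter?",
      "provider": "openrouter",
      "model": "mistralai/mistral-small-3.1-24b-instruct:free"
    }
  }
}
```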
What is CoRT?
Major enhancement from the original
There are several enhancements over the original CoRT methodology.
- Multi LLM inference: Each alternative is generated with a different LLM (model + provider) randomly.
- Evaluation enhancement: The evaluation prompt has been updated to also ask the AI to explain its reasoning. (The original prompt remains available via the tools.)
Multi LLM inference
Overview: This is a new tool that adds an exploration strategy of "randomly selecting different LLM (model + provider) for each alternative" to the conventional CoRT thinking flow. This allows you to maximize the use of the knowledge and ideas of heterogeneous models and select the optimal solution from a wider range of options.
- This function is available via the mixed-LLM tools.
The list of LLMs
- Reasonably light and fast models are selected for a better user experience.
Mixed-LLM tool process (sketched in code after this list):
- For each alternative, randomly select one LLM (model + provider) from the above list
- Always record in the log "which model and provider was used" for each generated alternative
- In details mode, explicitly include "model and provider used for each alternative" in the response history information
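A minimal Python sketch of this loop; the `call_llm` helper is hypothetical, and the real pool of light, fast models ships with cort-mcp:

```python
import logging
import random

logger = logging.getLogger("cort-mcp")

# Hypothetical pool; the actual (provider, model) list ships with cort-mcp.
LLM_POOL = [
    ("openrouter", "mistralai/mistral-small-3.1-24b-instruct:free"),
    # ... further (provider, model) pairs from the server's list
]

def generate_alternatives(prompt, n, call_llm):
    """Generate n alternatives, each from a randomly chosen LLM."""
    history = []
    for i in range(n):
        provider, model = random.choice(LLM_POOL)  # one random LLM per alternative
        text = call_llm(provider, model, prompt)   # hypothetical LLM call
        # Always record which model and provider produced this alternative.
        logger.info("alternative %d generated by %s/%s", i, provider, model)
        history.append({"alternative": text, "provider": provider, "model": model})
    return history  # details mode surfaces provider/model per alternative
```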
Evaluation enhancement
Overview: The evaluation prompt has been made richer. (The original prompt remains available via the tools.) Use `{toolname}.neweval` for the prompt that asks the AI to explain its reasoning.
Original prompt
Enhanced prompt
Parameter Specification and Fallback Processing
This API determines the actual model to be used based on the specified `provider` and `model` parameters, with fallback processing in case of errors.

- Provider (`provider`) resolution
  - When unspecified: `openrouter` is used as the default provider.
  - When an invalid value is specified (anything other than `openai` or `openrouter`): falls back to the default provider `openrouter`.
- Model (`model`) resolution
  - When unspecified:
    - If the resolved provider is `openrouter`: the default model `mistralai/mistral-small-3.1-24b-instruct:free` is used.
    - If the resolved provider is `openai`: the default OpenAI model is used.
  - When specified (with a valid provider):
    - The specified model name is used as-is with the resolved provider.
    - Important: at this stage, it is not verified whether the specified model name actually exists with the provider.
- API call and error fallback
  - An API call is first attempted with the provider and model combination resolved by the above rules.
  - If an error occurs during the API call (e.g., the specified model does not exist with the provider, or API key authentication fails):
    - Condition 1: the provider of the first attempted call is not `openai`.
    - Condition 2: the environment variable `OPENAI_API_KEY` is set in the system.
    - If both conditions are met, the system automatically retries using the default model of the `openai` provider (this is the fallback processing).
    - If either condition is not met (e.g., the first attempt was with `openai`, or `OPENAI_API_KEY` is not set), the initial error is returned as the final result, and this type of fallback does not occur.
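A minimal Python sketch of this resolution and fallback flow; the `call_llm` helper and the `DEFAULT_OPENAI_MODEL` placeholder are assumptions (this README does not name the default OpenAI model):

```python
import os

DEFAULT_OPENROUTER_MODEL = "mistralai/mistral-small-3.1-24b-instruct:free"
DEFAULT_OPENAI_MODEL = "<default-openai-model>"  # placeholder; not named in this README

def resolve(provider, model):
    # Missing or invalid provider falls back to the default: openrouter.
    if provider not in ("openai", "openrouter"):
        provider = "openrouter"
    if model is None:
        model = DEFAULT_OPENAI_MODEL if provider == "openai" else DEFAULT_OPENROUTER_MODEL
    # Note: whether `model` actually exists at `provider` is not checked here.
    return provider, model

def call_with_fallback(provider, model, prompt, call_llm):
    provider, model = resolve(provider, model)
    try:
        return call_llm(provider, model, prompt)  # first attempt
    except Exception:
        # Retry with openai's default model only if the failed call was not
        # already openai and OPENAI_API_KEY is available.
        if provider != "openai" and os.environ.get("OPENAI_API_KEY"):
            return call_llm("openai", DEFAULT_OPENAI_MODEL, prompt)
        raise  # otherwise the initial error is the final result
```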
Notes on Environment Variables:
- `OPENROUTER_API_KEY` is required to use `openrouter`.
- `OPENAI_API_KEY` is required to use `openai` or to utilize the above fallback feature.
- If the corresponding API key is not set, the API call will fail (and, depending on the conditions above, the fallback to OpenAI will also fail).
License
MIT
Go wild with it