Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type `@` followed by the MCP server name and your instructions, e.g., "@warrant-mcp Analyze the structure of this argument using the Toulmin model."
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
warrant-mcp
An MCP (Model Context Protocol) server that provides formal reasoning and argument validation tools for AI agents. Built on established computational argumentation theories: Dung, Toulmin, Walton, Pollock, Prakken, and ASPIC+.
Features
Dung's Abstract Argumentation Framework: Extensions (grounded, preferred, stable).
Toulmin Model: Structured argument validation.
Walton's Schemes: Critical questions for common reasoning patterns.
Pollock's Defeasible Reasoning: Rebutting and undercutting defeaters.
Prakken's Dialogue Protocol: Persuasion dialogue management.
ASPIC+: Disagreement diagnosis.
Gradual Semantics: Argument scoring (h-Categorizer, Counting).
Bipolar Argumentation Framework: Support + Attack relations.
Installation
This project uses uv for dependency management.
# Clone the repository
git clone https://github.com/jayden-chmod/warrant-mcp.git
cd warrant-mcp
# Install dependencies
uv sync
Usage
Running the MCP Server
warrant-mcp can be run using uv run.
uv run warrant-mcp
Configure for Claude Desktop
Add this to your claude_desktop_config.json:
{
"mcpServers": {
"warrant-mcp": {
"command": "uv",
"args": [
"run",
"--directory",
"/absolute/path/to/warrant-mcp",
"warrant-mcp"
]
}
}
}
MCP Tools Reference
warrant-mcp exposes 10 MCP tools that AI agents can call directly. Below is the full reference for each tool.
1. build_argument – Build Structured Argument (Toulmin)
Build a structured argument using Toulmin's model (Claim, Data, Warrant, Backing, Rebuttal, Qualifier).
Parameters:
Parameter | Description
--- | ---
`claim` | The assertion to be supported
`data` | Evidence supporting the claim; each item must have `content` and `type` fields
`warrant` | Why the data supports the claim
`backing` | Evidence supporting the warrant
`rebuttal` | Conditions under which the claim might not hold
`qualifier` | Strength modifier
Example:
{
"claim": "We should use PostgreSQL instead of MongoDB for this project",
"data": [
{"content": "Our data has strong relational structure with foreign keys", "type": "certain"},
{"content": "Team has 5 years of PostgreSQL experience", "type": "objective"}
],
"warrant": "Relational databases excel with structured, relational data",
"backing": ["PostgreSQL consistently outperforms MongoDB in JOIN-heavy workloads (TPC-H benchmarks)"],
"rebuttal": ["If the data schema changes frequently, MongoDB's flexibility may be advantageous"],
"qualifier": "very likely"
}
Returns: { argument, validation, score }
2. identify_scheme – Identify Walton's Argumentation Scheme
Identify which Walton argumentation scheme matches a claim, or retrieve details for a specific scheme.
Parameters:
Parameter | Description
--- | ---
`claim` | The claim to analyze
`context` | Additional context for better matching
— | Retrieve a specific scheme by name
Example:
{
"claim": "We should refactor the auth module before adding OAuth support",
"context": "The auth module has high cyclomatic complexity and no tests"
}
Returns: { matches, topScheme } – Ranked scheme matches with critical questions.
3. classify_defeater – Classify Counterargument (Pollock)
Classify a counterargument as a rebutting defeater (attacks the conclusion) or undercutting defeater (breaks the reasoning link).
Parameters:
Parameter | Description
--- | ---
`target` | The argument being attacked
`content` | The counterargument content
`type` | Defeater type: `rebutting` or `undercutting`
`evidence_type` | Evidence quality tag (e.g., `objective`)
Example:
{
"target": "PostgreSQL is faster for our workload",
"content": "The benchmark was run on different hardware with different data distribution",
"type": "undercutting",
"evidence_type": "objective"
}
Returns: { defeater, strength, penalty }
4. create_framework – Create Argumentation Framework
Create a Dung Argumentation Framework (AF) or a Bipolar AF with both attack and support relations.
Parameters:
Parameter | Description
--- | ---
`arguments` | List of argument identifiers
`attacks` | Attack relations as pairs
`supports` | Support relations (creates a Bipolar AF if provided)
Example:
{
"arguments": ["A1", "A2", "A3", "A4"],
"attacks": [["A2", "A1"], ["A3", "A2"]],
"supports": [["A4", "A1"]]
}
Returns: { type, arguments, attacks, supports }
5. compute_extensions – Compute Acceptable Arguments (Dung)
Compute acceptable arguments using Dung's semantics (grounded, preferred, stable).
Parameters:
Parameter | Description
--- | ---
`arguments` | List of argument identifiers
`attacks` | Attack relations
`semantics` | Semantics to compute: `grounded`, `preferred`, `stable`, or `all`
Example:
{
"arguments": ["A", "B", "C"],
"attacks": [["B", "A"], ["C", "B"]],
"semantics": "all"
}
Returns: { grounded, preferred, stable } – Sets of acceptable arguments under each semantics.
6. score_arguments – Score Arguments (Gradual Semantics)
Score arguments on a continuous [0, 1] scale using gradual semantics.
Parameters:
Parameter | Description
--- | ---
`arguments` | List of argument identifiers
`attacks` | Attack relations
`supports` | Support relations
`method` | Scoring method (e.g., `h-categorizer`)
Example:
{
"arguments": ["A", "B", "C"],
"attacks": [["B", "A"], ["C", "B"]],
"method": "h-categorizer"
}
Returns: { method, scores } – Arguments sorted by score descending.
7. create_dialogue – Start Dialogue Session (Prakken)
Start a new argumentation dialogue session using Prakken's protocol.
Parameters:
Parameter | Description
--- | ---
`topic` | The topic of the dialogue
`participants` | List of participant names
— | Dialogue type
Example:
{
"topic": "Should we migrate from REST to GraphQL?",
"participants": ["Proponent", "Opponent"]
}
Returns: Serialized dialogue state with ID, commitment stores, and available moves.
8. dialogue_move – Make a Dialogue Move
Make a speech act move in an active dialogue session.
Parameters:
Parameter | Description
--- | ---
`dialogue_id` | ID from `create_dialogue`
`speaker` | Participant name
`act` | Speech act: `claim`, `why`, `concede`, `retract`, or `since`
`content` | The content of the speech act
`premises` | Premises (required for `since` moves)
Speech Act Protocol:
Speech Act | Meaning | Valid Responses
--- | --- | ---
`claim` | Assert φ is the case | `why`, `concede`
`why` | Challenge: ask for reasons | `since`, `retract`
`concede` | Admit φ is the case | —
`retract` | Withdraw commitment to φ | —
`since` | Provide reasons (premises) for φ | `why` (against a premise), `concede`
Example:
{
"dialogue_id": "d-abc123",
"speaker": "Proponent",
"act": "claim",
"content": "GraphQL reduces over-fetching and improves frontend performance"
}
Returns: Updated dialogue state with commitment stores.
9. diagnose_disagreement – Diagnose Disagreement (ASPIC+)
Diagnose WHY two agents disagree, classifying the root cause of the disagreement.
Parameters:
Parameter | Description
--- | ---
`agent_a` | Agent A's position, with `claim`, `premises`, and `rules`
`agent_b` | Agent B's position, with `claim`, `premises`, and `rules`
Disagreement Types:
Type | Meaning | Resolution Strategy
--- | --- | ---
Factual | Different data/evidence | Gather more data
Inferential | Same data, different conclusions | Examine reasoning rules
Preferential | Same conclusions, different priorities | Negotiate weights
Goal conflict | Fundamentally incompatible objectives | Escalate for human decision
Example:
{
"agent_a": {
"claim": "Use microservices",
"premises": ["System needs to scale independently", "Teams work in isolation"],
"rules": ["Independent scaling requires service boundaries"]
},
"agent_b": {
"claim": "Use monolith",
"premises": ["Team is small", "Deployment complexity is a risk"],
"rules": ["Small teams benefit from simple deployment"]
}
}
Returns: { diagnosis, suggestedResolutions }
10. list_schemes – List Argumentation Schemes
List all available Walton argumentation schemes with their critical question counts.
Parameters: None
Returns: { schemes: [{ name, title, criticalQuestions }] }
Skill Commands (Slash Commands)
Skills are shortcut commands that trigger structured reasoning workflows. Use them directly in conversation with an AI agent that has warrant-mcp connected.
/argue – Structured Argumentation
Build a rigorous, evidence-based argument for (or against) a technical claim.
/argue <claim>
/argue --challenge <claim>
/argue --deep <claim>
Flag | Description
--- | ---
(default) | Support mode: build the strongest case FOR the claim
`--challenge` | Challenge mode: find the strongest attacks AGAINST the claim
`--deep` | Deep mode: spawn a dedicated agent for thorough analysis
What it does:
Parses the claim type (Causal / Evaluative / Prescriptive / Factual / Authority)
Gathers evidence from the codebase and conversation history
Builds a Toulmin argument (Claim, Data, Warrant, Backing, Rebuttal, Qualifier)
Applies Walton's critical questions for the relevant argumentation scheme
Identifies defeaters (Pollock: Rebutting vs Undercutting)
Scores the argument using gradual semantics [0, 1]
Outputs a structured analysis with score breakdown and actionable recommendation
Example:
/argue "We should migrate from REST to GraphQL for our mobile API"
/argue --challenge "Microservices is the right architecture for our 5-person team"
/debate – Multi-Agent Adversarial Debate
Run a structured adversarial debate between virtual agents to stress-test a technical decision.
/debate <topic>
/debate <topic> --rounds 3
/debate <topic> --focus security
/debate <topic> --full
Flag | Description
--- | ---
`--rounds <n>` | Number of debate rounds (default: 2)
`--focus <area>` | Focus the opponent's perspective on a specific area (e.g., `security`, `cost`)
`--full` | Full debate mode: spawns 3 separate agents (PRO, OPP, MOD) for maximum diversity
Participants:
Role | Persona | Bias
--- | --- | ---
PRO (Proponent) | Pragmatic engineer | Prefers solutions that ship fast and are easy to maintain
OPP (Opponent) | Cautious architect | Prefers solutions that minimize risk and technical debt
MOD (Moderator) | Senior staff engineer | None; evaluates argument strength, not rhetoric
What it produces:
Full debate transcript with speech acts (Prakken's protocol)
Commitment stores (what each side publicly committed to and retracted)
Argumentation framework (arguments + attack/support relations with ASCII map)
Argument scores via gradual semantics
Moderator's verdict with winner, consensus solution, and conditions for revisiting
Example:
/debate "Should we rewrite the payment service in Rust?"
/debate "Monorepo vs polyrepo for our growing team" --rounds 3
/debate "Adopting Kubernetes for our infrastructure" --focus cost/deliberate ā Collaborative Multi-Perspective Deliberation
Facilitate a cooperative multi-perspective analysis where virtual experts work together (not against each other) to find the best course of action.
/deliberate <decision question>
/deliberate <decision question> --perspectives 3
/deliberate <decision question> --perspectives "frontend,backend,data"
/deliberate <decision question> --criteria "security,cost,speed"
/deliberate <decision question> --deep
Flag | Description
--- | ---
`--perspectives <n>` | Number of perspectives (default: 4)
`--perspectives "<names>"` | Custom named perspectives
`--criteria "<list>"` | Custom evaluation criteria (default: Business Value, Feasibility, Cost, Timeline, Risk, Maintainability)
`--deep` | Deep mode: spawn a dedicated agent for complex decisions requiring extensive research
Default Perspectives:
Role | Focus | Optimizes For
--- | --- | ---
ARCHITECT | System design, scalability, patterns | Technical excellence
OPERATOR | DevOps, deployment, monitoring, cost | Operational reliability
PRODUCT | Business value, user impact, timeline | Delivery & impact
SECURITY | Threat modeling, compliance, data safety | Safety & compliance
What it produces:
Perspective analysis with gathered evidence
Proposals with Walton's Practical Reasoning critical questions answered
Cross-evaluation with disagreement diagnosis (ASPIC+: factual / inferential / preferential / goal conflict)
Decision matrix with weighted multi-criteria scores
Consensus solution with incorporated concerns from all sides
Dissenting opinions preserved as "canary signals"
Action plan with concrete steps, checkpoints, and re-deliberation triggers
Example:
/deliberate "How should we handle authentication for our new public API?"
/deliberate "Which database should we use for the analytics pipeline?" --perspectives "data-engineer,backend,devops"
/deliberate "Should we build or buy a feature flag system?" --criteria "cost,integration,flexibility,maintenance"š¤ Agent Triggers
Agents are autonomous reasoning personas that perform deep, multi-step analysis. They are defined in .claude/agents/ and can be triggered by the AI when executing skill commands in --deep or --full mode.
argue Agent – Structured Argumentation Agent
A rigorous evidence-based argument builder that uses Toulmin's Model, Walton's Schemes, and Pollock's Defeaters.
Triggered by: /argue --deep <claim>
Process:
Parse claim type → Gather evidence (with quality tags: [CERTAIN], [OBJECTIVE], [UNCERTAIN], [SUBJECTIVE], [HYPOTHETICAL]) → Build Toulmin argument → Apply Walton's critical questions → Identify defeaters (Pollock) → Calculate argument strength [0, 1]
Score interpretation:
Score | Qualifier
--- | ---
0.8+ | Strongly recommended
0.6–0.8 | Recommended
0.4–0.6 | Viable but uncertain
0.2–0.4 | Weak; consider alternatives
< 0.2 | Not recommended
debate Agent – Multi-Agent Debate Orchestrator
Runs a structured adversarial debate using Prakken's Persuasion Dialogue Model with Dung's semantics and gradual scoring.
Triggered by: /debate --full <topic>
Process:
Set up 3 virtual debater personas (PRO, OPP, MOD) → Information gathering → Execute Prakken's protocol (speech acts with commitment stores) → Build Bipolar Argumentation Framework → Compute acceptability via gradual semantics → Moderator verdict
deliberate Agent – Collaborative Deliberation Facilitator
Facilitates cooperative multi-perspective analysis using Walton & Krabbe's Deliberation Dialogue model.
Triggered by: /deliberate --deep <question>
Process:
Assemble perspectives (4 domain experts) → Information-seeking phase → Proposal generation (Walton's Practical Reasoning) → Cross-perspective evaluation with ASPIC+ disagreement diagnosis → Multi-criteria decision matrix → Consensus building → Action plan generation
Choosing the Right Tool
Situation | Use
--- | ---
You have a claim and want to build/validate an argument | `build_argument` (or `/argue`)
You want to stress-test a decision with adversarial scrutiny | `/debate`
You need a collaborative, multi-perspective decision analysis | `/deliberate`
You want to compare two arguments mathematically | `score_arguments`
You need to classify a counterargument | `classify_defeater`
You want to run a step-by-step formal dialogue | `create_dialogue` + `dialogue_move`
You need to understand why two positions conflict | `diagnose_disagreement`
Development
# Run tests
uv run pytest
# Run specific test
uv run pytest tests/test_core.py -v
# Run with coverage
uv run pytest --cov=warrant_mcp
Project Structure
warrant-mcp/
├── src/warrant_mcp/
│   ├── __init__.py
│   ├── server.py          # MCP server: exposes 10 tools
│   └── core/              # Core argumentation modules
│       ├── dung.py        # Abstract Argumentation Framework
│       ├── bipolar.py     # Bipolar AF (attack + support)
│       ├── gradual.py     # Gradual semantics (h-Categorizer, Counting)
│       ├── toulmin.py     # Toulmin argument model
│       ├── walton.py      # Walton's argumentation schemes
│       ├── pollock.py     # Pollock's defeasible reasoning
│       ├── prakken.py     # Prakken's dialogue protocol
│       └── aspic.py       # ASPIC+ disagreement diagnosis
├── tests/                 # Test suite
├── .claude/
│   ├── agents/            # Agent definitions (autonomous reasoning personas)
│   │   ├── argue.md       # Structured argumentation agent
│   │   ├── debate.md      # Multi-agent debate orchestrator
│   │   └── deliberate.md  # Collaborative deliberation facilitator
│   └── skills/            # Skill definitions (slash commands)
│       ├── argue.md       # /argue skill
│       ├── debate.md      # /debate skill
│       └── deliberate.md  # /deliberate skill
├── pyproject.toml
└── README.md
Theoretical Background
Dung's Abstract Argumentation Framework (1995)
Models arguments and attacks as a directed graph. Semantics determine acceptable arguments:
Grounded: Skeptical, unique extension.
Preferred: Credulous, maximal admissible sets.
Stable: Conflict-free sets that attack everything outside.
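For intuition, the grounded extension can be computed by iterating Dung's characteristic function from the empty set to its least fixpoint. A minimal Python sketch (for illustration only; not the library's actual `dung.py` implementation):

```python
def grounded_extension(arguments, attacks):
    """Least fixpoint of F(S) = {a : every attacker of a is attacked by S}.

    Starting from the empty set, the first pass admits unattacked
    arguments; later passes admit the arguments those defend, and so on.
    """
    attack_set = {tuple(p) for p in attacks}
    attackers = {a: {x for (x, y) in attack_set if y == a} for a in arguments}
    s = set()
    while True:
        defended = {a for a in arguments
                    if all(any((d, b) in attack_set for d in s)
                           for b in attackers[a])}
        if defended == s:  # fixpoint reached
            return s
        s = defended
```

For the compute_extensions example above ({A, B, C} with B attacking A and C attacking B), this yields {A, C}: C is unattacked, C defeats B, and so A is defended. A two-cycle of mutual attacks yields the empty set, reflecting the skeptical nature of grounded semantics.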
Toulmin's Argument Model (1958)
Structures arguments with Claim, Data, Warrant, Backing, Rebuttal, and Qualifier.
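These six components map naturally onto a small record type. An illustrative Python sketch, where the field names mirror the `build_argument` tool's parameters and the default qualifier is an assumption for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ToulminArgument:
    claim: str                      # the assertion to be supported
    data: list[str]                 # evidence supporting the claim
    warrant: str                    # why the data supports the claim
    backing: list[str] = field(default_factory=list)   # support for the warrant
    rebuttal: list[str] = field(default_factory=list)  # when the claim may fail
    qualifier: str = "presumably"   # strength modifier (illustrative default)
```

Only claim, data, and warrant are structurally essential in Toulmin's model; backing, rebuttal, and qualifier refine the argument's strength.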
Walton's Argumentation Schemes (1996)
Presumptive reasoning templates with critical questions (e.g., Expert Opinion, Consequences, Practical Reasoning, Analogy).
Pollock's Defeasible Reasoning (1987)
Rebutting (contradicts conclusion) vs Undercutting (breaks inference) defeaters.
Prakken's Dialogue Protocol (2006)
Formal dialogue with commitment stores and speech acts (claim, why, concede, retract, since).
ASPIC+ Disagreement Diagnosis
Classifies disagreements as Factual, Inferential, Preferential, or Goal Conflict.
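The decision logic of the disagreement table can be sketched as a toy classifier. This is a hypothetical helper for intuition only (real ASPIC+ diagnosis compares structured argument theories, not raw strings, and goal conflicts need richer context than this captures):

```python
def classify_disagreement(agent_a: dict, agent_b: dict) -> str:
    """Toy version of the factual/inferential/preferential distinction.

    Each agent is a dict with "claim", "premises", and "rules", as in
    the diagnose_disagreement example.
    """
    if set(agent_a["premises"]) != set(agent_b["premises"]):
        return "factual"        # different data/evidence: gather more data
    if agent_a["claim"] != agent_b["claim"]:
        return "inferential"    # same data, different conclusions
    return "preferential"       # same conclusions, different priorities
```

Applied to the microservices-vs-monolith example above, the two agents start from disjoint premises, so the sketch classifies the disagreement as factual.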
Bipolar Argumentation Framework
Extended AF with both attack and support relations between arguments. Enables richer modeling of argument interactions including supported attacks and secondary attacks.
Gradual Semantics
Scores arguments on a continuous [0, 1] scale instead of binary accept/reject. Methods: h-Categorizer and Counting semantics.
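The h-Categorizer recurrence is h(a) = 1 / (1 + sum of the scores of a's attackers). A minimal fixpoint-iteration sketch (for illustration; not the library's `gradual.py`):

```python
def h_categorizer(arguments, attacks, iterations=100):
    """Iterate h(a) = 1 / (1 + sum of current attacker scores).

    Unattacked arguments converge to 1.0; heavily attacked arguments
    sink toward 0. Iterating from all-ones converges on finite graphs.
    """
    attackers = {a: [x for (x, y) in attacks if y == a] for a in arguments}
    h = {a: 1.0 for a in arguments}
    for _ in range(iterations):
        h = {a: 1.0 / (1.0 + sum(h[b] for b in attackers[a]))
             for a in arguments}
    return h
```

For the score_arguments example above (C attacks B, B attacks A), this converges to h(C) = 1.0, h(B) = 0.5, and h(A) = 2/3: A recovers some strength because its only attacker is itself attacked.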
License
MIT