Server Configuration

Describes the environment variables required to run the server.

No environment variables are required.

Capabilities

Features and capabilities supported by this server

Capability    Details
tools         { "listChanged": true }
prompts       { "listChanged": true }
resources     { "listChanged": true }
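The listChanged flags indicate that the server can emit MCP list-changed notifications when its tool, prompt, or resource lists change. A minimal sketch of the JSON-RPC notification messages those flags imply (method names follow the MCP specification; note that notifications carry no id):

```python
import json

def list_changed_notification(kind: str) -> str:
    """Build a list-changed notification for 'tools', 'prompts', or 'resources'."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": f"notifications/{kind}/list_changed",
    })

print(list_changed_notification("tools"))
```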

Tools

Functions exposed to the LLM to take actions

get_model

Retrieve detailed information about a specific HUMMBL mental model using its code (e.g., P1, IN3, CO5).

list_all_models

Retrieve the complete list of all 120 HUMMBL mental models with basic information.

search_models

Search HUMMBL mental models by keyword across codes, names, and definitions.

get_transformation

Retrieve information about a specific transformation type and all its models (P, IN, CO, DE, RE, SY).

search_problem_patterns

Find pre-defined problem patterns with recommended transformations and top models based on a search query.

recommend_models

Get recommended mental models based on a natural language problem description using the HUMMBL REST API.

get_related_models

Get all models related to a specific model with relationship details.

add_relationship

Add a relationship between two mental models with evidence.

get_recommendation_history

Fetch the caller's past recommendation calls (problems submitted and the model codes that were returned), newest first. Useful for 'what did we explore last time?' and for avoiding re-recommending the same models.

get_methodology

Retrieve the canonical Self-Dialectical AI Systems methodology with HUMMBL Base120 mappings.

audit_model_references

Audit a list of HUMMBL model references for existence, transformation alignment, and duplicates.

list_workflows

Get all available guided workflows for problem-solving with Base120 mental models.

start_workflow

Begin a guided multi-turn workflow for systematic problem-solving using Base120 mental models.

continue_workflow

Proceed to the next step of your guided workflow after completing the current step.

find_workflow_for_problem

Discover which workflow best fits your problem type or situation.

export_models

Export a curated subset of Base120 mental models as Markdown, JSON, or PDF. Pass codes for a specific list, transformation for a whole group, or neither for all 120.
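As an illustration, a client invokes any of these tools with a JSON-RPC tools/call request. A minimal sketch for get_model, assuming the model code is passed under an argument key named code (the actual argument name is not documented in this listing; a client should read the schema returned by tools/list):

```python
import json

# Hypothetical tools/call request for the get_model tool.
# The argument key "code" is an assumption based on the tool description.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_model",
        "arguments": {"code": "P1"},  # e.g. P1, IN3, CO5
    },
}

print(json.dumps(request, indent=2))
```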

Prompts

Interactive templates invoked by user choice

root_cause_analysis

Systematically investigate problems to find root causes, not just symptoms. Uses Perspective, Inversion, and Decomposition transformations. (~20-30 minutes)

strategy_design

Design comprehensive strategies by framing the problem, combining elements creatively, and understanding system dynamics. (~30-45 minutes)

decision_making

Make high-quality decisions by framing clearly, stress-testing with inversion, and planning reversible vs. irreversible choices. (~15-25 minutes)

analyze_with_models

Open-ended analysis: surface the most relevant Base120 mental models for a problem and synthesise them into concrete guidance.

apply_model

Apply one specific HUMMBL mental model (by code, e.g. P1, IN3, CO5) to a problem.
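A client retrieves one of these templates with a prompts/get request; in MCP, prompt arguments are passed as strings. A sketch for apply_model, where the argument names model_code and problem are hypothetical (the real schema is returned by prompts/list):

```python
import json

# Hypothetical prompts/get request for the apply_model prompt.
# The argument names "model_code" and "problem" are assumptions.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "prompts/get",
    "params": {
        "name": "apply_model",
        "arguments": {
            "model_code": "IN3",
            "problem": "Our onboarding flow loses half of new users",
        },
    },
}

print(json.dumps(request))
```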

Resources

Contextual data attached and managed by the client

all-models

Complete Base120 framework with all 120 mental models.

self-dialectical-methodology

Canonical Self-Dialectical AI Systems methodology with HUMMBL Base120 mappings.

self-dialectical-methodology-markdown

Human-readable markdown overview of the Self-Dialectical AI Systems methodology, derived from the canonical structured definition.
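Clients discover resource URIs with a resources/list request and fetch a resource with resources/read. A sketch; the URI shown is hypothetical, since this listing gives only resource names, and the real URI comes from the resources/list response:

```python
import json

# resources/list discovers available resources and their URIs;
# resources/read fetches one by URI.
list_request = {"jsonrpc": "2.0", "id": 3, "method": "resources/list"}
read_request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "resources/read",
    "params": {"uri": "hummbl://all-models"},  # hypothetical URI
}

print(json.dumps([list_request, read_request]))
```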


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/hummbl-dev/mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.