
Server Configuration

Describes the environment variables required to run the server.

Name                  Required  Description                                             Default
SYSTEMONOMIC_API_KEY  Yes       Your API key from Systemonomic (starts with 'sk_sys_')  (none)
SYSTEMONOMIC_API_URL  No        API endpoint URL (defaults to production)               https://systemonomic.com
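As a quick sanity check before launching the server, the key format described above can be validated from the environment. A minimal sketch; the variable names and the 'sk_sys_' prefix come from the table, while the validation helper itself is an assumption, not part of the server:

```python
import os

def check_config() -> str:
    """Validate the Systemonomic environment variables described above
    and return the effective API endpoint URL."""
    api_key = os.environ.get("SYSTEMONOMIC_API_KEY", "")
    if not api_key.startswith("sk_sys_"):
        raise ValueError("SYSTEMONOMIC_API_KEY must start with 'sk_sys_'")
    # SYSTEMONOMIC_API_URL is optional and falls back to production.
    return os.environ.get("SYSTEMONOMIC_API_URL", "https://systemonomic.com")
```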

Capabilities

Features and capabilities supported by this server

Capability    Details
tools         { "listChanged": false }
prompts       { "listChanged": false }
resources     { "subscribe": false, "listChanged": false }
experimental  { }

Tools

Functions exposed to the LLM to take actions

list_tasks

List all tasks in a project.

Each task has an id, name, description, mode (manual/semi-auto/auto), and links to WDA nodes.

create_task

Create a new task in a project.

Args:
    project_id: The project to add the task to
    name: Task name
    description: Optional task description
    mode: One of: manual, semi-auto, auto (default: manual)
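Over MCP, a client invokes a tool like this with a standard tools/call request. A sketch of the JSON-RPC payload using the argument names listed above; the id and project_id values are placeholders:

```python
import json

# Hypothetical tools/call request for create_task. Argument names come from
# the tool's Args list; "proj_123" and the task fields are placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_task",
        "arguments": {
            "project_id": "proj_123",
            "name": "Review exception queue",
            "description": "Manually triage items the rules engine rejected",
            "mode": "manual",  # one of: manual, semi-auto, auto
        },
    },
}
payload = json.dumps(request)
```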

generate_tasks_from_wda

Auto-generate tasks from the WDA Objects level.

Analyzes the Objects (lowest level) of the WDA and creates corresponding control tasks. This is the standard first step before running ATSS.

derive_task_suggestions

Use AI to derive detailed task suggestions from WDA objects.

More sophisticated than generate_tasks_from_wda — uses an LLM to analyze each WDA object and suggest tasks with descriptions.

Args:
    project_id: The project to analyze
    provider: LLM provider (gemini, claude, or openai; default: gemini)

list_suggestions

List all pending task suggestions for a project.

Suggestions are AI-generated task proposals that haven't been accepted yet.

accept_suggestions

Accept task suggestions, promoting them to actual project tasks.

Args:
    project_id: The project containing the suggestions
    suggestion_ids: List of suggestion IDs to accept

run_atss_batch

Run ATSS (Automated Task Suitability Scoring) on all tasks in a project.

Each task is assessed across multiple gates (data availability, rule-base, exception handling, etc.) and scored 0-100 for automation suitability.

Args:
    project_id: The project whose tasks to assess
    provider: LLM provider (gemini, claude, or openai; default: gemini)
    model: Specific model name (optional, uses provider default)

Returns scored results for each task with classification (Automate / Augment / Manual) and reasoning.

get_atss_results

Get stored ATSS results for a project.

Returns previously persisted assessment results, including scores, classifications, and reasoning for each task.

persist_atss_results

Persist ATSS assessment results to the project.

Args:
    project_id: The project to save results to
    rows: List of ATSS result objects (from run_atss_batch output)
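Together with run_atss_batch, this enables a run-then-save round trip: assess the tasks, then persist the scored rows. A sketch of the persist payload, assuming a simplified row shape; the 0-100 score and Automate / Augment / Manual classification are described above, but the exact field names are assumptions:

```python
import json

def build_persist_request(project_id: str, atss_rows: list) -> str:
    """Build a tools/call payload that saves run_atss_batch output
    via persist_atss_results (argument names from the Args list)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 2,
        "method": "tools/call",
        "params": {
            "name": "persist_atss_results",
            "arguments": {"project_id": project_id, "rows": atss_rows},
        },
    })

# Hypothetical rows as run_atss_batch might return them: a 0-100 score and
# a classification per task (field names assumed for illustration).
rows = [
    {"task_id": "t1", "score": 87, "classification": "Automate"},
    {"task_id": "t2", "score": 41, "classification": "Manual"},
]
payload = build_persist_request("proj_123", rows)
```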

list_atss_runs

List all ATSS assessment runs for a project, with timestamps and summaries.

Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client


No resources


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/TonyC23/systemonomic-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.