Glama

Server Details

Kotlin compile-time SQL library. Docs, code validation, and SQLite execution tools.

Status
Healthy
Last Tested
Transport
Streamable HTTP
URL

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

6 tools
getExoQueryDocs (Grade A)

Destructive

Access comprehensive ExoQuery documentation organized by topic and category.

ExoQuery is a Language Integrated Query library for Kotlin Multiplatform that translates Kotlin DSL expressions into SQL at compile time. This resource provides access to the complete documentation covering all aspects of the library.

AVAILABLE DOCUMENTATION CATEGORIES:

  1. Getting Started

    • Introduction: What ExoQuery is and why it exists

    • Installation: Project setup and dependencies

    • Quick Start: First query in minutes

  2. Core Concepts

    • SQL Blocks: The sql { } construct and query building

    • Parameters: Safe runtime data handling

    • Composing Queries: Functional query composition

  3. Query Operations

    • Basic Operations: Map, filter, and transformations

    • Joins: Inner, left, and implicit joins

    • Grouping: GROUP BY and HAVING clauses

    • Sorting: ORDER BY operations

    • Subqueries: Correlated and nested queries

    • Window Functions: Advanced analytics

  4. Actions

    • Insert: INSERT with returning and conflict handling

    • Update: UPDATE operations with setParams

    • Delete: DELETE with returning

    • Batch Operations: Bulk inserts and updates

  5. Advanced Features

    • SQL Fragment Functions: Reusable SQL components with @SqlFragment

    • Dynamic Queries: Runtime query generation with @SqlDynamic

    • Free Blocks: Custom SQL and user-defined functions

    • Transactions: Transaction support patterns

    • Polymorphic Queries: Interfaces, sealed classes, higher-order functions

    • Local Variables: Variables within SQL blocks

  6. Data Handling

    • Serialization: kotlinx.serialization integration

    • Custom Type Encoding: Custom encoders and decoders

    • JSON Columns: JSON and JSONB support (PostgreSQL)

    • Column Naming: @SerialName and @ExoEntity annotations

    • Nested Datatypes: Complex data structures

    • Kotlinx Integration: JSON and other serialization formats

  7. Schema-First Development

    • Entity Generation: Compile-time code generation from database schema

    • AI-Enhanced Entities: Using LLMs to generate cleaner entity code

  8. Reference

    • SQL Functions: Available string, math, and date functions

    • API Reference: Core types and function signatures

HOW TO USE THIS RESOURCE:

The resource URI follows the pattern: exoquery://docs/{file-path}

Where {file-path} is the relative path from the docs root, e.g.:

  • exoquery://docs/01-getting-started/01-introduction.md

  • exoquery://docs/03-query-operations/02-joins.md

  • exoquery://docs/05-advanced-features/01-sql-fragments.md

To discover available documents, use the MCP resources/list endpoint which will return all available documentation files with their titles, descriptions, and categories.
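The path-to-URI mapping is mechanical; as a minimal Python sketch (the `docs_uri` helper name is hypothetical, not part of the server's API):

```python
# Hypothetical helper illustrating the exoquery://docs/{file-path}
# pattern: the relative doc path is appended to the scheme prefix.
def docs_uri(file_path: str) -> str:
    return "exoquery://docs/" + file_path.lstrip("/")

print(docs_uri("01-getting-started/01-introduction.md"))
# → exoquery://docs/01-getting-started/01-introduction.md
```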

Each document includes:

  • Title and description

  • Category classification

  • Complete markdown content with code examples

  • Cross-references to related topics

WHEN TO USE:

  • User asks about ExoQuery syntax, features, or capabilities

  • User needs examples of specific query patterns

  • User encounters errors and needs to verify correct usage

  • User wants to understand advanced features or best practices

Parameters (JSON Schema)

filePath (required): The documentation file path to retrieve.

Format: relative path from docs root (e.g., "01-getting-started/01-introduction.md"). The full URI is: exoquery://docs/{file-path}

To find available file paths, use the MCP resources/list endpoint, which returns metadata for all documentation files including their paths, titles, categories, and descriptions.

Common paths:

  • Getting Started: 01-getting-started/01-introduction.md, 01-getting-started/02-installation.md, 01-getting-started/03-quick-start.md

  • Core Concepts: 02-core-concepts/01-sql-blocks.md, 02-core-concepts/02-parameters.md, 02-core-concepts/03-composing-queries.md

  • Query Operations: 03-query-operations/01-basic-operations.md, 03-query-operations/02-joins.md, 03-query-operations/03-grouping.md

  • Actions: 04-actions/01-insert.md, 04-actions/02-update.md, 04-actions/03-delete.md

  • Advanced: 05-advanced-features/01-sql-fragments.md, 05-advanced-features/02-dynamic-queries.md

  • Data Handling: 06-data-handling/03-json-columns.md, 06-data-handling/04-column-naming.md
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide hints (e.g., destructiveHint: true, openWorldHint: true), but the description adds context by explaining the URI pattern, how to discover files via resources/list, and what each document includes (e.g., markdown content, cross-references). However, it doesn't elaborate on the destructive nature or open-world implications beyond what annotations imply.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (e.g., 'AVAILABLE DOCUMENTATION CATEGORIES', 'HOW TO USE THIS RESOURCE', 'WHEN TO USE'), but it includes extensive category listings that could be considered verbose. However, each sentence serves a purpose, such as educating users about available topics.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (accessing documentation with a single parameter), the description is quite complete: it explains the library context, usage patterns, and discovery methods. However, without an output schema, it doesn't detail return values (e.g., document structure), leaving a minor gap in full contextual understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the input schema already fully documents the filePath parameter. The description adds value by providing example paths and clarifying the URI pattern, but doesn't introduce new semantic details beyond what's in the schema. Baseline 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Access comprehensive ExoQuery documentation organized by topic and category.' It specifies the verb ('access') and resource ('ExoQuery documentation'), and distinguishes it from siblings like listExoQueryDocs (which lists available docs) and getExoQueryDocsMulti (likely for multiple files).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes an explicit 'WHEN TO USE' section with four specific scenarios (e.g., 'User asks about ExoQuery syntax', 'User needs examples'), and it references the MCP resources/list endpoint for discovering available documents, providing clear alternatives and context for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getExoQueryDocsMulti (Grade A)

Destructive

Access multiple ExoQuery documentation sections simultaneously.

This tool is similar to the single-document retrieval tool but allows fetching multiple documentation files in a single request. This is particularly useful when you need to gather information from several related topics at once.

ExoQuery is a Language Integrated Query library for Kotlin Multiplatform that translates Kotlin DSL expressions into SQL at compile time. This resource provides access to the complete documentation covering all aspects of the library.

HOW TO USE THIS RESOURCE:

Provide a list of file paths, where each path is the relative path from the docs root, e.g.:

  • 01-getting-started/01-introduction.md

  • 03-query-operations/02-joins.md

  • 05-advanced-features/01-sql-fragments.md

To discover available documents, use the MCP resources/list endpoint which will return all available documentation files with their titles, descriptions, and categories.

Each returned document includes:

  • Title and description

  • Category classification

  • Complete markdown content with code examples

  • Cross-references to related topics

WHEN TO USE:

  • User asks about multiple ExoQuery topics that require information from different sections

  • User needs to compare or understand relationships between different features

  • User wants to get comprehensive information across multiple categories

  • More efficient than making multiple single-document requests
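As a sketch, the batched arguments payload can replace several single-document calls; the envelope around these arguments depends on your MCP client, and only the `filePaths`/`filePath` names come from the schemas on this page:

```python
# Illustrative only: one batched getExoQueryDocsMulti arguments payload,
# versus the two single-document getExoQueryDocs calls it replaces.
batched = {"filePaths": [
    "03-query-operations/02-joins.md",
    "03-query-operations/03-grouping.md",
]}
single_calls = [{"filePath": p} for p in batched["filePaths"]]
print(len(single_calls))
# → 2
```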

Parameters (JSON Schema)

filePaths (required): A list of documentation file paths to retrieve.

Format: list of relative paths from docs root (e.g., ["01-getting-started/01-introduction.md", "03-query-operations/02-joins.md"]). Each path follows the pattern used in single-document retrieval: {category-folder}/{file-name}.md

To find available file paths, use the MCP resources/list endpoint, which returns metadata for all documentation files including their paths, titles, categories, and descriptions.

Common paths:

  • Getting Started: 01-getting-started/01-introduction.md, 01-getting-started/02-installation.md, 01-getting-started/03-quick-start.md

  • Core Concepts: 02-core-concepts/01-sql-blocks.md, 02-core-concepts/02-parameters.md, 02-core-concepts/03-composing-queries.md

  • Query Operations: 03-query-operations/01-basic-operations.md, 03-query-operations/02-joins.md, 03-query-operations/03-grouping.md

  • Actions: 04-actions/01-insert.md, 04-actions/02-update.md, 04-actions/03-delete.md

  • Advanced: 05-advanced-features/01-sql-fragments.md, 05-advanced-features/02-dynamic-queries.md

  • Data Handling: 06-data-handling/03-json-columns.md, 06-data-handling/04-column-naming.md
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=false, destructiveHint=true, openWorldHint=true, and idempotentHint=false. The description adds useful context about what gets returned ('Each returned document includes: Title and description, Category classification, Complete markdown content with code examples, Cross-references to related topics'), but doesn't explain the destructive nature implied by annotations or mention rate limits. It provides some behavioral context beyond annotations but not comprehensive coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (purpose, usage instructions, when-to-use) and front-loads the core functionality. While comprehensive, some sentences could be more concise (e.g., the ExoQuery definition paragraph could be tighter). Most content earns its place by providing necessary context and guidance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (batch retrieval with one parameter), no output schema, and rich annotations, the description provides good completeness. It covers purpose, usage scenarios, parameter guidance, and return format. The main gap is lack of output schema documentation, but the description compensates by detailing what's returned. For a read-focused tool with good annotations, this is sufficiently complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the filePaths parameter thoroughly documented in the schema including format examples, path patterns, and common paths. The description adds minimal additional parameter semantics beyond what's in the schema, mainly repeating the format example and discovery method. With high schema coverage, baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Access multiple ExoQuery documentation sections simultaneously' and 'allows fetching multiple documentation files in a single request.' It explicitly distinguishes from the sibling tool getExoQueryDocs by mentioning 'similar to the single-document retrieval tool but allows fetching multiple...' This provides specific verb+resource differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes a dedicated 'WHEN TO USE' section with four explicit scenarios: when users ask about multiple topics, need to compare features, want comprehensive information across categories, or seek efficiency over multiple requests. It also references the sibling listExoQueryDocs tool for discovery ('To discover available documents, use the MCP resources/list endpoint').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

listExoQueryDocs (Grade B)

Destructive

Lists all available ExoQuery documentation resources with their metadata

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate destructiveHint=true, but the description doesn't explain what gets destroyed or any destructive behavior, missing critical context. It also doesn't add other behavioral traits like rate limits or auth needs. The description doesn't contradict annotations, but fails to provide necessary behavioral details beyond them.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without any fluff or redundant information. It's front-loaded and appropriately sized for a simple listing tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no output schema) and annotations covering some behavioral aspects, the description is minimally adequate. However, it lacks details on destructive behavior (as per annotations) and doesn't explain the output format or metadata structure, leaving gaps in completeness for a tool that lists resources.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the input schema fully documents the lack of parameters. The description doesn't need to add parameter details, and it appropriately doesn't mention any, earning a baseline score for tools with no parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Lists') and resource ('all available ExoQuery documentation resources with their metadata'), making the purpose understandable. It doesn't explicitly differentiate from sibling tools like 'getExoQueryDocs' or 'getExoQueryDocsMulti', which likely retrieve specific documents rather than listing all available resources, so it misses full sibling differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'getExoQueryDocs' or 'validateExoquery', nor does it specify any context or prerequisites for usage, leaving the agent without direction on tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

runRawSql (Grade A)

Destructive

Execute raw, client-provided SQL queries against an ephemeral database initialized with the provided schema. Returns query results in a simple JSON format with column headers and row data as a 2D array.

The database type (SQLite or Postgres) is specified via the databaseType parameter:

  • SQLITE: In-memory, lightweight, uses standard SQLite syntax

  • POSTGRES: Temporary isolated schema with dedicated user, uses PostgreSQL syntax and features

WHEN TO USE: When you need to run your own hand-written SQL queries to test database behavior or compare the output with ExoQuery results from validateAndRunExoquery. This lets you verify that ExoQuery-generated SQL produces the same results as your expected SQL.

INPUT REQUIREMENTS:

  • query: A valid SQL query (SELECT, INSERT, UPDATE, DELETE, etc.)

  • schema: SQL schema with CREATE TABLE and INSERT statements to initialize the test database

  • databaseType: Either "SQLITE" or "POSTGRES" (defaults to SQLITE if not specified)

OUTPUT FORMAT:

On success, returns JSON with the SQL query and a 2D array of results: {"sql":"SELECT * FROM users ORDER BY id","output":[["id","name","age"],["1","Alice","30"],["2","Bob","25"],["3","Charlie","35"]]}

Output format details:

  • First array element contains column headers

  • Subsequent array elements contain row data

  • All values are returned as strings
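Since every value comes back as a string in a header-first 2D array, a consumer typically re-keys the rows. A small Python sketch, assuming the success payload format shown above (the `rows_to_dicts` helper name is hypothetical):

```python
import json

# Sketch: convert the success payload (first element = column headers,
# remaining elements = row values, everything stringly-typed) into dicts.
def rows_to_dicts(payload: str) -> list[dict]:
    result = json.loads(payload)
    headers, *rows = result["output"]
    return [dict(zip(headers, row)) for row in rows]

payload = '{"sql":"SELECT * FROM users","output":[["id","name"],["1","Alice"],["2","Bob"]]}'
print(rows_to_dicts(payload))
# → [{'id': '1', 'name': 'Alice'}, {'id': '2', 'name': 'Bob'}]
```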

On error, returns JSON with error message and the attempted query (if available): {"error":"Query execution failed: no such table: USERS","sql":"SELECT * FROM USERS"}

Or if schema initialization fails: {"error":"Database initialization failed due to: near \"CREAT\": syntax error\nWhen executing the following statement:\n--------\nCREAT TABLE users ...\n--------","sql":"CREAT TABLE users ..."}

EXAMPLE INPUT:

Query: SELECT * FROM users ORDER BY id

Schema: CREATE TABLE users ( id INTEGER PRIMARY KEY, name TEXT NOT NULL, age INTEGER );

INSERT INTO users (id, name, age) VALUES (1, 'Alice', 30); INSERT INTO users (id, name, age) VALUES (2, 'Bob', 25); INSERT INTO users (id, name, age) VALUES (3, 'Charlie', 35);

EXAMPLE SUCCESS OUTPUT: {"sql":"SELECT * FROM users ORDER BY id","output":[["id","name","age"],["1","Alice","30"],["2","Bob","25"],["3","Charlie","35"]]}

EXAMPLE ERROR OUTPUT (bad table name): {"error":"Query execution failed: no such table: invalid_table","sql":"SELECT * FROM invalid_table"}

EXAMPLE ERROR OUTPUT (bad schema): {"error":"Database initialization failed due to: near \"CREAT\": syntax error\nWhen executing the following statement:\n--------\nCREAT TABLE users (id INTEGER)\n--------\nCheck that the initialization SQL is valid and compatible with SQLite.","sql":"CREAT TABLE users (id INTEGER)"}

COMMON QUERY EXAMPLES:

Select all rows: SELECT * FROM users

Select specific columns with filtering: SELECT name, age FROM users WHERE age > 25

Aggregate functions: SELECT COUNT(*) as total FROM users

Join queries: SELECT u.name, o.total FROM users u JOIN orders o ON u.id = o.user_id

Insert data: INSERT INTO users (name, age) VALUES ('David', 40)

Update data: UPDATE users SET age = 31 WHERE name = 'Alice'

Delete data: DELETE FROM users WHERE age < 25

Count with grouping: SELECT age, COUNT(*) as count FROM users GROUP BY age
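The SQLITE mode described above can be approximated locally with Python's sqlite3 module: an in-memory database, the schema applied, the query executed, and results flattened to strings. This is a sketch of the documented behavior, not the server's actual implementation:

```python
import sqlite3

# Local approximation of the tool's SQLITE mode: ephemeral in-memory
# database initialized from the schema, query run, values stringified,
# with column headers as the first row.
def run_raw_sql(schema: str, query: str) -> list[list[str]]:
    conn = sqlite3.connect(":memory:")
    conn.executescript(schema)
    cur = conn.execute(query)
    headers = [d[0] for d in cur.description]
    rows = [[str(v) for v in row] for row in cur.fetchall()]
    conn.close()
    return [headers] + rows

schema = """
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL, age INTEGER);
INSERT INTO users (id, name, age) VALUES (1, 'Alice', 30), (2, 'Bob', 25);
"""
print(run_raw_sql(schema, "SELECT name, age FROM users WHERE age > 25"))
# → [['name', 'age'], ['Alice', '30']]
```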

SCHEMA RULES:

  • Use standard SQLite syntax

  • Table names are case-sensitive (use lowercase for simplicity or quote names)

  • Include INSERT statements to populate test data for meaningful results

  • Supported data types: INTEGER, TEXT, REAL, BLOB, NULL

  • Use INTEGER PRIMARY KEY for auto-increment columns

  • Schema SQL is split on semicolons (;), so each statement after a ';' is executed separately

  • Avoid semicolons in comments as they will cause statement parsing issues
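The semicolon-splitting rule is why a ';' inside a comment breaks parsing; a naive split, as a sketch of the described behavior (the server's actual parser may differ):

```python
# Naive statement splitting as described above: the schema text is cut
# at every ';', so a ';' inside a comment would truncate a statement.
schema = (
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);"
    "INSERT INTO users (id, name) VALUES (1, 'Alice');"
)
statements = [s.strip() for s in schema.split(";") if s.strip()]
print(len(statements))
# → 2
```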

COMPARISON WITH EXOQUERY: This tool is designed to work alongside validateAndRunExoquery for comparison purposes:

  1. Use validateAndRunExoquery to run ExoQuery Kotlin code and see the generated SQL + results

  2. Use runRawSql with your own hand-written SQL to verify you get the same output

  3. Compare the outputs to ensure ExoQuery generates the SQL you expect

  4. Test edge cases with plain SQL before writing equivalent ExoQuery code

Parameters (JSON Schema)

query (required): A valid SQL query to execute against the database. Can be any valid SQL statement (syntax depends on the databaseType parameter):

  • SELECT queries (with WHERE, JOIN, GROUP BY, ORDER BY, LIMIT, etc.)

  • INSERT statements

  • UPDATE statements

  • DELETE statements

  • DDL statements like CREATE/ALTER/DROP (applied after schema initialization)

The query will be executed against a database initialized with the provided schema parameter. Example: SELECT * FROM users WHERE age > 25 ORDER BY name

schema (required): SQL schema to initialize the ephemeral test database. Must include:

  1. CREATE TABLE statements for all tables used in the query

  2. INSERT statements with test data

Use syntax appropriate for the selected databaseType (SQLite or Postgres). Table names are case-sensitive. The schema is split on semicolons, so each statement is executed separately. Example:

CREATE TABLE users ( id INTEGER PRIMARY KEY, name TEXT NOT NULL, age INTEGER );
INSERT INTO users (id, name, age) VALUES (1, 'Alice', 30);
INSERT INTO users (id, name, age) VALUES (2, 'Bob', 25);
INSERT INTO users (id, name, age) VALUES (3, 'Charlie', 35);
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond annotations. While annotations indicate destructiveHint=true and openWorldHint=true, the description elaborates on database ephemerality, initialization process, output format details, error handling, schema rules, and comparison workflow. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is comprehensive but overly long with redundant sections. While well-structured with clear headings, it includes multiple example outputs and common query examples that could be condensed. The core information is front-loaded, but subsequent sections contain repetition that doesn't earn its place efficiently.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (destructive SQL execution with schema initialization) and lack of output schema, the description provides complete context. It covers purpose, usage, parameters, output format, error handling, examples, schema rules, and comparison with sibling tools. No significant gaps exist for agent understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds value by explaining parameter semantics in the 'INPUT REQUIREMENTS' section, providing databaseType specifics (SQLite vs Postgres differences), and giving extensive examples that illustrate how parameters work together. However, it doesn't add syntax details beyond what's in the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Execute raw, client-provided SQL queries against an ephemeral database initialized with the provided schema.' It specifies the verb ('execute'), resource ('SQL queries'), and distinguishes from sibling tools by mentioning comparison with validateAndRunExoquery.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes an explicit 'WHEN TO USE' section that states: 'When you need to run your own hand-written SQL queries to test database behavior or compare the output with ExoQuery results from validateAndRunExoquery.' It also provides a detailed comparison section explaining how to use this tool alongside validateAndRunExoquery for verification purposes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

validateAndRunExoquery (Grade A)

Destructive

Compile ExoQuery Kotlin code and EXECUTE it against a SQLite database initialized with the provided schema. ExoQuery is a compile-time SQL query builder that translates Kotlin DSL expressions into SQL.

WHEN TO USE: When you need to verify ExoQuery produces correct results against actual data.

INPUT REQUIREMENTS:

  • Complete Kotlin code (same requirements as validateExoquery)

  • SQL schema with CREATE TABLE and INSERT statements for test data

  • Data classes MUST exactly match the schema table structure

  • Column names in data classes must match schema (use @SerialName for snake_case columns)

  • Must include one or more .runSample() calls in main() to trigger SQL generation and execution (note that .runSample() is NOT for real production use; use .runOn(database) instead)

OUTPUT FORMAT:

Returns one or more JSON objects, each on its own line. Each object can be:

  1. SQL with output (query executed successfully): {"sql": "SELECT u.name FROM \"User\" u", "output": "[(name=Alice), (name=Bob)]"}

  2. Output only (e.g., print statements, intermediate results): {"output": "Before: [(id=1, title=Ion Blend Beans)]"}

  3. Error output (runtime errors, exceptions): {"outputErr": "java.sql.SQLException: Table \"USERS\" not found"}

Multiple results appear when code has multiple queries or print statements:

{"sql": "SELECT * FROM \"InventoryItem\"", "output": "[(id=1, title=Ion Blend Beans, unit_price=32.00, in_stock=25)]"}
{"output": "Before:"}
{"sql": "INSERT INTO \"InventoryItem\" (title, unit_price, in_stock) VALUES (?, ?, ?)", "output": "Rows affected: 1"}
{"output": "After:"}
{"sql": "SELECT * FROM \"InventoryItem\"", "output": "[(id=1, title=Ion Blend Beans, unit_price=32.00, in_stock=25), (id=2, title=Luna Fuel Flask, unit_price=89.50, in_stock=6)]"}

Compilation errors return the same format as validateExoquery: { "errors": { "File.kt": [ { "interval": {"start": {"line": 12, "ch": 10}, "end": {"line": 12, "ch": 15}}, "message": "Type mismatch: inferred type is String but Int was expected", "severity": "ERROR", "className": "ERROR" } ] } }

Runtime errors can have the following format: { "errors" : { "File.kt" : [ ] }, "exception" : { "message" : "[SQLITE_ERROR] SQL error or missing database (no such table: User)", "fullName" : "org.sqlite.SQLiteException", "stackTrace" : [ { "className" : "org.sqlite.core.DB", "methodName" : "newSQLException", "fileName" : "DB.java", "lineNumber" : 1179 }, ...] }, "text" : "\n{\"sql\": \"SELECT x.id, x.name, x.age FROM User x\"}\n\n" } If a SQL query was generated before the error, it will appear in the "text" field of the output stream.
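Because successful results arrive as one JSON object per line, mixed with plain output objects, a consumer can filter out just the generated SQL. A Python sketch under that assumption (the `extract_sql` helper name is hypothetical):

```python
import json

# Sketch: each line of the tool's output is an independent JSON object;
# keep only the objects that carry a generated SQL statement.
def extract_sql(output: str) -> list[str]:
    objs = [json.loads(line) for line in output.splitlines() if line.strip()]
    return [o["sql"] for o in objs if "sql" in o]

raw = '{"sql": "SELECT * FROM User", "output": "[(id=1)]"}\n{"output": "done"}'
print(extract_sql(raw))
# → ['SELECT * FROM User']
```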

EXAMPLE INPUT CODE:

import io.exoquery.*
import kotlinx.serialization.Serializable
import kotlinx.serialization.SerialName

@Serializable
data class User(val id: Int, val name: String, val age: Int)

@Serializable
data class Order(val id: Int, @SerialName("user_id") val userId: Int, val total: Int)

val userOrders = sql.select {
    val u = from(Table<User>())
    val o = join(Table<Order>()) { o -> o.userId == u.id }
    Triple(u.name, o.total, u.age)
}

fun main() = userOrders.buildPrettyFor.Sqlite().runSample()

EXAMPLE INPUT SCHEMA:

CREATE TABLE "User" (id INT, name VARCHAR(100), age INT);
CREATE TABLE "Order" (id INT, user_id INT, total INT);

INSERT INTO "User" (id, name, age) VALUES
  (1, 'Alice', 30),
  (2, 'Bob', 25);

INSERT INTO "Order" (id, user_id, total) VALUES
  (1, 1, 100),
  (2, 1, 200),
  (3, 2, 150);

EXAMPLE SUCCESS OUTPUT: {"sql": "SELECT u.name AS first, o.total AS second, u.age AS third FROM \"User\" u INNER JOIN \"Order\" o ON o.user_id = u.id", "output": "[(first=Alice, second=100, third=30), (first=Alice, second=200, third=30), (first=Bob, second=150, third=25)]"}

EXAMPLE WITH MULTIPLE OPERATIONS (insert with before/after check): {"output": "Before:"} {"sql": "SELECT * FROM \"InventoryItem\"", "output": "[(id=1, title=Ion Blend Beans)]"} {"sql": "INSERT INTO \"InventoryItem\" (title, unit_price, in_stock) VALUES (?, ?, ?)", "output": ""} {"output": "After:"} {"sql": "SELECT * FROM \"InventoryItem\"", "output": "[(id=1, title=Ion Blend Beans), (id=2, title=Luna Fuel Flask)]"}

EXAMPLE RUNTIME ERROR (if a user divided by zero): {"outputErr": "Exception in thread \"main\" java.lang.ArithmeticException: / by zero"}

KEY PATTERNS:

(See validateExoquery for complete pattern reference)

Summary of most common patterns:

  • Filter: sql { Table().filter { x -> x.field == value } }

  • Select: sql.select { val x = from(Table()); where { ... }; x }

  • Join: sql.select { val a = from(Table()); val b = join(Table()) { b -> b.aId == a.id }; Pair(a, b) }

  • Left join: joinLeft(Table()) { ... } returns nullable

  • Insert: sql { insert { setParams(obj).excluding(id) } }

  • Update: sql { update().set { it.field to value }.where { it.id == x } }

  • Delete: sql { delete().where { it.id == x } }
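The patterns above can be assembled into one complete, submittable program. A minimal sketch, reusing the User entity from the example above (the exact DSL surface may vary by ExoQuery version):

```kotlin
import io.exoquery.*
import kotlinx.serialization.Serializable

@Serializable
data class User(val id: Int, val name: String, val age: Int)

// Select pattern with a where clause: filter rows, project one column.
val adults = sql.select {
    val u = from(Table<User>())
    where { u.age >= 18 }
    u.name
}

// runSample() triggers SQL generation and execution against the test database.
fun main() = adults.buildPrettyFor.Sqlite().runSample()
```

Submitted together with the "User" schema shown earlier, this should emit one JSON object containing the generated SELECT and the matching rows.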

SCHEMA RULES:

  • Table names should match data class names (case-sensitive, use quotes for exact match)

  • Column names must match @SerialName values or property names

  • Include realistic test data to verify query logic

  • SQLite database syntax (mostly compatible with standard SQL)

COMMON PATTERNS:

  • JSON columns: Use VARCHAR for storage, @SqlJsonValue on the nested data class

  • Auto-increment IDs: Use INTEGER PRIMARY KEY

  • Nullable columns: Use Type? in Kotlin, allow NULL in schema
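The three patterns above can be combined in a single schema fragment. A sketch using a hypothetical "Product" table (name and columns are illustrative only):

```sql
-- Quoted name preserves case; INTEGER PRIMARY KEY auto-increments in SQLite;
-- the NULL-able column pairs with a Kotlin `String?` property.
CREATE TABLE "Product" (
  id INTEGER PRIMARY KEY,
  title VARCHAR(100),
  notes VARCHAR(255) NULL
);

INSERT INTO "Product" (title, notes) VALUES
  ('Widget', NULL),
  ('Gadget', 'fragile');
```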

Parameters (JSON Schema)

code (required): Complete ExoQuery Kotlin code to compile and execute. Must include: 1. Imports (minimum: io.exoquery.*, kotlinx.serialization.Serializable) 2. @Serializable data classes that EXACTLY match your schema tables 3. The query expression 4. A main() function ending with .buildFor.<Dialect>().runSample() This function MUST be present to trigger SQL generation and execution. Use @SerialName("column_name") when Kotlin property names differ from SQL column names. Use @Contextual for BigDecimal fields. Use @SqlJsonValue on data classes that represent JSON column values. Multiple queries in main() will produce multiple output JSON objects.

schema (required): SQL schema to initialize the SQLite test database. Must include: 1. CREATE TABLE statements for all tables referenced in the query 2. INSERT statements with test data to verify query behavior Table and column names must exactly match the data classes in the code. Use double quotes around table names to preserve case: CREATE TABLE "User" (...) Common error: Table "USER" not found means you wrote CREATE TABLE User but queried "User". Always quote table names in the schema to match ExoQuery's generated SQL. Example: CREATE TABLE "User" (id INT, name VARCHAR(100), age INT); INSERT INTO "User" VALUES (1, 'Alice', 30), (2, 'Bob', 25);

databaseType (optional, default SQLITE): Database type: SQLITE or POSTGRES.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations. Annotations indicate destructiveHint=true and readOnlyHint=false, but the description elaborates on execution specifics (e.g., requires .runSample() calls, outputs JSON objects with SQL/results/errors, handles compilation/runtime errors). It doesn't contradict annotations—destructiveHint=true aligns with executing code that may modify data. However, it lacks details on rate limits or authentication needs, keeping it from a perfect score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is lengthy and includes extensive examples, patterns, and rules that could be condensed. While well-structured with sections like 'WHEN TO USE' and 'INPUT REQUIREMENTS', it contains redundant details (e.g., repeating schema rules in multiple places) and overly verbose examples, reducing efficiency. Some sentences don't earn their place in a concise tool description.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (executing code against a database with destructive potential) and rich annotations, the description is highly complete. It covers purpose, usage, input requirements, output formats, examples, common patterns, and schema rules. Although there's no output schema, the description thoroughly explains return values and error handling, making it sufficient for an agent to use the tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds meaningful context: it explains that 'code' must include specific imports, data classes, and .runSample() calls, and 'schema' must match table/column names exactly. It also mentions 'databaseType' as optional with a default. This provides practical guidance beyond the schema's technical descriptions, though it doesn't fully detail all edge cases.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Compile ExoQuery Kotlin code and EXECUTE it against an Sqlite database with provided schema.' It specifies the verb (compile and execute), resource (ExoQuery Kotlin code), and target (Sqlite database), and distinguishes it from sibling tools like validateExoquery by emphasizing execution against actual data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states 'WHEN TO USE: When you need to verify ExoQuery produces correct results against actual data.' It also implies alternatives by referencing sibling tools like validateExoquery for compilation-only validation and runRawSql for raw SQL execution, providing clear context for tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

validateExoquery
Destructive
Inspect

Compile ExoQuery Kotlin code and EXECUTE it against a SQLite database with the provided schema. ExoQuery is a compile-time SQL query builder that translates Kotlin DSL expressions into SQL.

WHEN TO USE: When you need to verify ExoQuery produces correct results against actual data.

INPUT REQUIREMENTS:

  • Complete Kotlin code (same requirements as validateExoquery)

  • SQL schema with CREATE TABLE and INSERT statements for test data

  • Data classes MUST exactly match the schema table structure

  • Column names in data classes must match schema (use @SerialName for snake_case columns)

  • Must include one or more .runSample() calls in main() to trigger SQL generation and execution (note that .runSample() is NOT for real production use; use .runOn(database) instead)

OUTPUT FORMAT:

Returns one or more JSON objects, each on its own line. Each object can be:

  1. SQL with output (query executed successfully): {"sql": "SELECT u.name FROM \"User\" u", "output": "[(name=Alice), (name=Bob)]"}

  2. Output only (e.g., print statements, intermediate results): {"output": "Before: [(id=1, title=Ion Blend Beans)]"}

  3. Error output (runtime errors, exceptions): {"outputErr": "java.sql.SQLException: Table \"USERS\" not found"}

Multiple results appear when code has multiple queries or print statements:

{"sql": "SELECT * FROM \"InventoryItem\"", "output": "[(id=1, title=Ion Blend Beans, unit_price=32.00, in_stock=25)]"} {"output": "Before:"} {"sql": "INSERT INTO \"InventoryItem\" (title, unit_price, in_stock) VALUES (?, ?, ?)", "output": "Rows affected: 1"} {"output": "After:"} {"sql": "SELECT * FROM \"InventoryItem\"", "output": "[(id=1, title=Ion Blend Beans, unit_price=32.00, in_stock=25), (id=2, title=Luna Fuel Flask, unit_price=89.50, in_stock=6)]"}

Compilation errors return the same format as validateExoquery: { "errors": { "File.kt": [ { "interval": {"start": {"line": 12, "ch": 10}, "end": {"line": 12, "ch": 15}}, "message": "Type mismatch: inferred type is String but Int was expected", "severity": "ERROR", "className": "ERROR" } ] } }

Runtime errors can have the following format: { "errors" : { "File.kt" : [ ] }, "exception" : { "message" : "[SQLITE_ERROR] SQL error or missing database (no such table: User)", "fullName" : "org.sqlite.SQLiteException", "stackTrace" : [ { "className" : "org.sqlite.core.DB", "methodName" : "newSQLException", "fileName" : "DB.java", "lineNumber" : 1179 }, ...] }, "text" : "\n{\"sql\": \"SELECT x.id, x.name, x.age FROM User x\"}\n\n" } If a SQL query was generated before the error, it appears in the "text" field of the output stream.

EXAMPLE INPUT CODE:

import io.exoquery.*
import kotlinx.serialization.Serializable
import kotlinx.serialization.SerialName

@Serializable
data class User(val id: Int, val name: String, val age: Int)

@Serializable
data class Order(val id: Int, @SerialName("user_id") val userId: Int, val total: Int)

val userOrders = sql.select {
    val u = from(Table<User>())
    val o = join(Table<Order>()) { o -> o.userId == u.id }
    Triple(u.name, o.total, u.age)
}

fun main() = userOrders.buildPrettyFor.Sqlite().runSample()

EXAMPLE INPUT SCHEMA:

CREATE TABLE "User" (id INT, name VARCHAR(100), age INT);
CREATE TABLE "Order" (id INT, user_id INT, total INT);

INSERT INTO "User" (id, name, age) VALUES
  (1, 'Alice', 30),
  (2, 'Bob', 25);

INSERT INTO "Order" (id, user_id, total) VALUES
  (1, 1, 100),
  (2, 1, 200),
  (3, 2, 150);

EXAMPLE SUCCESS OUTPUT: {"sql": "SELECT u.name AS first, o.total AS second, u.age AS third FROM \"User\" u INNER JOIN \"Order\" o ON o.user_id = u.id", "output": "[(first=Alice, second=100, third=30), (first=Alice, second=200, third=30), (first=Bob, second=150, third=25)]"}

EXAMPLE WITH MULTIPLE OPERATIONS (insert with before/after check): {"output": "Before:"} {"sql": "SELECT * FROM \"InventoryItem\"", "output": "[(id=1, title=Ion Blend Beans)]"} {"sql": "INSERT INTO \"InventoryItem\" (title, unit_price, in_stock) VALUES (?, ?, ?)", "output": ""} {"output": "After:"} {"sql": "SELECT * FROM \"InventoryItem\"", "output": "[(id=1, title=Ion Blend Beans), (id=2, title=Luna Fuel Flask)]"}

EXAMPLE RUNTIME ERROR (if a user divided by zero): {"outputErr": "Exception in thread \"main\" java.lang.ArithmeticException: / by zero"}
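Since every result line is a standalone JSON object with optional "sql", "output", and "outputErr" fields, a caller can split and decode the stream line by line. A hedged Kotlin sketch using kotlinx.serialization (parseToolOutput and firstError are hypothetical helpers, not part of the tool's API):

```kotlin
import kotlinx.serialization.json.*

// Each non-blank line of the tool's output is one JSON object that may
// carry "sql", "output", or "outputErr" keys.
fun parseToolOutput(raw: String): List<JsonObject> =
    raw.lineSequence()
        .filter { it.isNotBlank() }
        .map { Json.parseToJsonElement(it).jsonObject }
        .toList()

// Usage: surface any runtime failure first, otherwise inspect results.
fun firstError(objects: List<JsonObject>): String? =
    objects.firstNotNullOfOrNull { it["outputErr"]?.jsonPrimitive?.content }
```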

KEY PATTERNS:

(See validateExoquery for complete pattern reference)

Summary of most common patterns:

  • Filter: sql { Table().filter { x -> x.field == value } }

  • Select: sql.select { val x = from(Table()); where { ... }; x }

  • Join: sql.select { val a = from(Table()); val b = join(Table()) { b -> b.aId == a.id }; Pair(a, b) }

  • Left join: joinLeft(Table()) { ... } returns nullable

  • Insert: sql { insert { setParams(obj).excluding(id) } }

  • Update: sql { update().set { it.field to value }.where { it.id == x } }

  • Delete: sql { delete().where { it.id == x } }
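The write patterns above can likewise be combined in one main(); each statement produces its own output JSON object. A sketch only: the update<User>()/delete<User>() type arguments are an assumption, and the exact receivers may differ across ExoQuery versions:

```kotlin
import io.exoquery.*
import kotlinx.serialization.Serializable

@Serializable
data class User(val id: Int, val name: String, val age: Int)

// Update pattern: set a field, scoped by a where clause.
val bumpAge = sql { update<User>().set { it.age to 31 }.where { it.id == 1 } }

// Delete pattern: remove rows matching the where clause.
val removeBob = sql { delete<User>().where { it.id == 2 } }

fun main() {
    bumpAge.buildFor.Sqlite().runSample()
    removeBob.buildFor.Sqlite().runSample()
}
```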

SCHEMA RULES:

  • Table names should match data class names (case-sensitive, use quotes for exact match)

  • Column names must match @SerialName values or property names

  • Include realistic test data to verify query logic

  • SQLite database syntax (mostly compatible with standard SQL)

COMMON PATTERNS:

  • JSON columns: Use VARCHAR for storage, @SqlJsonValue on the nested data class

  • Auto-increment IDs: Use INTEGER PRIMARY KEY

  • Nullable columns: Use Type? in Kotlin, allow NULL in schema
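Putting the patterns together, a hypothetical Person entity with a nullable, snake_case column might pair with its schema like this (names are illustrative):

```kotlin
import kotlinx.serialization.Serializable
import kotlinx.serialization.SerialName

// Matching schema (illustrative):
//   CREATE TABLE "Person" (id INTEGER PRIMARY KEY, name VARCHAR(100), middle_name VARCHAR(100) NULL);
@Serializable
data class Person(
    val id: Int,                                       // INTEGER PRIMARY KEY auto-increments in SQLite
    val name: String,                                  // non-null column maps to a non-null property
    @SerialName("middle_name") val middleName: String? // NULL-able column maps to a Kotlin `String?`
)
```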

Parameters (JSON Schema)

code (required): Complete ExoQuery Kotlin code to compile. Must include: 1. Imports (minimum: io.exoquery.*, kotlinx.serialization.Serializable) 2. @Serializable data classes matching your query entities 3. The query expression using sql { ... } or sql.select { ... } 4. A main() function ending with .buildFor.<Dialect>().runSample() or .buildPrettyFor.<Dialect>().runSample() This function MUST be present to trigger SQL generation. The runSample() function triggers SQL generation but does NOT execute the query for validateExoquery. (Note that this is NOT for production ExoQuery usage; for that, use `.runOn(database)`.) Dialect is part of the code (e.g., .buildFor.Postgres()), NOT a separate parameter. If compilation fails, check the error interval positions to locate the exact issue in your code.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond annotations. Annotations indicate destructiveHint=true and readOnlyHint=false, but the description elaborates by detailing the execution environment (Sqlite database), output formats (JSON objects for SQL results, errors), and constraints like requiring .runSample() calls. It does not contradict annotations, as 'execute' aligns with destructiveHint=true.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is lengthy but well-structured with sections like 'WHEN TO USE,' 'INPUT REQUIREMENTS,' and 'OUTPUT FORMAT.' However, it includes redundant details (e.g., repeating 'validateExoquery' references) and extensive examples that could be condensed. It is front-loaded with key information but could be more concise overall.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (compilation and execution with database interaction), the description is highly complete. It covers purpose, usage guidelines, input requirements, output formats, examples, common patterns, and schema rules. Although there is no output schema, the description thoroughly explains return values and error handling, compensating adequately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the baseline is 3. The description adds minimal parameter semantics beyond the schema, such as noting that 'Data classes MUST exactly match the schema table structure' and 'Column names in data classes must match schema,' but these are already implied by the schema's requirements for code compilation and execution.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Compile ExoQuery Kotlin code and EXECUTE it against an Sqlite database with provided schema.' It specifies the verb (compile and execute), resource (ExoQuery Kotlin code), and distinguishes it from sibling tools like 'validateAndRunExoquery' by emphasizing execution against a database.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states 'WHEN TO USE: When you need to verify ExoQuery produces correct results against actual data.' It differentiates from production usage by noting that '.runSample() is NOT for real production use, use .runOn(database) instead,' and implies an alternative with sibling tools like 'runRawSql' for raw SQL execution.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

Resources