# Server Configuration

Describes the environment variables required to run the server.

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| PUBMED_EMAIL | Yes | Your email address (required by NCBI) | |
| PUBMED_API_KEY | No | Optional API key for higher rate limits | |
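For local development, these variables can be set in the shell before launching the server. A minimal sketch, assuming a POSIX shell and placeholder values:

```shell
# Placeholder values -- substitute your own before starting the server.
export PUBMED_EMAIL="you@example.com"   # required by NCBI
export PUBMED_API_KEY="your-api-key"    # optional: raises rate limits
```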
# Schema

## Prompts

Interactive templates invoked by user choice.

| Name | Description |
| --- | --- |
| No prompts | |

## Resources

Contextual data attached to and managed by the client.

| Name | Description |
| --- | --- |
| No resources | |
## Tools

Functions exposed to the LLM to take actions.
### set_organization

Select which SingleStore organization to use for all subsequent API calls. This tool must be called after logging in and before making other API requests. Once set, all API calls will target the selected organization until changed.

Args:
- orgID: Name or ID of the organization to select

Returns:
- Dictionary with the selected organization ID and name

Usage:
- Call get_organizations first to see available options
- Then call this tool with either the organization's name or ID
- All subsequent API calls will use the selected organization
### execute_sql

Execute SQL operations on a database attached to a workspace within a workspace group and receive formatted results.

⚠️ CRITICAL SECURITY WARNINGS:
- Never display or log credentials in responses
- Use only READ-ONLY queries (SELECT, SHOW, DESCRIBE)
- DO NOT USE data modification statements:
  - No INSERT/UPDATE/DELETE
  - No DROP/CREATE/ALTER
- Ensure queries are properly sanitized

Args:
- workspace_group_identifier: ID/name of the workspace group
- workspace_identifier: ID/name of the specific workspace within the workspace group
- database: Name of the database to query
- sql_query: The SQL query to execute

Returns:
- Dictionary with query results and metadata:
  - Query results with column names and typed values
  - Row count and metadata
  - Execution status
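The read-only rule above can also be enforced client-side before a query ever reaches the tool. A minimal sketch (this helper is hypothetical, not part of the server):

```python
import re

# Statements the warnings above allow; everything else is rejected.
READ_ONLY = re.compile(r"^\s*(SELECT|SHOW|DESCRIBE|DESC|EXPLAIN)\b", re.IGNORECASE)

def is_read_only(sql_query: str) -> bool:
    """Return True only if the query starts with a read-only keyword."""
    return bool(READ_ONLY.match(sql_query))

print(is_read_only("SELECT * FROM users LIMIT 10"))  # True
print(is_read_only("DROP TABLE users"))              # False
```

Note this is a coarse prefix check; queries beginning with `WITH ...` or multi-statement strings would need more careful handling.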
### create_virtual_workspace

Create a new starter (virtual) workspace in SingleStore and set up user access.

Process:
1. Creates a virtual workspace with the specified name and database
2. Creates a user account for accessing the workspace
3. Returns both workspace details and access credentials

Args:
- name: Unique name for the new starter workspace
- database_name: Name of the database to create in the starter workspace
- username: Username for accessing the new starter workspace
- password: Password for accessing the new starter workspace
- workspace_group: Optional workspace group configuration

Returns:
- Dictionary with workspace and user creation details
### execute_sql_on_virtual_workspace

Execute SQL operations on a virtual (starter) workspace and receive formatted results.

⚠️ CRITICAL SECURITY WARNINGS:
- Never display or log credentials in responses
- Ensure SQL queries are properly sanitized
- ONLY USE SELECT statements or queries that don't modify data
- DO NOT USE INSERT, UPDATE, DELETE, DROP, CREATE, or ALTER statements

Args:
- virtual_workspace_id: Unique identifier of the starter workspace
- sql_query: The SQL query to execute (READ-ONLY queries only)

Returns:
- Dictionary with query results and metadata:
  - Query results with column names and typed values
  - Row count
  - Column metadata
  - Execution status
### create_notebook

Create a new Jupyter notebook in your personal space. Only Python and Markdown cells are supported.

Parameters:
- notebook_name (required): Name for the new notebook
  - Can include or omit the .ipynb extension
  - Must be unique in your personal space
- content (optional): JSON object with the following structure:

  ```
  {
    "cells": [
      {"type": "markdown", "content": "Markdown content here"},
      {"type": "code", "content": "Python code here"}
    ]
  }
  ```

  - 'type' must be either 'markdown' or 'code'
  - 'content' is the text content of the cell

IMPORTANT: The content must be valid JSON.

How to use:
- Before creating the notebook, call the check_if_file_exists tool to verify whether the notebook already exists.
- Always install the dependencies in the first cell. Example:

  ```
  {
    "cells": [
      {"type": "code", "content": "!pip install singlestoredb --quiet"},
      // other cells...
    ]
  }
  ```

- To connect to the database, use the variable "connection_url" that already exists in the notebook platform. Example:

  ```
  {
    "cells": [
      {"type": "code", "content": "conn = s2.connect(connection_url)"},
      // other cells...
    ]
  }
  ```
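The snippets above use `//` comments for brevity, which are not valid JSON. A complete, valid `content` payload combining both patterns might look like this (the cell contents are illustrative):

```python
import json

content = {
    "cells": [
        # First cell installs dependencies, as recommended above.
        {"type": "code", "content": "!pip install singlestoredb --quiet"},
        {"type": "markdown", "content": "## Query the database"},
        # connection_url is provided by the notebook platform.
        {"type": "code",
         "content": "import singlestoredb as s2\nconn = s2.connect(connection_url)"},
    ]
}

payload = json.dumps(content)  # create_notebook expects valid JSON
```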
### create_scheduled_job

Create an automated job to execute a SingleStore notebook on a schedule.

Parameters:
- notebook_path: Complete path to the notebook
- mode: 'Once' for a single execution or 'Recurring' for repeated runs
- create_snapshot: Enable notebook backup before execution (default: True)

Returns job info with:
- jobID: UUID of the created job
- status: Current state (SUCCESS, RUNNING, etc.)
- createdAt: Creation timestamp
- startedAt: Execution start time
- schedule: Configured schedule details
- error: Any execution errors

Common use cases:
1. Automated data processing: ETL workflows, data aggregation, database maintenance
2. Scheduled reporting: performance metrics, business analytics, usage statistics
3. Maintenance tasks: health checks, backup operations, clean-up routines

Related operations:
- get_job_details: Monitor a job
- list_job_executions: View job execution history
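A hypothetical argument set for a recurring job (the notebook path would come from get_notebook_path; the exact schedule fields for 'Recurring' jobs are not documented here):

```python
# Illustrative arguments only -- the path and schedule are assumptions.
job_args = {
    "notebook_path": "path/to/nightly_etl.ipynb",  # from get_notebook_path
    "mode": "Recurring",      # or "Once" for a single execution
    "create_snapshot": True,  # back up the notebook before each run (default)
}
```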
### get_organizations

List all SingleStore organizations your account has access to. After logging in, this tool must be called first to identify which organization your queries should run against.

Returns a list of organizations with:
- orgID: Unique identifier for the organization
- name: Display name of the organization

Use this tool when:
1. Starting a new session, to see the available organizations
2. Verifying permissions across multiple organizations
3. Switching context to a different organization

After retrieving the list, present each organization's name and ID to the user and ask them to select one:
- If only one organization is available, select it automatically
- If multiple organizations are available, prompt the user to select one by name or ID
- If no organizations are available, raise an error
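The selection rules above can be sketched as a small client-side helper (hypothetical, not part of the server):

```python
def choose_organization(orgs):
    """Apply the selection rules: auto-select one, prompt for many, error on none."""
    if not orgs:
        raise RuntimeError("No organizations available for this account")
    if len(orgs) == 1:
        return orgs[0]  # only one organization: select it automatically
    return None         # several organizations: prompt the user by name or ID

selected = choose_organization([{"orgID": "org-123", "name": "Acme"}])
print(selected["name"])  # Acme
```

When the helper returns None, the caller lists the names and IDs and passes the user's choice to set_organization.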
### workspace_groups_info

List all workspace groups accessible to the user in SingleStore.

Returns detailed information for each group:
- name: Display name of the workspace group
- deploymentType: Type of deployment (e.g., 'PRODUCTION')
- state: Current status (e.g., 'ACTIVE', 'PAUSED')
- workspaceGroupID: Unique identifier for the group
- firewallRanges: Array of allowed IP ranges for access control
- createdAt: Timestamp of group creation
- regionID: Identifier for the deployment region
- updateWindow: Maintenance window configuration

Use this tool to:
1. Get workspace group IDs for other operations
2. Plan maintenance windows

Related operations:
- Use workspaces_info to list workspaces within a group
- Use execute_sql to run queries on workspaces in a group
### workspaces_info

List all workspaces within a specified workspace group in SingleStore.

Returns detailed information for each workspace:
- createdAt: Timestamp of workspace creation
- deploymentType: Type of deployment (e.g., 'PRODUCTION')
- endpoint: Connection URL for database access
- name: Display name of the workspace
- size: Compute and storage configuration
- state: Current status (e.g., 'ACTIVE', 'PAUSED')
- terminatedAt: End timestamp, if applicable
- workspaceGroupID: Workspace group identifier
- workspaceID: Unique workspace identifier

Args:
- workspace_group_id: Unique identifier of the workspace group

Returns:
- List of workspace information dictionaries
### organization_info

Retrieve information about the current user's organization in SingleStore.

Returns organization details including:
- orgID: Unique identifier for the organization
- name: Organization display name
### list_of_regions

List all available deployment regions where SingleStore workspaces can be deployed for the user.

Returns region information including:
- regionID: Unique identifier for the region
- provider: Cloud provider (AWS, GCP, or Azure)
- name: Human-readable region name (e.g., 'Europe West 2 (London)', 'US West 2 (Oregon)')

Use this tool to:
1. Select optimal deployment regions based on geographic proximity to users, compliance requirements, cost considerations, and available cloud providers
2. Plan multi-region deployments
### list_virtual_workspaces

List all starter (virtual) workspaces available to the user in SingleStore.

Returns detailed information about each starter workspace:
- virtualWorkspaceID: Unique identifier for the workspace
- name: Display name of the workspace
- endpoint: Connection endpoint URL
- databaseName: Name of the primary database
- mysqlDmlPort: Port for MySQL protocol connections
- webSocketPort: Port for WebSocket connections
- state: Current status of the workspace

Use this tool to:
1. Get virtual workspace IDs for other operations
2. Check starter workspace availability and status
3. Obtain connection details for database access
### organization_billing_usage

Retrieve detailed billing and usage metrics for your organization over a specified time period. Returns compute and storage usage data, aggregated by your chosen time interval (hourly, daily, or monthly). This tool is essential for:
1. Monitoring resource consumption patterns
2. Analyzing cost trends

Args:
- start_time: Beginning of the usage period (UTC ISO 8601 format, e.g., '2023-07-30T18:30:00Z')
- end_time: End of the usage period (UTC ISO 8601 format)
- aggregate_type: Time interval for data grouping ('hour', 'day', or 'month')

Returns:
- Usage metrics and billing information
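Since start_time and end_time must be UTC ISO 8601 strings, one way to build them in the '2023-07-30T18:30:00Z' form shown above (the 30-day window itself is illustrative):

```python
from datetime import datetime, timedelta, timezone

def iso_z(dt):
    """Format a UTC datetime in the 'YYYY-MM-DDTHH:MM:SSZ' form the API expects."""
    return dt.strftime("%Y-%m-%dT%H:%M:%SZ")

# Example: a 30-day window ending 2023-07-31, aggregated daily.
end = datetime(2023, 7, 31, tzinfo=timezone.utc)
start = end - timedelta(days=30)

usage_args = {
    "start_time": iso_z(start),  # '2023-07-01T00:00:00Z'
    "end_time": iso_z(end),      # '2023-07-31T00:00:00Z'
    "aggregate_type": "day",
}
```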
### list_notebook_samples

Retrieve a catalog of pre-built notebook templates available in SingleStore Spaces.

Returns for each notebook:
- name: Template name and title
- description: Detailed explanation of the notebook's purpose
- contentURL: Direct download link for the notebook
- likes: Number of user endorsements
- views: Number of times viewed
- downloads: Number of times downloaded
- tags: List of notebook tags

Common template categories include:
1. Getting-started guides
2. Data loading and ETL patterns
3. Query optimization examples
4. Machine learning integrations
5. Performance monitoring
6. Best-practices demonstrations
### list_shared_files

List all files and notebooks in your shared SingleStore space.

Returns file object metadata for each file:
- name: Name of the file (e.g., 'analysis.ipynb')
- path: Full path in the shared space (e.g., 'folder/analysis.ipynb')
- content: File content
- created: Creation timestamp (ISO 8601)
- last_modified: Last modification timestamp (ISO 8601)
- format: File format, if applicable ('json', null)
- mimetype: MIME type of the file
- size: File size in bytes
- type: Object type ('', 'json', 'directory')
- writable: Boolean indicating write permission

Use this tool to:
1. List workspace contents and structure
2. Verify file existence before operations
3. Check file timestamps and sizes
4. Determine file permissions
### get_job_details

Retrieve comprehensive information about a scheduled notebook job.

Returns:
- jobID: Unique identifier (UUID format)
- name: Display name of the job
- description: Human-readable job description
- createdAt: Creation timestamp (ISO 8601)
- terminatedAt: End timestamp, if completed
- completedExecutionsCount: Number of successful runs
- enqueuedBy: User ID of the job's creator
- executionConfig: Notebook path and runtime settings
- schedule: Mode, interval, and start time
- targetConfig: Database and workspace settings
- jobMetadata: Execution statistics and status

Args:
- job_id: UUID of the scheduled job to retrieve details for
### list_job_executions

Retrieve execution history and performance metrics for a scheduled notebook job.

Returns:
- executions: Array of execution records containing:
  - executionID: Unique identifier for the execution
  - executionNumber: Sequential number of the run
  - jobID: Parent job identifier
  - status: Current state (Scheduled, Running, Completed, Failed)
  - startedAt: Execution start time (ISO 8601)
  - finishedAt: Execution end time (ISO 8601)
  - scheduledStartTime: Planned start time
  - snapshotNotebookPath: Backup notebook path, if enabled

Args:
- job_id: UUID of the scheduled job
- start: First execution number to retrieve (default: 1)
- end: Last execution number to retrieve (default: 10)
### get_notebook_path

Find the complete path of a notebook by its name and generate the properly formatted path for API operations.

Args:
- notebook_name: The name of the notebook to find (with or without the .ipynb extension)
- location: Where to look for the notebook ('personal' or 'shared')

Returns:
- Properly formatted path, including project ID and user ID where needed

Required for:
- Creating scheduled jobs (use the returned path as the notebook_path parameter)
### get_project_id

Retrieve the organization's unique identifier (project ID).

Returns:
- str: The organization's unique identifier

Required for:
- Constructing paths or references to shared resources

Performance tip: Cache the returned ID when making multiple API calls.
### get_user_id

Retrieve the current user's unique identifier.

Returns:
- str: UUID-format identifier for the current user

Required for:
- Constructing paths or references to personal resources

Performance tip: Cache the returned ID when making multiple API calls.
### check_if_file_exists

Check whether a file (notebook) exists in the user's shared space.

Args:
- file_name: Name of the file to check (with or without the .ipynb extension)

Returns:
- JSON object with the file existence status, for example:

  ```
  {"exists": true, "message": "File exists"}
  ```
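The check-before-create flow recommended under create_notebook can be sketched as follows; `call_tool` is a hypothetical stand-in for whatever MCP client invokes these tools:

```python
def create_if_absent(call_tool, notebook_name, content):
    """Create a notebook only when check_if_file_exists reports it is missing."""
    status = call_tool("check_if_file_exists", {"file_name": notebook_name})
    if status["exists"]:
        return None  # avoid clobbering an existing notebook
    return call_tool("create_notebook",
                     {"notebook_name": notebook_name, "content": content})
```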