
SingleStore MCP Server

create_notebook

Create and customize Jupyter notebooks with Python and Markdown cells, enforcing unique names and valid JSON content, for data analysis and integration with SingleStore databases.

Instructions

Create a new Jupyter notebook in your personal space. Only Python and Markdown cells are supported.

Parameters:
- notebook_name (required): Name for the new notebook
  - Can include or omit .ipynb extension
  - Must be unique in your personal space

- content (optional): JSON object with the following structure:
    {
        "cells": [
            {"type": "markdown", "content": "Markdown content here"},
            {"type": "code", "content": "Python code here"}
        ]
    }
    - 'type' must be either 'markdown' or 'code'
    - 'content' is the text content of the cell
    IMPORTANT: The content must be valid JSON.

How to use:
    - Before creating the notebook, call the check_if_file_exists tool to verify that the notebook does not already exist.
    - Always install dependencies in the first cell. Example: 
        {
            "cells": [
                {"type": "code", "content": "!pip install singlestoredb --quiet"},
                // other cells...
            ]
        }
    - To connect to the database, use the variable "connection_url", which is predefined in the notebook platform. Example:
        {
            "cells": [
                {"type": "code", "content": "conn = s2.connect(connection_url)"},
                // other cells...
            ]
        }
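Putting the guidance above together, a complete content argument might look like the following sketch (the cell text is illustrative; the import of singlestoredb mirrors the install cell and is an assumption about the notebook environment):

```python
import json

# Simplified notebook content combining the recommended first cell
# (dependency install) with a markdown cell and a connection cell.
content = {
    "cells": [
        {"type": "code", "content": "!pip install singlestoredb --quiet"},
        {"type": "markdown", "content": "# Query the database"},
        {
            "type": "code",
            "content": "import singlestoredb as s2\nconn = s2.connect(connection_url)",
        },
    ]
}

# The tool requires valid JSON, so round-trip through json to confirm
# the payload serializes cleanly before passing it to create_notebook.
payload = json.dumps(content)
assert json.loads(payload) == content
print(payload[:60])
```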

Input Schema

Name           Required  Description  Default
content        Yes
ctx            No
notebook_name  Yes

Implementation Reference

  • Main handler function that creates a temporary Jupyter notebook (.ipynb) file from simplified content input, validates structure and schema, and returns the temp file path.
    async def create_notebook_file(ctx: Context, content: Dict[str, Any]) -> Dict[str, Any]:
        """
        Create a Jupyter notebook file in the correct SingleStore format and save it to a temporary location.
    
        This tool validates the provided content against the Jupyter notebook schema and creates a properly
        formatted .ipynb file in a temporary location. The content is converted from the simplified format
        to the full Jupyter notebook format.
    
        Args:
            content: Notebook content in the format: {
                "cells": [
                    {"type": "markdown", "content": "Markdown content here"},
                    {"type": "code", "content": "Python code here"}
                ]
            }
    
        Returns:
            Dictionary with the temporary file path and validation status
    
        Example:
            content = {
                "cells": [
                    {"type": "markdown", "content": "# My Notebook\nThis is a sample notebook"},
                    {"type": "code", "content": "import pandas as pd\nprint('Hello World')"}
                ]
            }
        """
        settings = config.get_settings()
        user_id = config.get_user_id()
        settings.analytics_manager.track_event(
            user_id,
            "tool_calling",
            {"name": "create_notebook_file"},
        )
    
        start_time = time.time()
    
        try:
            # Validate content structure
            content_error = utils.validate_content_structure(content)
            if content_error:
                return content_error
    
            # Convert simplified format to full Jupyter notebook format
            notebook_cells, cells_error = utils.convert_to_notebook_cells(content["cells"])
            if cells_error:
                return cells_error
    
            # Create full notebook structure
            notebook_content = utils.create_notebook_structure(notebook_cells)
    
            # Validate against Jupyter notebook schema
            schema_validated, schema_error = utils.validate_notebook_schema(
                notebook_content
            )
            if schema_error:
                return schema_error
    
            # Create temporary file
            temp_file = tempfile.NamedTemporaryFile(
                mode="w",
                suffix=".ipynb",
                prefix="notebook_",
                delete=False,
            )
    
            try:
                # Write notebook content to temporary file
                json.dump(notebook_content, temp_file, indent=2)
                temp_file_path = temp_file.name
            finally:
                temp_file.close()
    
            execution_time = (time.time() - start_time) * 1000
    
            return {
                "status": "success",
                "message": "Notebook file created successfully at temporary location",
                "data": {
                    "tempFilePath": temp_file_path,
                    "cellCount": len(notebook_cells),
                    "schemaValidated": schema_validated,
                    "notebookFormat": {"nbformat": 4, "nbformat_minor": 5},
                },
                "metadata": {
                    "executionTimeMs": round(execution_time, 2),
                    "timestamp": datetime.now(timezone.utc).isoformat(),
                    "tempFileSize": os.path.getsize(temp_file_path),
                },
            }
    
        except Exception as e:
            logger.error(f"Error creating notebook file: {str(e)}")
            return {
                "status": "error",
                "message": f"Failed to create notebook file: {str(e)}",
                "errorCode": "NOTEBOOK_CREATION_FAILED",
                "errorDetails": {"exception_type": type(e).__name__},
            }
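Note that the handler returns a status envelope on both success and failure rather than raising to the caller. A minimal sketch of consuming it (field names are taken from the return dicts above; the helper name is illustrative):

```python
def handle_result(result: dict) -> str:
    # Success responses carry the temp file path under data.tempFilePath;
    # error responses carry an errorCode and human-readable message.
    if result.get("status") == "success":
        return result["data"]["tempFilePath"]
    raise RuntimeError(f"{result.get('errorCode')}: {result.get('message')}")

ok = {"status": "success", "data": {"tempFilePath": "/tmp/notebook_x.ipynb"}}
print(handle_result(ok))  # /tmp/notebook_x.ipynb
```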
  • Central registration of all tools, including create_notebook_file at position 14 in the tools_definition list.
    tools_definition = [
        {"func": get_user_info},
        {"func": organization_info},
        {"func": choose_organization},
        {"func": set_organization},
        {"func": workspace_groups_info},
        {"func": workspaces_info},
        {"func": resume_workspace},
        {"func": list_starter_workspaces},
        {"func": create_starter_workspace},
        {"func": terminate_starter_workspace},
        {"func": list_regions},
        {"func": list_sharedtier_regions},
        {"func": run_sql},
        {"func": create_notebook_file},
        {"func": upload_notebook_file},
        {"func": create_job_from_notebook},
        {"func": get_job},
        {"func": delete_job},
    ]
  • Import of the create_notebook_file function into the central tools module.
    from src.api.tools.notebooks import (
        create_notebook_file,
        upload_notebook_file,
    )
  • Helper function called by the handler to build the full Jupyter notebook JSON structure from cells.
    def create_notebook_structure(notebook_cells: list) -> Dict[str, Any]:
        """
        Create the full Jupyter notebook structure.
    
        Returns:
            Complete notebook dictionary ready for serialization
        """
        return {
            "nbformat": 4,
            "nbformat_minor": 5,
            "metadata": {
                "kernelspec": {
                    "display_name": "Python 3",
                    "language": "python",
                    "name": "python3",
                },
                "language_info": {
                    "name": "python",
                    "version": "3.8.0",
                    "mimetype": "text/x-python",
                    "codemirror_mode": {"name": "ipython", "version": 3},
                    "pygments_lexer": "ipython3",
                    "file_extension": ".py",
                },
            },
            "cells": notebook_cells,
        }
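The convert_to_notebook_cells helper is not shown above; the following is a plausible sketch of the conversion it performs, with the cell layout following the nbformat 4 structure (the real helper may differ, and nbformat 4.5 also expects a per-cell "id", omitted here for brevity):

```python
from typing import Any, Dict, List

def to_nbformat_cells(cells: List[Dict[str, str]]) -> List[Dict[str, Any]]:
    """Convert simplified {type, content} cells into nbformat-4 cell dicts."""
    out: List[Dict[str, Any]] = []
    for cell in cells:
        if cell["type"] == "markdown":
            out.append({
                "cell_type": "markdown",
                "metadata": {},
                "source": cell["content"],
            })
        elif cell["type"] == "code":
            out.append({
                "cell_type": "code",
                "metadata": {},
                "execution_count": None,  # not yet executed
                "outputs": [],
                "source": cell["content"],
            })
        else:
            raise ValueError(f"unsupported cell type: {cell['type']}")
    return out

print(to_nbformat_cells([{"type": "markdown", "content": "# Hi"}]))
```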
  • Helper function to validate the generated notebook content against the Jupyter notebook schema loaded from notebook-schema.json.
    def validate_notebook_schema(
        notebook_content: Dict[str, Any],
    ) -> tuple[bool, Optional[Dict[str, Any]]]:
        """
        Validate notebook content against Jupyter notebook schema.
    
        Returns:
            Tuple of (schema_validated, error_dict). error_dict is None if successful or skipped.
        """
        try:
            # Load schema from external file
            schema = get_notebook_schema()
    
            # Validate notebook content against schema
            jsonschema.validate(notebook_content, schema)
            logger.info("Notebook content validated successfully against schema file")
            return True, None
    
        except FileNotFoundError as e:
            logger.warning(f"Schema file not found: {str(e)}")
            # Continue without validation if schema file is missing
            return False, None
        except json.JSONDecodeError as e:
            logger.error(f"Invalid JSON in schema file: {str(e)}")
            return False, {
                "status": "error",
                "message": f"Schema file contains invalid JSON: {str(e)}\nPlease call create_notebook_file tool to create a jupyter notebook in the correct format",
                "errorCode": "INVALID_SCHEMA_FILE",
                "errorDetails": {"json_error": str(e)},
            }
        except jsonschema.ValidationError as e:
            return False, {
                "status": "error",
                "message": f"Notebook content validation failed: {e.message}",
                "errorCode": "SCHEMA_VALIDATION_FAILED",
                "errorDetails": {"validation_error": str(e)},
            }
        except Exception as e:
            logger.warning(f"Schema validation failed: {str(e)}")
            # Continue without validation if schema can't be loaded
            return False, None
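The validator returns a (validated, error) tuple rather than raising. The same convention can be exercised without the schema file via a cheap structural stand-in (stdlib only; the production code validates against the full Jupyter schema with jsonschema):

```python
from typing import Any, Dict, Optional, Tuple

def check_notebook_shape(nb: Dict[str, Any]) -> Tuple[bool, Optional[Dict[str, Any]]]:
    """Cheap structural check mirroring validate_notebook_schema's return shape."""
    required = {"nbformat", "nbformat_minor", "metadata", "cells"}
    missing = required - nb.keys()
    if missing:
        # Mirror the error-dict shape used by the real validator above.
        return False, {
            "status": "error",
            "message": f"Notebook missing keys: {sorted(missing)}",
            "errorCode": "SCHEMA_VALIDATION_FAILED",
        }
    return True, None

ok, err = check_notebook_shape(
    {"nbformat": 4, "nbformat_minor": 5, "metadata": {}, "cells": []}
)
print(ok, err)  # True None
```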
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively communicates that this is a creation/mutation operation (implied by 'Create'), specifies the uniqueness constraint ('Must be unique in your personal space'), and provides important implementation details about content validation ('The content must be valid JSON') and platform-specific variables ('use the variable "connection_url" that already exists'). It doesn't mention permissions, rate limits, or error conditions, keeping it at a 4 rather than 5.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (purpose, parameters, how to use) and uses bullet points effectively. While comprehensive, some sentences could be more concise (e.g., the JSON structure explanation is detailed but necessary given the 0% schema coverage). The front-loaded purpose statement is clear, and all content earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a creation tool with 3 parameters, 0% schema coverage, no annotations, and no output schema, the description provides substantial context. It covers purpose, parameters, usage guidelines, and implementation examples. However, it doesn't describe the return value or error conditions, which would be helpful given the absence of output schema. The MCP context parameter documentation in the schema partially compensates.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage for 3 parameters, the description fully compensates by providing rich semantic information. It explains the 'notebook_name' parameter's extension handling and uniqueness requirement, and provides detailed JSON structure, validation rules, and examples for the 'content' parameter. The 'ctx' parameter is implicitly covered through the MCP context documentation in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Create a new Jupyter notebook'), the target resource ('in your personal space'), and technical constraints ('Only supports python and markdown'). It distinguishes itself from siblings like 'create_scheduled_job' and 'create_virtual_workspace' by focusing specifically on notebook creation with language limitations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool, including prerequisites ('Before creating the notebook, call check_if_file_exists tool to verify if the notebook already exists') and best practices for content structure. It also distinguishes usage from other tools by specifying the notebook platform context and database connection approach.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
