{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# MCP Resources: Beyond Tools\n",
"\n",
"## What are MCP Resources?\n",
"\n",
"While **tools** are executable functions that AI assistants can call, **resources** are data sources that provide context and information. Resources can be:\n",
"\n",
"- **Static files** (configuration, documentation, datasets)\n",
"- **Dynamic templates** that generate content based on parameters\n",
"- **Database queries** or API responses\n",
"- **File system access** with structured data\n",
"\n",
"Think of resources as \"read-only\" data that AI assistants can access to understand context before making decisions or calling tools.\n",
"\n",
"## Types of Resources\n",
"\n",
"### 1. Static Resources\n",
"Fixed content accessible via URI:\n",
"- Configuration files\n",
"- Documentation\n",
"- Static datasets\n",
"- Templates\n",
"\n",
"### 2. Resource Templates\n",
"Dynamic resources that accept parameters:\n",
"- `weather://{city}/current`\n",
"- `database://users/{user_id}`\n",
"- `papers://{topic}` (our example)\n",
"\n",
"## Real-World Example: Research Paper Server\n",
"\n",
"Let's examine a research server that demonstrates both tools and resources working together.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Research Server Implementation\n",
"\n",
"Here's a complete MCP server that manages academic papers with both tools and resources:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Research MCP Server defined with:\n",
"- 2 Tools: search_papers, extract_info\n",
"- 2 Resources: papers://folders, papers://{topic}\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"/project/.venv/lib/python3.12/site-packages/fastmcp/server/server.py:255: DeprecationWarning: Providing `host` when creating a server is deprecated. Provide it when calling `run` or as a global setting instead.\n",
" self._handle_deprecated_settings(\n",
"/project/.venv/lib/python3.12/site-packages/fastmcp/server/server.py:255: DeprecationWarning: Providing `port` when creating a server is deprecated. Provide it when calling `run` or as a global setting instead.\n",
" self._handle_deprecated_settings(\n"
]
}
],
"source": [
"import json\n",
"import os\n",
"from pathlib import Path\n",
"\n",
"import arxiv\n",
"from fastmcp import FastMCP\n",
"\n",
"# Note: This example uses arxiv library for demonstration\n",
"# In practice, you'd need: pip install arxiv\n",
"\n",
"PAPER_DIR = \"../data/papers\"\n",
"PORT = int(os.environ.get(\"PORT\", 10000))\n",
"\n",
"# Create FastMCP server\n",
"mcp = FastMCP(\"research\", host=\"0.0.0.0\", port=PORT)\n",
"\n",
"\n",
"@mcp.tool()\n",
"def search_papers(topic: str, max_results: int = 5) -> list[str]:\n",
" \"\"\"Search for papers on arXiv based on a topic and store their information.\n",
"\n",
" Args:\n",
" topic: The topic to search for\n",
" max_results: Maximum number of results to retrieve (default: 5)\n",
"\n",
" Returns:\n",
" List of paper IDs found in the search\n",
" \"\"\"\n",
" # Use arxiv to find the papers\n",
" client = arxiv.Client()\n",
"\n",
" # Search for the most relevant articles matching the queried topic\n",
" search = arxiv.Search(\n",
" query=topic, max_results=max_results, sort_by=arxiv.SortCriterion.Relevance\n",
" )\n",
"\n",
" papers = client.results(search)\n",
"\n",
" # Create directory for this topic\n",
" path = Path(PAPER_DIR) / topic.lower().replace(\" \", \"_\")\n",
" path.mkdir(parents=True, exist_ok=True)\n",
"\n",
" file_path = path / \"papers_info.json\"\n",
"\n",
" # Try to load existing papers info\n",
" try:\n",
" with file_path.open() as json_file:\n",
" papers_info = json.load(json_file)\n",
" except (FileNotFoundError, json.JSONDecodeError):\n",
" papers_info = {}\n",
"\n",
" # Process each paper and add to papers_info\n",
" paper_ids = []\n",
" for paper in papers:\n",
" paper_ids.append(paper.get_short_id())\n",
" paper_info = {\n",
" \"title\": paper.title,\n",
" \"authors\": [author.name for author in paper.authors],\n",
" \"summary\": paper.summary,\n",
" \"pdf_url\": paper.pdf_url,\n",
" \"published\": str(paper.published.date()),\n",
" }\n",
" papers_info[paper.get_short_id()] = paper_info\n",
"\n",
" # Save updated papers_info to json file\n",
" with file_path.open(\"w\") as json_file:\n",
" json.dump(papers_info, json_file, indent=2)\n",
"\n",
" print(f\"Results are saved in: {file_path}\")\n",
"\n",
" return paper_ids\n",
"\n",
"\n",
"@mcp.tool()\n",
"def extract_info(paper_id: str) -> str:\n",
" \"\"\"Search for information about a specific paper across all topic directories.\n",
"\n",
" Args:\n",
" paper_id: The ID of the paper to look for\n",
"\n",
" Returns:\n",
" JSON string with paper information if found, error message if not found\n",
" \"\"\"\n",
" for item in Path(PAPER_DIR).iterdir():\n",
" if item.is_dir():\n",
" file_path = item / \"papers_info.json\"\n",
" if file_path.is_file():\n",
" try:\n",
" with file_path.open() as json_file:\n",
" papers_info = json.load(json_file)\n",
" if paper_id in papers_info:\n",
" return json.dumps(papers_info[paper_id], indent=2)\n",
" except (FileNotFoundError, json.JSONDecodeError) as e:\n",
" print(f\"Error reading {file_path}: {str(e)}\")\n",
" continue\n",
"\n",
" return f\"There's no saved information related to paper {paper_id}.\"\n",
"\n",
"\n",
"# RESOURCES - This is the key part!\n",
"\n",
"\n",
"@mcp.resource(\"papers://folders\")\n",
"def get_available_folders() -> str:\n",
" \"\"\"List all available topic folders in the papers directory.\n",
"\n",
" This resource provides a simple list of all available topic folders.\n",
" \"\"\"\n",
" folders = []\n",
"\n",
" # Get all topic directories\n",
" paper_dir = Path(PAPER_DIR)\n",
" if paper_dir.exists():\n",
" for topic_dir in paper_dir.iterdir():\n",
" if topic_dir.is_dir():\n",
" papers_file = topic_dir / \"papers_info.json\"\n",
" if papers_file.exists():\n",
" folders.append(topic_dir.name)\n",
"\n",
" # Create a simple markdown list\n",
" content = \"# Available Topics\\n\\n\"\n",
" if folders:\n",
" for folder in folders:\n",
" content += f\"- {folder}\\n\"\n",
" content += \"\\nUse papers://{topic} to access papers in that topic.\\n\"\n",
" else:\n",
" content += \"No topics found.\\n\"\n",
"\n",
" return content\n",
"\n",
"\n",
"@mcp.resource(\"papers://{topic}\")\n",
"def get_topic_papers(topic: str) -> str:\n",
" \"\"\"Get detailed information about papers on a specific topic.\n",
"\n",
" Args:\n",
" topic: The research topic to retrieve papers for\n",
" \"\"\"\n",
" topic_dir = topic.lower().replace(\" \", \"_\")\n",
" papers_file = Path(PAPER_DIR) / topic_dir / \"papers_info.json\"\n",
"\n",
" if not papers_file.exists():\n",
" return f\"# No papers found for topic: {topic}\\n\\nTry searching for papers on this topic first.\"\n",
"\n",
" try:\n",
" with papers_file.open() as f:\n",
" papers_data = json.load(f)\n",
"\n",
" # Create markdown content with paper details\n",
" content = f\"# Papers on {topic.replace('_', ' ').title()}\\n\\n\"\n",
" content += f\"Total papers: {len(papers_data)}\\n\\n\"\n",
"\n",
" for paper_id, paper_info in papers_data.items():\n",
" content += f\"## {paper_info['title']}\\n\"\n",
" content += f\"- **Paper ID**: {paper_id}\\n\"\n",
" content += f\"- **Authors**: {', '.join(paper_info['authors'])}\\n\"\n",
" content += f\"- **Published**: {paper_info['published']}\\n\"\n",
" content += (\n",
" f\"- **PDF URL**: [{paper_info['pdf_url']}]({paper_info['pdf_url']})\\n\\n\"\n",
" )\n",
" content += f\"### Summary\\n{paper_info['summary'][:500]}...\\n\\n\"\n",
" content += \"---\\n\\n\"\n",
"\n",
" return content\n",
" except json.JSONDecodeError:\n",
" return f\"# Error reading papers data for {topic}\\n\\nThe papers data file is corrupted.\"\n",
"\n",
"\n",
"print(\"Research MCP Server defined with:\")\n",
"print(\"- 2 Tools: search_papers, extract_info\")\n",
"print(\"- 2 Resources: papers://folders, papers://{topic}\")"
]
},
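{
"cell_type": "markdown",
"metadata": {},
"source": [
"The cell above only *defines* the server. To actually serve it you would call `run()`. A minimal sketch (the transport choice is illustrative; per the deprecation warnings above, `host`/`port` are better passed to `run()` than to the constructor):\n",
"\n",
"```python\n",
"if __name__ == \"__main__\":\n",
"    # stdio is the default transport; network transports such as SSE take host/port.\n",
"    mcp.run(transport=\"sse\", host=\"0.0.0.0\", port=PORT)\n",
"```\n"
]
},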
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Practical Demonstration\n",
"\n",
"Let's see how this MCP server works in practice:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"=== Using Tools ===\n",
"Results are saved in: ../data/papers/mcp_server/papers_info.json\n",
"Found papers: ['2509.24272v1', '2504.08999v1']\n",
"\n",
"=== Extracting Paper Information ===\n",
"{\n",
" \"title\": \"When MCP Servers Attack: Taxonomy, Feasibility, and Mitigation\",\n",
" \"authors\": [\n",
" \"Weibo Zhao\",\n",
" \"Jiahao Liu\",\n",
" \"Bonan Ruan\",\n",
" \"Shaofei Li\",\n",
" \"Zhenkai Liang\"\n",
" ],\n",
" \"summary\": \"Model Context Protocol (MCP) servers enable AI applications to connect to\\nexternal systems in a plug-and-play manner, but their rapid proliferation also\\nintroduces severe security risks. Unlike mature software ecosystems with\\nrigorous vetting, MCP servers still lack standardized review mechanisms, giving\\nadversaries opportunities to distribute malicious implementations. Despite this\\npressing risk, the security implications of MCP servers remain underexplored.\\nTo address this gap, we present the first systematic study that treats MCP\\nservers as active threat actors and decomposes them into core components to\\nexamine how adversarial developers can implant malicious intent. Specifically,\\nwe investigate three research questions: (i) what types of attacks malicious\\nMCP servers can launch, (ii) how vulnerable MCP hosts and Large Language Models\\n(LLMs) are to these attacks, and (iii) how feasible it is to carry out MCP\\nserver attacks in practice. Our study proposes a component-based taxonomy\\ncomprising twelve attack categories. For each category, we develop\\nProof-of-Concept (PoC) servers and demonstrate their effectiveness across\\ndiverse real-world host-LLM settings. We further show that attackers can\\ngenerate large numbers of malicious servers at virtually no cost. We then test\\nstate-of-the-art scanners on the generated servers and found that existing\\ndetection approaches are insufficient. These findings highlight that malicious\\nMCP servers are easy to implement, difficult to detect with current tools, and\\ncapable of causing concrete damage to AI agent systems. Addressing this threat\\nrequires coordinated efforts among protocol designers, host developers, LLM\\nproviders, and end users to build a more secure and resilient MCP ecosystem.\",\n",
" \"pdf_url\": \"http://arxiv.org/pdf/2509.24272v1\",\n",
" \"published\": \"2025-09-29\"\n",
"}\n"
]
}
],
"source": [
"# Step 1: Search for papers using the tool\n",
"print(\"=== Using Tools ===\")\n",
"\n",
"\n",
"# Note: In a real MCP server, these would be called by AI assistants\n",
"# For demonstration, we'll call the underlying functions directly\n",
"def search_papers_demo(topic: str, max_results: int = 5):\n",
" \"\"\"Demo version of search_papers that we can call directly\"\"\"\n",
" from pathlib import Path\n",
"\n",
" import arxiv\n",
"\n",
" # Use arxiv to find the papers\n",
" client = arxiv.Client()\n",
"\n",
" # Search for the most relevant articles matching the queried topic\n",
" search = arxiv.Search(\n",
" query=topic, max_results=max_results, sort_by=arxiv.SortCriterion.Relevance\n",
" )\n",
"\n",
" papers = client.results(search)\n",
"\n",
" # Create directory for this topic\n",
" path = Path(PAPER_DIR) / topic.lower().replace(\" \", \"_\")\n",
" path.mkdir(parents=True, exist_ok=True)\n",
"\n",
" file_path = path / \"papers_info.json\"\n",
"\n",
" # Try to load existing papers info\n",
" try:\n",
" with file_path.open() as json_file:\n",
" papers_info = json.load(json_file)\n",
" except (FileNotFoundError, json.JSONDecodeError):\n",
" papers_info = {}\n",
"\n",
" # Process each paper and add to papers_info\n",
" paper_ids = []\n",
" for paper in papers:\n",
" paper_ids.append(paper.get_short_id())\n",
" paper_info = {\n",
" \"title\": paper.title,\n",
" \"authors\": [author.name for author in paper.authors],\n",
" \"summary\": paper.summary,\n",
" \"pdf_url\": paper.pdf_url,\n",
" \"published\": str(paper.published.date()),\n",
" }\n",
" papers_info[paper.get_short_id()] = paper_info\n",
"\n",
" # Save updated papers_info to json file\n",
" with file_path.open(\"w\") as json_file:\n",
" json.dump(papers_info, json_file, indent=2)\n",
"\n",
" print(f\"Results are saved in: {file_path}\")\n",
" return paper_ids\n",
"\n",
"\n",
"def extract_info_demo(paper_id: str):\n",
" \"\"\"Demo version of extract_info that we can call directly\"\"\"\n",
" from pathlib import Path\n",
"\n",
" for item in Path(PAPER_DIR).iterdir():\n",
" if item.is_dir():\n",
" file_path = item / \"papers_info.json\"\n",
" if file_path.is_file():\n",
" try:\n",
" with file_path.open() as json_file:\n",
" papers_info = json.load(json_file)\n",
" if paper_id in papers_info:\n",
" return json.dumps(papers_info[paper_id], indent=2)\n",
" except (FileNotFoundError, json.JSONDecodeError) as e:\n",
" print(f\"Error reading {file_path}: {str(e)}\")\n",
" continue\n",
"\n",
" return f\"There's no saved information related to paper {paper_id}.\"\n",
"\n",
"\n",
"# Now we can call the demo functions\n",
"paper_ids = search_papers_demo(\"mcp server\", max_results=2)\n",
"print(f\"Found papers: {paper_ids}\")\n",
"\n",
"# Step 2: Extract specific paper information\n",
"print(\"\\n=== Extracting Paper Information ===\")\n",
"if paper_ids:\n",
" paper_info = extract_info_demo(paper_ids[0])\n",
" print(paper_info)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"=== Using Resources ===\n",
"Available folders:\n",
"# Available Topics\n",
"\n",
"- machine_learning\n",
"- mcp_server\n",
"\n",
"Use papers://{topic} to access papers in that topic.\n",
"\n",
"\n",
"==================================================\n",
"\n",
"Papers in machine_learning topic:\n",
"# Papers on Mcp Server\n",
"\n",
"Total papers: 2\n",
"\n",
"## When MCP Servers Attack: Taxonomy, Feasibility, and Mitigation\n",
"- **Paper ID**: 2509.24272v1\n",
"- **Authors**: Weibo Zhao, Jiahao Liu, Bonan Ruan, Shaofei Li, Zhenkai Liang\n",
"- **Published**: 2025-09-29\n",
"- **PDF URL**: [http://arxiv.org/pdf/2509.24272v1](http://arxiv.org/pdf/2509.24272v1)\n",
"\n",
"### Summary\n",
"Model Context Protocol (MCP) servers enable AI applications to connect to\n",
"external systems in a plug-and-play manner, but their rapid proliferation also\n",
"introduces severe security risks. Unlike mature software ecosystems with\n",
"rigorous vetting, MCP servers still lack standardized review mechanisms, giving\n",
"adversaries opportunities to distribute malicious implementations. Despite this\n",
"pressing risk, the security implications of MCP servers remain underexplored.\n",
"To address this gap, we present the ...\n",
"\n",
"---\n",
"\n",
"## MCP Bridge: A Lightweight, LLM-Agnostic RESTful Proxy for Model Context Protocol Servers\n",
"- **Paper ID**: 2504.08999v1\n",
"- **Authors**: Arash Ahmadi, Sarah Sharif, Yaser M. Banad\n",
"- **Published**: 2025-04-11\n",
"- **PDF URL**: [http://arxiv.org/pdf/2504.08999v1](http://arxiv.org/pdf/2504.08999v1)\n",
"\n",
"### Summary\n",
"Large Language Models (LLMs) are increasingly augmented with external tools\n",
"through standardized interfaces like the Model Context Protocol (MCP). However,\n",
"current MCP implementations face critical limitations: they typically require\n",
"local process execution through STDIO transports, making them impractical for\n",
"resource-constrained environments like mobile devices, web browsers, and edge\n",
"computing. We present MCP Bridge, a lightweight RESTful proxy that connects to\n",
"multiple MCP servers and expose...\n",
"\n",
"---\n",
"\n",
"\n"
]
}
],
"source": [
"# Step 3: Access resources for context\n",
"print(\"=== Using Resources ===\")\n",
"\n",
"\n",
"# Note: In a real MCP server, these would be accessed by AI assistants via URI\n",
"# For demonstration, we'll call the underlying functions directly\n",
"def get_available_folders_demo():\n",
" \"\"\"Demo version of get_available_folders that we can call directly\"\"\"\n",
" from pathlib import Path\n",
"\n",
" folders = []\n",
"\n",
" # Get all topic directories\n",
" paper_dir = Path(PAPER_DIR)\n",
" if paper_dir.exists():\n",
" for topic_dir in paper_dir.iterdir():\n",
" if topic_dir.is_dir():\n",
" papers_file = topic_dir / \"papers_info.json\"\n",
" if papers_file.exists():\n",
" folders.append(topic_dir.name)\n",
"\n",
" # Create a simple markdown list\n",
" content = \"# Available Topics\\n\\n\"\n",
" if folders:\n",
" for folder in folders:\n",
" content += f\"- {folder}\\n\"\n",
" content += \"\\nUse papers://{topic} to access papers in that topic.\\n\"\n",
" else:\n",
" content += \"No topics found.\\n\"\n",
"\n",
" return content\n",
"\n",
"\n",
"def get_topic_papers_demo(topic: str):\n",
" \"\"\"Demo version of get_topic_papers that we can call directly\"\"\"\n",
" from pathlib import Path\n",
"\n",
" topic_dir = topic.lower().replace(\" \", \"_\")\n",
" papers_file = Path(PAPER_DIR) / topic_dir / \"papers_info.json\"\n",
"\n",
" if not papers_file.exists():\n",
" return f\"# No papers found for topic: {topic}\\n\\nTry searching for papers on this topic first.\"\n",
"\n",
" try:\n",
" with papers_file.open() as f:\n",
" papers_data = json.load(f)\n",
"\n",
" # Create markdown content with paper details\n",
" content = f\"# Papers on {topic.replace('_', ' ').title()}\\n\\n\"\n",
" content += f\"Total papers: {len(papers_data)}\\n\\n\"\n",
"\n",
" for paper_id, paper_info in papers_data.items():\n",
" content += f\"## {paper_info['title']}\\n\"\n",
" content += f\"- **Paper ID**: {paper_id}\\n\"\n",
" content += f\"- **Authors**: {', '.join(paper_info['authors'])}\\n\"\n",
" content += f\"- **Published**: {paper_info['published']}\\n\"\n",
" content += (\n",
" f\"- **PDF URL**: [{paper_info['pdf_url']}]({paper_info['pdf_url']})\\n\\n\"\n",
" )\n",
" content += f\"### Summary\\n{paper_info['summary'][:500]}...\\n\\n\"\n",
" content += \"---\\n\\n\"\n",
"\n",
" return content\n",
" except json.JSONDecodeError:\n",
" return f\"# Error reading papers data for {topic}\\n\\nThe papers data file is corrupted.\"\n",
"\n",
"\n",
"# Access the folders resource\n",
"print(\"Available folders:\")\n",
"folders_content = get_available_folders_demo()\n",
"print(folders_content)\n",
"\n",
"print(\"\\n\" + \"=\" * 50 + \"\\n\")\n",
"\n",
"# Access topic-specific papers resource\n",
"print(\"Papers in machine_learning topic:\")\n",
"topic_content = get_topic_papers_demo(\"mcp_server\")\n",
"print(topic_content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Important Note: MCP Decorators vs Demo Functions\n",
"\n",
"In the **MCP Server Implementation** (Cell 2), we use decorators like `@mcp.tool()` and `@mcp.resource()`:\n",
"\n",
"```python\n",
"@mcp.tool()\n",
"def search_papers(topic: str, max_results: int = 5) -> List[str]:\n",
" # This function becomes a FunctionTool object\n",
" # It's not directly callable in Python\n",
" pass\n",
"\n",
"@mcp.resource(\"papers://{topic}\")\n",
"def get_topic_papers(topic: str) -> str:\n",
" # This function becomes a Resource object\n",
" # It's accessed via URI by AI assistants\n",
" pass\n",
"```\n",
"\n",
"In the **Demo Cells** above, we created separate demo functions that we can call directly:\n",
"\n",
"```python\n",
"def search_papers_demo(topic: str, max_results: int = 5):\n",
" # This is a regular Python function we can call directly\n",
" pass\n",
"\n",
"def get_topic_papers_demo(topic: str):\n",
" # This is a regular Python function we can call directly\n",
" pass\n",
"```\n",
"\n",
"### How It Works in Practice:\n",
"\n",
"1. **MCP Server**: The decorated functions (`@mcp.tool()`, `@mcp.resource()`) are registered with the MCP server\n",
"2. **AI Assistant**: Connects to the MCP server and can:\n",
" - Call tools: `search_papers(topic=\"AI\", max_results=5)`\n",
" - Access resources: `papers://machine_learning`\n",
"3. **Demo**: We use regular functions to demonstrate the same functionality\n",
"\n",
"The demo functions show you exactly what happens when the MCP server processes tool calls and resource requests!\n"
]
},
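{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you'd rather not maintain separate `_demo` copies, one possible pattern (a sketch, not what this notebook does) is to define plain functions first and register them explicitly, so the originals stay directly callable:\n",
"\n",
"```python\n",
"from fastmcp import FastMCP\n",
"\n",
"mcp = FastMCP(\"research\")\n",
"\n",
"\n",
"def search_papers(topic: str, max_results: int = 5) -> list[str]:\n",
"    \"\"\"Plain function: directly callable in the notebook.\"\"\"\n",
"    return []  # real implementation as in the server cell above\n",
"\n",
"\n",
"# Register it explicitly: `search_papers` itself remains an ordinary function,\n",
"# while the returned object is the wrapper the MCP server exposes as a tool.\n",
"search_papers_tool = mcp.tool()(search_papers)\n",
"```\n"
]
},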
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## How AI Assistants Use Resources\n",
"\n",
"When an AI assistant connects to this MCP server, it can:\n",
"\n",
"1. **Browse Available Data**: Access `papers://folders` to see what topics are available\n",
"2. **Get Context**: Read `papers://machine_learning` to understand what papers exist\n",
"3. **Make Informed Decisions**: Use resource data to decide which tools to call\n",
"4. **Provide Rich Responses**: Include formatted resource content in responses\n",
"\n",
"### Example AI Assistant Workflow:\n",
"\n",
"```\n",
"User: \"Tell me about machine learning papers\"\n",
"\n",
"AI Assistant:\n",
"1. Reads papers://folders → sees \"machine_learning\" topic exists\n",
"2. Reads papers://machine_learning → gets formatted list of papers\n",
"3. Calls extract_info() for specific papers if needed\n",
"4. Provides comprehensive response with context\n",
"```\n",
"\n",
"## Best Practices for Resources\n",
"\n",
"1. **Use Descriptive URIs**: Make resource paths intuitive (`papers://{topic}` vs `data://{id}`)\n",
"2. **Return Structured Content**: Use Markdown formatting for readability\n",
"3. **Handle Errors Gracefully**: Provide meaningful error messages\n",
"4. **Keep Resources Lightweight**: Avoid heavy computations in resource functions\n",
"5. **Use Templates Wisely**: Dynamic resources should have clear parameter patterns\n",
"6. **Document Resources**: Include clear docstrings explaining what each resource provides\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.11"
}
},
"nbformat": 4,
"nbformat_minor": 2
}