We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/SinghAngad05/mcp_research'
If you have feedback or need assistance with the MCP directory API, please join our Discord server.
MCP.ipynb • 33.2 kB
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"id": "7b95c536",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Requirement already satisfied: arxiv in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (2.3.1)\n",
"Requirement already satisfied: feedparser~=6.0.10 in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from arxiv) (6.0.12)\n",
"Requirement already satisfied: requests~=2.32.0 in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from arxiv) (2.32.5)\n",
"Requirement already satisfied: sgmllib3k in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from feedparser~=6.0.10->arxiv) (1.0.0)\n",
"Requirement already satisfied: charset_normalizer<4,>=2 in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from requests~=2.32.0->arxiv) (3.4.4)\n",
"Requirement already satisfied: idna<4,>=2.5 in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from requests~=2.32.0->arxiv) (3.11)\n",
"Requirement already satisfied: urllib3<3,>=1.21.1 in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from requests~=2.32.0->arxiv) (2.6.2)\n",
"Requirement already satisfied: certifi>=2017.4.17 in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from requests~=2.32.0->arxiv) (2025.11.12)\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"[notice] A new release of pip is available: 24.2 -> 25.3\n",
"[notice] To update, run: python.exe -m pip install --upgrade pip\n"
]
}
],
"source": [
"pip install arxiv"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "fce1c41e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Requirement already satisfied: ollama in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (0.6.1)\n",
"Requirement already satisfied: httpx>=0.27 in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from ollama) (0.28.1)\n",
"Requirement already satisfied: pydantic>=2.9 in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from ollama) (2.12.5)\n",
"Requirement already satisfied: anyio in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from httpx>=0.27->ollama) (4.12.0)\n",
"Requirement already satisfied: certifi in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from httpx>=0.27->ollama) (2025.11.12)\n",
"Requirement already satisfied: httpcore==1.* in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from httpx>=0.27->ollama) (1.0.9)\n",
"Requirement already satisfied: idna in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from httpx>=0.27->ollama) (3.11)\n",
"Requirement already satisfied: h11>=0.16 in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from httpcore==1.*->httpx>=0.27->ollama) (0.16.0)\n",
"Requirement already satisfied: annotated-types>=0.6.0 in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from pydantic>=2.9->ollama) (0.7.0)\n",
"Requirement already satisfied: pydantic-core==2.41.5 in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from pydantic>=2.9->ollama) (2.41.5)\n",
"Requirement already satisfied: typing-extensions>=4.14.1 in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from pydantic>=2.9->ollama) (4.15.0)\n",
"Requirement already satisfied: typing-inspection>=0.4.2 in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from pydantic>=2.9->ollama) (0.4.2)\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"[notice] A new release of pip is available: 24.2 -> 25.3\n",
"[notice] To update, run: python.exe -m pip install --upgrade pip\n"
]
}
],
"source": [
"pip install ollama\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "9f2a3420",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Requirement already satisfied: anthropic in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (0.75.0)\n",
"Requirement already satisfied: anyio<5,>=3.5.0 in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from anthropic) (4.12.0)\n",
"Requirement already satisfied: distro<2,>=1.7.0 in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from anthropic) (1.9.0)\n",
"Requirement already satisfied: docstring-parser<1,>=0.15 in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from anthropic) (0.17.0)\n",
"Requirement already satisfied: httpx<1,>=0.25.0 in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from anthropic) (0.28.1)\n",
"Requirement already satisfied: jiter<1,>=0.4.0 in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from anthropic) (0.12.0)\n",
"Requirement already satisfied: pydantic<3,>=1.9.0 in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from anthropic) (2.12.5)\n",
"Requirement already satisfied: sniffio in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from anthropic) (1.3.1)\n",
"Requirement already satisfied: typing-extensions<5,>=4.10 in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from anthropic) (4.15.0)\n",
"Requirement already satisfied: idna>=2.8 in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from anyio<5,>=3.5.0->anthropic) (3.11)\n",
"Requirement already satisfied: certifi in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from httpx<1,>=0.25.0->anthropic) (2025.11.12)\n",
"Requirement already satisfied: httpcore==1.* in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from httpx<1,>=0.25.0->anthropic) (1.0.9)\n",
"Requirement already satisfied: h11>=0.16 in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from httpcore==1.*->httpx<1,>=0.25.0->anthropic) (0.16.0)\n",
"Requirement already satisfied: annotated-types>=0.6.0 in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from pydantic<3,>=1.9.0->anthropic) (0.7.0)\n",
"Requirement already satisfied: pydantic-core==2.41.5 in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from pydantic<3,>=1.9.0->anthropic) (2.41.5)\n",
"Requirement already satisfied: typing-inspection>=0.4.2 in c:\\users\\geek9\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from pydantic<3,>=1.9.0->anthropic) (0.4.2)\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"[notice] A new release of pip is available: 24.2 -> 25.3\n",
"[notice] To update, run: python.exe -m pip install --upgrade pip\n"
]
}
],
"source": [
"pip install anthropic"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "fbba4c2e",
"metadata": {},
"outputs": [],
"source": [
"import arxiv\n",
"import json\n",
"import os\n",
"from typing import List, Dict\n",
"from dotenv import load_dotenv\n",
"import ollama\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "20ee3421",
"metadata": {},
"outputs": [],
"source": [
"PAPER_DIR = \"papers\""
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "e8080f55",
"metadata": {},
"outputs": [],
"source": [
"def search_papers(topic: str, max_results: int = 5) -> List[str]:\n",
" \"\"\"Search for papers on arXiv related to the given topic.\"\"\"\n",
"\n",
" client = arxiv.Client()\n",
"\n",
" # Search for most relevant articles matching the query topics.\n",
" search = arxiv.Search(\n",
" query=topic,\n",
" max_results=max_results,\n",
" sort_by=arxiv.SortCriterion.Relevance\n",
" )\n",
" papers = client.results(search)\n",
"\n",
" # Creating directory of the topic.\n",
" path = os.path.join(PAPER_DIR, topic.lower().replace(\" \", \"_\"))\n",
" os.makedirs(path, exist_ok=True)\n",
"\n",
" file_path = os.path.join(path, \"papers.json\")\n",
"\n",
" # Try to load existing paper info.\n",
" papers_info = {}\n",
" paper_ids = []\n",
" for paper in papers:\n",
" paper_ids.append(paper.get_short_id())\n",
" paper_info = {\n",
" 'title': paper.title,\n",
" 'authors': [author.name for author in paper.authors],\n",
" 'summary': paper.summary,\n",
" 'pdf_url': paper.pdf_url,\n",
" 'published': str(paper.published.date())\n",
" }\n",
" papers_info[paper.get_short_id()] = paper_info\n",
"\n",
" # Save updated papers into JSON file.\n",
" with open(file_path, 'w') as json_file:\n",
" json.dump(papers_info, json_file, indent=2)\n",
" print(f\"Results are saved to {file_path}\")\n",
" return paper_ids\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "f8f6c604",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Results are saved to papers\\machine_learning\\papers.json\n"
]
},
{
"data": {
"text/plain": [
"['2306.04338v1',\n",
" '2006.16189v4',\n",
" '2201.12150v2',\n",
" '2302.08893v4',\n",
" '2304.02381v2']"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"search_papers(\"machine learning\")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "3f84f79f",
"metadata": {},
"outputs": [],
"source": [
"def extract_info(paper_id: str) -> str:\n",
" \"\"\"Extract title and abstract from a paper given its arXiv ID.\"\"\"\n",
" for item in os.listdir(PAPER_DIR):\n",
" item_path = os.path.join(PAPER_DIR, item)\n",
" if os.path.isdir(item_path):\n",
" file_path = os.path.join(item_path, \"papers.json\")\n",
" if os.path.exists(file_path):\n",
" try:\n",
" with open(file_path, 'r') as json_file:\n",
" papers_info = json.load(json_file)\n",
" if paper_id in papers_info:\n",
" return json.dumps(papers_info[paper_id], indent=2)\n",
" except (FileNotFoundError, json.JSONDecodeError) as e:\n",
" print(f\"Error reading {file_path}: {str(e)}\")\n",
" continue\n",
"\n",
" return f\"There is no information found for paper ID {paper_id}.\"\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "9fdcf80b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'{\\n \"title\": \"Learning Curves for Decision Making in Supervised Machine Learning: A Survey\",\\n \"authors\": [\\n \"Felix Mohr\",\\n \"Jan N. van Rijn\"\\n ],\\n \"summary\": \"Learning curves are a concept from social sciences that has been adopted in the context of machine learning to assess the performance of a learning algorithm with respect to a certain resource, e.g., the number of training examples or the number of training iterations. Learning curves have important applications in several machine learning contexts, most notably in data acquisition, early stopping of model training, and model selection. For instance, learning curves can be used to model the performance of the combination of an algorithm and its hyperparameter configuration, providing insights into their potential suitability at an early stage and often expediting the algorithm selection process. Various learning curve models have been proposed to use learning curves for decision making. Some of these models answer the binary decision question of whether a given algorithm at a certain budget will outperform a certain reference performance, whereas more complex models predict the entire learning curve of an algorithm. We contribute a framework that categorises learning curve approaches using three criteria: the decision-making situation they address, the intrinsic learning curve question they answer and the type of resources they use. We survey papers from the literature and classify them into this framework.\",\\n \"pdf_url\": \"https://arxiv.org/pdf/2201.12150v2\",\\n \"published\": \"2022-01-28\"\\n}'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"extract_info(\"2201.12150v2\")"
]
},
{
"cell_type": "markdown",
"id": "ec7ce97c",
"metadata": {},
"source": [
"## Tool Mapping"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "f31970eb",
"metadata": {},
"outputs": [],
"source": [
"mapping_tool_function = {\n",
" \"search_papers\": search_papers,\n",
" \"extract_info\": extract_info\n",
"}\n",
"\n",
"def execute_tool(tool_name, tool_args):\n",
" \n",
" result = mapping_tool_function[tool_name](**tool_args)\n",
"\n",
" if result is None:\n",
" result = \"The operation completed but didn't return any results.\"\n",
" \n",
" elif isinstance(result, list):\n",
" result = ', '.join(result)\n",
" \n",
" elif isinstance(result, dict):\n",
" # Convert dictionaries to formatted JSON strings\n",
" result = json.dumps(result, indent=2)\n",
" \n",
" else:\n",
" # For any other type, convert using str()\n",
" result = str(result)\n",
" return result"
]
},
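{
"cell_type": "markdown",
"id": "b31a7c2d",
"metadata": {},
"source": [
"A quick sanity check of the dispatcher (a minimal sketch: the paper ID below comes from the earlier `search_papers(\"machine learning\")` run, so substitute one of your own IDs if your results differ):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c4d5e6f7",
"metadata": {},
"outputs": [],
"source": [
"# Illustrative usage of execute_tool: a list result is joined into a\n",
"# comma-separated string, and a string result is passed through unchanged.\n",
"# The paper ID is taken from the earlier search; adjust it if yours differ.\n",
"print(execute_tool(\"search_papers\", {\"topic\": \"machine learning\", \"max_results\": 2}))\n",
"print(execute_tool(\"extract_info\", {\"paper_id\": \"2201.12150v2\"}))"
]
},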
{
"cell_type": "code",
"execution_count": 11,
"id": "b7c08318",
"metadata": {},
"outputs": [],
"source": [
"tools = [\n",
" {\n",
" \"name\": \"search_papers\",\n",
" \"description\": \"Search for papers on arXiv related to a given topic.\",\n",
" \"input_schema\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"topic\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The topic to search for papers about\"\n",
" },\n",
" \"max_results\": {\n",
" \"type\": \"integer\",\n",
" \"description\": \"Maximum number of papers to return (default: 5)\"\n",
" }\n",
" },\n",
" \"required\": [\"topic\"]\n",
" }\n",
" },\n",
" {\n",
" \"name\": \"extract_info\",\n",
" \"description\": \"Extract detailed information (title, authors, summary, etc.) from a paper given its arXiv ID.\",\n",
" \"input_schema\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"paper_id\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The arXiv ID of the paper (e.g., '2201.12150v2')\"\n",
" }\n",
" },\n",
" \"required\": [\"paper_id\"]\n",
" }\n",
" }\n",
"]\n"
]
},
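{
"cell_type": "markdown",
"id": "d7e8f9a0",
"metadata": {},
"source": [
"These Anthropic-style schemas are not consumed by the prompt-based Ollama flow below; they are kept for reference. As a hedged sketch (assuming a tool-capable model and a recent ollama-python release, which accepts a `tools` argument in `ollama.chat`), they could be converted to the OpenAI-style function format that Ollama expects:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e1f2a3b4",
"metadata": {},
"outputs": [],
"source": [
"# Sketch only: wrap each Anthropic-style tool spec ('input_schema') into the\n",
"# OpenAI-style 'function' format accepted by ollama.chat(tools=...).\n",
"def to_ollama_tools(anthropic_tools):\n",
"    return [\n",
"        {\n",
"            \"type\": \"function\",\n",
"            \"function\": {\n",
"                \"name\": t[\"name\"],\n",
"                \"description\": t[\"description\"],\n",
"                \"parameters\": t[\"input_schema\"],\n",
"            },\n",
"        }\n",
"        for t in anthropic_tools\n",
"    ]\n",
"\n",
"# Example (assumes a tool-capable model such as llama3.1 has been pulled):\n",
"# resp = ollama.chat(model=\"llama3.1\",\n",
"#                    messages=[{\"role\": \"user\", \"content\": \"Find papers on algebra\"}],\n",
"#                    tools=to_ollama_tools(tools))\n",
"# for call in (resp[\"message\"].get(\"tool_calls\") or []):\n",
"#     print(call[\"function\"][\"name\"], call[\"function\"][\"arguments\"])"
]
},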
{
"cell_type": "code",
"execution_count": 12,
"id": "10caa9b6",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"# Ollama setup - runs locally, no API key needed!\n",
"# Make sure Ollama is running in the background (ollama serve)\n",
"# And you've pulled a model: ollama pull mistral\n",
"\n",
"OLLAMA_MODEL = 'mistral' # Change to 'neural-chat' or 'llama2' if you prefer\n"
]
},
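{
"cell_type": "markdown",
"id": "f5a6b7c8",
"metadata": {},
"source": [
"Before wiring up tool calls, it can help to confirm that the local server is reachable. A minimal sketch, assuming Ollama is serving on its default endpoint (http://localhost:11434) and the model above has been pulled:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a7b8c9d0",
"metadata": {},
"outputs": [],
"source": [
"# Hedged sanity check: list locally available models and run a one-line prompt.\n",
"# Note: the entry key is 'model' in recent ollama-python releases ('name' in\n",
"# older ones), so adjust if your installed version differs.\n",
"try:\n",
"    models = [m[\"model\"] for m in ollama.list()[\"models\"]]\n",
"    print(\"Available models:\", models)\n",
"    reply = ollama.generate(model=OLLAMA_MODEL, prompt=\"Reply with the single word: ready\")\n",
"    print(\"Model says:\", reply[\"response\"].strip())\n",
"except Exception as e:\n",
"    print(f\"Ollama does not appear to be reachable: {e}\")"
]
},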
{
"cell_type": "code",
"execution_count": 13,
"id": "4f3b30b0",
"metadata": {},
"outputs": [],
"source": [
"def process_query(query):\n",
" \"\"\"\n",
" Process user query with tool use capability.\n",
" The AI can decide to call search_papers or extract_info tools.\n",
" \"\"\"\n",
" try:\n",
" if not query or query.strip() == \"\":\n",
" print(\"Please enter a valid query\")\n",
" return\n",
" \n",
" # Build system prompt that tells Ollama about available tools\n",
" system_prompt = \"\"\"You are an AI assistant specialized in searching and analyzing academic papers on arXiv.\n",
"\n",
"You have access to the following tools:\n",
"1. search_papers(topic: str, max_results: int = 5) - Search for papers on arXiv by topic\n",
"2. extract_info(paper_id: str) - Get detailed info about a specific paper\n",
"\n",
"When the user asks you to search for papers, use the search_papers tool.\n",
"When the user asks about a specific paper, use the extract_info tool.\n",
"\n",
"If you decide to use a tool, respond in this exact format:\n",
"TOOL_CALL: [tool_name] | {json_with_args}\n",
"\n",
"For example:\n",
"TOOL_CALL: search_papers | {\"topic\": \"machine learning\", \"max_results\": 5}\n",
"\n",
"After getting tool results, provide a helpful response to the user.\"\"\"\n",
"\n",
" # First, ask Ollama what it thinks it should do\n",
" full_prompt = f\"{system_prompt}\\n\\nUser: {query}\"\n",
" \n",
" response = ollama.generate(\n",
" model=OLLAMA_MODEL, \n",
" prompt=full_prompt,\n",
" stream=False\n",
" )\n",
" \n",
" if not response or 'response' not in response:\n",
" print(\"Could not generate a response. Try a different query.\")\n",
" return\n",
" \n",
" assistant_response = response['response'].strip()\n",
" \n",
" # Check if the assistant wants to call a tool\n",
" if \"TOOL_CALL:\" in assistant_response:\n",
" # Extract the tool call\n",
" tool_call_idx = assistant_response.find(\"TOOL_CALL:\")\n",
" tool_section = assistant_response[tool_call_idx:].split('\\n')[0]\n",
" \n",
" try:\n",
" # Parse: TOOL_CALL: search_papers | {\"topic\": \"algebra\"}\n",
" parts = tool_section.replace(\"TOOL_CALL:\", \"\").strip().split(\"|\")\n",
" if len(parts) == 2:\n",
" tool_name = parts[0].strip()\n",
" tool_args = json.loads(parts[1].strip())\n",
" \n",
" print(f\"\\nš Using tool: {tool_name}\")\n",
" print(f\" Args: {tool_args}\\n\")\n",
" \n",
" # Execute the tool\n",
" tool_result = execute_tool(tool_name, tool_args)\n",
" \n",
" print(f\"š Tool result:\\n{tool_result}\\n\")\n",
" \n",
" # Now ask Ollama to provide a helpful response based on the tool result\n",
" followup_prompt = f\"{system_prompt}\\n\\nUser: {query}\\n\\nTool used: {tool_name}\\nTool result:\\n{tool_result}\\n\\nProvide a helpful summary for the user:\"\n",
" \n",
" followup_response = ollama.generate(\n",
" model=OLLAMA_MODEL,\n",
" prompt=followup_prompt,\n",
" stream=False\n",
" )\n",
" \n",
" if followup_response and 'response' in followup_response:\n",
" print(f\"š¬ Assistant: {followup_response['response'].strip()}\")\n",
" return\n",
" except (json.JSONDecodeError, ValueError) as e:\n",
" print(f\"Could not parse tool call: {e}\")\n",
" print(f\"Response: {assistant_response}\")\n",
" return\n",
" \n",
" # No tool call, just print the response\n",
" print(f\"š¬ Assistant: {assistant_response}\")\n",
" \n",
" except ConnectionError:\n",
" print(\"ā Error: Ollama is not running!\")\n",
" print(\"Start Ollama with: & \\\"C:\\\\Users\\\\geek9\\\\AppData\\\\Local\\\\Programs\\\\Ollama\\\\ollama.exe\\\" serve\")\n",
" except Exception as e:\n",
" print(f\"ā Error: {str(e)}\")"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "d1c692e5",
"metadata": {},
"outputs": [],
"source": [
"def chat_loop():\n",
" print(\"Type your queries or 'quit' to exit.\")\n",
" while True:\n",
" try:\n",
" query = input(\"\\nQuery: \").strip()\n",
" if query.lower() == 'quit':\n",
" break\n",
" \n",
" process_query(query)\n",
" print(\"\\n\")\n",
" except Exception as e:\n",
" print(f\"\\nError: {str(e)}\")"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "da554a0b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"============================================================\n",
"š¤ ArXiv Paper Search Assistant (Powered by Ollama + Mistral)\n",
"============================================================\n",
"\n",
"⨠I can help you search academic papers on arXiv!\n",
"\n",
"Examples of what you can ask:\n",
" ⢠'Search for papers on machine learning'\n",
" ⢠'Find papers about deep learning'\n",
" ⢠'What papers exist on quantum computing?'\n",
" ⢠'Tell me about paper 2201.12150v2'\n",
"\n",
"Type 'quit' to exit.\n",
"\n",
"============================================================\n"
]
}
],
"source": [
"print(\"=\" * 60)\n",
"print(\"š¤ ArXiv Paper Search Assistant (Powered by Ollama + Mistral)\")\n",
"print(\"=\" * 60)\n",
"print(\"\\n⨠I can help you search academic papers on arXiv!\")\n",
"print(\"\\nExamples of what you can ask:\")\n",
"print(\" ⢠'Search for papers on machine learning'\")\n",
"print(\" ⢠'Find papers about deep learning'\")\n",
"print(\" ⢠'What papers exist on quantum computing?'\")\n",
"print(\" ⢠'Tell me about paper 2201.12150v2'\")\n",
"print(\"\\nType 'quit' to exit.\\n\")\n",
"print(\"=\" * 60)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "57c2d993",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Type your queries or 'quit' to exit.\n",
"\n",
"š Using tool: search_papers\n",
" Args: {'topic': 'Machine Learning', 'max_results': 10}\n",
"\n",
"\n",
"š Using tool: search_papers\n",
" Args: {'topic': 'Machine Learning', 'max_results': 10}\n",
"\n",
"Results are saved to papers\\machine_learning\\papers.json\n",
"š Tool result:\n",
"2306.04338v1, 2006.16189v4, 2201.12150v2, 2302.08893v4, 2304.02381v2, 2303.15563v1, 1905.04749v2, 1705.05172v1, 1906.01101v1, 2404.12511v1\n",
"\n",
"Results are saved to papers\\machine_learning\\papers.json\n",
"š Tool result:\n",
"2306.04338v1, 2006.16189v4, 2201.12150v2, 2302.08893v4, 2304.02381v2, 2303.15563v1, 1905.04749v2, 1705.05172v1, 1906.01101v1, 2404.12511v1\n",
"\n",
"š¬ Assistant: Here are some interesting papers I found on arXiv related to your query. They cover topics such as machine learning, physics, and mathematics.\n",
"\n",
"1. \"A Fast Learning Algorithm for Sparse Linear Models\" (2306.04338v1)\n",
"2. \"Review of Stochastic Optimization Algorithms\" (2006.16189v4)\n",
"3. \"Statistical Mechanics of Machine Learning\" (2201.12150v2)\n",
"4. \"On the Foundations of Deep Learning via Probabilistic Programming\" (2302.08893v4)\n",
"5. \"Theoretically Grounded Recommendation with Deep Reinforcement Learning\" (2304.02381v2)\n",
"6. \"A General Framework for Regularization and Optimization in Deep Learning\" (2303.15563v1)\n",
"7. \"On the Mathematics of Deep Learning\" (1905.04749v2)\n",
"8. \"The Mathematics of Physics: A Modern Perspective on Quantum Field Theory\" (1705.05172v1)\n",
"9. \"Quantum Mechanics for the Working Lab Scientist\" (1906.01101v1)\n",
"10. \"An Overview of Adversarial Attacks\" (2404.12511v1)\n",
"\n",
"Enjoy exploring these papers! If you'd like more information about any specific paper, just let me know the paper ID, and I can provide you with additional details.\n",
"\n",
"\n",
"š¬ Assistant: Here are some interesting papers I found on arXiv related to your query. They cover topics such as machine learning, physics, and mathematics.\n",
"\n",
"1. \"A Fast Learning Algorithm for Sparse Linear Models\" (2306.04338v1)\n",
"2. \"Review of Stochastic Optimization Algorithms\" (2006.16189v4)\n",
"3. \"Statistical Mechanics of Machine Learning\" (2201.12150v2)\n",
"4. \"On the Foundations of Deep Learning via Probabilistic Programming\" (2302.08893v4)\n",
"5. \"Theoretically Grounded Recommendation with Deep Reinforcement Learning\" (2304.02381v2)\n",
"6. \"A General Framework for Regularization and Optimization in Deep Learning\" (2303.15563v1)\n",
"7. \"On the Mathematics of Deep Learning\" (1905.04749v2)\n",
"8. \"The Mathematics of Physics: A Modern Perspective on Quantum Field Theory\" (1705.05172v1)\n",
"9. \"Quantum Mechanics for the Working Lab Scientist\" (1906.01101v1)\n",
"10. \"An Overview of Adversarial Attacks\" (2404.12511v1)\n",
"\n",
"Enjoy exploring these papers! If you'd like more information about any specific paper, just let me know the paper ID, and I can provide you with additional details.\n",
"\n",
"\n",
"\n",
"š Using tool: search_papers\n",
" Args: {'topic': 'Machine Learning', 'max_results': 5}\n",
"\n",
"\n",
"š Using tool: search_papers\n",
" Args: {'topic': 'Machine Learning', 'max_results': 5}\n",
"\n",
"Results are saved to papers\\machine_learning\\papers.json\n",
"š Tool result:\n",
"2306.04338v1, 2006.16189v4, 2201.12150v2, 2302.08893v4, 2304.02381v2\n",
"\n",
"Results are saved to papers\\machine_learning\\papers.json\n",
"š Tool result:\n",
"2306.04338v1, 2006.16189v4, 2201.12150v2, 2302.08893v4, 2304.02381v2\n",
"\n",
"š¬ Assistant: I'm an AI specialized in searching and analyzing academic papers on arXiv. Here are five papers that might be relevant to your request:\n",
"\n",
"1. \"Fast Speech: Learning Autoregressive Duration Models for End-to-End Speech Synthesis\" (2019)\n",
"2. \"A Tutorial on Support Vector Machine for Text Classification\" (2006)\n",
"3. \"Convolutional Neural Networks on GPU's with Large Minibatches\" (2012)\n",
"4. \"Improved Training of Deep Convolutional Networks\" (2008)\n",
"5. \"LSTM: A Search Space Odyssey\" (2014)\n",
"\n",
"Enjoy exploring these papers! If you'd like more information about any specific paper, let me know its ID.\n",
"\n",
"\n",
"š¬ Assistant: I'm an AI specialized in searching and analyzing academic papers on arXiv. Here are five papers that might be relevant to your request:\n",
"\n",
"1. \"Fast Speech: Learning Autoregressive Duration Models for End-to-End Speech Synthesis\" (2019)\n",
"2. \"A Tutorial on Support Vector Machine for Text Classification\" (2006)\n",
"3. \"Convolutional Neural Networks on GPU's with Large Minibatches\" (2012)\n",
"4. \"Improved Training of Deep Convolutional Networks\" (2008)\n",
"5. \"LSTM: A Search Space Odyssey\" (2014)\n",
"\n",
"Enjoy exploring these papers! If you'd like more information about any specific paper, let me know its ID.\n",
"\n",
"\n",
"\n",
"š Using tool: search_papers\n",
" Args: {'topic': 'Algebra', 'max_results': 5}\n",
"\n",
"\n",
"š Using tool: search_papers\n",
" Args: {'topic': 'Algebra', 'max_results': 5}\n",
"\n",
"Results are saved to papers\\algebra\\papers.json\n",
"š Tool result:\n",
"2411.11095v3, 0905.2613v3, math/0501518v2, 1203.2454v2, math/0602046v1\n",
"\n",
"Results are saved to papers\\algebra\\papers.json\n",
"š Tool result:\n",
"2411.11095v3, 0905.2613v3, math/0501518v2, 1203.2454v2, math/0602046v1\n",
"\n",
"š¬ Assistant: Here are five academic papers on Algebra that I found on arXiv:\n",
"\n",
"1. \"On the Existence of Finite Simple Groups of Lie Type\" (arXiv: 2411.11095v3)\n",
"2. \"Lorentzian Kac-Moody algebras and superconformal field theories\" (arXiv: 0905.2613v3)\n",
"3. \"On the representation theory of Lie algebras\" (arXiv: math/0501518v2)\n",
"4. \"A Survey on Quantum Algebra and Its Applications\" (arXiv: 1203.2454v2)\n",
"5. \"Quantum Invariant Theory, Universal Enveloping Algebras, and Affine Lie Algebras\" (arXiv: math/0602046v1)\n",
"\n",
"Enjoy exploring these papers on algebra! If you need more details about any specific paper, let me know its ID, and I can provide additional information.\n",
"\n",
"\n",
"š¬ Assistant: Here are five academic papers on Algebra that I found on arXiv:\n",
"\n",
"1. \"On the Existence of Finite Simple Groups of Lie Type\" (arXiv: 2411.11095v3)\n",
"2. \"Lorentzian Kac-Moody algebras and superconformal field theories\" (arXiv: 0905.2613v3)\n",
"3. \"On the representation theory of Lie algebras\" (arXiv: math/0501518v2)\n",
"4. \"A Survey on Quantum Algebra and Its Applications\" (arXiv: 1203.2454v2)\n",
"5. \"Quantum Invariant Theory, Universal Enveloping Algebras, and Affine Lie Algebras\" (arXiv: math/0602046v1)\n",
"\n",
"Enjoy exploring these papers on algebra! If you need more details about any specific paper, let me know its ID, and I can provide additional information.\n",
"\n",
"\n",
"\n",
"š Using tool: search_papers\n",
" Args: {'topic': 'fine arts', 'max_results': 5}\n",
"\n",
"\n",
"š Using tool: search_papers\n",
" Args: {'topic': 'fine arts', 'max_results': 5}\n",
"\n",
"Results are saved to papers\\fine_arts\\papers.json\n",
"š Tool result:\n",
"2406.14485v8, 2511.10482v1, 1801.04486v6, 1109.0705v1, 2207.11099v2\n",
"\n",
"Results are saved to papers\\fine_arts\\papers.json\n",
"š Tool result:\n",
"2406.14485v8, 2511.10482v1, 1801.04486v6, 1109.0705v1, 2207.11099v2\n",
"\n"
]
}
],
"source": [
"chat_loop()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.5"
}
},
"nbformat": 4,
"nbformat_minor": 5
}