
Open Search MCP

by flyanima

search_arxiv

Searches arXiv for academic papers matching a query, returning research publications across academic and technical domains.

Instructions

Search arXiv for academic papers

Input Schema

Name        Required  Description                Default
query       Yes       Search query               —
maxResults  No        Maximum results to return  10 (capped at 50)
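
A valid argument object for this schema looks like the following; the values are illustrative, and the default/cap figures come from the tool's handler:

```typescript
// Illustrative arguments for search_arxiv; only `query` is required.
const exampleArgs = {
  query: 'quantum error correction', // required free-text search query
  maxResults: 5                      // optional; the handler defaults to 10 and caps at 50
};
```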

Implementation Reference

  • Complete tool registration block containing the handler (execute function), input schema, and core implementation of search_arxiv. It handles arXiv API calls with retry logic, XML parsing, error handling, search-engine fallback, and result formatting.
    registry.registerTool({
      name: 'search_arxiv',
      description: 'Search arXiv for academic papers',
      category: 'academic',
      source: 'arxiv.org',
      inputSchema: {
        type: 'object',
        properties: {
          query: { type: 'string', description: 'Search query' },
          maxResults: { type: 'number', description: 'Maximum results to return' }
        },
        required: ['query']
      },
      execute: async (args: ToolInput): Promise<ToolOutput> => {
        const query = args.query || '';
        const maxResults = Math.min(args.maxResults || 10, 50); // Limit to 50 results
    
        // Declare lastError at function scope
        let lastError: any = null;
    
        try {
          const startTime = Date.now();
    
          // Try arXiv API with enhanced retry mechanism
          let results = [];
          let apiSuccess = false;
    
          // Try multiple endpoints with different configurations
          const apiConfigs = [
            {
              url: 'https://export.arxiv.org/api/query',
              timeout: 20000,
              headers: {
                'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36',
                'Accept': 'application/atom+xml'
              }
            },
            {
              url: 'http://export.arxiv.org/api/query',
              timeout: 15000,
              headers: {
                'User-Agent': 'Open-Search-MCP/1.0',
                'Accept': 'application/atom+xml'
              }
            },
            {
              url: 'https://arxiv.org/api/query',
              timeout: 10000,
              headers: {
                'User-Agent': 'Open-Search-MCP/1.0',
                'Accept': 'application/atom+xml'
              }
            }
          ];
    
          for (const config of apiConfigs) {
            for (let attempt = 0; attempt < 3; attempt++) {
              try {
                const params = {
                  search_query: `all:${query}`, // axios URL-encodes params; encodeURIComponent here would double-encode
                  start: 0,
                  max_results: maxResults,
                  sortBy: 'relevance',
                  sortOrder: 'descending'
                };
    
                const response = await axios.get(config.url, {
                  params,
                  timeout: config.timeout,
                  headers: config.headers,
                  maxRedirects: 5,
                  validateStatus: (status) => status < 500 // Accept 4xx but retry on 5xx
                });
    
                if (response.status === 200 && response.data) {
                  // Parse XML response
                  const xmlData = response.data;
                  results = parseArxivXML(xmlData);
                  if (results.length > 0) {
                    apiSuccess = true;
                    break;
                  }
                }
              } catch (apiError) {
                lastError = apiError;
                // Wait before retry
                if (attempt < 2) {
                  await new Promise(resolve => setTimeout(resolve, 1000 * (attempt + 1)));
                }
              }
            }
            if (apiSuccess) break;
          }
    
          // If API fails, try search engine as fallback
          if (!apiSuccess || results.length === 0) {
            try {
              console.log('arXiv API failed, trying search engine fallback...');
              const searchQuery = `site:arxiv.org "${query}" filetype:pdf`;
              const searchEngine = await import('../../engines/search-engine-manager.js');
              const searchResults = await searchEngine.SearchEngineManager.getInstance().search(searchQuery, {
                maxResults: maxResults * 2,
                timeout: 10000
              });
    
              if (searchResults && searchResults.results && searchResults.results.length > 0) {
                results = extractArxivResultsFromSearch(searchResults.html || '', query);
                console.log(`Found ${results.length} results from search engine fallback`);
              }
            } catch (searchError) {
              console.log('Search engine fallback also failed:', searchError);
            }
          }
    
          const searchTime = Date.now() - startTime;
    
          // If no results found, provide helpful error message
          if (results.length === 0) {
            return {
              success: false,
              error: 'No arXiv papers found for this query',
              data: {
                source: 'arXiv',
                query,
                results: [],
                totalResults: 0,
                searchTime,
                apiUsed: apiSuccess,
                suggestions: [
                  'Try broader search terms',
                  'Check spelling of technical terms',
                  'Use different keywords or synonyms',
                  'Try searching without quotes'
                ],
                lastError: lastError ? (lastError instanceof Error ? lastError.message : String(lastError)) : null
              }
            };
          }
    
          return {
            success: true,
            data: {
              source: apiSuccess ? 'arXiv API' : 'arXiv (Search Engine)',
              query,
              results: results.slice(0, maxResults),
              totalResults: results.length,
              searchTime,
              apiUsed: apiSuccess,
              fallbackUsed: !apiSuccess
            },
            metadata: {
              totalResults: results.length,
              searchTime,
              sources: ['arxiv.org'],
              cached: false,
              apiSuccess,
              fallbackUsed: !apiSuccess
            }
          };
        } catch (error) {
          return {
            success: false,
            error: `arXiv search failed: ${error instanceof Error ? error.message : String(error)}`,
            data: {
              source: 'arXiv',
              query,
              results: [],
              totalResults: 0,
              apiUsed: false,
              lastError: lastError ? (lastError instanceof Error ? lastError.message : String(lastError)) : null,
              suggestions: [
                'Check your internet connection',
                'Try again in a few moments',
                'Use different search terms',
                'Contact support if the problem persists'
              ]
            }
          };
        }
      }
    });
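
The nested endpoint/attempt loops above follow a common pattern; distilled into a standalone helper, it looks roughly like this (a sketch with illustrative names, not the project's code):

```typescript
// Sketch of the retry pattern used by the handler above: try each API
// config up to 3 times with linear backoff (1s, then 2s), falling through
// to the next config on failure. Names are illustrative, not from the source.
async function firstSuccess<C, R>(
  configs: C[],
  attempt: (config: C) => Promise<R>,
  sleep: (ms: number) => Promise<void> = ms => new Promise(resolve => setTimeout(resolve, ms))
): Promise<R | null> {
  for (const config of configs) {
    for (let i = 0; i < 3; i++) {
      try {
        return await attempt(config);
      } catch {
        if (i < 2) await sleep(1000 * (i + 1)); // wait 1s after first failure, 2s after second
      }
    }
  }
  return null; // every config and attempt exhausted
}
```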
  • src/index.ts:229-229 (registration)
    Server initialization calls registerAcademicTools which registers the search_arxiv tool into the ToolRegistry.
    registerAcademicTools(this.toolRegistry);           // 1 tool: search_arxiv
  • Input schema defined in the tool registration for validating query and optional maxResults parameters.
    inputSchema: {
      type: 'object',
      properties: {
        query: { type: 'string', description: 'Search query' },
        maxResults: { type: 'number', description: 'Maximum results to return' }
      },
      required: ['query']
  • Helper function to parse arXiv XML API response into structured paper results.
    function parseArxivXML(xmlData: string): any[] {
      const results: any[] = [];
    
      try {
        // Simple XML parsing for arXiv entries
        const entryRegex = /<entry>(.*?)<\/entry>/gs;
        const entries = xmlData.match(entryRegex) || [];
    
        for (const entry of entries) {
          const result = {
            id: extractXMLValue(entry, 'id'),
            title: extractXMLValue(entry, 'title')?.replace(/\s+/g, ' ').trim(),
            summary: extractXMLValue(entry, 'summary')?.replace(/\s+/g, ' ').trim(),
            authors: extractAuthors(entry),
            published: extractXMLValue(entry, 'published'),
            updated: extractXMLValue(entry, 'updated'),
            categories: extractCategories(entry),
            url: extractXMLValue(entry, 'id'),
            pdfUrl: extractPdfUrl(entry)
          };
    
          if (result.title && result.summary) {
            results.push(result);
          }
        }
      } catch (error) {
        // Swallow parse errors; return whatever entries parsed successfully
      }
    
      return results;
    }
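
The helpers parseArxivXML depends on (extractXMLValue, extractAuthors, etc.) are not shown in this reference. A plausible regex-based sketch of extractXMLValue, assumed rather than taken from the source, could be:

```typescript
// Hypothetical sketch of the extractXMLValue helper referenced above —
// the project's real implementation is not shown in this reference.
function extractXMLValue(xml: string, tag: string): string | undefined {
  // Non-greedy match of the first <tag>…</tag> pair, spanning newlines.
  const match = xml.match(new RegExp(`<${tag}[^>]*>([\\s\\S]*?)</${tag}>`));
  return match ? match[1] : undefined;
}
```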
  • Runtime input validation schema mapping for search_arxiv using shared academicSearch Zod schema.
    'search_arxiv': ToolSchemas.academicSearch,
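
The shared academicSearch Zod schema itself is not reproduced here; a zod-free stand-in capturing roughly the same contract (hypothetical, for illustration only) would be:

```typescript
// Hypothetical stand-in for the shared academicSearch validation —
// the real schema uses Zod and is defined elsewhere in the project.
function validateAcademicSearch(args: unknown): { query: string; maxResults?: number } {
  const a = args as Record<string, unknown>;
  if (typeof a?.query !== 'string' || a.query.length === 0) {
    throw new Error('query is required and must be a non-empty string');
  }
  if (a.maxResults !== undefined && typeof a.maxResults !== 'number') {
    throw new Error('maxResults must be a number when provided');
  }
  return { query: a.query, maxResults: a.maxResults as number | undefined };
}
```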
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but offers minimal information. It doesn't mention rate limits, authentication requirements, result format, pagination behavior, or whether this is a read-only operation. The agent must infer everything from the generic 'search' verb.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at just 5 words, with zero wasted language. It's front-loaded with the essential information and contains no unnecessary elaboration or redundant phrasing.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with no annotations and no output schema, the description is insufficiently complete. It doesn't explain what kind of results to expect, how results are ranked, whether there are field-specific search capabilities, or any limitations of the arXiv search system that the agent should know about.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents both parameters (query and maxResults). The description adds no additional parameter information beyond what's in the schema, so it meets the baseline expectation but doesn't provide extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Search arXiv for academic papers' clearly states the action (search) and target resource (arXiv academic papers), making the purpose immediately understandable. However, it doesn't distinguish this tool from similar sibling tools like search_pubmed or search_semantic_scholar, which also search academic sources.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance about when to use this tool versus alternatives. With multiple academic search tools available (search_pubmed, search_semantic_scholar, search_biorxiv, etc.), there's no indication of what makes arXiv unique or when it should be preferred over other research databases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
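
As an illustration of the review's suggestions, a more disclosure-complete description might read (hypothetical wording, not from the project; the 10/50 figures come from the handler, and the field list from parseArxivXML):

```typescript
// Hypothetical rewrite of the tool description addressing the review's
// points: scope, read-only behavior, defaults, and tool-selection guidance.
const improvedDescription =
  'Search arXiv (preprints in physics, math, CS, and related fields) for papers ' +
  'matching a free-text query. Read-only; no authentication required. Returns up ' +
  'to maxResults papers (default 10, max 50) with title, abstract, authors, dates, ' +
  'and PDF links. Prefer search_pubmed for biomedical literature.';
```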
