
NCCN Guidelines MCP Server

by gscfwid

get_index

Retrieve the raw YAML index file containing structured NCCN clinical cancer guidelines for direct access to authoritative treatment protocols.

Instructions

Get the raw contents of the NCCN guidelines index YAML file.

Returns: a string containing the raw YAML content of the guidelines index.

Input Schema


No arguments
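The tool takes no arguments and returns the index as a single raw YAML string. Judging from the keys the server code reads (`nccn_guidelines`, a per-category `guidelines` list, and `guideline_link` on scraped items), the returned document is shaped roughly like this; the category and guideline names below are illustrative placeholders, not real index entries:

```yaml
nccn_guidelines:
  - category: "Example Category"          # placeholder name
    guidelines:
      - name: "Example Guideline"         # placeholder entry
        guideline_link: "https://example.org/guideline.pdf"
```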

Implementation Reference

  • The MCP tool handler for 'get_index'. This function reads and returns the raw YAML content of the NCCN guidelines index file generated by the scraper.
```python
@mcp.tool()
async def get_index() -> str:
    """
    Get the raw contents of the NCCN guidelines index YAML file.

    Returns:
        String containing the raw YAML content of the guidelines index
    """
    try:
        index_path = current_dir / GUIDELINES_INDEX_FILE
        with open(index_path, 'r', encoding='utf-8') as f:
            content = f.read()
        logger.info(f"Successfully loaded guidelines index from {index_path}")
        return content
    except FileNotFoundError:
        logger.error(f"Guidelines index file not found: {index_path}")
        return "Error: Guidelines index file not found"
    except Exception as e:
        logger.error(f"Error reading guidelines index: {str(e)}")
        return f"Error reading guidelines index: {str(e)}"
```
  • Helper function that scrapes the NCCN website to generate and maintain the guidelines index YAML file used by the get_index tool. Called during server initialization.
```python
async def ensure_nccn_index(output_file: str = DEFAULT_OUTPUT_FILE,
                            max_age_days: int = CACHE_MAX_AGE_DAYS) -> dict:
    """
    Ensure the NCCN guideline index exists and is valid.
    This is the main interface for MCP server calls.

    Args:
        output_file: Output file path
        max_age_days: Maximum cache file validity period (days)

    Returns:
        Parsed guideline index data
    """
    import time

    # Check cache file
    cache_info = check_cache_file(output_file)

    # Determine if re-scraping is needed
    should_scrape = not cache_info['exists'] or not cache_info['is_valid']

    if cache_info['exists']:
        if cache_info['is_valid']:
            logger.info(
                f"Using valid cache file: {output_file} "
                f"(created at {cache_info['created_time'].strftime('%Y-%m-%d %H:%M:%S')}, "
                f"{cache_info['age_days']} days ago)"
            )
        else:
            logger.info(
                f"Cache file expired ({cache_info['age_days']} days > {max_age_days} days) "
                "or corrupted, starting re-scraping..."
            )
    else:
        logger.info("Cache file not found, starting NCCN guideline index scraping...")

    if should_scrape:
        start_time = time.time()
        try:
            # Scrape all category data
            categories_data = await scrape_all_categories()
            if not categories_data:
                logger.error("Scraping failed, no data retrieved")
                # If scraping fails but an old cache exists, fall back to it
                if cache_info['exists']:
                    logger.info("Scraping failed, attempting to use existing cache file")
                    return load_cached_data(output_file)
                return {}

            # Generate YAML document
            yaml_content = generate_yaml(categories_data)

            # Save to file
            with open(output_file, 'w', encoding='utf-8') as f:
                f.write(yaml_content)

            # Calculate statistics
            total_guidelines = sum(len(cat.get('items', [])) for cat in categories_data)
            successful_guidelines = sum(
                len([item for item in cat.get('items', []) if item.get('guideline_link')])
                for cat in categories_data
            )
            elapsed_time = time.time() - start_time

            logger.info(f"Scraping completed! Index saved to {output_file}")
            logger.info(
                f"Processed {len(categories_data)} categories, "
                f"found {successful_guidelines}/{total_guidelines} valid guideline links"
            )
            logger.info(f"Scraping time: {elapsed_time:.2f} seconds")
        except Exception as e:
            logger.error(f"Error during scraping process: {e}")
            # If scraping fails but a cache exists, use the cache
            if cache_info['exists']:
                logger.info("Scraping failed, using existing cache file")
                return load_cached_data(output_file)
            return {}

    # Load and return data
    cached_data = load_cached_data(output_file)
    if cached_data and 'nccn_guidelines' in cached_data:
        total_categories = len(cached_data['nccn_guidelines'])
        total_guidelines = sum(
            len(cat.get('guidelines', [])) for cat in cached_data['nccn_guidelines']
        )
        logger.info(
            f"NCCN guideline index ready: {total_categories} categories, "
            f"{total_guidelines} total guidelines"
        )
    else:
        logger.warning("Guideline index file format is abnormal")

    return cached_data
```
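The helpers `check_cache_file` and `load_cached_data` are referenced above but not shown here. Judging only from the fields the caller reads (`exists`, `is_valid`, `created_time`, `age_days`), a minimal stdlib-only sketch of the cache check might look like the following. This is a hypothetical reconstruction, not the server's actual code; in particular, the real implementation likely parses the YAML to detect corruption, whereas this sketch treats any non-empty file as intact:

```python
import os
import tempfile
from datetime import datetime

def check_cache_file(path: str, max_age_days: int = 7) -> dict:
    """Hypothetical reconstruction: report existence, age, and validity of a cache file."""
    if not os.path.exists(path):
        return {'exists': False, 'is_valid': False, 'created_time': None, 'age_days': None}
    created_time = datetime.fromtimestamp(os.path.getmtime(path))
    age_days = (datetime.now() - created_time).days
    non_empty = os.path.getsize(path) > 0  # stand-in for a real corruption check
    return {
        'exists': True,
        'is_valid': non_empty and age_days <= max_age_days,
        'created_time': created_time,
        'age_days': age_days,
    }

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, 'nccn_guidelines_index.yaml')
    missing = check_cache_file(path)   # no file yet -> not valid
    with open(path, 'w', encoding='utf-8') as f:
        f.write('nccn_guidelines: []\n')
    fresh = check_cache_file(path)     # just written -> valid, age 0
```

A mtime-based age check like this is why deleting the index file is enough to force a re-scrape on the next startup.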
  • server.py:146 (registration)
    The @mcp.tool() decorator registers the get_index function as an MCP tool.
```python
@mcp.tool()
```
  • Server initialization call to ensure_nccn_index, which populates the index file before the tool is used.
```python
guidelines_data = await ensure_nccn_index(
    output_file=str(current_dir / GUIDELINES_INDEX_FILE),
    max_age_days=7  # Refresh index every 7 days
)
```
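Once initialization has written the index file, `get_index`'s read path reduces to a plain file read with a fallback error string, and that logic is easy to exercise in isolation. A minimal sketch, using a hypothetical file name and only the standard library:

```python
from pathlib import Path
import tempfile

def read_index(index_path: Path) -> str:
    # Same shape as get_index's happy path and its FileNotFoundError branch.
    try:
        return index_path.read_text(encoding='utf-8')
    except FileNotFoundError:
        return "Error: Guidelines index file not found"

with tempfile.TemporaryDirectory() as d:
    path = Path(d) / "nccn_guidelines_index.yaml"
    before = read_index(path)                                # file absent -> error string
    path.write_text("nccn_guidelines: []\n", encoding='utf-8')
    after = read_index(path)                                 # file present -> raw YAML
```

Returning an error string rather than raising keeps the MCP tool's contract simple: the client always receives a string, and can distinguish errors by the `Error:` prefix.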
