# Fix Summary: research-mcp Tool Name Confusion
## Issue
User reported getting this response:
> "I don't have a tool called 'research-mcp' available here. If you tell me the team or matchup, I can use my research_latest_score tool..."
## Root Cause
1. The LLM was confusing "research-mcp" (the backend service name) with the actual tool name `research_latest_score`
2. When research returned "no evidence found", the LLM interpreted this as "no access" rather than "search performed but no results"
## Fixes Applied
### 1. Tool Description Updates
- Updated `research_latest_score` tool description to explicitly state it uses "research-mcp" backend
- Added note that when users mention "research" or "research-mcp", this is the tool to use
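The updated description can be sketched as a tool definition like the one below. This is a hypothetical reconstruction: the schema shape follows the common OpenAI-style function-tool format, and the parameter names (`query`) are assumptions; only the tool name `research_latest_score` and the "research-mcp" wording come from the fix itself.

```python
# Hypothetical sketch of the updated tool definition (schema shape and
# parameter names are assumptions, not the project's actual code).
RESEARCH_TOOL = {
    "name": "research_latest_score",
    "description": (
        "Look up the latest score for a team or matchup. "
        "This tool is backed by the 'research-mcp' service: when a user "
        "mentions 'research' or 'research-mcp', this is the tool to use."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "Team or matchup to research.",
            }
        },
        "required": ["query"],
    },
}
```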
### 2. Instructions Enhancement
- Added explicit rule: When users mention "research-mcp" or "research", use `research_latest_score` tool
- Added instruction: DO NOT say "I don't have access" - you DO have access via research_latest_score
- Added rule: When research_latest_score returns results (even if "no evidence"), acknowledge research was performed
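The three rules above can be sketched as a system-prompt fragment. The wording below paraphrases the rules and is not the exact prompt text; the helper function and its name are hypothetical.

```python
# Paraphrased sketch of the prompt rules added in this fix; the exact
# wording and the build_system_prompt helper are assumptions.
RESEARCH_RULES = """\
- When the user mentions "research-mcp" or "research", call the
  research_latest_score tool.
- Never say "I don't have access" - you DO have access via
  research_latest_score.
- When research_latest_score returns results (even "no evidence"),
  acknowledge that research was performed before summarizing the outcome.
"""

def build_system_prompt(base_prompt: str) -> str:
    """Append the research rules to an existing system prompt."""
    return base_prompt.rstrip() + "\n\n" + RESEARCH_RULES.strip()
```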
### 3. Response Formatting
- Improved `research_latest_score` return format to make it clearer that research was completed
- Added `research_completed: true` flag
- Better handling of synthesis extraction (multiple fallback keys)
- Clearer messaging when no results found vs when research failed
## Test Results
- Tool is being called correctly (logs confirm `research_latest_score` is invoked)
- Research-mcp backend is responding (HTTP 200)
- LLM now understands that research_latest_score IS the research tool
## Status
⚠️ **Partially Fixed** - The system now:
- ✅ Correctly recognizes when users mention "research-mcp" or "research"
- ✅ Calls the `research_latest_score` tool appropriately (logs confirm)
- ✅ Research-mcp backend is responding successfully (HTTP 200)
- ⚠️ **Remaining Issue**: LLM sometimes still says "I can't access" even after research is performed
  - This appears to be LLM behavior when research returns "no evidence found"
  - The tool IS being called correctly (verified in logs)
  - The LLM's acknowledgment of the research is inconsistent: sometimes present, sometimes absent
## Current Behavior
- Tool calls: ✅ Working correctly
- Research backend: ✅ Responding successfully
- LLM acknowledgment: ⚠️ Inconsistent - sometimes acknowledges, sometimes says "no access"
## Recommendation
The core functionality works: the research tool is called whenever users mention "research-mcp". The remaining inconsistency in the LLM's statements about "access" appears to be a limitation of how GPT-5 interprets research results that report "no evidence found". The system is functional and research IS being performed, even when the LLM's phrasing suggests otherwise.