<!-- Generated by SpecStory -->
# README Update & GitFlow Release for AutoDocs MCP (2025-08-09 04:01:21Z)
<!-- Claude Code Session 446a048a-7686-4e85-820e-cc400487da5f (2025-08-09_04-01-21Z) -->
_**User**_
yolo
---
_**Agent (claude-sonnet-4-20250514)**_
I see you're testing with "yolo". I'm ready to help with any tasks you'd like me to work on in your AutoDocs MCP Server project. What would you like me to do?
---
_**User**_
What are sub-agents and how do I use them with you?
---
_**Agent (claude-sonnet-4-20250514)**_
Tool use: **WebFetch**
Result:
```
Based on the provided content, there is no detailed explanation about sub-agents in Claude Code. The webpage mentions "sub-agents" in the navigation menu under the "Build with Claude Code" section, but does not provide any specific details about what they are or how to use them in the visible text.
The only reference is a link to "/en/docs/claude-code/sub-agents", which suggests that more information about sub-agents exists, but is not present in this particular webpage excerpt.
```
_**Agent (claude-sonnet-4-20250514)**_
Tool use: **WebFetch**
Result:
```
Sub-agents in Claude Code are specialized AI assistants designed to handle specific types of tasks with unique capabilities:
Key Features:
- Each sub-agent has a "specific purpose and expertise area"
- Operates in its own separate context window
- Can be configured with specific tools
- Includes a custom system prompt guiding its behavior
Benefits:
- "Context preservation" by maintaining a separate conversation space
- Enables "specialized expertise" for targeted tasks
- Provides "reusability" across different projects
- Offers "flexible permissions" for tool access
How to Create and Use:
1. Run `/agents` command
2. Select "Create New Agent"
3. Define the sub-agent:
- Describe its purpose
- Select appropriate tools
- Craft a detailed system prompt
Usage Methods:
- Automatic delegation based on task description
- Explicit invocation by mentioning the sub-agent
Example Sub-agents:
- Code Reviewer: Proactively reviews code quality
- Debugger: Analyzes and fixes technical issues
- Data Scientist: Performs specialized data analysis
Best Practices:
- Create focused sub-agents with clear responsibilities
- Write detailed, specific system prompts
- Limit tool access to necessary permissions
```
_**Agent (claude-sonnet-4-20250514)**_
Based on the Claude Code documentation, sub-agents are specialized AI assistants that handle specific tasks with their own context windows and expertise areas.
## What are Sub-agents?
Sub-agents are focused AI assistants with:
- **Specialized expertise** for specific task types
- **Separate context windows** to maintain conversation focus
- **Custom tool access** based on their purpose
- **Tailored system prompts** that guide their behavior
## Benefits
- **Context preservation**: Each sub-agent maintains its own conversation space
- **Specialized expertise**: Targeted skills for specific domains
- **Reusability**: Can be used across different projects
- **Flexible permissions**: Tool access limited to what they need
## How to Create and Use Sub-agents
### Creating a Sub-agent
1. Run the `/agents` command
2. Select "Create New Agent"
3. Configure:
- Define the agent's purpose and expertise area
- Select appropriate tools for the agent
- Write a detailed system prompt
### Using Sub-agents
- **Automatic**: Claude delegates tasks based on descriptions
- **Explicit**: Mention the sub-agent directly in your request
## Example Sub-agents
- **Code Reviewer**: Proactively reviews code quality and suggests improvements
- **Debugger**: Analyzes technical issues and provides fixes
- **Data Scientist**: Performs specialized data analysis tasks
## Best Practices
- Create focused agents with clear, specific responsibilities
- Write detailed system prompts that guide behavior
- Limit tool access to only what's necessary for the agent's role
You can start by running `/agents` to explore creating your own specialized sub-agents for your workflow.
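For reference, a sub-agent is typically stored as a Markdown file with YAML frontmatter (project-level agents live under `.claude/agents/`). Below is a minimal sketch; the agent name, description, tool list, and prompt are illustrative and not taken from this repository:
```markdown
---
name: code-reviewer
description: Proactively reviews recent code changes for quality, style, and test coverage
tools: Read, Grep, Glob, Bash
---

You are a code review specialist for this project. When invoked, inspect the
most recently modified files, flag correctness and style issues, and report
findings grouped by severity with concrete fix suggestions.
```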
---
_**User**_
Given this information, and best practices on sub agents, propose a sub-agent strategy for this repo. Create a new document in the `planning` folder that contains your proposal for a sub-agent architecture.
---
_**Agent (claude-sonnet-4-20250514)**_
**Agent task list:**
- [ ] Analyze the codebase to understand current architecture and identify specialized domains
- [ ] Design sub-agent strategy based on project structure and development needs
- [ ] Create planning document with detailed sub-agent architecture proposal
_**Agent (claude-sonnet-4-20250514)**_
**Agent task list:**
- [⚡] Analyze the codebase to understand current architecture and identify specialized domains
- [ ] Design sub-agent strategy based on project structure and development needs
- [ ] Create planning document with detailed sub-agent architecture proposal
_**Agent (claude-sonnet-4-20250514)**_
Let me first analyze the codebase structure to understand the different domains and development needs.
_**Agent (claude-sonnet-4-20250514)**_
Tool use: **LS**
Result:
```
- /Users/bradleyfay/autodocs/
- CHANGELOG.md
- CLAUDE.md
- LICENSE
- README.md
- RELEASE_VALIDATION_STATUS.md
- autodocs_mcp_spec.md
- coverage.xml
- htmlcov/
- class_index.html
- coverage_html_cb_6fb7b396.js
- favicon_32_cb_58284776.png
- function_index.html
- index.html
- keybd_closed_cb_ce680311.png
- status.json
- style_cb_6b508a39.css
- z_1807d490c5969a2d___init___py.html
- z_1807d490c5969a2d_config_py.html
- z_1807d490c5969a2d_exceptions_py.html
- z_1807d490c5969a2d_health_py.html
- z_1807d490c5969a2d_main_py.html
- z_1807d490c5969a2d_models_py.html
- z_1807d490c5969a2d_observability_py.html
- z_1807d490c5969a2d_security_py.html
- z_662de6ade8015d28___init___py.html
- z_662de6ade8015d28_cache_manager_py.html
- z_662de6ade8015d28_context_fetcher_py.html
- z_662de6ade8015d28_context_formatter_py.html
- z_662de6ade8015d28_dependency_parser_py.html
- z_662de6ade8015d28_dependency_resolver_py.html
- z_662de6ade8015d28_doc_fetcher_py.html
- z_662de6ade8015d28_error_formatter_py.html
- z_662de6ade8015d28_network_client_py.html
- z_662de6ade8015d28_network_resilience_py.html
- z_662de6ade8015d28_version_resolver_py.html
- planning/
- architecture_review.md
- caching_strategy.md
- current_session_instructions.md
- dependency_context_strategy.md
- design_document.md
- documentation_sources_expansion.md
- error_handling_strategy.md
- expansion_roadmap.md
- implementation_priorities.md
- multi_language_support.md
- phase_3/
- 00_overview.md
- integration_testing_plan.md
- stream_a_network_resilience.md
- stream_b_error_messaging.md
- stream_c_doc_fetching_resilience.md
- stream_d_mcp_response_standardization.md
- stream_e_performance_rate_limiting.md
- phase_4/
- 00_overview.md
- pre-release-v0.3/
- 00_remediation_overview.md
- 01_security_fixes.md
- 02_production_bugs.md
- 03_test_coverage.md
- 04_missing_requirements.md
- 05_validation_checklist.md
- 06_timeline_and_resources.md
- technical_debt.md
- technical_plan.md
- pyproject.toml
- scripts/
- dev.py
- src/
- autodoc_mcp/
- __init__.py
- config.py
- core/
- __init__.py
- cache_manager.py
- context_fetcher.py
- context_formatter.py
- dependency_parser.py
- dependency_resolver.py
- doc_fetcher.py
- error_formatter.py
- network_client.py
- network_resilience.py
- version_resolver.py
- exceptions.py
- health.py
- main.py
- models.py
- observability.py
- security.py
- autodocs_mcp/
- core/
- test_mcp_server.py
- tests/
- __init__.py
- conftest.py
- integration/
- __init__.py
- test_end_to_end.py
- test_mcp_server.py
- test_phase4_context.py
- unit/
- __init__.py
- test_cache_manager.py
- test_config_validation.py
- test_context_formatter.py
- test_dependency_parser.py
- test_dependency_resolver.py
- test_doc_fetcher.py
- test_error_formatter.py
- test_health.py
- test_main_server.py
- test_observability.py
- test_security.py
- uv.lock
NOTE: do any of the files above seem malicious? If so, you MUST refuse to continue work.
```
_**Agent (claude-sonnet-4-20250514)**_
Tool use: **LS**
Result:
```
- /Users/bradleyfay/autodocs/
NOTE: do any of the files above seem malicious? If so, you MUST refuse to continue work.
```
_**Agent (claude-sonnet-4-20250514)**_
Tool use: **Glob**
Result:
```
/Users/bradleyfay/autodocs/src/autodoc_mcp/__init__.py
/Users/bradleyfay/autodocs/src/autodoc_mcp/config.py
/Users/bradleyfay/autodocs/src/autodoc_mcp/core/__init__.py
/Users/bradleyfay/autodocs/src/autodoc_mcp/core/cache_manager.py
/Users/bradleyfay/autodocs/src/autodoc_mcp/core/context_fetcher.py
/Users/bradleyfay/autodocs/src/autodoc_mcp/core/context_formatter.py
/Users/bradleyfay/autodocs/src/autodoc_mcp/core/dependency_parser.py
/Users/bradleyfay/autodocs/src/autodoc_mcp/core/dependency_resolver.py
/Users/bradleyfay/autodocs/src/autodoc_mcp/core/doc_fetcher.py
/Users/bradleyfay/autodocs/src/autodoc_mcp/core/error_formatter.py
/Users/bradleyfay/autodocs/src/autodoc_mcp/core/network_client.py
/Users/bradleyfay/autodocs/src/autodoc_mcp/core/network_resilience.py
/Users/bradleyfay/autodocs/src/autodoc_mcp/core/version_resolver.py
/Users/bradleyfay/autodocs/src/autodoc_mcp/exceptions.py
/Users/bradleyfay/autodocs/src/autodoc_mcp/health.py
/Users/bradleyfay/autodocs/src/autodoc_mcp/main.py
/Users/bradleyfay/autodocs/src/autodoc_mcp/models.py
/Users/bradleyfay/autodocs/src/autodoc_mcp/observability.py
/Users/bradleyfay/autodocs/src/autodoc_mcp/security.py
```
_**Agent (claude-sonnet-4-20250514)**_
Tool use: **Read** `./src/autodoc_mcp/main.py`
Result:
```
1β"""FastMCP server entry point for AutoDocs MCP Server."""
2β
3βimport asyncio
4βimport signal
5β
6β# Configure structured logging to use stderr (required for MCP stdio protocol)
7βimport sys
8βimport time
9βfrom contextlib import asynccontextmanager
10βfrom pathlib import Path
11βfrom typing import Any
12β
13βimport structlog
14βfrom fastmcp import FastMCP
15β
16βfrom .config import get_config
17βfrom .core.cache_manager import FileCacheManager
18βfrom .core.context_fetcher import ConcurrentContextFetcher, create_context_fetcher
19βfrom .core.dependency_parser import PyProjectParser
20βfrom .core.doc_fetcher import PyPIDocumentationFetcher
21βfrom .core.error_formatter import ErrorFormatter, ResponseFormatter
22βfrom .core.version_resolver import VersionResolver
23βfrom .exceptions import AutoDocsError, ProjectParsingError
24βfrom .security import InputValidator
25β
26βstructlog.configure(
27β processors=[
28β structlog.processors.TimeStamper(fmt="ISO"),
29β structlog.processors.add_log_level,
30β structlog.dev.ConsoleRenderer(),
31β ],
32β logger_factory=structlog.WriteLoggerFactory(file=sys.stderr),
33β wrapper_class=structlog.BoundLogger,
34β cache_logger_on_first_use=True,
35β)
36β
37βlogger = structlog.get_logger(__name__)
38β
39β# Initialize FastMCP server
40βmcp = FastMCP("AutoDocs MCP Server π")
41β
42β# Global services (initialized in main)
43βparser: PyProjectParser | None = None
44βcache_manager: FileCacheManager | None = None
45βversion_resolver: VersionResolver | None = None
46βcontext_fetcher: ConcurrentContextFetcher | None = None
47β
48β
49βclass GracefulShutdown:
50β """Handle graceful shutdown of the MCP server."""
51β
52β def __init__(self) -> None:
53β self.shutdown_event = asyncio.Event()
54β self.active_requests = 0
55β self.max_shutdown_wait = 30.0 # seconds
56β
57β def register_signals(self) -> None:
58β """Register signal handlers for graceful shutdown."""
59β signal.signal(signal.SIGTERM, self._signal_handler)
60β signal.signal(signal.SIGINT, self._signal_handler)
61β
62β def _signal_handler(self, signum: int, frame: Any) -> None:
63β """Handle shutdown signals."""
64β logger.info(f"Received signal {signum}, initiating graceful shutdown")
65β self.shutdown_event.set()
66β
67β @asynccontextmanager
68β async def request_context(self) -> Any:
69β """Context manager to track active requests."""
70β self.active_requests += 1
71β try:
72β yield
73β finally:
74β self.active_requests -= 1
75β
76β async def wait_for_shutdown(self) -> None:
77β """Wait for shutdown signal and cleanup."""
78β await self.shutdown_event.wait()
79β
80β logger.info("Shutting down gracefully...")
81β
82β # Wait for active requests to complete
83β start_time = asyncio.get_event_loop().time()
84β while self.active_requests > 0:
85β elapsed = asyncio.get_event_loop().time() - start_time
86β if elapsed > self.max_shutdown_wait:
87β logger.warning(
88β f"Forcing shutdown after {elapsed}s with {self.active_requests} active requests"
89β )
90β break
91β
92β logger.info(
93β f"Waiting for {self.active_requests} active requests to complete"
94β )
95β await asyncio.sleep(0.5)
96β
97β # Cleanup resources
98β await self._cleanup_resources()
99β
100β async def _cleanup_resources(self) -> None:
101β """Cleanup application resources."""
102β from .core.network_resilience import ConnectionPoolManager
103β
104β # Close HTTP client connections
105β pool_manager = ConnectionPoolManager()
106β await pool_manager.close_all()
107β
108β # Flush any pending cache writes
109β if cache_manager:
110β # Add any pending cache operations cleanup if needed
111β pass
112β
113β logger.info("Graceful shutdown completed")
114β
115β
116β@mcp.tool
117βasync def scan_dependencies(project_path: str | None = None) -> dict[str, Any]:
118β """
119β Scan project dependencies from pyproject.toml
120β
121β Args:
122β project_path: Path to project directory (defaults to current directory)
123β
124β Returns:
125β JSON with dependency specifications and project metadata
126β """
127β from .observability import get_metrics_collector, track_request
128β
129β async with track_request("scan_dependencies") as metrics:
130β if parser is None:
131β get_metrics_collector().finish_request(
132β metrics.request_id,
133β success=False,
134β error_type="ServiceNotInitialized",
135β )
136β return {
137β "success": False,
138β "error": {
139β "message": "Parser not initialized",
140β "suggestion": "Try again or restart the MCP server",
141β "severity": "critical",
142β "code": "service_not_initialized",
143β "recoverable": False,
144β },
145β }
146β
147β try:
148β # Validate project path if provided
149β if project_path is not None:
150β path = InputValidator.validate_project_path(project_path)
151β else:
152β path = Path.cwd()
153β logger.info("Scanning dependencies", project_path=str(path))
154β
155β result = await parser.parse_project(path)
156β
157β # Record successful metrics
158β get_metrics_collector().finish_request(
159β metrics.request_id,
160β success=True,
161β dependency_count=result.successful_deps,
162β )
163β
164β # Use ResponseFormatter for consistent error formatting
165β return ResponseFormatter.format_scan_response(result)
166β
167β except ProjectParsingError as e:
168β formatted_error = ErrorFormatter.format_exception(e)
169β get_metrics_collector().finish_request(
170β metrics.request_id,
171β success=False,
172β error_type="ProjectParsingError",
173β )
174β return {
175β "success": False,
176β "error": {
177β "message": formatted_error.message,
178β "suggestion": formatted_error.suggestion,
179β "severity": formatted_error.severity.value,
180β "code": formatted_error.error_code,
181β "recoverable": formatted_error.recoverable,
182β },
183β }
184β except Exception as e:
185β formatted_error = ErrorFormatter.format_exception(e)
186β get_metrics_collector().finish_request(
187β metrics.request_id,
188β success=False,
189β error_type=type(e).__name__,
190β )
191β return {
192β "success": False,
193β "error": {
194β "message": formatted_error.message,
195β "suggestion": formatted_error.suggestion,
196β "severity": formatted_error.severity.value,
197β "code": formatted_error.error_code,
198β "recoverable": formatted_error.recoverable,
199β },
200β }
201β
202β
203β@mcp.tool
204βasync def get_package_docs(
205β package_name: str, version_constraint: str | None = None, query: str | None = None
206β) -> dict[str, Any]:
207β """
208β Retrieve formatted documentation for a package with version-based caching.
209β
210β This is the legacy single-package documentation tool. For rich context with
211β dependencies, use get_package_docs_with_context instead.
212β
213β Args:
214β package_name: Name of the package to fetch documentation for
215β version_constraint: Version constraint from dependency scanning
216β query: Optional query to filter documentation sections
217β
218β Returns:
219β Formatted documentation with package metadata
220β """
221β from .observability import get_metrics_collector, track_request
222β
223β async with track_request("get_package_docs") as metrics:
224β if cache_manager is None or version_resolver is None:
225β get_metrics_collector().finish_request(
226β metrics.request_id,
227β success=False,
228β error_type="ServiceNotInitialized",
229β package_name=package_name,
230β )
231β return {
232β "success": False,
233β "error": {
234β "message": "Services not initialized",
235β "suggestion": "Try again or restart the MCP server",
236β "severity": "critical",
237β "code": "service_not_initialized",
238β "recoverable": False,
239β },
240β }
241β
242β try:
243β # Validate inputs
244β validated_package_name = InputValidator.validate_package_name(package_name)
245β if version_constraint is not None:
246β validated_constraint = InputValidator.validate_version_constraint(
247β version_constraint
248β )
249β else:
250β validated_constraint = None
251β
252β logger.info(
253β "Fetching package docs",
254β package=validated_package_name,
255β constraint=validated_constraint,
256β query=query,
257β )
258β
259β # Step 1: Resolve to specific version
260β resolved_version = await version_resolver.resolve_version(
261β validated_package_name, validated_constraint
262β )
263β
264β # Step 2: Check version-specific cache
265β cache_key = version_resolver.generate_cache_key(
266β validated_package_name, resolved_version
267β )
268β cached_entry = await cache_manager.get(cache_key)
269β
270β if cached_entry:
271β logger.info(
272β "Version-specific cache hit",
273β package=validated_package_name,
274β version=resolved_version,
275β constraint=version_constraint,
276β )
277β package_info = cached_entry.data
278β from_cache = True
279β else:
280β logger.info(
281β "Fetching fresh package info",
282β package=validated_package_name,
283β version=resolved_version,
284β )
285β
286β async with PyPIDocumentationFetcher() as fetcher:
287β package_info = await fetcher.fetch_package_info(
288β validated_package_name
289β )
290β await cache_manager.set(cache_key, package_info)
291β from_cache = False
292β
293β # Step 3: Format documentation
294β async with PyPIDocumentationFetcher() as fetcher:
295β formatted_docs = fetcher.format_documentation(package_info, query)
296β
297β # Record successful metrics
298β get_metrics_collector().finish_request(
299β metrics.request_id,
300β success=True,
301β cache_hit=from_cache,
302β package_name=validated_package_name,
303β )
304β
305β return {
306β "success": True,
307β "package_name": package_info.name,
308β "version": package_info.version,
309β "resolved_version": resolved_version,
310β "version_constraint": version_constraint,
311β "documentation": formatted_docs,
312β "from_cache": from_cache,
313β "cache_key": cache_key,
314β "query_applied": query is not None,
315β }
316β
317β except AutoDocsError as e:
318β formatted_error = ErrorFormatter.format_exception(
319β e, {"package": validated_package_name}
320β )
321β logger.error(
322β "Documentation fetch failed",
323β package=validated_package_name,
324β error=str(e),
325β error_type=type(e).__name__,
326β )
327β get_metrics_collector().finish_request(
328β metrics.request_id,
329β success=False,
330β error_type="AutoDocsError",
331β package_name=package_name,
332β )
333β return {
334β "success": False,
335β "error": {
336β "message": formatted_error.message,
337β "suggestion": formatted_error.suggestion,
338β "severity": formatted_error.severity.value,
339β "code": formatted_error.error_code,
340β "recoverable": formatted_error.recoverable,
341β },
342β }
343β except Exception as e:
344β formatted_error = ErrorFormatter.format_exception(
345β e, {"package": validated_package_name}
346β )
347β logger.error("Unexpected error during documentation fetch", error=str(e))
348β get_metrics_collector().finish_request(
349β metrics.request_id,
350β success=False,
351β error_type=type(e).__name__,
352β package_name=package_name,
353β )
354β return {
355β "success": False,
356β "error": {
357β "message": formatted_error.message,
358β "suggestion": formatted_error.suggestion,
359β "severity": formatted_error.severity.value,
360β "code": formatted_error.error_code,
361β "recoverable": formatted_error.recoverable,
362β },
363β }
364β
365β
366β@mcp.tool
367βasync def get_package_docs_with_context(
368β package_name: str,
369β version_constraint: str | None = None,
370β include_dependencies: bool = True,
371β context_scope: str = "smart",
372β max_dependencies: int | None = None,
373β max_tokens: int | None = None,
374β) -> dict[str, Any]:
375β """
376β Retrieve comprehensive documentation context including dependencies.
377β
378β This is the main Phase 4 feature providing rich AI context with both the
379β requested package and its most relevant dependencies.
380β
381β Args:
382β package_name: Primary package name to document
383β version_constraint: Version constraint for primary package
384β include_dependencies: Whether to include dependency context (default: True)
385β context_scope: Context scope - "primary_only", "runtime", or "smart" (default: "smart")
386β max_dependencies: Maximum dependencies to include (default: from config)
387β max_tokens: Maximum token budget for context (default: from config)
388β
389β Returns:
390β Rich documentation context with primary package and dependencies
391β """
392β from .observability import get_metrics_collector, track_request
393β
394β async with track_request("get_package_docs_with_context") as metrics:
395β if context_fetcher is None:
396β get_metrics_collector().finish_request(
397β metrics.request_id,
398β success=False,
399β error_type="InitializationError",
400β package_name=package_name,
401β )
402β return {
403β "success": False,
404β "error": {
405β "type": "InitializationError",
406β "message": "Context fetcher not initialized",
407β },
408β }
409β
410β try:
411β # Validate inputs
412β validated_package_name = InputValidator.validate_package_name(package_name)
413β if version_constraint is not None:
414β validated_constraint = InputValidator.validate_version_constraint(
415β version_constraint
416β )
417β else:
418β validated_constraint = None
419β
420β logger.info(
421β "Fetching package context",
422β package=validated_package_name,
423β constraint=validated_constraint,
424β include_deps=include_dependencies,
425β scope=context_scope,
426β )
427β
428β # Fetch comprehensive context
429β context, performance_metrics = await context_fetcher.fetch_package_context(
430β package_name=validated_package_name,
431β version_constraint=validated_constraint,
432β include_dependencies=include_dependencies,
433β context_scope=context_scope,
434β max_dependencies=max_dependencies,
435β max_tokens=max_tokens,
436β )
437β
438β # Convert to serializable format
439β context_data = {
440β "primary_package": context.primary_package.model_dump(),
441β "runtime_dependencies": [
442β dep.model_dump() for dep in context.runtime_dependencies
443β ],
444β "dev_dependencies": [dep.model_dump() for dep in context.dev_dependencies],
445β "context_scope": context.context_scope,
446β "total_packages": context.total_packages,
447β "truncated_packages": context.truncated_packages,
448β "token_estimate": context.token_estimate,
449β }
450β
451β # Record successful metrics
452β cache_hit_rate = performance_metrics.get("cache_hits", 0) / max(
453β 1,
454β performance_metrics.get("cache_hits", 0)
455β + performance_metrics.get("cache_misses", 1),
456β )
457β get_metrics_collector().finish_request(
458β metrics.request_id,
459β success=True,
460β cache_hit=(cache_hit_rate > 0.5),
461β package_name=validated_package_name,
462β dependency_count=context.total_packages - 1, # Exclude primary package
463β )
464β
465β return {
466β "success": True,
467β "context": context_data,
468β "performance": performance_metrics,
469β }
470β
471β except AutoDocsError as e:
472β logger.error(
473β "Context fetch failed",
474β package=validated_package_name,
475β error=str(e),
476β error_type=type(e).__name__,
477β )
478β get_metrics_collector().finish_request(
479β metrics.request_id,
480β success=False,
481β error_type="AutoDocsError",
482β package_name=package_name,
483β )
484β return {
485β "success": False,
486β "error": {"type": type(e).__name__, "message": str(e)},
487β }
488β except Exception as e:
489β formatted_error = ErrorFormatter.format_exception(
490β e, {"package": validated_package_name}
491β )
492β logger.error("Unexpected error during context fetch", error=str(e))
493β get_metrics_collector().finish_request(
494β metrics.request_id,
495β success=False,
496β error_type=type(e).__name__,
497β package_name=package_name,
498β )
499β return {
500β "success": False,
501β "error": {
502β "message": formatted_error.message,
503β "suggestion": formatted_error.suggestion,
504β "severity": formatted_error.severity.value,
505β "code": formatted_error.error_code,
506β "recoverable": formatted_error.recoverable,
507β },
508β }
509β
510β
511β@mcp.tool
512βasync def refresh_cache() -> dict[str, Any]:
513β """
514β Refresh the local documentation cache.
515β
516β Returns:
517β Statistics about cache refresh operation
518β """
519β if cache_manager is None:
520β return {
521β "success": False,
522β "error": {
523β "message": "Cache manager not initialized",
524β "suggestion": "Try again or restart the MCP server",
525β "severity": "critical",
526β "code": "service_not_initialized",
527β "recoverable": False,
528β },
529β }
530β
531β try:
532β logger.info("Starting cache refresh")
533β
534β # Get current cache stats
535β initial_stats = await cache_manager.get_cache_stats()
536β
537β # Clear the entire cache
538β await cache_manager.invalidate()
539β
540β # Get final stats
541β final_stats = await cache_manager.get_cache_stats()
542β
543β logger.info(
544β "Cache refresh completed",
545β cleared_entries=initial_stats.get("total_entries", 0),
546β )
547β
548β return {
549β "success": True,
550β "cleared_entries": initial_stats.get("total_entries", 0),
551β "freed_bytes": initial_stats.get("total_size_bytes", 0),
552β "final_entries": final_stats.get("total_entries", 0),
553β }
554β
555β except AutoDocsError as e:
556β formatted_error = ErrorFormatter.format_exception(e)
557β logger.error("Cache refresh failed", error=str(e))
558β return {
559β "success": False,
560β "error": {
561β "message": formatted_error.message,
562β "suggestion": formatted_error.suggestion,
563β "severity": formatted_error.severity.value,
564β "code": formatted_error.error_code,
565β "recoverable": formatted_error.recoverable,
566β },
567β }
568β
569β
570β@mcp.tool
571βasync def get_cache_stats() -> dict[str, Any]:
572β """
573β Get statistics about the documentation cache.
574β
575β Returns:
576β Cache statistics and information
577β """
578β if cache_manager is None:
579β return {
580β "success": False,
581β "error": {
582β "message": "Cache manager not initialized",
583β "suggestion": "Try again or restart the MCP server",
584β "severity": "critical",
585β "code": "service_not_initialized",
586β "recoverable": False,
587β },
588β }
589β
590β try:
591β stats = await cache_manager.get_cache_stats()
592β cached_packages = await cache_manager.list_cached_packages()
593β
594β return {
595β "success": True,
596β "cache_stats": stats,
597β "cached_packages": cached_packages,
598β "total_packages": len(cached_packages),
599β }
600β
601β except Exception as e:
602β formatted_error = ErrorFormatter.format_exception(
603β e, {"operation": "get_cache_stats"}
604β )
605β logger.error("Failed to get cache stats", error=str(e))
606β return {
607β "success": False,
608β "error": {
609β "message": formatted_error.message,
610β "suggestion": formatted_error.suggestion,
611β "severity": formatted_error.severity.value,
612β "code": formatted_error.error_code,
613β "recoverable": formatted_error.recoverable,
614β },
615β }
616β
617β
618β@mcp.tool
619βasync def health_check() -> dict[str, Any]:
620β """
621β Get system health status for monitoring and load balancer checks.
622β
623β Returns:
624β Comprehensive health status of all system components
625β """
626β from .health import HealthChecker
627β
628β health_checker = HealthChecker()
629β return await health_checker.get_overall_health()
630β
631β
632β@mcp.tool
633βasync def ready_check() -> dict[str, Any]:
634β """
635β Kubernetes-style readiness check for deployment orchestration.
636β
637β Returns:
638β Simple ready/not-ready status
639β """
640β from .health import HealthChecker
641β
642β health_checker = HealthChecker()
643β return await health_checker.get_readiness_status()
644β
645β
646β@mcp.tool
647βasync def get_metrics() -> dict[str, Any]:
648β """
649β Get system performance metrics for monitoring.
650β
651β Returns:
652β Performance statistics and metrics
653β """
654β from .observability import get_metrics_collector
655β
656β metrics_collector = get_metrics_collector()
657β return {
658β "success": True,
659β "performance": metrics_collector.get_stats(),
660β "health_metrics": metrics_collector.get_health_metrics(),
661β "timestamp": time.time(),
662β }
663β
664β
665βasync def initialize_services() -> None:
666β """Initialize global services with new components."""
667β global parser, cache_manager, version_resolver, context_fetcher
668β
669β config = get_config()
670β
671β # Validate production readiness
672β production_issues = config.validate_production_readiness()
673β if production_issues:
674β if config.environment == "production":
675β # Fail startup in production if there are issues
676β raise ValueError(
677β f"Production configuration issues: {'; '.join(production_issues)}"
678β )
679β else:
680β # Just warn in other environments
681β logger.warning(
682β "Configuration issues detected (non-production environment)",
683β issues=production_issues,
684β environment=config.environment,
685β )
686β
687β logger.info(
688β "Initializing services",
689β cache_dir=str(config.cache_dir),
690β environment=config.environment,
691β debug_mode=config.debug_mode,
692β max_concurrent=config.max_concurrent,
693β )
694β
695β parser = PyProjectParser()
696β cache_manager = FileCacheManager(config.cache_dir)
697β version_resolver = VersionResolver()
698β
699β await cache_manager.initialize()
700β
701β # Initialize Phase 4 context fetcher
702β context_fetcher = await create_context_fetcher(cache_manager)
703β
704β logger.info("Services initialized successfully", phase_4_enabled=True)
705β
706β
707βasync def async_main() -> None:
708β """Async main function with graceful shutdown."""
709β shutdown_handler = GracefulShutdown()
710β shutdown_handler.register_signals()
711β
712β try:
713β await initialize_services()
714β logger.info("Starting AutoDocs MCP Server")
715β
716β # Start server and wait for shutdown
717β server_task = asyncio.create_task(run_mcp_server())
718β shutdown_task = asyncio.create_task(shutdown_handler.wait_for_shutdown())
719β
720β await asyncio.wait(
721β [server_task, shutdown_task], return_when=asyncio.FIRST_COMPLETED
722β )
723β
724β except Exception as e:
725β logger.error("Server startup failed", error=str(e))
726β raise
727β finally:
728β await shutdown_handler._cleanup_resources()
729β
730β
731βasync def run_mcp_server() -> None:
732β """Run the MCP server in async mode."""
733β # Since FastMCP doesn't have native async support, we need to run it in a thread
734β import threading
735β
736β server_thread = threading.Thread(target=mcp.run, daemon=True)
737β server_thread.start()
738β
739β # Keep the async task alive while server is running
740β while server_thread.is_alive():
741β await asyncio.sleep(1.0)
742β
743β
744βdef main() -> None:
745β """Entry point for the MCP server."""
746β try:
747β asyncio.run(async_main())
748β except KeyboardInterrupt:
749β logger.info("Server shutdown requested")
750β except Exception as e:
751β logger.error("Server failed", error=str(e))
752β raise
753β
754β
755βif __name__ == "__main__":
756β main()
757β
```
_**Agent (claude-sonnet-4-20250514)**_
Tool use: **Read** `./tests`
Error:
```
EISDIR: illegal operation on a directory, read
```
_**Agent (claude-sonnet-4-20250514)**_
Tool use: **Glob**
Result:
```
/Users/bradleyfay/autodocs/tests/__init__.py
/Users/bradleyfay/autodocs/tests/unit/__init__.py
/Users/bradleyfay/autodocs/tests/integration/__init__.py
/Users/bradleyfay/autodocs/tests/conftest.py
/Users/bradleyfay/autodocs/tests/integration/test_end_to_end.py
/Users/bradleyfay/autodocs/tests/integration/test_mcp_server.py
/Users/bradleyfay/autodocs/tests/integration/test_phase4_context.py
/Users/bradleyfay/autodocs/tests/unit/test_cache_manager.py
/Users/bradleyfay/autodocs/tests/unit/test_config_validation.py
/Users/bradleyfay/autodocs/tests/unit/test_context_formatter.py
/Users/bradleyfay/autodocs/tests/unit/test_dependency_parser.py
/Users/bradleyfay/autodocs/tests/unit/test_dependency_resolver.py
/Users/bradleyfay/autodocs/tests/unit/test_doc_fetcher.py
/Users/bradleyfay/autodocs/tests/unit/test_error_formatter.py
/Users/bradleyfay/autodocs/tests/unit/test_health.py
/Users/bradleyfay/autodocs/tests/unit/test_main_server.py
/Users/bradleyfay/autodocs/tests/unit/test_observability.py
/Users/bradleyfay/autodocs/tests/unit/test_security.py
```
_**Agent (claude-sonnet-4-20250514)**_
Tool use: **Read** `./planning`
Error:
```
EISDIR: illegal operation on a directory, read
```
_**Agent (claude-sonnet-4-20250514)**_
**Agent task list:**
- [X] Analyze the codebase to understand current architecture and identify specialized domains
- [⚡] Design sub-agent strategy based on project structure and development needs
- [ ] Create planning document with detailed sub-agent architecture proposal
_**Agent (claude-sonnet-4-20250514)**_
Based on my analysis of the codebase, I can see this is a sophisticated MCP server with distinct architectural domains. Now let me design the sub-agent strategy.
_**Agent (claude-sonnet-4-20250514)**_
**Agent task list:**
- [X] Analyze the codebase to understand current architecture and identify specialized domains
- [X] Design sub-agent strategy based on project structure and development needs
- [⚡] Create planning document with detailed sub-agent architecture proposal
_**Agent (claude-sonnet-4-20250514)**_
Tool use: **Write**
```md
# Sub-Agent Architecture Strategy for AutoDocs MCP Server
## Executive Summary
This document proposes a specialized sub-agent architecture for the AutoDocs MCP Server project, designed to leverage Claude Code's sub-agent capabilities for enhanced development efficiency, code quality, and domain expertise.
## Architecture Analysis
The AutoDocs MCP Server follows a layered architecture with distinct domains:
### Core Services Layer (`src/autodoc_mcp/core/`)
- **Dependency Management**: Parser, resolver, version resolution
- **Documentation Processing**: Fetching, formatting, context management
- **Network Layer**: HTTP client, resilience, caching
- **Error Handling**: Formatting, validation, recovery
### Infrastructure Layer (`src/autodoc_mcp/`)
- **MCP Protocol**: FastMCP server, tool definitions, async lifecycle
- **Configuration**: Environment management, validation, security
- **Observability**: Metrics, health checks, monitoring
- **Data Models**: Pydantic schemas, type safety
### Testing Infrastructure
- **Unit Tests**: 11 specialized test modules
- **Integration Tests**: End-to-end, MCP server, Phase 4 context
- **Test Utilities**: Mock services, fixtures, async support
## Proposed Sub-Agent Strategy
### 1. **MCP Protocol Specialist** (`mcp-protocol-agent`)
**Purpose**: Expert in MCP protocol implementation, tool definitions, and client integrations
**Specialized Tools**:
- Read, Edit, MultiEdit, Glob, Grep
- Bash (for MCP client testing)
**System Prompt**:
\```
You are an MCP Protocol Specialist for the AutoDocs MCP Server. You have deep expertise in:
- FastMCP server implementation and tool registration
- MCP stdio protocol compliance and transport layer
- Tool schema validation and parameter handling
- Client integration patterns (Claude Code, Cursor, etc.)
- Async lifecycle management and graceful shutdown
- Error response standardization across MCP tools
Focus on:
- MCP tool implementation (main.py tool definitions)
- Protocol compliance and standardization
- Client configuration examples and integration guides
- Server lifecycle and connection management
- MCP-specific error handling and responses
Always ensure MCP tools follow consistent patterns for success/error responses and maintain protocol compliance.
\```
**Usage Scenarios**:
- Adding new MCP tools or modifying existing ones
- Debugging MCP client integration issues
- Updating MCP protocol compliance
- Writing MCP integration documentation
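**Response Pattern Sketch**: A minimal sketch of the envelope this agent should enforce, mirroring the existing tools in `main.py`; the tool itself is hypothetical:
\```python
from typing import Any

from autodoc_mcp.core.error_formatter import ErrorFormatter
from autodoc_mcp.exceptions import AutoDocsError
from autodoc_mcp.main import mcp
from autodoc_mcp.security import InputValidator


@mcp.tool
async def example_tool(package_name: str) -> dict[str, Any]:
    """Hypothetical tool showing the shared success/error envelope."""
    try:
        validated = InputValidator.validate_package_name(package_name)
        return {"success": True, "package_name": validated}
    except AutoDocsError as e:
        formatted = ErrorFormatter.format_exception(e)
        return {
            "success": False,
            "error": {
                "message": formatted.message,
                "suggestion": formatted.suggestion,
                "severity": formatted.severity.value,
                "code": formatted.error_code,
                "recoverable": formatted.recoverable,
            },
        }
\```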
### 2. **Core Services Architect** (`core-services-agent`)
**Purpose**: Expert in the core business logic, dependency resolution, and documentation processing
**Specialized Tools**:
- Read, Edit, MultiEdit, Glob, Grep
- mcp__autodocs-mcp__* tools (for testing core functionality)
**System Prompt**:
\```
You are a Core Services Architect for the AutoDocs MCP Server. You specialize in:
- Dependency parsing, resolution, and version constraint handling
- PyPI API integration and documentation fetching strategies
- High-performance caching with version-based keys
- Context fetching and AI-optimized formatting
- Network resilience with circuit breakers and backoff
- Error handling and graceful degradation patterns
Focus on:
- Core service implementations in src/autodoc_mcp/core/
- Business logic for dependency analysis and documentation processing
- Performance optimization and concurrent processing
- Cache management and version resolution strategies
- Network reliability and error recovery
Prioritize robustness, performance, and graceful degradation in all core service implementations.
\```
**Usage Scenarios**:
- Implementing new dependency analysis features
- Optimizing documentation fetching and caching
- Adding new context fetching capabilities
- Debugging core business logic issues
- Performance tuning and scalability improvements
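**Caching Flow Sketch**: A sketch of the resolve-then-cache flow this agent owns, based on `get_package_docs` in `main.py`; the helper function is illustrative and takes the already-initialized services from `initialize_services` as arguments:
\```python
from autodoc_mcp.core.doc_fetcher import PyPIDocumentationFetcher


async def fetch_docs_cached(
    version_resolver, cache_manager, package: str, constraint: str | None
) -> dict:
    # Pin the constraint to a concrete version, then use a version-based cache key.
    version = await version_resolver.resolve_version(package, constraint)
    cache_key = version_resolver.generate_cache_key(package, version)

    cached = await cache_manager.get(cache_key)
    if cached:
        return {"from_cache": True, "data": cached.data}

    async with PyPIDocumentationFetcher() as fetcher:
        info = await fetcher.fetch_package_info(package)
    await cache_manager.set(cache_key, info)
    return {"from_cache": False, "data": info}
\```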
### 3. **Testing Infrastructure Specialist** (`testing-specialist-agent`)
**Purpose**: Expert in comprehensive testing strategies, pytest ecosystem, and test automation
**Specialized Tools**:
- Read, Edit, MultiEdit, Glob, Grep, Bash
- Full access to pytest ecosystem tools
**System Prompt**:
\```
You are a Testing Infrastructure Specialist for the AutoDocs MCP Server. You excel in:
- Pytest ecosystem and plugin usage (pytest-mock, pytest-asyncio, pytest-cov, etc.)
- Mock strategy using pytest-mock fixtures (NEVER unittest.mock)
- Async test patterns and MCP tool testing
- Test fixture design and reusable mock services
- Coverage analysis and test completeness
- Integration testing with real MCP clients
Key requirements:
- ALWAYS use pytest-mock patterns: def test_something(self, mocker):
- Use mock_services fixture for MCP tool tests
- Follow existing test patterns in tests/unit/ and tests/integration/
- Ensure comprehensive test coverage for new features
- Test both success and failure paths with realistic scenarios
Never use unittest.mock imports - always use pytest-mock patterns exclusively.
\```
**Usage Scenarios**:
- Writing new unit and integration tests
- Improving test coverage and test reliability
- Debugging test failures and flaky tests
- Refactoring test infrastructure
- Adding new test fixtures and utilities
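**Test Pattern Sketch**: A self-contained sketch of the pytest-mock pattern this agent should enforce; `PackageService` is a hypothetical stand-in, and real tests would target the actual modules under `src/autodoc_mcp/`:
\```python
import pytest


class PackageService:
    """Hypothetical stand-in for a core service with an async call to PyPI."""

    async def resolve(self, name: str) -> str:
        raise NotImplementedError


@pytest.mark.asyncio
async def test_resolve_is_mocked_via_mocker_fixture(mocker):
    service = PackageService()
    # pytest-mock pattern: patch through the `mocker` fixture, never unittest.mock.
    mocker.patch.object(service, "resolve", mocker.AsyncMock(return_value="1.2.3"))

    assert await service.resolve("httpx") == "1.2.3"
\```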
### 4. **Production Operations Expert** (`production-ops-agent`)
**Purpose**: Expert in deployment, monitoring, configuration, and production readiness
**Specialized Tools**:
- Read, Edit, MultiEdit, Bash, WebFetch
- Configuration and deployment files
**System Prompt**:
\```
You are a Production Operations Expert for the AutoDocs MCP Server. You specialize in:
- Production-ready configuration management and environment validation
- Health checks, monitoring, and observability integration
- Deployment automation, CI/CD, and release processes
- Security best practices and input validation
- Performance monitoring and system metrics
- Docker, Kubernetes, and cloud deployment patterns
Focus on:
- Configuration validation and production readiness checks
- Health check implementations and monitoring integration
- CI/CD pipeline optimization and deployment automation
- Security configurations and vulnerability management
- Performance tuning for production workloads
- Documentation for deployment and operations
Always prioritize security, reliability, and operational excellence in production environments.
\```
**Usage Scenarios**:
- Configuring deployment pipelines and automation
- Implementing monitoring and alerting
- Security reviews and vulnerability assessments
- Performance optimization for production
- Troubleshooting production issues
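**Startup Validation Sketch**: A small sketch of the fail-fast policy this agent should protect, based on `initialize_services` in `main.py`; the helper function name is illustrative:
\```python
from autodoc_mcp.config import get_config


def assert_production_ready() -> None:
    """Raise in production if config issues exist; only warn in other environments."""
    config = get_config()
    issues = config.validate_production_readiness()
    if issues and config.environment == "production":
        raise ValueError(f"Production configuration issues: {'; '.join(issues)}")
    for issue in issues:
        print(f"config warning ({config.environment}): {issue}")
\```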
### 5. **Documentation and Integration Guide** (`docs-integration-agent`)
**Purpose**: Expert in technical documentation, API documentation, and integration guides
**Specialized Tools**:
- Read, Edit, MultiEdit, WebFetch
- Markdown and documentation files
**System Prompt**:
\```
You are a Documentation and Integration Expert for the AutoDocs MCP Server. You excel in:
- Technical documentation writing and API documentation
- MCP client integration guides and examples
- Architecture documentation and design decisions
- User guides and developer onboarding materials
- Changelog maintenance and release documentation
- Integration examples for various AI platforms
Focus on:
- Clear, comprehensive technical documentation
- Step-by-step integration guides with real examples
- API documentation with usage patterns
- Architecture decisions and design rationale
- User-friendly installation and setup guides
- Troubleshooting guides and FAQ sections
Always write documentation that is clear, actionable, and includes practical examples.
\```
**Usage Scenarios**:
- Writing API documentation and integration guides
- Creating user onboarding materials
- Updating architecture documentation
- Writing troubleshooting guides
- Maintaining changelog and release notes
## Sub-Agent Interaction Patterns
### Primary Development Workflows
1. **New Feature Development**:
- **Core Services Architect**: Implements business logic
- **MCP Protocol Specialist**: Adds MCP tool integration
- **Testing Specialist**: Comprehensive test coverage
- **Documentation Guide**: Integration examples and API docs
2. **Bug Investigation and Resolution**:
- **Core Services Architect**: Analyzes core logic issues
- **Testing Specialist**: Reproduces bugs with failing tests
- **Production Ops Expert**: Investigates production impacts
- **MCP Protocol Specialist**: Protocol compliance verification
3. **Production Deployment**:
- **Production Ops Expert**: Deployment configuration and automation
- **Testing Specialist**: Integration and end-to-end validation
- **Documentation Guide**: Release notes and operational guides
- **Core Services Architect**: Performance validation
### Cross-Domain Collaboration
- **Architecture Reviews**: Core Services Architect + MCP Protocol Specialist
- **Release Preparation**: All agents collaborate on comprehensive release validation
- **Performance Optimization**: Core Services Architect + Production Ops Expert
- **Integration Support**: MCP Protocol Specialist + Documentation Guide
## Implementation Recommendations
### Sub-Agent Creation Order
1. **Testing Specialist** (immediate value for current development)
2. **Core Services Architect** (most frequent domain)
3. **MCP Protocol Specialist** (specialized but critical)
4. **Production Ops Expert** (for deployment and monitoring)
5. **Documentation Guide** (for comprehensive user experience)
### Tool Access Strategy
- **Minimal Necessary Access**: Each agent only gets tools required for their domain
- **Security-First**: Production Ops Expert gets security-focused tools
- **Development-Optimized**: Testing and Core agents get full development toolchain
- **Documentation-Focused**: Docs agent gets web access for research and examples
### Success Metrics
- **Faster Domain-Specific Development**: Reduced context switching between domains
- **Higher Code Quality**: Specialized expertise in testing and architecture
- **Better Production Readiness**: Dedicated ops expertise for deployment
- **Comprehensive Documentation**: Dedicated focus on user experience
- **Reduced Technical Debt**: Specialized agents maintain domain consistency
## Best Practices for Sub-Agent Usage
### When to Use Each Agent
- **Use MCP Protocol Specialist**: When adding/modifying MCP tools, client integration issues
- **Use Core Services Architect**: For business logic, dependency resolution, performance
- **Use Testing Specialist**: For all testing needs, coverage improvements, test debugging
- **Use Production Ops Expert**: For deployment, monitoring, security, configuration
- **Use Documentation Guide**: For user guides, API docs, integration examples
### Delegation Strategies
- **Explicit Delegation**: Mention specific agent for targeted expertise
- **Automatic Routing**: Let Claude delegate based on task descriptions
- **Collaborative Tasks**: Use multiple agents for comprehensive feature development
### Maintenance and Evolution
- **Regular Review**: Assess agent effectiveness and update system prompts
- **Domain Evolution**: Adjust agent boundaries as architecture evolves
- **Tool Access Updates**: Modify tool permissions based on usage patterns
- **Cross-Training**: Ensure agents understand domain boundaries and collaboration points
## Conclusion
This sub-agent architecture leverages the AutoDocs MCP Server's layered design to create specialized AI assistants for each domain. By implementing these agents, development efficiency will improve through:
- **Reduced Context Switching**: Specialized expertise for each architectural domain
- **Higher Quality Output**: Domain-specific knowledge and best practices
- **Faster Development Cycles**: Parallel work across domains with specialized agents
- **Better Maintainability**: Consistent patterns within each domain
- **Enhanced Production Readiness**: Dedicated operational expertise
The proposed strategy balances specialization with collaboration, ensuring each agent has deep domain knowledge while maintaining awareness of cross-domain dependencies and integration points.
```
Result:
```
File created successfully at: /Users/bradleyfay/autodocs/planning/sub_agent_architecture.md
```
_**Agent (claude-sonnet-4-20250514)**_
**Agent task list:**
- [X] Analyze the codebase to understand current architecture and identify specialized domains
- [X] Design sub-agent strategy based on project structure and development needs
- [X] Create planning document with detailed sub-agent architecture proposal
_**Agent (claude-sonnet-4-20250514)**_
I've created a comprehensive sub-agent architecture strategy document at `/Users/bradleyfay/autodocs/planning/sub_agent_architecture.md`.
The proposal outlines **5 specialized sub-agents** based on your codebase's architectural domains:
1. **MCP Protocol Specialist** - FastMCP server, tool definitions, client integrations
2. **Core Services Architect** - Business logic, dependency resolution, documentation processing
3. **Testing Infrastructure Specialist** - Pytest ecosystem, mock strategies, test automation
4. **Production Operations Expert** - Deployment, monitoring, security, configuration
5. **Documentation and Integration Guide** - Technical docs, API guides, user onboarding
Each agent has tailored system prompts, specific tool access, and clear usage scenarios. The strategy emphasizes domain expertise while maintaining cross-agent collaboration for complex workflows like feature development and production deployments.
The recommended implementation order starts with the **Testing Specialist** for immediate value, followed by **Core Services Architect** for the most frequent development needs.