# 14 — Parameter Audit: `scrape_pages.use_llm`
## Metadata
| Field | Value |
|---|---|
| Canonical parameter | `scrape_pages.use_llm` |
| Legacy alias path | `scrape_links.use_llm` |
| Default | `true` |
| Active source | `src/schemas/scrape-links.ts` |
| YAML companion | `src/config/yaml/tools.yaml` |
## Original Texts (Verbatim)
### Zod `.describe()`
```text
AI extraction enabled by default (requires OPENROUTER_API_KEY). Auto-filters nav/ads/footers, extracts ONLY what you specify. Set false only for raw HTML debugging.
```
### YAML `schemaDescriptions`
```text
Defaults to true. AI extraction auto-filters noise, extracts only specified targets, returns clean structured content. Compression prefix+suffix auto-applied to maximize info density. Cost: ~$0.001/page. Set false ONLY for raw HTML debugging. Needs OPENROUTER_API_KEY.
```
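The two texts above are maintained in separate sources, which is where the drift risk in criticism #10 comes from. A minimal sketch of that dual-source layout, assuming a hypothetical loader that prefers the YAML companion text over the Zod `.describe()` fallback (the function and map names here are illustrative, not the project's actual API):

```typescript
// The canonical Zod text (abbreviated), kept in src/schemas/scrape-links.ts.
const zodDescription =
  "AI extraction enabled by default (requires OPENROUTER_API_KEY).";

// Stand-in for a parsed src/config/yaml/tools.yaml schemaDescriptions map.
const schemaDescriptions: Record<string, string> = {
  "scrape_pages.use_llm":
    "Defaults to true. AI extraction auto-filters noise.",
};

// Hypothetical resolver: the YAML companion wins when present, otherwise the
// Zod description is used. Two independently edited copies of nearly the same
// sentence is exactly the drift scenario flagged in criticism #10.
function effectiveDescription(path: string, fallback: string): string {
  return schemaDescriptions[path] ?? fallback;
}
```

With this shape, only one of the two texts is ever shown to the model, so any divergence between them is silent until someone audits both files.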
## Criticism Table (10)
| # | Criticism | Impact |
|---:|---|---|
| 1 | "only for debugging" is too absolute | Blocks valid non-debug use cases |
| 2 | Zod/YAML duplicate messaging | Context waste |
| 3 | Internal compression details repeated | Noise |
| 4 | Cost hint may drift over time | Inaccuracy risk |
| 5 | Tradeoff not stated cleanly (quality vs speed/cost) | Decision friction |
| 6 | Missing explicit fallback behavior when key absent | Ambiguity |
| 7 | Overly prescriptive tone | Lower flexibility |
| 8 | No mention of relation to `what_to_extract` quality | Lost guidance |
| 9 | No bulk-triage use-case (`false` for first pass) | Missed workflow |
| 10 | Slight inconsistency between active and companion text | Drift risk |
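Criticism #6 (no explicit fallback when the key is absent) can be made concrete. One way to resolve the ambiguity, sketched here as a hypothetical helper rather than the tool's actual implementation:

```typescript
// Hypothetical resolver (not the real codepath): make the missing-key
// behavior explicit. An explicit `use_llm: false` is always honored; a
// requested `true` silently degrades to raw output when no
// OPENROUTER_API_KEY is available, instead of failing the call.
function resolveUseLlm(requested: boolean, apiKey: string | undefined): boolean {
  return requested && typeof apiKey === "string" && apiKey.length > 0;
}
```

Whichever policy the tool actually implements (degrade vs. hard error), stating it in the parameter description removes the ambiguity.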
## Recommended Parameter Description (copy-paste)
```text
Enable LLM extraction post-processing (default: true). Set false to return raw cleaned page content when you prefer manual parsing, want lower latency/cost, or when an extraction model is unavailable.
```
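Applied to the active source, the recommended text would attach via Zod's standard `.describe()` chain. A sketch of what the field in `src/schemas/scrape-links.ts` might look like (the surrounding schema shape is assumed, not copied from the repo):

```typescript
import { z } from "zod";

// Assumed field definition; only the .describe() text is the recommendation.
const useLlm = z
  .boolean()
  .default(true)
  .describe(
    "Enable LLM extraction post-processing (default: true). Set false to " +
      "return raw cleaned page content when you prefer manual parsing, want " +
      "lower latency/cost, or when an extraction model is unavailable."
  );
```

The same sentence should replace the YAML `schemaDescriptions` entry verbatim, closing the drift gap noted in criticism #10.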
## Alternatives (3)
| Alternative | Pros | Cons |
|---|---|---|
| A — **Recommended** balanced tradeoff | Clear and flexible | Less prescriptive |
| B — strict extraction-first | Maximizes automation | Too rigid |
| C — minimal technical note | Lowest token cost | Weak decision support |
## Flow Note
Common pattern: run broad triage with `use_llm=false`, then targeted high-fidelity passes with `use_llm=true`.
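The triage pattern above can be sketched as two sequential calls. `scrapePages` here is a synchronous stand-in for the real tool invocation (its actual client API is not shown in this audit), so only the control flow is meaningful:

```typescript
type Page = { url: string; content: string };

// Stub for the real scrape_pages tool call: raw passes return cleaned text,
// LLM passes return compact extractions (illustration only).
function scrapePages(urls: string[], opts: { useLlm: boolean }): Page[] {
  return urls.map((url) => ({
    url,
    content: opts.useLlm ? `extracted:${url}` : `raw:${url}`,
  }));
}

function twoPass(urls: string[], keyword: string): Page[] {
  // Pass 1: broad, cheap triage with use_llm=false over the full URL list.
  const triage = scrapePages(urls, { useLlm: false });
  const relevant = triage
    .filter((page) => page.content.includes(keyword))
    .map((page) => page.url);
  // Pass 2: high-fidelity use_llm=true extraction on the shortlist only.
  return scrapePages(relevant, { useLlm: true });
}
```

Because the first pass skips the extraction model entirely, per-page cost and latency stay low until the candidate set is narrowed.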