
get_bias_from_url

Analyze political leaning, credibility, and bias dimensions of news articles by URL. Get AI-generated summaries, bias scores, and context for media analysis.

Instructions

Get bias analysis for a specific article by its URL.

Use this when you have a direct link to an article and want to know its political leaning,
credibility, emotionality, and other bias dimensions — without needing to know the source name first.

On success (found=true), returns the fields below (an illustrative example follows the list):
- title, source, date, link, category
- teaser: article excerpt
- summary: one-sentence AI summary
- context: AI-generated context for the article
- bias_description: narrative description of this specific article's bias
- bias_values: dict of per-dimension bias scores using plain-text keys (same schema as
  get_all_source_biases and search_news),
  e.g. {"liberal conservative bias": 12.3, "overall credibility": 40.1, "emotional bias": -5.2, ...}
  Positive values lean toward the second pole of each dimension (conservative, authoritarian, etc.).
- total_shares: total social shares
- wayback_link: Wayback Machine archive URL if available
- image: article image URL if available
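
For orientation only, here is a hypothetical success payload; all values are illustrative and the bias_values dict is truncated to the dimensions named above:

{
  "found": true,
  "title": "Example Article Title",
  "source": "Example News",
  "date": "2024-01-01",
  "link": "https://www.example.com/2024/01/01/example-article",
  "category": "politics",
  "teaser": "Opening excerpt of the article...",
  "summary": "One-sentence AI summary of the article.",
  "context": "AI-generated context for the article.",
  "bias_description": "Narrative description of this article's bias.",
  "bias_values": {"liberal conservative bias": 12.3, "overall credibility": 40.1, "emotional bias": -5.2},
  "total_shares": 1234,
  "wayback_link": "https://web.archive.org/web/20240101000000/https://www.example.com/2024/01/01/example-article",
  "image": "https://www.example.com/images/example.jpg"
}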

On failure (found=false, HTTP 404):
- found: false
- message: explanation string
The URL is automatically queued for ingestion; retry after ~24 hours.
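
A failed lookup, by contrast, is minimal; the message wording below is hypothetical:

{
  "found": false,
  "message": "Article not found. The URL has been queued for ingestion; try again in about 24 hours."
}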

Tip: if you want source-level bias (not article-level), use get_source_bias instead.
Tip: bias_values keys here use plain-text format (e.g. 'liberal conservative bias') and are
identical to those in get_all_source_biases and search_news. Note: get_source_bias returns
bias_scores with emoji-prefixed display keys — do not cross-reference them with bias_values here.
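
To make the contrast concrete (the emoji prefix below is hypothetical; the actual display keys come from get_source_bias itself):

bias_values key (this tool, get_all_source_biases, search_news):  "liberal conservative bias"
bias_scores key (get_source_bias only):                            "🔵🔴 liberal conservative bias"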

Args:
    url: Full article URL, e.g. 'https://www.nytimes.com/2024/01/01/us/politics/example.html'.

Input Schema

Name    Required    Description    Default
url     Yes         (none)         (none)

Output Schema

Name      Required    Description    Default
result    Yes         (none)         (none)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does so well by disclosing key behaviors: it describes success and failure outcomes (including the HTTP 404 details), notes the automatic queuing for ingestion and the ~24-hour retry advice, and clarifies how its bias value formats differ from those of other tools. It doesn't mention rate limits or auth requirements, but it covers most operational aspects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose, followed by usage guidance, detailed return values, and parameter info. It is comprehensive, and while some sections (like the detailed bias_values explanation) are slightly verbose, they remain informative. Most sentences earn their place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (bias analysis across multiple dimensions), the absence of annotations, and the presence of an output schema (implied by the detailed return description), the description is highly complete. It thoroughly documents success/failure outcomes, return fields, behavioral notes (queuing, retry), parameter details, and sibling-tool differentiation, leaving no significant gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage (no schema descriptions), but the description fully compensates by providing detailed parameter semantics in the 'Args' section: it explains the 'url' parameter with a clear example and formatting guidance ('Full article URL, e.g. ...'). This adds substantial value beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get bias analysis') and resources ('for a specific article by its URL'), explicitly distinguishing it from sibling tools like get_source_bias for source-level analysis. It specifies the analysis includes political leaning, credibility, emotionality, and other bias dimensions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('when you have a direct link to an article and want to know its political leaning... without needing to know the source name first') and when not to use it (with tips directing to get_source_bias for source-level bias). It also mentions retry timing after failures.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
