get_all_source_biases

Retrieve bias scores for active news sources to analyze media credibility, identify political leanings, and compare outlets across multiple ideological and quality dimensions.

Instructions

Get bias scores for every news source in the Helium database.

Returns a list of all sources (active within the last 36 days, with >100 articles analyzed),
sorted by avg_social_shares descending. Use this to compare sources, find the most credible
outlets, identify politically extreme sources, or build a ranked overview of the media landscape.

Each entry contains:
- source_name, slug_name, page_url
- articles_analyzed: total articles analyzed for this source
- avg_social_shares: average social shares per article (proxy for reach/influence)
- emotionality_score (0-10): average emotional intensity of the writing
- prescriptiveness_score (0-10): how much the source tells readers what to think/do
- bias_values: dict mapping classifier key → integer score (-50 to +50 for bipolar,
  0 to +50 for unipolar). These keys are identical to what get_bias_from_url returns,
  so you can compare article-level and source-level scores directly.

  Political / ideological (bipolar: neg=left pole, pos=right pole):
    'liberal conservative bias'      neg=liberal, pos=conservative
    'libertarian authoritarian bias' neg=libertarian, pos=authoritarian
    'dovish hawkish bias'            neg=dovish, pos=hawkish
    'establishment bias'             neg=anti-establishment, pos=pro-establishment

  Credibility / quality (bipolar):
    'overall credibility'            neg=uncredible, pos=credible
    'integrity bias'                 neg=low integrity, pos=high integrity
    'article intelligence'           neg=low intelligence, pos=high intelligence
    'delusion bias'                  neg=truth-seeking, pos=delusional
    'objective subjective bias'      neg=objective, pos=subjective
    'bearish bullish bias'           neg=bearish, pos=bullish
    'emotional bias'                 neg=negative tone, pos=positive tone

  Unipolar bias dimensions (higher = more of that trait):
    'objective sensational bias'     sensationalism
    'opinion bias'                   opinion vs informative
    'descriptive prescriptive bias'  prescriptive vs descriptive
    'political bias'                 political content
    'fearful bias'                   fear-based framing
    'overconfidence bias'            overconfidence
    'gossip bias'                    gossip
    'manipulation bias'              manipulative framing
    'ideological bias'               ideological rigidity
    'conspiracy bias'                conspiracy content
    'double standard bias'           double standards
    'virtue signal bias'             virtue signaling
    'oversimplification bias'        oversimplification
    'appeal to authority bias'       appeal to authority
    'begging the question bias'      question-begging
    'victimization bias'             victimization framing
    'terrorism bias'                 terrorism content
    'scapegoat bias'                 scapegoating
    'suicidal empathy bias'          suicidal-empathy framing
    'cruelty bias'                   cruelty
    'woke bias'                      woke framing
    'written by AI'                  AI-written likelihood
    'immature bias'                  immaturity
    'circular reasoning bias'        circular reasoning
    'covering the response bias'     covering-the-response tactic
    'spam bias'                      spam-like content
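To make the score semantics concrete, here is a minimal Python sketch of interpreting one entry. The entry values are illustrative only (not real Helium data), and `describe_bipolar` is a hypothetical helper, not part of any API:

```python
# Hypothetical entry shaped like a get_all_source_biases result item.
# All field values below are made up for illustration.
entry = {
    "source_name": "Example Times",
    "slug_name": "example-times",
    "page_url": "https://example.com",
    "articles_analyzed": 1250,
    "avg_social_shares": 340.5,
    "emotionality_score": 6.2,
    "prescriptiveness_score": 4.8,
    "bias_values": {
        "liberal conservative bias": -18,  # bipolar: negative = liberal pole
        "overall credibility": 22,         # bipolar: positive = credible pole
        "objective sensational bias": 12,  # unipolar: mild sensationalism
    },
}

def describe_bipolar(score, neg_pole, pos_pole):
    """Map a -50..+50 bipolar score onto its pole labels."""
    if score < 0:
        return f"{neg_pole} ({abs(score)}/50)"
    if score > 0:
        return f"{pos_pole} ({score}/50)"
    return "neutral"

lean = describe_bipolar(
    entry["bias_values"]["liberal conservative bias"],
    "liberal", "conservative",
)
print(lean)  # → liberal (18/50)
```

Unipolar keys need no pole mapping: the raw 0 to +50 value already reads as "how much of that trait" the source exhibits.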

Tip: use get_source_bias for full narrative descriptions and recent articles on a specific source.
Tip: bias_values keys here are identical to those in get_bias_from_url and search_news — compare them directly.
Warning: get_source_bias returns bias_scores with emoji-prefixed display keys (e.g. '🔵 Liberal <—> Conservative 🔴')
that are NOT interchangeable with the plain-text keys used here. Do not cross-reference them.
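Because `bias_values` keys match across `get_all_source_biases` and `get_bias_from_url`, an article can be diffed against its source's baseline key by key. A sketch, with both score dicts invented for illustration:

```python
# Illustrative scores: source-level baseline (from get_all_source_biases)
# vs. one article's scores (from get_bias_from_url). Both use the same
# plain-text keys, so a direct per-key comparison is valid.
source_scores = {"liberal conservative bias": -18, "overall credibility": 22}
article_scores = {"liberal conservative bias": -30, "overall credibility": 15}

# For each key present in both dicts, how far the article deviates from
# the source's average. On a bipolar key, a negative deviation means the
# article sits further toward the negative pole than the source baseline.
deviations = {
    key: article_scores[key] - source_scores[key]
    for key in article_scores.keys() & source_scores.keys()
}
print(deviations)
# e.g. {'liberal conservative bias': -12, 'overall credibility': -7}
```

The same diff would not work against `get_source_bias` output, whose emoji-prefixed display keys never match these plain-text keys.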

Input Schema

No arguments.

Output Schema

result (required)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior: it returns a filtered list (active within last 36 days, >100 articles analyzed), sorted by avg_social_shares descending, and details the structure and meaning of the output data. However, it lacks information on potential limitations like rate limits, data freshness, or error conditions, which would be helpful for a tool with such rich output.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose and usage, but it becomes overly detailed in listing all bias dimensions, which might be better summarized or referenced. While the information is valuable, the extensive enumeration of bias keys (over 30 items) makes it less concise, though the tips and warnings at the end are well-placed for clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (rich output data, no input parameters, and an output schema), the description is highly complete. It explains the filtering criteria, sorting order, output structure, and detailed semantics of bias scores, including comparisons to sibling tools. With an output schema present, it appropriately focuses on clarifying the meaning and usage of the returned data rather than just its format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so the baseline is 4. The description appropriately does not discuss parameters, as none exist, and instead focuses on the output semantics, which is valuable given the complexity of the returned data. It adds significant meaning by explaining the bias_values mapping and the distinction between bipolar and unipolar scores.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get bias scores for every news source in the Helium database.' It specifies the verb ('Get'), resource ('bias scores for every news source'), and scope ('Helium database'), and distinguishes itself from sibling tools like get_source_bias by indicating this returns comprehensive data for all sources rather than a specific one.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives. It states 'Use this to compare sources, find the most credible outlets, identify politically extreme sources, or build a ranked overview of the media landscape,' and includes tips and warnings that differentiate it from get_source_bias and get_bias_from_url, clarifying key distinctions in output formats and use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/connerlambden/helium-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.