Augmented-Nature

ChEMBL MCP Server

assess_drug_likeness

Evaluate compound drug-likeness using Lipinski's Rule of Five and other pharmaceutical metrics to predict oral bioavailability potential.

Instructions

Assess drug-likeness using Lipinski Rule of Five and other metrics

Input Schema

Name       Required  Description                                Default
chembl_id  No        ChEMBL compound ID                         —
smiles     No        SMILES string (alternative to ChEMBL ID)   —
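Both parameters are optional in the schema, but the handler rejects calls that supply neither. A minimal sketch of a tool-call payload (CHEMBL25 is aspirin's ChEMBL ID; the exact envelope depends on your MCP client):

```typescript
// Hypothetical CallToolRequest payload for assess_drug_likeness.
// Pass `smiles` instead of `chembl_id` if no ChEMBL ID is known,
// though the handler currently gives a full assessment only for chembl_id.
const request = {
  method: 'tools/call',
  params: {
    name: 'assess_drug_likeness',
    arguments: { chembl_id: 'CHEMBL25' }, // aspirin
  },
};
```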

Implementation Reference

  • src/index.ts:672-683 (registration)
    Tool registration in the ListToolsRequestSchema response, including name, description, and input schema definition.
    {
      name: 'assess_drug_likeness',
      description: 'Assess drug-likeness using Lipinski Rule of Five and other metrics',
      inputSchema: {
        type: 'object',
        properties: {
          chembl_id: { type: 'string', description: 'ChEMBL compound ID' },
          smiles: { type: 'string', description: 'SMILES string (alternative to ChEMBL ID)' },
        },
        required: [],
      },
    },
  • src/index.ts:793-794 (dispatch)
    Tool handler dispatch in the CallToolRequestSchema switch statement.
    case 'assess_drug_likeness':
      return await this.handleAssessDrugLikeness(args);
  • The primary handler function for the assess_drug_likeness tool: it fetches ChEMBL compound data, counts Lipinski Rule of Five violations, checks Veber rule compliance, and builds a comprehensive drug-likeness assessment.
    private async handleAssessDrugLikeness(args: any) {
      if (!args || (!args.chembl_id && !args.smiles)) {
        throw new McpError(ErrorCode.InvalidParams, 'Invalid drug-likeness assessment arguments');
      }
    
      try {
        let molecule;
        if (args.chembl_id) {
          const response = await this.apiClient.get(`/molecule/${args.chembl_id}.json`);
          molecule = response.data;
        } else {
          return {
            content: [
              {
                type: 'text',
                text: JSON.stringify({
                  message: 'SMILES-based drug-likeness assessment requires ChEMBL ID',
                  smiles: args.smiles,
                }, null, 2),
              },
            ],
          };
        }
    
        const props = molecule.molecule_properties || {};
    
        // Lipinski Rule of Five
        const mw = props.full_mwt || props.molecular_weight || 0;
        const logp = props.alogp || 0;
        const hbd = props.hbd || 0;
        const hba = props.hba || 0;
    
        const lipinskiViolations = [
          mw > 500 ? 'Molecular weight > 500 Da' : null,
          logp > 5 ? 'LogP > 5' : null,
          hbd > 5 ? 'H-bond donors > 5' : null,
          hba > 10 ? 'H-bond acceptors > 10' : null,
        ].filter(v => v !== null);
    
        // Veber rules
        const rtb = props.rtb || 0;
        const psa = props.psa || 0;
        const veberPass = rtb <= 10 && psa <= 140;
    
        // Overall assessment
        const drugLikenessAssessment = {
          chembl_id: molecule.molecule_chembl_id,
          lipinski_rule_of_five: {
            violations: lipinskiViolations.length,
            details: lipinskiViolations.length > 0 ? lipinskiViolations : ['All criteria met'],
            pass: lipinskiViolations.length === 0,
            criteria: {
              molecular_weight: { value: mw, limit: 500, pass: mw <= 500 },
              logp: { value: logp, limit: 5, pass: logp <= 5 },
              hbd: { value: hbd, limit: 5, pass: hbd <= 5 },
              hba: { value: hba, limit: 10, pass: hba <= 10 },
            },
          },
          veber_rules: {
            pass: veberPass,
            criteria: {
              rotatable_bonds: { value: rtb, limit: 10, pass: rtb <= 10 },
              psa: { value: psa, limit: 140, pass: psa <= 140 },
            },
          },
          overall_assessment: {
            drug_likeness: lipinskiViolations.length === 0 ? 'Excellent' : lipinskiViolations.length === 1 ? 'Good' : 'Poor',
            oral_bioavailability: veberPass && lipinskiViolations.length <= 1 ? 'Likely' : 'Uncertain',
            recommendation: this.getDrugLikenessRecommendation(lipinskiViolations.length, veberPass),
          },
          molecular_properties: props,
        };
    
        return {
          content: [
            {
              type: 'text',
              text: JSON.stringify(drugLikenessAssessment, null, 2),
            },
          ],
        };
      } catch (error) {
        throw new McpError(
          ErrorCode.InternalError,
          `Failed to assess drug-likeness: ${error instanceof Error ? error.message : 'Unknown error'}`
        );
      }
    }
  • Supporting helper function, called by the handler, that produces a textual recommendation from the number of Lipinski violations and Veber rule compliance.
    private getDrugLikenessRecommendation(violations: number, veberPass: boolean): string {
      if (violations === 0 && veberPass) {
        return 'Excellent drug-like properties - suitable for oral administration';
      } else if (violations <= 1 && veberPass) {
        return 'Good drug-like properties - likely suitable for development';
      } else if (violations <= 2) {
        return 'Moderate drug-like properties - may require optimization';
      }
      return 'Poor drug-like properties - significant optimization needed';
    }
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the assessment method (Lipinski Rule of Five and other metrics) but doesn't specify what the tool returns (e.g., scores, pass/fail status), whether it's a read-only operation, or any limitations like input constraints or performance characteristics. For a tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded with the core functionality and uses specific terminology ('Lipinski Rule of Five'), making it easy to understand quickly. Every part of the sentence contributes meaning.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (assessing drug-likeness with multiple metrics), no annotations, and no output schema, the description is minimally adequate. It covers the what and how but lacks details on output format, behavioral traits, and usage context. This leaves the agent with gaps, especially since there's no output schema to clarify return values.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with clear documentation for 'chembl_id' and 'smiles' as alternative identifiers. The description doesn't add any parameter-specific information beyond what's in the schema, such as format details or usage examples. With high schema coverage, the baseline score of 3 is appropriate as the schema handles the parameter semantics adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Assess drug-likeness using Lipinski Rule of Five and other metrics'. It specifies the action (assess), the resource (drug-likeness), and the methodology (Lipinski Rule of Five and other metrics). However, it doesn't explicitly differentiate from sibling tools like 'analyze_admet_properties' or 'calculate_descriptors', which might also evaluate compound properties.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, context for application, or comparisons to sibling tools such as 'analyze_admet_properties' or 'predict_solubility', which could also assess compound characteristics. Usage is implied but not explicitly defined.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Augmented-Nature/ChEMBL-MCP-Server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.