Glama
echelon-ai-labs

ServiceNow MCP Server

get_change_request_details

Retrieve the full record for a specific change request by its ID via the ServiceNow Table API, together with its associated change tasks, simplifying tracking and management of change requests in ServiceNow instances.

Instructions

Get detailed information about a specific change request

Input Schema

Name     Required   Description   Default
params   Yes        —             —

Implementation Reference

  • The main handler function that executes the tool logic: it validates the input params, then fetches the change request and its associated change tasks via the ServiceNow REST Table API.
    def get_change_request_details(
        auth_manager: AuthManager,
        server_config: ServerConfig,
        params: Dict[str, Any],
    ) -> Dict[str, Any]:
        """
        Get details of a change request from ServiceNow.
    
        Args:
            auth_manager: The authentication manager.
            server_config: The server configuration.
            params: The parameters for getting change request details.
    
        Returns:
            The change request details.
        """
        # Unwrap and validate parameters
        result = _unwrap_and_validate_params(
            params, 
            GetChangeRequestDetailsParams,
            required_fields=["change_id"]
        )
        
        if not result["success"]:
            return result
        
        validated_params = result["params"]
        
        # Get the instance URL
        instance_url = _get_instance_url(auth_manager, server_config)
        if not instance_url:
            return {
                "success": False,
                "message": "Cannot find instance_url in either server_config or auth_manager",
            }
        
        # Get the headers
        headers = _get_headers(auth_manager, server_config)
        if not headers:
            return {
                "success": False,
                "message": "Cannot find get_headers method in either auth_manager or server_config",
            }
        
        # Make the API request
        url = f"{instance_url}/api/now/table/change_request/{validated_params.change_id}"
        
        # Query parameters for the Table API request (named to avoid shadowing
        # the function's own `params` argument)
        query_params = {
            "sysparm_display_value": "true",
        }

        try:
            response = requests.get(url, headers=headers, params=query_params)
            response.raise_for_status()
            
            result = response.json()
            
            # Get tasks associated with this change request
            tasks_url = f"{instance_url}/api/now/table/change_task"
            tasks_params = {
                "sysparm_query": f"change_request={validated_params.change_id}",
                "sysparm_display_value": "true",
            }
            
            tasks_response = requests.get(tasks_url, headers=headers, params=tasks_params)
            tasks_response.raise_for_status()
            
            tasks_result = tasks_response.json()
            
            return {
                "success": True,
                "change_request": result["result"],
                "tasks": tasks_result["result"],
            }
        except requests.exceptions.RequestException as e:
            logger.error(f"Error getting change request details: {e}")
            return {
                "success": False,
                "message": f"Error getting change request details: {str(e)}",
            }
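    The handler reports failures through a uniform envelope with a `success` flag rather than raised exceptions. A minimal sketch of consuming that envelope (the sample records below are illustrative, not from a live instance):

    ```python
    def summarize_result(result: dict) -> str:
        # The handler always returns {"success": bool, ...}: branch on the
        # flag instead of wrapping the call in try/except.
        if result["success"]:
            cr = result["change_request"]
            return (
                f"{cr.get('number', '?')}: {cr.get('short_description', '')} "
                f"({len(result['tasks'])} tasks)"
            )
        return f"Error: {result['message']}"

    ok = summarize_result({
        "success": True,
        "change_request": {"number": "CHG0030001", "short_description": "Patch web tier"},
        "tasks": [{"number": "CTASK0010001"}],
    })
    err = summarize_result({"success": False, "message": "Missing required parameter 'change_id'"})
    ```

    Because both the record lookup and the task query share this envelope, a caller never needs to distinguish HTTP errors from validation errors.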
  • Pydantic BaseModel defining the input schema for the tool, requiring a 'change_id' field.
    class GetChangeRequestDetailsParams(BaseModel):
        """Parameters for getting change request details."""
    
        change_id: str = Field(..., description="Change request ID or sys_id")
  • Registration of the tool in the central tool_definitions dictionary used by the MCP server to expose the tool.
    "get_change_request_details": (
        get_change_request_details_tool,
        GetChangeRequestDetailsParams,
        str,  # Expects JSON string
        "Get detailed information about a specific change request",
        "json",  # Tool returns list/dict
    ),
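    The registration tuple is positional. A hypothetical dispatch sketch — the registry layout mirrors the entry above, but the stand-in handler and the unpacking code are assumptions for illustration, not taken from the server's source:

    ```python
    def fake_handler(params: dict) -> dict:
        # Stand-in for get_change_request_details_tool, returning the
        # same envelope shape as the real handler.
        return {"success": True, "change_request": {"number": params["change_id"]}, "tasks": []}

    # Mirrors the tuple layout: (handler, params model, arg type, description, result format).
    tool_definitions = {
        "get_change_request_details": (
            fake_handler,
            None,    # GetChangeRequestDetailsParams would go here
            str,     # expects JSON string
            "Get detailed information about a specific change request",
            "json",  # tool returns list/dict
        ),
    }

    handler, model, arg_type, description, result_format = tool_definitions["get_change_request_details"]
    result = handler({"change_id": "CHG0030001"})
    ```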
  • Shared helper function used by multiple tools (including this one) to unwrap input parameters, validate them against Pydantic models, and handle common formatting issues.
    def _unwrap_and_validate_params(params: Any, model_class: Type[T], required_fields: Optional[List[str]] = None) -> Dict[str, Any]:
        """
        Helper function to unwrap and validate parameters.
        
        Args:
            params: The parameters to unwrap and validate.
            model_class: The Pydantic model class to validate against.
            required_fields: List of required field names.
            
        Returns:
            A dict with "success": True and the validated "params", or
            "success": False and an error "message".
        """
        # Handle case where params might be wrapped in another dictionary
        if isinstance(params, dict) and len(params) == 1 and "params" in params and isinstance(params["params"], dict):
            logger.warning("Detected params wrapped in a 'params' key. Unwrapping...")
            params = params["params"]
        
        # Handle case where params might be a Pydantic model object
        if not isinstance(params, dict):
            try:
                # Try to convert to dict if it's a Pydantic model
                logger.warning("Params is not a dictionary. Attempting to convert...")
                params = params.dict() if hasattr(params, "dict") else dict(params)
            except Exception as e:
                logger.error(f"Failed to convert params to dictionary: {e}")
                return {
                    "success": False,
                    "message": f"Invalid parameters format. Expected a dictionary, got {type(params).__name__}",
                }
        
        # Validate required parameters are present
        if required_fields:
            for field in required_fields:
                if field not in params:
                    return {
                        "success": False,
                        "message": f"Missing required parameter '{field}'",
                    }
        
        try:
            # Validate parameters against the model
            validated_params = model_class(**params)
            return {
                "success": True,
                "params": validated_params,
            }
        except Exception as e:
            logger.error(f"Error validating parameters: {e}")
            return {
                "success": False,
                "message": f"Error validating parameters: {str(e)}",
            }
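    The unwrapping step is worth isolating, since MCP clients sometimes double-wrap arguments in a "params" key. A standalone sketch mirroring just that first step of the helper:

    ```python
    def unwrap(params):
        # Mirror of the helper's first step: if the dict is a single
        # "params" key wrapping another dict, return the inner dict;
        # otherwise pass the value through unchanged.
        if (isinstance(params, dict) and len(params) == 1
                and "params" in params and isinstance(params["params"], dict)):
            return params["params"]
        return params

    unwrap({"params": {"change_id": "abc123"}})  # -> {"change_id": "abc123"}
    unwrap({"change_id": "abc123"})              # -> unchanged
    ```

    Note the deliberately narrow guard: a dict with any key besides "params" is left alone, so a legitimate field that happens to be named "params" alongside others is never unwrapped by mistake.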
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. While 'Get detailed information' implies a read-only operation, it doesn't disclose important behavioral aspects like authentication requirements, rate limits, error conditions, or what constitutes 'detailed information' versus basic data. For a tool with zero annotation coverage, this leaves significant gaps in understanding how the tool behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence that efficiently states the tool's purpose. It's appropriately sized for a simple retrieval tool and front-loads the essential information without unnecessary elaboration. However, it could be slightly more specific about what 'detailed information' includes.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no annotations, no output schema, and minimal parameter documentation, the description is insufficient. It doesn't explain what information is returned, how errors are handled, or any prerequisites for using the tool. Given the context of change management systems where permissions and data sensitivity are important, more contextual information would be valuable for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage (the parameter 'change_id' has no description in the schema), the description doesn't add any parameter-specific information. It mentions 'specific change request' which aligns with the parameter name but provides no additional semantics about what format the ID should be, where to find it, or validation requirements. The baseline is 3 since there's only one parameter, but the description doesn't compensate for the schema's lack of documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The registered description, 'Get detailed information about a specific change request', clearly indicates a read operation on change requests. However, it doesn't differentiate this tool from similar sibling tools like 'get_changeset_details' or 'get_workflow_details', leaving ambiguity about what distinguishes this specific retrieval operation from others in the same domain.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With sibling tools like 'list_change_requests' and 'get_changeset_details', there's no indication whether this tool is for individual records versus lists, or how it relates to other retrieval operations. The agent must infer usage from the tool name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
