echelon-ai-labs

ServiceNow MCP Server

publish_changeset

Publish a changeset in ServiceNow by providing the changeset ID and optional publish notes, deploying the changeset's configuration changes across the instance.

Instructions

Publish a changeset in ServiceNow

Input Schema

Name      Required  Description  Default
params    Yes       –            –

Implementation Reference

  • The core handler function implementing the publish_changeset tool. It validates input parameters, retrieves ServiceNow instance details, and performs a PATCH request to set the changeset state to 'published'.
    def publish_changeset(
        auth_manager: AuthManager,
        server_config: ServerConfig,
        params: Union[Dict[str, Any], PublishChangesetParams],
    ) -> Dict[str, Any]:
        """
        Publish a changeset in ServiceNow.
    
        Args:
            auth_manager: The authentication manager.
            server_config: The server configuration.
            params: The parameters for publishing a changeset. Can be a dictionary or a PublishChangesetParams object.
    
        Returns:
            The published changeset.
        """
        # Unwrap and validate parameters
        result = _unwrap_and_validate_params(
            params, 
            PublishChangesetParams, 
            required_fields=["changeset_id"]
        )
        
        if not result["success"]:
            return result
        
        validated_params = result["params"]
        
        # Get the instance URL
        instance_url = _get_instance_url(auth_manager, server_config)
        if not instance_url:
            return {
                "success": False,
                "message": "Cannot find instance_url in either server_config or auth_manager",
            }
        
        # Get the headers
        headers = _get_headers(auth_manager, server_config)
        if not headers:
            return {
                "success": False,
                "message": "Cannot find get_headers method in either auth_manager or server_config",
            }
        
        # Add Content-Type header
        headers["Content-Type"] = "application/json"
        
        # Prepare the request data for the publish action
        data = {
            "state": "published",
        }
        
        # Add publish notes if provided
        if validated_params.publish_notes:
            data["description"] = validated_params.publish_notes
        
        # Make the API request
        url = f"{instance_url}/api/now/table/sys_update_set/{validated_params.changeset_id}"
        
        try:
            response = requests.patch(url, json=data, headers=headers)
            response.raise_for_status()
            
            result = response.json()
            
            return {
                "success": True,
                "message": "Changeset published successfully",
                "changeset": result["result"],
            }
        except requests.exceptions.RequestException as e:
            logger.error(f"Error publishing changeset: {e}")
            return {
                "success": False,
                "message": f"Error publishing changeset: {str(e)}",
            }
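The request the handler ultimately issues can be sketched as follows. This is illustrative only: the instance URL, sys_id, and note below are made-up values, and the real handler resolves them via `AuthManager` and `ServerConfig`.

```python
# Illustrative values only; a real call uses the configured instance and a real sys_id.
instance_url = "https://dev12345.service-now.com"
changeset_id = "a1b2c3d4e5f60000aaaabbbbccccdddd"
publish_notes = "Publishing October hotfix set"

# The handler PATCHes the sys_update_set record, setting its state to 'published'.
url = f"{instance_url}/api/now/table/sys_update_set/{changeset_id}"
data = {"state": "published"}
if publish_notes:
    data["description"] = publish_notes

print(url)
print(data)
```

Note that this is a mutative operation: a successful PATCH changes the update set's state on the instance.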
  • Pydantic BaseModel defining the input schema for the publish_changeset tool, including required changeset_id and optional publish_notes.
    class PublishChangesetParams(BaseModel):
        """Parameters for publishing a changeset."""
    
        changeset_id: str = Field(..., description="Changeset ID or sys_id")
        publish_notes: Optional[str] = Field(None, description="Notes for publishing")
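A parameter payload matching this schema might look like the following sketch; the sys_id and note are hypothetical values, and `publish_notes` may be omitted entirely.

```python
# Hypothetical values: any valid sys_update_set sys_id (or changeset name) works.
params = {
    "changeset_id": "a1b2c3d4e5f60000aaaabbbbccccdddd",  # required
    "publish_notes": "Reviewed and approved by CAB",     # optional
}
print(params)
```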
  • Tool registration in the get_tool_definitions() function's dictionary. Associates the tool name with its handler (publish_changeset_tool), input schema (PublishChangesetParams), return type (str), description, and serialization method.
    "publish_changeset": (
        publish_changeset_tool,
        PublishChangesetParams,
        str,
        "Publish a changeset in ServiceNow",
        "str",  # Tool returns simple message
    ),
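How a consumer of the registry might unpack this tuple can be sketched as below; the handler stub and the registry shape here are simplified stand-ins, not the real module.

```python
# Simplified sketch of the registry tuple; the real handler performs the API call.
def publish_changeset_tool(params):
    # Stand-in for the real handler, which PATCHes the update set.
    return "Changeset published successfully"

TOOL_DEFINITIONS = {
    "publish_changeset": (
        publish_changeset_tool,              # handler
        dict,                                # stands in for PublishChangesetParams
        str,                                 # return type
        "Publish a changeset in ServiceNow", # description
        "str",                               # serialization method
    ),
}

handler, schema, return_type, description, serialization = (
    TOOL_DEFINITIONS["publish_changeset"]
)
print(handler({"changeset_id": "abc"}))
```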
  • Shared helper function used by publish_changeset (and other tools) to validate and unwrap input parameters against the Pydantic schema.
    def _unwrap_and_validate_params(
        params: Union[Dict[str, Any], BaseModel], 
        model_class: Type[T], 
        required_fields: Optional[List[str]] = None
    ) -> Dict[str, Any]:
        """
        Unwrap and validate parameters.
    
        Args:
            params: The parameters to unwrap and validate. Can be a dictionary or a Pydantic model.
            model_class: The Pydantic model class to validate against.
            required_fields: List of fields that must be present.
    
        Returns:
            A dictionary with success status and validated parameters or error message.
        """
        try:
            # Handle case where params is already a Pydantic model
            if isinstance(params, BaseModel):
                # If it's already the correct model class, use it directly
                if isinstance(params, model_class):
                    model_instance = params
                # Otherwise, convert to dict and create new instance
                else:
                    model_instance = model_class(**params.dict())
            # Handle dictionary case
            else:
                # Create model instance
                model_instance = model_class(**params)
            
            # Check required fields
            if required_fields:
                missing_fields = []
                for field in required_fields:
                    if getattr(model_instance, field, None) is None:
                        missing_fields.append(field)
                
                if missing_fields:
                    return {
                        "success": False,
                        "message": f"Missing required fields: {', '.join(missing_fields)}",
                    }
            
            return {
                "success": True,
                "params": model_instance,
            }
        except Exception as e:
            return {
                "success": False,
                "message": f"Invalid parameters: {str(e)}",
            }
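The contract this helper follows can be illustrated with a stdlib-only sketch: it always returns either `{"success": True, "params": ...}` or `{"success": False, "message": ...}`. The `check_required` function below is a simplified stand-in, not the real helper.

```python
# Simplified, stdlib-only stand-in for the required-field check.
def check_required(params: dict, required: list) -> dict:
    missing = [f for f in required if params.get(f) is None]
    if missing:
        return {
            "success": False,
            "message": f"Missing required fields: {', '.join(missing)}",
        }
    return {"success": True, "params": params}

print(check_required({"publish_notes": "x"}, ["changeset_id"]))
print(check_required({"changeset_id": "abc"}, ["changeset_id"]))
```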
  • Helper function to retrieve the ServiceNow instance URL from either auth_manager or server_config, used by publish_changeset.
    def _get_instance_url(auth_manager: AuthManager, server_config: ServerConfig) -> Optional[str]:
        """
        Get the instance URL from either auth_manager or server_config.
    
        Args:
            auth_manager: The authentication manager.
            server_config: The server configuration.
    
        Returns:
            The instance URL or None if not found.
        """
        # Try to get instance_url from server_config
        if hasattr(server_config, 'instance_url'):
            return server_config.instance_url
        
        # Try to get instance_url from auth_manager
        if hasattr(auth_manager, 'instance_url'):
            return auth_manager.instance_url
        
        # If neither object exposes instance_url, nothing more can be recovered,
        # even when the auth_manager and server_config arguments were swapped.
        logger.error("Cannot find instance_url in either auth_manager or server_config")
        return None
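The lookup order can be demonstrated with minimal stand-ins; `FakeServerConfig` and `FakeAuthManager` below are illustrative stubs, while the real `AuthManager` and `ServerConfig` live in the server codebase.

```python
# Minimal stand-ins to illustrate the resolution order.
class FakeServerConfig:
    instance_url = "https://dev12345.service-now.com"

class FakeAuthManager:
    pass  # no instance_url attribute

def get_instance_url(auth_manager, server_config):
    # server_config wins when both define instance_url.
    if hasattr(server_config, "instance_url"):
        return server_config.instance_url
    if hasattr(auth_manager, "instance_url"):
        return auth_manager.instance_url
    return None

print(get_instance_url(FakeAuthManager(), FakeServerConfig()))
```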
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure but offers minimal insight. It states the action is 'publish' but doesn't clarify if this is a destructive/mutative operation, what permissions are required, what happens post-publish, or any side effects. This leaves critical behavioral traits undocumented.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise—a single sentence with no wasted words. It's front-loaded with the core action and resource, making it easy to parse quickly, though this brevity contributes to gaps in other dimensions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a 'publish' operation in ServiceNow (likely a mutative action with side effects), no annotations, no output schema, and 0% schema coverage, the description is incomplete. It fails to address behavioral risks, parameter meanings, or expected outcomes, leaving the agent under-informed for safe and effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate but adds no parameter information. It doesn't explain what 'changeset_id' represents, the format of 'publish_notes', or their roles in the publishing process. This leaves both parameters semantically unclear beyond their names.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the action ('Publish') and resource ('a changeset in ServiceNow'), which provides a basic understanding of the tool's function. However, it lacks specificity about what 'publish' entails operationally and doesn't differentiate from sibling tools like 'commit_changeset' or 'update_changeset', leaving ambiguity about when to use each.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description doesn't mention prerequisites (e.g., whether the changeset must be committed first), exclusions, or relationships to sibling tools like 'commit_changeset', leaving the agent to infer usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
