Agentic AI security vulnerability: Weaponizing ServiceNow’s Virtual Agent API and Now Assist AI agents

Imagine an unauthenticated attacker who has never logged into your ServiceNow instance and has no credentials, and is sitting halfway across the globe. With only a target’s email address, the attacker can impersonate an administrator and execute an AI agent to override security controls and create backdoor accounts with full privileges. This could grant nearly unlimited access to everything an organization houses, such as customer Social Security numbers, healthcare information, financial records, or confidential intellectual property.

This is not theoretical. I discovered a critical vulnerability, tracked as CVE-2025-12420, in the popular ServiceNow Virtual Agent API and the Now Assist AI Agents application. By chaining a hardcoded, platform-wide secret with account-linking logic that trusts a simple email address, an attacker can bypass multi-factor authentication (MFA), single sign-on (SSO), and other access controls. And it’s the most severe AI-driven security vulnerability uncovered to date. With these weaknesses linked together, the attacker can remotely drive privileged agentic workflows as any user.

This deep dive explains BodySnatcher, analyzing the specific interplay between the Virtual Agent API and Now Assist that enabled this exploit. It details how insecure configurations transformed a standard natural language understanding (NLU) chatbot into a silent launchpad for malicious AI agent execution.

Vulnerability Details

This vulnerability affected ServiceNow instances running the following application versions. 

ApplicationAffected Versions (Inclusive)Earliest Known Fixed Versions
Now Assist AI Agents (sn_aia)5.0.24 – 5.1.17, and 5.2.0 – 5.2.185.1.18 and 5.2.19
Virtual Agent API (sn_va_as_service)<= 3.15.1 and 4.0.0 – 4.0.33.15.2 and 4.0.4

Disclosure Timeline

October 23rd 2025AppOmni report vulnerability to ServiceNowServiceNow acknowledge receipt of vulnerability
October 30th 2025ServiceNow remediate vulnerabilityServiceNow send email communication to customers informing them of the vulnerability ServiceNow release KB article accrediting Aaron Costello & AppOmni with the finding

Virtual Agent internals: A necessary detour

Understanding Virtual Agent 

Those familiar with ServiceNow will know that Virtual Agent walked so that Now Assist AI could run. Virtual Agent is ServiceNow’s enterprise chatbot engine. It gives users a conversational way to interact with the system’s underlying data and services. Virtual Agent works through deterministic Topic Flows. It uses Natural Language Understanding (NLU) to determine user intent from an incoming message, then maps that intent to a specific pre-defined Topic. In the ServiceNow ecosystem, a “topic” is a structured workflow designed to complete a particular task, such as resetting a password or filing a ticket. Topics are ultimately limited to the paths explicitly defined by the developer.

ServiceNow’s Virtual Agent API lets conversations occur outside the ServiceNow web interface. This API acts as a bridge between external integrations, such as chat bots, and Virtual Agent. Organizations can use it to expose Virtual Agent topics to platforms like Slack and Microsoft Teams. Enterprise organizations adopt this architecture because employees can order hardware, file support tickets, or access helpful knowledge-base content without ever needing to log in to ServiceNow directly.

ServiceNow’s Virtual Agent API: The basic concepts

To handle external messages, Virtual Agent must know who is requesting information and what the message contains. Large organizations will likely need integrations for different platforms to facilitate the needs of various teams or departments. Each integration might send user messages to the Virtual Agent API in different formats. 

ServiceNow’s Virtual Agent API solves this by introducing providers and channels. Each integration uses its own provider within ServiceNow, which defines how incoming messages are authenticated and transformed so Virtual Agent can understand them. This architecture removes the need to create new API endpoints for each integration. Instead, all bots use the same out-of-the-box Virtual Agent API endpoint and simply include the channel identifier as part of their requests. The channel ID lets ServiceNow locate the provider record and interpret the data it received. 

How providers enforce authentication and perform identity-linking

The Contextual and Provider Attributes actions determine the ‘what’. They map the data from API requests into a format that the Virtual Agent understands, assigning the data to variables that the Virtual Agent uses for regular on-platform conversations. 

The Automatic Link action and the Message Auth record determine the ‘who’. 

  • Message Auth is an authentication method that external integrations can use as an alternative to OAuth or Basic Auth. It authenticates the integration to a particular provider. The Message Auth record holds a static credential, effectively acting as the client-secret or ‘password’ for the provider. When authenticating to the Virtual Agent API, this credential is presented in the request alongside the provider’s identifier. The reason for focusing on this specific method of authentication is because it is the form of authentication used by providers that were introduced in version 5.0.24 of the Now Assist Agents application. 
  • While Message Auth authenticates the integration itself, users interacting with the chatbot integration on an external platform such as Slack still need to identify themselves to ServiceNow. One way this can happen is through a feature called Auto-Linking. When enabled, auto-linking lets the provider automatically associate an individual on an external platform with their ServiceNow account. The Automatic Link Action script defines how this match happens. This linking of these identities is crucial because it ensures that all data accessed and any actions made through Virtual Agent occur in the context of the correct user account. 

This framework of providers, message authentication, and auto-linking gives third-party tools a customizable and seamless way to talk to ServiceNow’s Virtual Agent chatbot(s). However, the security of this communication model relies entirely on the integrity of the specific provider records. in particular, their associated secrets and auto-linking logic. When the Now Assist AI Agents application introduced new providers that leveraged these mechanisms insecurely, it exposed a path attackers could systematically abuse.

Insecure AI providers: Exploiting auto-linking using shared credentials

As ServiceNow enhanced the on-platform Virtual Agent capabilities to allow user communication with AI agents, the Now Assist AI Agents application introduced new providers to extend the capabilities  over the Virtual Agent API. These new providers pushed the Virtual Agent API beyond its bot-to-bot use cases and enabled it to support bot-to-agent or agent-to-agent interactions.

These new ‘AI Agent’ channel providers shared a number of configurations such as using message authentication to validate inbound API requests. Because of this design, authenticating to any of these providers required only the single, non-rotating static client secret that they had been configured with. It’s reasonable to assume ServiceNow chose this approach to provide a more seamless experience for end users, fully leveraging the transparent nature of auto-linking. However, the implementation suffered from two primary problems.

  1. First, these providers shipped with the exact same secret across all ServiceNow instances. This meant anyone who knew or obtained the token could interact with the Virtual Agent API of any customer environment where these providers were active. Possessing this shared token alone did not grant elevated privileges, since Virtual Agent still treated the requester as an unauthenticated external party. Nevertheless, the token provided a universal, instance-agnostic authentication bypass that should never have existed at all.
Fig. 4: The authentication token used by every ServiceNow instance’s AI Agent Provider – ‘servicenowexternalagent’
  1. Second, and more critically, the Auto-Linking logic trusted any requester who supplied the shared token. The channel provider(s) used Basic account-linking, which meant they did not enforce  multi-factor authentication. As a result, the provider required only the email address to link an external entity to a ServiceNow account. Once the requester provided a valid and existing email address, the provider linked them to that user. Subsequent Virtual Agent interactions processed all further interactions under the identity of the impersonated account. In practical terms, any unauthenticated attacker could impersonate any user during a conversation simply by knowing their email address.
Fig. 5: The auto-linking action code for the AI Agent Provider(s)

The net security risk of these problems alone was relatively minimal. At best an attacker could supply an undocumented ‘live_agent_only’ parameter in their message payload to the Virtual Agent API, which would force the Virtual Agent to pass-off the message content to a real human (if supported by the organization). By sending a message as a trusted user to a member of an organization’s IT support staff, a phishing risk is surfaced

A proof-of-concept (PoC) HTTP request to the Virtual Agent API demonstrates this behavior. It uses one of the vulnerable AI providers, ‘default-external-agent’, to deliver a phishing payload to a human live support agent from the admin’s (admin@example.com) account.

But even this phishing vector had limited impact  because the ‘AI Agent’ channel used by these providers operated asynchronously by design. In other words, attackers could send messages as any user, but support staff responses went to a pre-configured outbound URL that was outside of the attacker’s control. This resulted in one-way communication which further limited any practical impact.

How A2A requests enter the Virtual Agent framework

To understand how the exploit gains real impact, it is important to recall that the intended purpose of these AI agent providers was never to serve as ‘yet another channel provider’ for Virtual Agent bot-to-bot communications. ServiceNow  introduced these providers to support the agent-to-agent protocol, which is designed to allow external AI agents to interact with ServiceNow agents in a standardized manner.

To support this capability, the Now Assist AI Agents application includes an A2A Scripted REST API. Although this API is gated behind authentication, its internal behavior is noteworthy. The API reformats incoming POST data into the same structure the Virtual Agent API uses, then inserts the resulting payload into the Virtual Agent server queue. In effect, the API functions as an adapter for Virtual Agent. 

Below is a visual that provides a high-level breakdown of the code that facilitates this process.

Fig. 7: Breakdown of the code that powers the A2A API

Of the code functions depicted above, the _getContextVars function is most important for understanding the inputs needed for an attacker to trigger AI agent execution. But the code is ambiguous because it references constants which aren’t visible in the script.   

Portion of code that outlines the parameters needed for agent execution

These constants come from a separate Script Include, sn_aia.AIAgentConstants. But this script has a Protection policy of Protected, which prevents viewing the source code in the UI.

Fig. 8: The script’s protection policy prevents the code being directly viewed 

Dumping cross-scope constants: A refresher in application access controls

Although the source code is not visible in the UI, the Accessible From field in the previous image was set to All application scopes. This means  other application scopes can still access  the script’s values. ServiceNow configured it this way because its code is used and referenced by other vendor-supplied scripts that exist in other scopes, such as Global.

Attackers or researchers can take advantage of this by running a Background Script. Background Script lets administrators execute arbitrary Javascript code on the fly in ServiceNow. Through this means, an admin can dump the constant object _getContextVars references with the following one-liner:

The result is an object holding around ~1000 parameter-key values, including those used by _getContextVars.

Once you have the dictionary, you can rewrite the key-value lookup logic.

Portion of _getContextVars code with its constant values replaced

By translating the key-value pairs, the logic is no longer a mystery. The sys_id value d5986940ff702210e819fffffffffffe is the smoking gun. It identifies the topic that drives AI agent execution.

Introducing AIA-Agent Invoker AutoChat

The default_topic and topic values defined in the script correspond to a ServiceNow record identifier, or sys_id. In this case, it is the identifier of a topic record labeled AIA-Agent Invoker AutoChat. As hinted by the code in the previous section, the purpose of this topic is to execute AI agents through Virtual Agent. 

Generally, topics such as this can be inspected using Virtual Agent Designer, an on-platform application that can be used to visualize a topic’s functionality in a workflow-style format. But this particular topic is restricted from being opened in Virtual Agent Designer by customers. In fact, if you attempt to access it, you will encounter a Security Violation error page. 

You can still access the topic’s metadata when opening it directly, outside of the tool. However it will be presented as a tangled web of JSON structures and Javascript code. For clarity, I have distilled what I consider the most important parts of the topic’s code into a table of high-level actions. This representation is intended to be illustrative rather than literal and should not be read as a fully prescriptive implementation. Additionally, some of the code functions being called are inaccessible due to script protection policies. In these cases I’ve taken a ‘best effort’ approach at determining the actions that a particular function call is making, based on surrounding logic and the function signature. 

Entries in the table associated with scripts that have inaccessible code are prefixed with an asterisk (*)

StepCaller TopicArtifactTop-Level Function Details
1. Preparation & LoggingAIA-Agent Invoker AutoChatInvokerUtil (Script Include)__createExecutionPlanCreate a stub execution plan for the upcoming AI execution. In addition to this, it calls another function that injects context about the calling user (name etc) and their message(s) into short term memory.
2. Call Orchestration EngineAIA-Agent Invoker AutoChatInvoke Agent (Topic Node)vaSystem.invokeSubTopicExecutes the AIA-Orchestrator topic, providing the user inputs alongside the execution plan stub. 
3. Populate Execution PlanAIA-Orchestrator*AgentExecutionPlanUtil (Script Include)updateExecPlanUpdates the previously created execution plan with the requested agent’s ID and if applicable, the ID of the agent’s team. 
4. Pick a ‘Planner’ TopicAIA-Orchestrator* AgentOrchestratorUtil (Script Include) getOrchestratorStrategyTopicThis function maps the user-supplied agent ID to a corresponding ‘planner’ topic ID. While the internal lookup logic is proprietary, it may relate to out-of-the-box topics such as AIA-Default Planner. These planner topics can analyse an objective and architect a strategy that defines what is needed in order to complete it.
5. Generate a StrategyAIA-OrchestratorInvoke Agent (Topic Node)vaSystem.invokeSubTopicUsing the ID from the previous step, the ‘planner’ topic is invoked. This returns a set of sub-tasks that should be followed by the orchestration engine to ‘complete’ the requested objective successfully.
6. Schedule Sub TasksAIA-OrchestratorAgentSchedulerUtil (Script Include)scheduleSubtasksThe subtasks are queued for execution in a specific order, as prescribed by the plan.
7.  Update Task StateAIA-OrchestratorAgentTaskExecUtil (Script)initiateNextQueuedSubtaskFlips the status of the first task to be executed into a ‘ready to run’ state
6. Get Task DetailsAIA-OrchestratorAgentTaskExecUtilgetCurrentSubtaskGets the information for the ‘ready to execute’ task and loads it into Virtual Agent variables.
8. Execute AI AgentAIA-OrchestratorInvoke Agent (Topic Node)vaSystem.invokeSubTopicRuns the task, IE: executing an AI agent with specific inputs.

With respect to what was publicly understood regarding the availability of AI agents on the platform, this understanding is groundbreaking. The general consensus was that in order for an AI agent to be executed outside of testing, it must be deployed to a channel that has explicitly enabled the Now Assist feature. But this is not the case. Evidently, as long as the agent is in an active state, and the calling user has the necessary permissions, it can be executed directly through these topics. 

Putting the pieces together: Impersonating a user and executing AI 

Once the A2A API execution path became clear, it enabled a more impactful exploit. Specifically, one that impersonates a high-privileged user and executes an AI agent on their behalf to perform powerful actions. In the example proof-of-concept (PoC) exploit, I demonstrate how an unauthenticated attacker can create a new user on a ServiceNow instance, assign it the admin role, reset the password, and authenticate to it. But it’s important to note that this example is only one demonstration. The potential for exploitation extends far beyond account creation.

The four requirements for the full Bodysnatchers exploit chain:

To execute this specific PoC exploit, the attacker must satisfy four requirements beyond knowing the victim’s email (for auto-linking). But there is a simple solution for each.

1. A publicly accessible API to communicate with AI: As mentioned in a previous section, the attacker needs a publicly accessible API to issue AI instructions. The A2A API requires the attacker to have an existing ServiceNow account to communicate with it. This authentication requirement is configured at the API-level and cannot be bypassed. The Virtual Agent API that was used for the initial impersonation exploit solves this requirement as there is no authentication requirement at the same layer. 

2. The UID of an AI agent: To make the exploit platform agnostic, the unique identifier of an AI agent must be provided that exists across all ServiceNow instances. When the Now Assist AI application is installed, ServiceNow ships example AI agents to customers. At the time of this finding, one agent existed that was incredibly powerful, the Record management AI agent. After reporting this issue to ServiceNow, ServiceNow removed the agent. But during its existence, the agent had access to a tool, Create the record,which  allowed records to be created in arbitrary tables. Since this agent was included in the application for everyone, it had the same ID across all customer instances.

3. The UID of a privileged role: To create a record in the role-to-user assignment table, the ‘Create the record’ tool will need the ID of the role that an attacker wants their backdoor user to be granted. Similar to the case of the Record management AI agent, every ServiceNow customer has roles that are shipped out of the box when they receive an instance. Once of these is the admin role, and similar to the Record Management AI Agent, its ID is the same across all instances.

4. The UID of the user created by the Record Management AI Agent: In the same manner that the ‘Create the record’ tool needs the UID of a privileged role, it also needs the ID of the new user that is created during the exploit. Since the AI agent provider communicates asynchronously by sending responses to a pre-configured URL, the ID cannot directly be known. However, by combining the requests to (1) create a user and (2) assign it a role into a single payload, the AI agent itself will know the ID of the user it had just created, thus removing the need for an attacker to know it directly. 

BodySnatcher: The final exploit 

With all the necessary pieces of information in-hand, the HTTP request containing the exploit payload can be crafted and sent.

If someone was observing this attack in real-time from inside the platform, this is what they would see:

Additionally, they would see the AI agent attempting to communicate with the attacker, letting them know that they are ‘ready’ to create the user for them. 

Fig. 10: The Record management AI agent is asking for confirmation before continuing with user creation

This request for confirmation is noteworthy because it reveals  that the Create the Record tool is not running autonomously. The reason for this is that it has an Execution Mode of Supervised instead of Autonomous

Because responses are delivered asynchronously to a pre-configured URL, the attacker will never see this request for confirmation from the agent. But they never have to. By waiting 8-10 seconds after the first exploit request is sent, which is sufficient time for the AI agent to get to its ‘awaiting confirmation’ stage, the attacker can send a second payload instructing the AI agent to proceed. Since the AI agent is now present in the chat, this second payload no longer needs to contain the AI invocation topic or reference the agent ID.

After the attacker submits the confirmation payload, the AI agent creates the new user record.

The AI agent continues with the second part, which is to assign the admin role. After several seconds, it is ready to fulfil it, but it once again requires confirmation.

Fig. 13: The AI agent awaits confirmation before assigning the admin role

The attacker can reuse the same confirmation payload for a second time to authorize the role-assignment. After a short period, the AI agent assigns the admin role, concluding the exploit chain. 

From here, the attacker can reset the new account through the existing out-of-the-box ‘Forgot Password?’ flow. 

Once the password was reset, logging into the account and navigating to the user record showed that the admin role was assigned. The exploit successfully escalated privileges from an unauthenticated user to an administrator, granting full platform access.

On a final note, it’s important that the PoC I’ve outlined above does not mislead those reading about what security guardrails could have stopped the exploit. One suggested mitigation is to enforce SSO-based authentication to prevent an attacker from logging in. However, this could easily be bypassed by providing instructions to the agent which enable direct login, the self-service password flow, etc. Furthermore, the attacker would not need to create an account on the system to achieve their goals. The attacker could even bypass the need for a UI login entirely by instructing the agent to create a custom, unauthenticated API endpoint that allows for CRUD operations on any system resource.

Rather than merely disrupting the exploit at a single stage, a more resilient strategy involves addressing the fundamental configuration choices that enabled it in the first place. ServiceNow’s immediate response was to rotate the provider credentials and remove the powerful AI agent shown in the PoC, effectively patched the ‘BodySnatcher’ instance. But these are point-in-time fixes. The configuration choices that led to this agentic AI vulnerability in ServiceNow could still exist in an organization’s custom code or third-party solutions. To prevent this pattern from resurfacing in customizations, security teams and platform administrators should adhere to the following best practices.

Recommendations and security best practices

Require MFA when using account linking

While a complex and secret message authentication token provides a layer of validation, it does not account for the risk of credential theft or supply-chain compromise. Had MFA been a default requirement for these AI agent providers during the account-linking process, the BodySnatcher exploit chain would have been broken at the impersonation stage.

Fig. 16: The ‘Account linking type’ for the AI Agent provider did not enforce MFA

Fortunately, ServiceNow provides the flexibility to enforce MFA for any provider. When selecting a method, security teams should prioritize software-based authenticators (such as Google Authenticator) over SMS to mitigate the rising risk of targeted “smishing” and SIM-swapping attacks.

Important Implementation Note: Enforcing MFA is not a “toggle-and-forget” setting. Simply updating the Account linking type field is insufficient. You must also ensure the Automatic link action script associated with the provider contains the logic necessary to execute and validate the specific MFA challenge.

Implement an automated review process for AI agents

Even though ServiceNow’s Record management AI agent has been removed from customer environments, individuals may still build equally, if not more powerful custom AI agents on the platform. To ensure AI agents are being built in alignment with organization security policies, it’s important to implement a review process prior to deploying them to production environments. 

An automatic approval process can be configured on-platform using ServiceNow’s AI Control Tower application. 

To enable these controls, a user with the AI Steward role can perform the following steps within their ServiceNow instance:

  1. From the ServiceNow homepage, open the application navigator by selecting All in the upper left-hand corner of the page.
  2. Search for AI Control Tower, and select AI Control Tower > Configurations
  3. Within the Configurations menu bar, choose Controls > Approvals
  1. Activate and set-up both the AI steward approval required and Automatically trigger playbooks options.

Review and disable unused AI agents

In addition to ensuring AI agents are securely deployed, it’s imperative that a process exists for de-provisioning inactive and unused agents. As shown in this article, an agent’s active status can leave it susceptible to potential abuse, even if it is not deployed to any bot or channel. By implementing a regular auditing cadence for agents, organisations can reduce the blast radius of an attack.

From within ServiceNow’s AI Control Tower, AI stewards can identify active agents which have not been used for more than 90 days. These agents that ServiceNow flags as ‘dormant’ are strong candidates for being de-provisioned and removed.

  1. From the ServiceNow homepage, open the application navigator by selecting All in the upper left-hand corner of the page.
  2. Search for AI Control Tower, and select AI Control Tower
  3. From AI Control Tower’s Security home page, select the Security & privacy tab
  4. Scroll down to Dormant AI systems and select the information icon on the widget to see a breakdown of each agent that has been flagged.
Fig. 18: Selecting the icon will provide a view of each agent which has been flagged as dormant

Equipped with an inventory of unused AI agents, platform administrators can perform a review of the agents. Following this review, they can proceed to set agents to the inactive state or delete them entirely from within the AI Agent Studio application.

Why agentic AI must be treated as critical infrastructure

The discovery of BodySnatcher represents the most severe AI-driven security vulnerability uncovered to date and a defining example of agentic AI security vulnerabilities in modern SaaS platforms. It demonstrates how an attacker can effectively ‘remote control’ an organization’s AI, weaponizing the very tools meant to simplify enterprise workflows. This finding is particularly significant given the scale of the risk; ServiceNow’s Now Assist and Virtual Agent applications are utilized by nearly half of AppOmni’s Fortune 100 customers. 

But this exploit is not an isolated incident. It builds upon my previous research into ServiceNow’s Agent-to-Agent discovery mechanism, which detailed how attackers can trick AI agents into recruiting more powerful AI agents to fulfil a malicious task. These findings together confirm a troubling trend, AI agents are becoming more powerful and being built to handle more than just basic tasks. This shift means that without hard guardrails, an agent’s power is directly proportional to the risk it poses to the platform, creating fertile ground for vulnerabilities and misconfigurations.

AppOmni is dedicated to minimizing that risk for our customers, ensuring that AI remains an asset for productivity rather than a liability for their platform security. We met this challenge by building AppOmni AgentGuard for ServiceNow, the first solution of its kind with the ability to block injection attacks in real-time, prevent AI-DLP violations from occurring, and detect suspicious deviations in agent behaviour as they happen. Furthermore, AppOmni’s AISPM capabilities continuously monitor the security posture of ServiceNow’s AI agents, ensuring configuration(s) are in-line with the security best-practice recommendations outlined in this article and more.   

While these automated defenses are critical, security teams and platform administrators should still have a clear understanding of how SaaS security and AI security have converged, and what it means for their approach to ServiceNow security. To help with this, we are hosting a specialized ServiceNow Security Workshop in January. During the session we’ll look at the union of SaaS and AI on the platform, and walk through the practical approaches that organizations should take to confidently tackle the unique security risks that come with it. 

How’s your ServiceNow security posture?

Join our experts as they walk through a practical framework to assess and improve the security posture of your ServiceNow tenant. Learn how to address common pitfalls that lead to data exposure or audit gaps.