A swarm of bots armed with your credit card information sounds like a glaring-red signal to cancel the card. But a swarm of bots with your credit card information—and permission to buy those jeans you’ve been eyeing? Doesn’t sound so bad.
Yet “shopping” with tools like OpenAI or Perplexity could wreak havoc on companies that already struggle to distinguish between so-called good and bad bots, warns Experian in its 2026 Future of Fraud Forecast, published today. The No. 1 threat to companies, according to the forecast, is “machine-to-machine mayhem” in which cybercriminals blend good bots doing your shopping with bad bots tasked with fraud.
“It’s not enough anymore to say that it’s a bot, so we need to stop this traffic,” said Kathleen Peters, chief innovation officer for fraud and identity at Experian North America. “Now, we need to say, ‘Is it a good bot or is it a malicious bot?’”
The U.S. Federal Trade Commission last year found that consumers lost more than $12.5 billion to fraud, while nearly 60% of companies reported an increase in losses from 2024 to 2025. Strikingly, financial losses ballooned by 25% even as the number of fraud reports held steady at 2.3 million a year, showing that schemes are getting more effective at cheating consumers and companies out of their money.
In a separate survey released in July, Experian reported that 72% of business leaders believe that AI-enabled fraud and deepfakes will be among their top operational challenges this year.
The company predicts this year will be a “tipping point” for AI-enabled fraud that will force conversations about liability and regulation around agentic AI in e-commerce, Peters said. “We want to let the good agents through to provide convenience and efficiency, but we need to make sure that doesn’t accidentally become a shortcut for bad actors,” she said.
Some e-commerce companies already block AI agents. Amazon, for example, generally blocks bots from independent third parties from browsing and shopping on its platform, and sued to block Perplexity AI agents from shopping autonomously late last year. The e-commerce giant has publicly stated the move is to protect security and privacy.
Yet Peters warns that retailers will soon need to grapple with how to manage AI bots once consumers give agents permission to shop for them. She notes that retailers will need to confirm that a consumer gave the agent permission; that the agent is faithful to the consumer’s intent; that the agent has permission to buy and not just browse; and that there’s an actual consumer behind the bot, and not another cybercriminal.
Disruption is also on the table. Retailers want direct engagement with customers to recommend products, build loyalty, and gather data. Some—or all—of that could be crippled if an autonomous agent just completes a transaction and then vanishes.
Deepfake employees infiltrate companies
The second greatest threat for the year, according to Experian, are deepfake candidates infiltrating remote workforces. This threat has already materialized: The FBI and Department of Justice issued multiple warnings last year about documented North Korean operatives posing as IT workers to get jobs and send their salaries back to the regime. These fake IT workers use deepfake technology and identity manipulation to gain employment at hundreds of U.S. companies.
Experian predicts employment fraud will escalate as improved AI tools allow deepfake candidates to get through interviews more easily. Companies will unwittingly onboard these fake employees and grant them access to internal systems.
Beyond state-backed fraud, Peters said the tight labor market could also spur desperate job seekers to monetize their skills to get a job or to help a candidate get through an interview. Lucrative, fully remote data science jobs with robust salaries usually require technical proficiencies that are gauged in an interview. As deepfake tools improve, it will likely get harder for companies to tell how an interviewee is faring.
“It’s a very competitive job market out there, and individuals may offer their services to get through a technical interview,” she said.
Threats on the horizon
The forecast warns of three other trends expected to ramp up in 2026.
Smart home devices, including virtual assistants, smart locks, and security systems, will introduce new weaknesses that cybercriminals could exploit.
Website cloning could overwhelm fraud teams as AI tools make it simpler to replicate legitimate websites for attacks.
Intelligent bots with high emotional IQs will carry out automated romance and family-member-in-need scams with intense sophistication.
Just as companies are looking to increase their efficiency through AI, cybercriminals are getting more efficient. AI has “democratized” access to these powerful tools to not just engineers, but fraudsters as well, Peters said. “With less expertise, they’re able to create more convincing scams and more convincing text messages that they can blast out at scale.”
Fed chair | President Donald Trump announced on Friday that he will nominate Kevin Warsh to succeed Jerome Powell as chair of the Federal Reserve, ending a monthslong selection process that has challenged the board’s independence. Warsh served as a Fed governor from 2006 to 2011 and was a finalist for the job that went to Powell in 2017. He has emerged as a critic of the central bank, claiming it needs “regime change.” He also supports lowering interest rates, aligning with the president. Warsh must be confirmed by the Senate before taking on the role.
Imagine an unauthenticated attacker who has never logged into your ServiceNow instance and has no credentials, and is sitting halfway across the globe. With only a target’s email address, the attacker can impersonate an administrator and execute an AI agent to override security controls and create backdoor accounts with full privileges. This could grant nearly unlimited access to everything an organization houses, such as customer Social Security numbers, healthcare information, financial records, or confidential intellectual property.
This is not theoretical. I discovered a critical vulnerability, tracked as CVE-2025-12420, in the popular ServiceNowVirtual Agent API and the Now Assist AI Agents application. By chaining a hardcoded, platform-wide secret with account-linking logic that trusts a simple email address, an attacker can bypass multi-factor authentication (MFA), single sign-on (SSO), and other access controls. And it’s the most severe AI-driven security vulnerability uncovered to date. With these weaknesses linked together, the attacker can remotely drive privileged agentic workflows as any user.
This deep dive explains BodySnatcher, analyzing the specific interplay between the Virtual Agent API and Now Assist that enabled this exploit. It details how insecure configurations transformed a standard natural language understanding (NLU) chatbot into a silent launchpad for malicious AI agent execution.
Vulnerability Details
This vulnerability affected ServiceNow instances running the following application versions.
Application
Affected Versions (Inclusive)
Earliest Known Fixed Versions
Now Assist AI Agents (sn_aia)
5.0.24 – 5.1.17, and 5.2.0 – 5.2.18
5.1.18 and 5.2.19
Virtual Agent API (sn_va_as_service)
<= 3.15.1 and 4.0.0 – 4.0.3
3.15.2 and 4.0.4
Disclosure Timeline
October 23rd 2025
AppOmni report vulnerability to ServiceNowServiceNow acknowledge receipt of vulnerability
October 30th 2025
ServiceNow remediate vulnerabilityServiceNow send email communication to customers informing them of the vulnerability ServiceNow release KB article accrediting Aaron Costello & AppOmni with the finding
Virtual Agent internals: A necessary detour
Understanding Virtual Agent
Those familiar with ServiceNow will know that Virtual Agent walked so that Now Assist AI could run. Virtual Agent is ServiceNow’s enterprise chatbot engine. It gives users a conversational way to interact with the system’s underlying data and services. Virtual Agent works through deterministic Topic Flows. It uses Natural Language Understanding (NLU) to determine user intent from an incoming message, then maps that intent to a specific pre-defined Topic. In the ServiceNow ecosystem, a “topic” is a structured workflow designed to complete a particular task, such as resetting a password or filing a ticket. Topics are ultimately limited to the paths explicitly defined by the developer.
ServiceNow’sVirtual Agent API lets conversations occur outside the ServiceNow web interface. This API acts as a bridge between external integrations, such as chat bots, and Virtual Agent. Organizations can use it to expose Virtual Agent topics to platforms like Slack and Microsoft Teams. Enterprise organizations adopt this architecture because employees can order hardware, file support tickets, or access helpful knowledge-base content without ever needing to log in to ServiceNow directly.
Fig. 2: A simplified view of bot-to-bot communication using the Virtual Agent API
ServiceNow’s Virtual Agent API: The basic concepts
To handle external messages, Virtual Agent must know who is requesting information and what the message contains. Large organizations will likely need integrations for different platforms to facilitate the needs of various teams or departments. Each integration might send user messages to the Virtual Agent API in different formats.
ServiceNow’s Virtual Agent API solves this by introducing providers and channels. Each integration uses its own provider within ServiceNow, which defines how incoming messages are authenticated and transformed so Virtual Agent can understand them. This architecture removes the need to create new API endpoints for each integration. Instead, all bots use the same out-of-the-box Virtual Agent API endpoint and simply include the channel identifier as part of their requests. The channel ID lets ServiceNow locate the provider record and interpret the data it received.
Fig. 3: An example relationship diagram for a provider that uses message authentication
How providers enforce authentication and perform identity-linking
The Contextual and Provider Attributes actions determine the ‘what’. They map the data from API requests into a format that the Virtual Agent understands, assigning the data to variables that the Virtual Agent uses for regular on-platform conversations.
The Automatic Link action and the Message Auth record determine the ‘who’.
Message Auth is an authentication method that external integrations can use as an alternative to OAuth or Basic Auth. Itauthenticates the integration to a particular provider. The Message Auth record holds a static credential, effectively acting as the client-secret or ‘password’ for the provider. When authenticating to the Virtual Agent API, this credential is presented in the request alongside the provider’s identifier. The reason for focusing on this specific method of authentication is because it is the form of authentication used by providers that were introduced in version 5.0.24 of the Now Assist Agents application.
While Message Auth authenticates the integration itself, users interacting with the chatbot integration on an external platform such as Slack still need to identify themselves to ServiceNow. One way this can happen is through a feature called Auto-Linking. When enabled, auto-linking lets the provider automatically associate an individual on an external platform with their ServiceNow account. The Automatic Link Action script defines how this matchhappens. This linking of these identities is crucial because it ensures that all data accessed and any actions made through Virtual Agent occur in the context of the correct user account.
This framework of providers, message authentication, and auto-linking gives third-party tools a customizable and seamless way to talk to ServiceNow’s Virtual Agent chatbot(s). However, the security of this communication model relies entirely on the integrity of the specific provider records. in particular, their associated secrets and auto-linking logic. When the Now Assist AI Agents application introduced new providers that leveraged these mechanisms insecurely, it exposed a path attackers could systematically abuse.
Insecure AI providers: Exploiting auto-linking using shared credentials
As ServiceNow enhanced the on-platform Virtual Agent capabilities to allow user communication with AI agents, the Now Assist AI Agents application introduced new providers to extend the capabilities over the Virtual Agent API. These new providers pushed the Virtual Agent API beyond its bot-to-bot use cases and enabled it to support bot-to-agent or agent-to-agent interactions.
These new ‘AI Agent’ channel providers shared a number of configurations such as using message authentication to validate inbound API requests. Because of this design, authenticating to any of these providers required only the single, non-rotating static client secret that they had been configured with. It’s reasonable to assume ServiceNow chose this approach to provide a more seamless experience for end users, fully leveraging the transparent nature of auto-linking. However, the implementation suffered from two primary problems.
First, these providers shipped with the exact same secret across all ServiceNow instances. This meant anyone who knew or obtained the token could interact with the Virtual Agent API of any customer environment where these providers were active. Possessing this shared token alone did not grant elevated privileges, since Virtual Agent still treated the requester as an unauthenticated external party. Nevertheless, the token provided a universal, instance-agnostic authentication bypass that should never have existed at all.
Second, and more critically, the Auto-Linking logic trusted any requester who supplied the shared token. The channel provider(s) used Basic account-linking, which meant they did not enforce multi-factor authentication. As a result, the provider required only the email address to link an external entity to a ServiceNow account. Once the requester provided a valid and existing email address, the provider linked them to that user. Subsequent Virtual Agent interactions processed all further interactions under the identity of the impersonated account. In practical terms, any unauthenticated attacker could impersonate any user during a conversation simply by knowing their email address.
The net security risk of these problems alone was relatively minimal. At best an attacker could supply an undocumented ‘live_agent_only’ parameter in their message payload to the Virtual Agent API, which would force the Virtual Agent to pass-off the message content to a real human (if supported by the organization). By sending a message as a trusted user to a member of an organization’s IT support staff, a phishing risk is surfaced.
A proof-of-concept (PoC) HTTP request to the Virtual Agent API demonstrates this behavior. It uses one of the vulnerable AI providers, ‘default-external-agent’, to deliver a phishing payload to a human live support agent from the admin’s (ad***@*****le.com) account.
But even this phishing vector had limited impact because the ‘AI Agent’ channel used by these providers operated asynchronously by design. In other words, attackers could send messages as any user, but support staff responses went to a pre-configured outbound URL that was outside of the attacker’s control. This resulted in one-way communication which further limited any practical impact.
How A2A requests enter the Virtual Agent framework
To understand how the exploit gains real impact, it is important to recall that the intended purpose of these AI agent providers was never to serve as ‘yet another channel provider’ for Virtual Agent bot-to-bot communications. ServiceNow introduced these providers to support the agent-to-agent protocol, which is designed to allow external AI agents to interact with ServiceNow agents in a standardized manner.
To support this capability, the Now Assist AI Agents application includes an A2A Scripted REST API. Although this API is gated behind authentication, its internal behavior is noteworthy. The API reformats incoming POST data into the same structure the Virtual Agent API uses, then inserts the resulting payload into the Virtual Agent server queue. In effect, the API functions as an adapter for Virtual Agent.
Below is a visual that provides a high-level breakdown of the code that facilitates this process.
Of the code functions depicted above, the _getContextVars function is most important for understanding the inputs needed for an attacker to trigger AI agent execution. But the code is ambiguous because it references constants which aren’t visible in the script.
These constants come from a separate Script Include, sn_aia.AIAgentConstants. But this script has a Protection policy of Protected, which prevents viewing the source code in the UI.
Dumping cross-scope constants: A refresher in application access controls
Although the source code is not visible in the UI, the Accessible From field in the previous image was set to All application scopes. This means other application scopes can still access the script’s values. ServiceNow configured it this way because its code is used and referenced by other vendor-supplied scripts that exist in other scopes, such as Global.
Attackers or researchers can take advantage of this by running a Background Script. Background Script lets administrators execute arbitrary Javascript code on the fly in ServiceNow. Through this means, an admin can dump the constant object _getContextVars references with the following one-liner:
Introducing AIA-Agent Invoker AutoChat
The default_topic and topic values defined in the script correspond to a ServiceNow record identifier, or sys_id. In this case, it is the identifier of a topic record labeled AIA-Agent Invoker AutoChat. As hinted by the code in the previous section, the purpose of this topic is to execute AI agents through Virtual Agent.
Generally, topics such as this can be inspected using Virtual Agent Designer, an on-platform application that can be used to visualize a topic’s functionality in a workflow-style format. But this particular topic is restricted from being opened in Virtual Agent Designer by customers. In fact, if you attempt to access it, you will encounter a Security Violation error page.
You can still access the topic’s metadata when opening it directly, outside of the tool. However it will be presented as a tangled web of JSON structures and Javascript code. For clarity, I have distilled what I consider the most important parts of the topic’s code into a table of high-level actions. This representation is intended to be illustrative rather than literal and should not be read as a fully prescriptive implementation. Additionally, some of the code functions being called are inaccessible due to script protection policies. In these cases I’ve taken a ‘best effort’ approach at determining the actions that a particular function call is making, based on surrounding logic and the function signature.
Putting the pieces together: Impersonating a user and executing AI
Once the A2A API execution path became clear, it enabled a more impactful exploit. Specifically, one that impersonates a high-privileged user and executes an AI agent on their behalf to perform powerful actions. In the example proof-of-concept (PoC) exploit, I demonstrate how an unauthenticated attacker can create a new user on a ServiceNow instance, assign it the admin role, reset the password, and authenticate to it. But it’s important to note that this example is only one demonstration. The potential for exploitation extends far beyond account creation.
The four requirements for the full Bodysnatchers exploit chain:
To execute this specific PoC exploit, the attacker must satisfy four requirements beyond knowing the victim’s email (for auto-linking). But there is a simple solution for each.
1. A publicly accessible API to communicate with AI: As mentioned in a previous section, the attacker needs a publicly accessible API to issue AI instructions. The A2A API requires the attacker to have an existing ServiceNow account to communicate with it. This authentication requirement is configured at the API-level and cannot be bypassed. The Virtual Agent API that was used for the initial impersonation exploit solves this requirement as there is no authentication requirement at the same layer.
2. The UID of an AI agent: To make the exploit platform agnostic, the unique identifier of an AI agent must be provided that exists across all ServiceNow instances. When the Now Assist AI application is installed, ServiceNow ships example AI agents to customers. At the time of this finding, one agent existed that was incredibly powerful, the Record management AI agent. After reporting this issue to ServiceNow, ServiceNow removed the agent. But during its existence, the agent had access to a tool, Create the record,which allowed records to be created in arbitrary tables. Since this agent was included in the application for everyone, it had the same ID across all customer instances.
3. The UID of a privileged role: To create a record in the role-to-user assignment table, the ‘Create the record’ tool will need the ID of the role that an attacker wants their backdoor user to be granted. Similar to the case of the Record management AI agent, every ServiceNow customer has roles that are shipped out of the box when they receive an instance. Once of these is the admin role, and similar to the Record Management AI Agent, its ID is the same across all instances.
4. The UID of the user created by the Record Management AI Agent: In the same manner that the ‘Create the record’ tool needs the UID of a privileged role, it also needs the ID of the new user that is created during the exploit. Since the AI agent provider communicates asynchronously by sending responses to a pre-configured URL, the ID cannot directly be known. However, by combining the requests to (1) create a user and (2) assign it a role into a single payload, the AI agent itself will know the ID of the user it had just created, thus removing the need for an attacker to know it directly.
Recommendations and security best practices
Require MFA when using account linking
While a complex and secret message authentication token provides a layer of validation, it does not account for the risk of credential theft or supply-chain compromise. Had MFA been a default requirement for these AI agent providers during the account-linking process, the BodySnatcher exploit chain would have been broken at the impersonation stage.
Fortunately, ServiceNow provides the flexibility to enforce MFA for any provider. When selecting a method, security teams should prioritize software-based authenticators (such as Google Authenticator) over SMS to mitigate the rising risk of targeted “smishing” and SIM-swapping attacks.
Important Implementation Note: Enforcing MFA is not a “toggle-and-forget” setting. Simply updating the Account linking type field is insufficient. You must also ensure the Automatic link action script associated with the provider contains the logic necessary to execute and validate the specific MFA challenge.
Implement an automated review process for AI agents
Even though ServiceNow’s Record management AI agent has been removed from customer environments, individuals may still build equally, if not more powerful custom AI agents on the platform. To ensure AI agents are being built in alignment with organization security policies, it’s important to implement a review process prior to deploying them to production environments.
An automatic approval process can be configured on-platform using ServiceNow’s AI Control Towerapplication.
To enable these controls, a user with the AI Steward role can perform the following steps within their ServiceNow instance:
From the ServiceNow homepage, open the application navigator by selecting All in the upper left-hand corner of the page.
Search for AI Control Tower, and select AI Control Tower> Configurations
Within the Configurations menu bar, choose Controls> Approvals
Activate and set-up both the AI steward approval required and Automatically trigger playbooks options.
Review and disable unused AI agents
In addition to ensuring AI agents are securely deployed, it’s imperative that a process exists for de-provisioning inactive and unused agents. As shown in this article, an agent’s active status can leave it susceptible to potential abuse, even if it is not deployed to any bot or channel. By implementing a regular auditing cadence for agents, organisations can reduce the blast radius of an attack.
From within ServiceNow’s AI Control Tower, AI stewards can identify active agents which have not been used for more than 90 days. These agents that ServiceNow flags as ‘dormant’ are strong candidates for being de-provisioned and removed.
From the ServiceNow homepage, open the application navigator by selecting All in the upper left-hand corner of the page.
Search for AI Control Tower, and select AI Control Tower
From AI Control Tower’s Security home page, select the Security & privacy tab
Scroll down to Dormant AI systems and select the information icon on the widget to see a breakdown of each agent that has been flagged.
Equipped with an inventory of unused AI agents, platform administrators can perform a review of the agents. Following this review, they can proceed to set agents to the inactive state or delete them entirely from within the AI Agent Studio application.
Why agentic AI must be treated as critical infrastructure
The discovery of BodySnatcher represents the most severe AI-driven security vulnerability uncovered to date and a defining example of agentic AI security vulnerabilities in modern SaaS platforms. It demonstrates how an attacker can effectively ‘remote control’ an organization’s AI, weaponizing the very tools meant to simplify enterprise workflows. This finding is particularly significant given the scale of the risk; ServiceNow’s Now Assist and Virtual Agent applications are utilized by nearly half of AppOmni’s Fortune 100 customers.
But this exploit is not an isolated incident. It builds upon my previous research into ServiceNow’s Agent-to-Agent discovery mechanism, which detailed how attackers can trick AI agents into recruiting more powerful AI agents to fulfil a malicious task. These findings together confirm a troubling trend, AI agents are becoming more powerful and being built to handle more than just basic tasks. This shift means that without hard guardrails, an agent’s power is directly proportional to the risk it poses to the platform, creating fertile ground for vulnerabilities and misconfigurations.
AppOmni is dedicated to minimizing that risk for our customers, ensuring that AI remains an asset for productivity rather than a liability for their platform security. We met this challenge by building AppOmni AgentGuard for ServiceNow, the first solution of its kind with the ability to block injection attacks in real-time, prevent AI-DLP violations from occurring, and detect suspicious deviations in agent behaviour as they happen. Furthermore, AppOmni’s AISPM capabilities continuously monitor the security posture of ServiceNow’s AI agents, ensuring configuration(s) are in-line with the security best-practice recommendations outlined in this article and more.
While these automated defenses are critical, security teams and platform administrators should still have a clear understanding of how SaaS security and AI security have converged, and what it means for their approach to ServiceNow security. To help with this, we are hosting a specialized ServiceNow Security Workshopin January. During the session we’ll look at the union of SaaS and AI on the platform, and walk through the practical approaches that organizations should take to confidently tackle the unique security risks that come with it.
Verizon wireless network extender 4g lte | As a new day dawns, Verizon’s cell services seem to be back to normal. But we’re still waiting for answers on what happened the day before.
Verizon’s wireless network service abruptly went down around 12:30 pm Eastern/9:30 am Pacific on January 14th, forcing phones into SOS mode for customers up and down the eastern seaboard of the United States.
The company quickly acknowledged the issue on X, but didn’t give an estimate for repair time – understandable, but frustrating for its users.
On the outage tracking site Down Detector, reports hit an initial peak of 115,000 before surging to over 180,000. Reports have been in a steady decline since 2:30 pm ET, but are still sitting around 30,000 as of 9:20 pm.
Several readers reached out to Tom’s Guide, saying service is affected from South Florida to Albany, New York as far west as Harrisburg, Pennsylvania. New York City appears to be a hot spot as well. Later emails shifted the outage out to Texas and Missouri
We saw reports that AT&T and T-Mobile were affected, but those seem related to Verizon’s outage and company reps have confirmed that those networks remain stable. verizon wireless network extender 4g lte
At 9:00 pm ET/6 pm PT, Verizon finally provided a new statement that wasn’t just a reiteration of its “We’re working on it” message that was repeated throughout the day. The company apologized for the outage and promised to “make it right.”
Unfortunately, it doesn’t appear that service will see a complete recovery soon. “We are working non-stop and making progress. Our teams will continue to work through the night until service is restored for all impacted customers.”
Account credits and updates have been promised but concrete details have yet to be shared. Follow along with Tom’s Guide as we provide live updates on the fallout from the Verizon outage.
Several major data breaches are linked to a threat actor who relies on stolen credentials to compromise enterprise networks, Hudson Rock reports.
Operating under the moniker ‘Zestix’ but also linked to the online persona ‘Sentap’, the threat actor is an initial access broker (IAB) who was also seen exfiltrating victim data and selling it on hacker forums.
According to Hudson Rock, Zestix emerged as a distinct entity in late 2024-early 2025, but its activities can be linked to Sentap operations that have been ongoing since 2021.
Both personas can be linked to information-stealer infections resulting in the compromise of global enterprises operating in the aerospace, government infrastructure, legal, and robotics sectors.
The credentials, Hudson Rock says, were harvested from the personal or work devices of employees at the victim organizations using information stealers such as RedLine, Lumma, and Vidar.
“While some credentials were harvested from recently infected machines, others had been sitting in logs for years, waiting for an actor like Zestix to exploit them,” Hudson Rock notes.
The lack of multi-factor authentication (MFA) protections on accounts with access to file-transfer instances such as ShareFile, OwnCloud, and Nextcloud has allowed Zestix/Sentap to use the compromised credentials successfully on roughly 50 occasions.
The exfiltrated data is then offered for sale on closed Russian-language forums, but Zestix was also seen selling access to the compromised systems.
Zestix/Sentap victims
According to Hudson Rock, Zestix has established a reputation for reliability. This explains why they were asking $150,000 for the 77 GB of data allegedly stolen from Iberia, the Spanish flag carrier.
Other victims include Pickett & Associates (an engineering firm serving energy organizations), Intecro Robotics (aerospace and defense equipment maker), Maida Health (serves the Brazilian military police), CRRC MA (rolling stock maker subsidiary), K3G (Brazilian ISP), NMCV Business LLC (manages data for US healthcare facilities), and over a dozen others.
Under the Sentap moniker, the threat actor built a wider list of victims, but Hudson Rock says it could not link these breaches to file-sharing services or infostealer infections.
“It is possible that they still stem from similar Infostealer credentials based on the high number of victims we did identify to have infostealer credentials to those services, but we do not rule out access via another initial access,” Hudson Rock says.
The threat actor has claimed massive breaches at Pan-Pacific Mechanical (1.04 TB), Bradley R. Tyer & Associates (1.02 TB), The Providence Group (1 TB), Australian NBN (306 GB), UrbanX.io (275 GB), and dozens of others.
The infostealer problem
According to Hudson Rock, credentials pertaining to thousands of organizations that use ShareFile, OwnCloud, and Nextcloud are circulating in infostealer logs, including those of prominent names such as Deloitte, Honeywell, KPMG, Samsung, and Walmart.
“These organizations have employees or partners who have been infected, leaving valid sessions or credentials to sensitive file repositories exposed to actors like Zestix,” the cybersecurity firm notes.
The issue, however, has been around for a long time and is unlikely to be easily resolved. The information stealer industry is fueling modern cybercrime, acting as the starting point for data breaches, identity theft, and fraud.
“Stealers are an example of the commodification of cybercrime delivered through malware-as-a-service (MaaS),” SpyCloud Labs SVP of security research Trevor Hilligoss said in a discussion with SecurityWeek.
“You no longer need to be a skilled developer or hacker to gain access to tools that are incredibly effective when deployed at scale. Anyone can just buy or hire readymade malware from the MaaS marketplace,” Hilligoss added.
The success of information stealers builds on speed and stealth. They exfiltrate sensitive information in minutes and are often removed from the infected devices immediately after, leaving minimal traces of wrongdoing.
You should head to your phone’s settings app today to see if you have an update to install.
Android has come a long way in the past decade. When I first started professionally reviewing smartphones in 2016, I fell in love with the variety of design and specs you can choose from depending on your budget, but one thing I couldn’t forgive was the general lack of software updates.
Even major Android players such as Samsung only offered two or maybe three years of software updates on even their priciest handsets, and it wasn’t unusual to see cheaper Android phones get barely a year of support, leaving them vulnerable to software bugs and online hacks when their owners should have been encouraged to hold onto the devices for as long as they were functional.
Thankfully it’s a different story in 2026. Google and Samsung now offer seven years of software updates for many of their smartphones, while firms such as Honor and OnePlus have improved their software promises too.
It means if you have a modern Android phone, you will enjoy monthly security fixes in the form of free software updates. These updates land in the settings apps of Android phones, starting with Google’s own Pixel devices. Because Google owns and maintains Android, it patches security and privacy bugs and is able to push out updates fastest to its own phones.
Google publishes a monthly Android Security Bulletin that publicly lays out what it has fixed in the latest Android update. Once this is out, all Android manufacturers are able to implement the fixes and push them out to their customers’ devices. It’s all on these Android brands to make sure these software updates reach your phone.
If you have an Android phone, it’s a good idea to head to your Settings app and see if you have any software updates waiting to be installed. This month’s update could be there for you, and one expert says you should not delay downloading it.
“Although the security bulletin released by Google is short, it addresses a serious and long-running flaw that Android users should not ignore,” said Adam Boynton, Senior Security Strategy Manager EMEIA at Jamf, a security firm. “The vulnerability was discovered in 2025; however, this fix means it has now been patched at the Android platform level.”
“The vulnerability, CVE-2025-54957, is a flaw in Dolby’s DD+ (Dolby Digital Plus) Unified Decoder that allows an attacker to run malicious code. Most notably, on Android OS, audio attachments and voice messages are decoded locally; therefore, the flaw can be exploited without any user interaction.”
This sounds ominous, though in reality you are very unlikely to be personally targeted with any kind of hack, even if you havent updated your phone this month. However, if you were still using a phone from 2016 with no modern updates like I mentioned previously, you would be wide open to a plethora of security flaws that had built up over the years.
Smartphone updates are much more frequent than they once were, and that is a good thing.
“This month’s bulletin is a reminder that regular patching is one of the most effective ways to reduce mobile risk,” Boynton added. “Whether using Android or iOS, keeping devices updated remains the single best defence against modern mobile threats.” Contact Us