Fixed Intel
Aggregated Intel
Medium
Industry NewsImpact: 55/10

OpenAI Launches Bug Bounty Program for Abuse and Safety Risks

Through the new program, OpenAI will reward reports covering design or implementation issues leading to material harm.

FIFixed Intel Team||2 min read|2 Views
OpenAI Launches Bug Bounty Program for Abuse and Safety Risks

AI-Generated Summary

OpenAI has launched a public safety bug bounty program focused on AI-specific abuse and safety risks, complementing its existing security bug bounty program. The program covers issues such as third-party prompt injection, data exfiltration, agentic product misuse, and platform integrity weaknesses across products like Atlas Browser, Codex, Operator, and ChatGPT tools. Researchers can earn up to $7,500 for high-severity, reproducible findings submitted through Bugcrowd.

Affected Sectors

TechnologyArtificial IntelligenceFinancial ServicesHealthcareEducationRetail

Frameworks

NIST CSFISO27001NIST AI RMFOWASP Top 10 for LLMNCA-ECC

Aggregated from SecurityWeek

This article was automatically aggregated from an external source. Content may be summarized.

Read Original

Full Analysis

OpenAI has announced a new public safety bug bounty program focused on AI-specific abuse and safety risks in its products.

The new program complements OpenAI’s existing security bug bounty program and is open to issues that do not meet the criteria for a security vulnerability.

“Submissions will be triaged by OpenAI’s Safety and Security Bug Bounty teams and may be rerouted between the two programs depending on scope and ownership,” OpenAI says.

AI-specific safety scenarios covered by the new program include third-party prompt injection and data exfiltration attacks, disallowed actions performed by agentic OpenAI products on the company’s website at scale, and other harmful actions performed by the products.

The program also accepts submissions regarding issues that lead to the exposure of OpenAI’s proprietary information, as well as weaknesses in account and platform integrity.

“If researchers identify flaws that facilitate direct paths to user harm and actionable, discrete remediation steps, these may be considered in scope for rewards on a case-by-case basis,” OpenAI notes.

Advertisement. Scroll to continue reading.

The program runs on Bugcrowd and follows the same rules as the company’s security bug bounty program, with several additions.

Per the rules, design and implementation issues in OpenAI products that could lead to material harm are within the scope of the program, including flaws resulting in abuse protection bypasses.

Researchers are encouraged to identify abuse risks in agentic OpenAI products that perform actions on behalf of the user or access data as the user, including Atlas Browser, Codex, Operator, Connectors, and other ChatGPT tools.

Vulnerabilities in connectors and MCP integrators that can be abused to cause material harm are also accepted.

Researchers may earn up to $7,500 for reports that detail consistently reproducible issues of high severity, and which include a clear set of recommended steps or mitigations. However, OpenAI says reward decisions and amounts are up to its discretion.

Related: Google Paid Out $17 Million in Bug Bounty Rewards in 2025

Related: OpenAI Rolls Out Codex Security Vulnerability Scanner

Related: Microsoft Bug Bounty Program Expanded to Third-Party Code

Related: From Open Source to OpenAI: The Evolution of Third-Party Risk


Originally published by SecurityWeek

Original Source

SecurityWeek