Cybersecurity stands at a critical juncture as artificial intelligence capabilities advance to new frontiers. OpenAI's latest model, GPT-5.3-Codex, represents the most cyber-capable frontier reasoning model to date. While these capabilities offer tremendous potential for strengthening cyber defense, they also introduce new challenges that require careful management. OpenAI's response is Trusted Access for Cyber, an identity- and trust-based framework designed to ensure that enhanced cyber capabilities reach the right hands.

The Dual Nature of Cyber Capabilities

Cybersecurity represents one of the clearest areas where AI progress can both meaningfully strengthen the broader ecosystem and introduce new risks. The evolution of AI models has been dramatic: from systems that could auto-complete a few lines in a code editor, to models capable of working autonomously for hours or even days to accomplish complex tasks. These capabilities can dramatically strengthen cyber defense by accelerating vulnerability discovery and remediation.

To unlock the full defensive potential of these capabilities while reducing the risk of misuse, OpenAI is piloting Trusted Access for Cyber. This approach reflects the company's broader strategy for responsibly deploying highly capable models. In addition to this framework, OpenAI is committing $10 million in API credits to accelerate cyber defense.

The Urgent Need for Widespread Defensive Adoption

It is critically important that the world adopt frontier cyber capabilities quickly to make software more secure and to keep raising the bar of security best practices. Highly capable models can help organizations of all sizes strengthen their security posture, reduce response times, and improve resilience. They enable security professionals to better detect, analyze, and defend against the most severe and targeted attacks.

These advances have the potential to meaningfully raise the baseline of cyber defense across the ecosystem if they are deployed in the hands of people focused on protection and prevention. The defensive advantage is clear: there will soon be many cyber-capable models with broad availability from different providers, including open-weight models. OpenAI believes it is critical that its models strengthen defensive capabilities from the outset.

This is why the company is launching a trust-based access pilot that prioritizes getting its most capable models and tools in the hands of defenders first.

The Challenge of Dual-Use Technology

Cybersecurity technology presents a unique challenge: it can be difficult to determine whether any particular cyber action is intended for defense or for harm. For example, a request to "find vulnerabilities in my code" could be part of responsible patching and coordinated disclosure, or it could be an attempt to identify software vulnerabilities for exploitation.

Because of this inherent ambiguity, restrictions intended to prevent harm have historically created friction for good-faith work. OpenAI's approach aims to reduce that friction for legitimate security work while still preventing malicious activity.

A Trust-Based Approach to Frontier Cyber Capabilities

Frontier models like GPT-5.3-Codex have been designed with comprehensive mitigations. These include training the model to refuse clearly malicious requests, such as attempts to steal credentials. In addition to safety training, automated classifier-based monitors detect potential signals of suspicious cyber activity.

Developers and security professionals conducting cybersecurity-related work may encounter these mitigations as OpenAI calibrates its policies and classifiers. However, the company has created pathways to reduce this friction for legitimate users.
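To make the gating pattern concrete, here is a minimal sketch of a classifier-based monitor. This is purely illustrative: OpenAI's actual monitors are ML classifiers whose signals and thresholds are not public, so the keyword weights, function names, and threshold below are all hypothetical assumptions.

```python
# Hypothetical sketch of a classifier-based monitor for cyber-related
# requests. Real monitors are ML classifiers; this toy version scores
# weighted keyword signals purely to illustrate the gating pattern.

SIGNALS = {
    "steal credentials": 0.9,   # clearly malicious intent
    "exfiltrate": 0.9,
    "ransomware": 0.8,
    "find vulnerabilities": 0.4,  # dual-use: common in legitimate patching
    "port scan": 0.3,
}

FLAG_THRESHOLD = 0.7  # illustrative cutoff, not a real policy value


def score_request(prompt: str) -> float:
    """Return the highest-weighted signal found in the prompt."""
    text = prompt.lower()
    return max((w for s, w in SIGNALS.items() if s in text), default=0.0)


def review_request(prompt: str) -> str:
    """Route a request: below the threshold it proceeds, above it the
    request is held for review rather than silently served."""
    return "flag_for_review" if score_request(prompt) >= FLAG_THRESHOLD else "allow"
```

Note how the dual-use phrase "find vulnerabilities" scores below the flag threshold: this is exactly the ambiguity described above, and it is why legitimate security work can still trip real monitors when weights are calibrated more conservatively.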

Accessing Enhanced Capabilities

To use models for potentially high-risk cybersecurity work, users have several options:

Individual Verification: Users can verify their identity at chatgpt.com/cyber to gain access to enhanced capabilities for security research and defensive work.

Enterprise Trusted Access: Enterprises can request, through their OpenAI representative, that trusted access be enabled by default for their entire team. This streamlined approach lets security teams work efficiently without individual verification barriers.

Invite-Only Advanced Program: Security researchers and teams who need access to even more cyber-capable or permissive models to accelerate legitimate defensive work can express interest in OpenAI's invite-only program. This tier is designed for advanced security research where additional capabilities are necessary for effective defensive work.

All users with trusted access must still abide by OpenAI's Usage Policies and Terms of Use. The framework is designed to enable legitimate security work, not to circumvent fundamental safety and ethical guidelines.
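The three pathways above form an escalating set of trust tiers. The sketch below models them as a simple capability lookup; the tier names and capability labels are hypothetical illustrations of the structure described in this section, not OpenAI's actual API or tier definitions.

```python
from enum import Enum


class AccessTier(Enum):
    STANDARD = 0     # default mitigations apply to everyone
    VERIFIED = 1     # individual verification at chatgpt.com/cyber
    ENTERPRISE = 2   # org-wide trusted access via an OpenAI representative
    INVITE_ONLY = 3  # advanced program for vetted security researchers


# Capabilities unlocked at each tier (labels are illustrative only).
TIER_CAPABILITIES = {
    AccessTier.STANDARD: {"general_security_qna"},
    AccessTier.VERIFIED: {"general_security_qna", "vulnerability_research"},
    AccessTier.ENTERPRISE: {"general_security_qna", "vulnerability_research",
                            "team_wide_access"},
    AccessTier.INVITE_ONLY: {"general_security_qna", "vulnerability_research",
                             "team_wide_access", "advanced_models"},
}


def can_use(tier: AccessTier, capability: str) -> bool:
    """Check whether a tier unlocks a capability. Usage Policies still
    apply at every tier; this gate only reduces friction for legitimate
    work, it does not waive any policy."""
    return capability in TIER_CAPABILITIES[tier]
```

The key design point the sketch captures is that higher tiers strictly add capabilities rather than replacing them, and that no tier exempts a user from the underlying Usage Policies.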

Prohibited Activities Remain Prohibited

Trusted Access for Cyber is designed to reduce friction for defenders while preventing prohibited behavior. Activities that remain strictly forbidden include:

- Data exfiltration
- Malware creation or deployment
- Destructive or unauthorized testing
- Any activities that violate OpenAI's Usage Policies

The approach balances enabling security professionals to do their work effectively with maintaining strong safeguards against misuse. OpenAI expects to evolve its mitigation strategy and Trusted Access for Cyber over time based on learnings from early participants.

The Cybersecurity Grant Program: Scaling Defensive Impact

To further accelerate the use of frontier models for defensive cybersecurity work, OpenAI is committing $10 million in API credits for teams through its Cybersecurity Grant Program. This significant investment aims to empower security teams to leverage the most capable AI models for protective purposes.

The program specifically seeks to partner with teams that have a proven track record of identifying and remediating vulnerabilities in open source software and critical infrastructure systems. These are the teams whose work has the broadest positive impact on ecosystem security.

Teams can apply for the program through OpenAI's application portal. The focus is on enabling security work that strengthens the foundations of the software ecosystem, particularly in areas where vulnerabilities could have widespread consequences.

Building on Previous Cybersecurity Initiatives

This $10 million commitment builds on OpenAI's $1 million Cybersecurity Grant Program launched in 2023. The expanded program reflects both the increased capabilities of frontier models and the growing potential for AI to meaningfully improve cybersecurity outcomes at scale.

Evolution and Learning

OpenAI's approach to Trusted Access for Cyber is explicitly iterative. The company recognizes that deploying powerful cyber capabilities requires continuous learning and adaptation. By starting with a pilot program and gathering feedback from early participants, OpenAI can refine its approach based on real-world experience.

This learning process will inform how the framework evolves, including:

- Calibration of automated monitoring systems
- Refinement of verification processes
- Adjustment of capability boundaries
- Development of new safeguards based on observed usage patterns

The goal is to create a system that maximizes defensive utility while minimizing the risk of misuse, informed by actual deployment experience rather than theoretical concerns alone.

A Model for Responsible AI Deployment

Trusted Access for Cyber represents a broader philosophy about how to deploy powerful AI capabilities responsibly. Rather than choosing between broad availability and strict restriction, OpenAI is pioneering a middle path: capability-based access grounded in identity and trust.

This approach acknowledges several realities:

1. Powerful cyber capabilities will become widely available regardless of any single company's policies
2. Defenders need these capabilities urgently to keep pace with evolving threats
3. Trust-based systems can reduce friction for legitimate use while creating barriers for misuse
4. Iterative deployment with learning loops enables continuous improvement

Looking Forward

The introduction of Trusted Access for Cyber marks an important milestone in the deployment of frontier AI capabilities for cybersecurity. By prioritizing defensive use, reducing friction for legitimate security work, and committing substantial resources through the Cybersecurity Grant Program, OpenAI is working to ensure that AI advances strengthen rather than weaken overall security.

As cyber threats continue to evolve in sophistication and scale, the tools available to defenders must evolve as well. Trusted Access for Cyber represents OpenAI's commitment to ensuring that its most capable models serve as force multipliers for security professionals working to protect systems, data, and users worldwide.

The success of this approach will be measured not just in technical capabilities, but in real-world security outcomes: fewer successful attacks, faster vulnerability remediation, and a more resilient software ecosystem overall.

Source: Introducing Trusted Access for Cyber - OpenAI Blog