Although you may not have heard the term, an agentic AI security team is one that automates threat detection and response using intelligent AI agents. I mention ...
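To make the idea concrete, here is a minimal Python sketch of that detect-classify-respond agent loop. Everything in it is an illustrative assumption: the Alert type, the keyword-based classify() stub (standing in for a model or LLM call), and the canned respond() actions are hypothetical, not any vendor's actual pipeline.

```python
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    BENIGN = "benign"
    SUSPICIOUS = "suspicious"
    MALICIOUS = "malicious"


@dataclass
class Alert:
    source: str
    description: str


def classify(alert: Alert) -> Verdict:
    # Stand-in for the "intelligent agent" step: a real system would send
    # the alert context to a model; here a keyword check keeps it runnable.
    indicators = ("mimikatz", "encoded powershell", "c2 beacon")
    text = alert.description.lower()
    if any(ind in text for ind in indicators):
        return Verdict.MALICIOUS
    return Verdict.SUSPICIOUS if "unusual" in text else Verdict.BENIGN


def respond(alert: Alert, verdict: Verdict) -> str:
    # Map each verdict to a containment action; a real deployment would
    # call an EDR or SOAR API here instead of returning a string.
    actions = {
        Verdict.MALICIOUS: f"isolate host and open incident for {alert.source}",
        Verdict.SUSPICIOUS: f"enrich and queue {alert.source} for analyst review",
        Verdict.BENIGN: f"auto-close alert from {alert.source}",
    }
    return actions[verdict]


if __name__ == "__main__":
    queue = [
        Alert("edr-07", "Encoded PowerShell spawned from winword.exe"),
        Alert("proxy-02", "Unusual outbound traffic volume overnight"),
    ]
    for alert in queue:
        verdict = classify(alert)
        print(f"{alert.source}: {verdict.value} -> {respond(alert, verdict)}")
```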
Red teaming is a powerful way to uncover critical security gaps by simulating real-world adversary behaviors. In practice, however, traditional red team engagements are hard to scale, usually relying ...
OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams' advanced capabilities in two areas: multi-step reinforcement and external red ...
The developers behind a popular AV/EDR evasion tool have confirmed it is being used by malicious actors in the wild, while slamming a security vendor for failing to responsibly disclose the threat.
In many organizations, red and blue teams still work in silos, usually pitted against each other, with the offense priding itself on breaking in and the defense doing what they can to hold the line.