Evolving Red Teaming for AI Environments

IBM

By Chris Thompson, Global Head of IBM X-Force Red


Imagine a scenario where a company's AI-powered threat detection system, designed to protect sensitive customer data, is compromised by a malicious actor. The system, instead of detecting threats, has the underlying platform comprised and modified to allowlist malicious activity and prevent security alerts. Meanwhile, the real threats go undetected, allowing attackers to breach the system and steal sensitive customer data. X-Force successfully conducted a red team test just like this against a bank's AI-powered fraud detection platform design to spot fraudulent trades and user activity.

This is a scenario that can unfold if a company fails to prioritize security of the AI platforms and systems surrounding ML models. According to the IBM Institute for Business Value, 96% of executives believe that adopting generative AI (GenAI) will increase the likelihood of a security breach in their organization within the next three years. As AI becomes more ingrained in businesses and daily life, the importance of security grows more paramount.

Testing AI can make the difference

It's important to test your AI for 3 reasons:

  1. Find and fix vulnerabilities—before criminals do: Hackers won't notify you of a vulnerability in your environment before they exploit it. Stay ahead of them by inspecting your applications, networks, etc. to identify and remediate those weaknesses
  2. Validate secure development processes: Ensure your development teams or software vendors are actually employing secure development methods and processes in their work. Scanning and testing acts as a validation step, revealing potential gaps that hackers could take advantage of.
  3. Maintain customer trust in your business and the brand: A breach involving customer data inevitably damages customer perception of the brand. Avoid embarrassment and loss of business by ensuring you have proper defenses and are managing cyber risk appropriately.

Ushering in a new era of red teaming

AI red teaming is emerging as one of the most effective first steps businesses can take to ensure safe and secure systems today. But security teams can't approach testing AI the same way they do software or applications. You need to understand AI to test it. Bringing in knowledge of data science is imperative — without that skill, there's a high risk of 'false' reports of safe and secure AI models and systems, widening the window of opportunity for attackers

When evaluating an AI testing service, look for a team with expertise in data science, AI red teaming, application penetration testing, and understands the real-world attack paths of sophisticated threat actors. A testing service that understands algorithms, data handling, and model interpretation can better anticipate vulnerabilities and safeguard against potential threats. Also look for simulation of realistic and relevant risks facing AI models, such as direct and indirect prompt injections, membership interference, data poisoning, model extraction, and adversarial evasion.

Make sure testing is comprehensive. Testing in the AI era needs to consider more than just model and weight extraction. Right now, not enough attention is being placed on securing AI applications, platforms and training environments, which have direct access or sit adjacent to an organization's crown jewel data. To close that gap, AI testing should also consider machine learning security — machine learning security operations or 'MLSecOps.' This ensures that the testing service can identify vulnerabilities and potential risks across the entire AI ecosystem.

By evaluating these key capabilities, organizations can ensure that their AI testing service is equipped to identify and remediate potential risks, upholding the integrity of their AI systems in an increasingly AI-powered digital landscape.

If you're attending Black Hat, come to IBM's session 'Evolving Red Teaming for AI Environments' on Wednesday, August 7th from 10:20 to 11:10 PST in Mandalay Bay 1 to learn more about AI testing, how it differs from traditional approaches and get more information about the new IBM X-Force Red Testing Services for AI.

Sustaining Partners