How AI Will Strengthen (Or Undermine) The SOC
By Kevin Simzer, Chief Operating Officer
In late 2022, headlines bloomed worldwide with the launch of ChatGPT. In mainstream news, analyses of business use cases [hbr.org/2022/12/chatgpt-is-a-tipping-point-for-ai] multiplied alongside think pieces about ethics and who’s responsible for what [techcrunch.com/2022/12/05/chatgpt-shrugged]. The cybersecurity industry has been having these same discussions, albeit for somewhat longer: machine learning has been in regular use across security platforms for over a decade. But with the explosion of generative AI tools such as large language models (LLMs), we are entering a new era, and some questions will need to be asked again.
Industries of all kinds are embracing AI to make processes more efficient and deliver more value to customers. Proper application of these tools in cybersecurity platforms can drive down costs, reduce analyst burnout, and avoid slowdowns, resulting in stronger security overall. Superior AI integrations, particularly those that leverage LLMs, will be a major differentiator moving forward. This is especially true as organizations consolidate their tooling, favoring full-service cybersecurity platforms over numerous point solutions.
AI has long been used for threat detection and response, but future integration of LLMs will open the door to vast improvements in how analysts interact with security products themselves. Product launches leveraging this technology will be coming sooner rather than later.
However, the foremost consideration when unleashing the potential of AI in your security operations center (SOC) should be to not “unleash” it at all. Rapidly deploying a new technology simply because it appears powerful carries far greater risks than potential benefits, yet we have seen many companies do exactly that recently. Careless use of these tools, and especially overreliance on them, will lead to poor risk assessments, increase the potential for misinterpreting data, and put customer security at risk through data leaks.
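To make the data-leak risk concrete, here is a minimal Python sketch of one possible mitigation: sanitizing alert text before it ever reaches an external model. The redaction patterns, the send_to_llm stub, and the function names are all hypothetical illustrations, not any particular product’s implementation.

```python
import re

# Illustrative patterns only; a real deployment would rely on a vetted
# DLP library and an approved, auditable redaction policy.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "API_KEY": re.compile(r"\b[A-Za-z0-9]{32,}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive substrings with typed placeholders so raw
    customer data never leaves the organization's boundary."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def send_to_llm(prompt: str) -> str:
    # Stand-in for a vetted, organization-approved model endpoint.
    return "(model response would appear here)"

def summarize_alert(alert_text: str) -> str:
    sanitized = redact(alert_text)
    return send_to_llm(f"Summarize this alert for an analyst:\n{sanitized}")

print(summarize_alert("Failed logins for admin@example.com from 203.0.113.7"))
```

Even a guardrail this simple illustrates the underlying design principle: the model augments the analyst, but the platform controls what the model sees.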
Deliberate application of LLM tools can revolutionize the SOC, but only with an implementation that prioritizes the most important element: people.
More than three million skilled cybersecurity professionals [www.isc2.org/Research/Workforce-Study] are currently needed globally. New AI tools should not be viewed as a justification to reduce headcount, but as a means to empower experts and make critical roles more accessible to junior analysts. Requests and responses in plain English, for example, mean that highly complex information can be quickly understood and acted upon at every level, as the sketch below illustrates.
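Here is a minimal Python sketch of what a plain-English interface might look like under the hood, mapping an analyst’s natural-language request onto a structured hunt query. The HuntQuery schema and the keyword-based translation are hypothetical stand-ins; in a real integration, an LLM would perform the translation against the product’s actual query language.

```python
from dataclasses import dataclass

@dataclass
class HuntQuery:
    # Illustrative schema, not any real product's query format.
    event_type: str
    time_window_hours: int
    min_severity: str

def plain_english_to_query(request: str) -> HuntQuery:
    """Keyword-based stand-in for the natural-language-to-query
    translation an LLM layer would perform."""
    text = request.lower()
    window = 24 if "today" in text else 168
    severity = "high" if ("high" in text or "critical" in text) else "low"
    event = "failed_login" if "login" in text else "any"
    return HuntQuery(event_type=event, time_window_hours=window,
                     min_severity=severity)

print(plain_english_to_query("Show me high-severity failed login events from today"))
# -> HuntQuery(event_type='failed_login', time_window_hours=24, min_severity='high')
```

A junior analyst never has to learn the underlying query syntax, while the structured output gives senior staff an auditable record of exactly what was asked.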
A wave of AI-integrated security product launches is on the horizon, each of which will tout methods of leveraging generative AI to do everything discussed here and more. Trend Micro has already taken steps to address the security and compliance concerns associated with this emerging technology. Although those measures will help ensure the technology can be used safely, not everyone will be so cautious. As generative AI continues to evolve, it is more important than ever to stay aware of the risks and ahead of them. We must consider not only how this technology will amplify the efforts of threat actors around the world, but also how improper use internally can undermine our own efforts.