The Holy Grail for Security – AI and ML? Not so Fast


This is a great article, at least I think so. ‘AI and ML choices can greatly impact data security’ offers an opinion on the pluses and minuses of using artificial intelligence (AI) and machine learning (ML) to help identify and remediate security vulnerabilities. According to the author, Dustin Hillard, you should beware: most offerings don’t work, generate too much noise, and can become a productivity drain rather than a help. Well, isn’t that good news?

Seriously, I believe Hillard makes some valid points. Unfortunately, we have to admit it certainly seems like cybercriminals are winning both the war and the battles. An attacker who makes it into your neighborhood needs only a single intrusion flaw, while security teams are bombarded with software, hardware, tools, and alerts. Identifying that single flaw within a morass of information isn’t easy, intuitive, or quick. Too often, it’s vendor promises versus client reality. Do AI and ML provide a silver bullet on the security front?

Stay away from:

  • No explanations – You have a great new AI system to keep cybercriminals out, supposedly. The system, hopefully, identifies vulnerabilities and intrusions and assigns ratings, but doesn’t explain why. Would you trust the output? Didn’t think so. Instead of solving a problem for the security team, it adds one. AI applications need to improve so they provide relevant, clear, and actionable information.
  • Too much information and false positives – It is not difficult to build models that detect new potential threats, indicators of compromise, and anomalous behaviors. Don’t get carried away. Although the goal is to produce additional security alerts, which the models do, in reality they end up generating more false positives that obscure the real threats and distract security operations teams (the short base-rate sketch after this list shows why).
  • Generic data – AI and ML both depend on data, and the quality and meaning of that data matter. What most AI systems provide is, at best, a moderate extension beyond previous rule- and signature-based approaches. AI is only as powerful as the data it receives, and most implementations distribute generic models that don’t understand the networks they are deployed to and are easy for adversaries to evade. Cybercriminals probe for pattern-based detections, especially those applied to static data across time and networks. They can profile the detections and easily update their tools and tactics to avoid the defenses put in place.
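
To make the false-positive point concrete, here is a minimal back-of-the-envelope sketch. The event volumes and rates below are made-up assumptions, not figures from Hillard’s article; they simply show how even a detector with a seemingly excellent 0.1% false-positive rate drowns a team when it scores millions of benign events a day.

```python
# Back-of-the-envelope base-rate math for alert volume.
# All numbers below are illustrative assumptions, not real measurements.

events_per_day = 5_000_000      # benign events a mid-size network might log daily
false_positive_rate = 0.001     # a seemingly excellent 0.1% FP rate
true_intrusions_per_day = 2     # actual malicious events hiding in the stream
detection_rate = 0.95           # fraction of real intrusions the model flags

false_alarms = events_per_day * false_positive_rate
true_alarms = true_intrusions_per_day * detection_rate
precision = true_alarms / (true_alarms + false_alarms)

print(f"False alarms per day: {false_alarms:,.0f}")      # ~5,000
print(f"True alarms per day:  {true_alarms:.1f}")        # ~1.9
print(f"Chance a given alert is real: {precision:.4%}")  # ~0.04%
```

At these hypothetical volumes, roughly one alert in 2,600 is real, which is exactly the “obscure and distract” problem the second bullet describes.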

Honestly, I am not bad-mouthing AI and ML; they can be powerful tools for improving enterprise defenses, but success requires a strategic approach that avoids the weaknesses of most of today’s implementations. An effective AI system needs an ambitious goal: reduce the security team’s workload and automate investigation with a focus on the full adversary objective. According to the article, “AI systems that uncover the core behaviors that an adversary must use will give security teams a small number of true risks to investigate. Effective solutions should have very low false positive rates, generating fewer than 10 high-priority investigations per week (not the hundreds and thousands of events produced by current approaches).”

Another area for improvement is helping security teams focus on their core objectives. Traditionally, criminals have the advantage because they can profile an environment and avoid detection. AI systems can gain the advantage by understanding the environment better than the adversary can. A system that understands the specifics of an environment can identify unusual behaviors in context, something an adversary could only match with complete access to the full, constantly updating internal data feeds the AI system learns from.
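
As one hedged illustration of what “understanding the environment” might look like in practice (this is my own sketch, not the article’s method or any vendor’s implementation), a detector can learn a per-host baseline and flag only deviations from that host’s own normal, rather than applying one generic threshold everywhere. The host names and counts below are invented.

```python
# Minimal sketch: per-host behavioral baselines instead of one generic rule.
# Hosts, counts, and the z-score threshold are invented for illustration.
from statistics import mean, stdev

# Daily outbound-connection counts observed per host over the last two weeks.
history = {
    "hr-laptop-07": [12, 9, 14, 11, 10, 13, 12, 9, 11, 10, 12, 13, 11, 10],
    "build-server": [480, 510, 495, 520, 505, 490, 515, 500, 485, 510, 495, 505, 500, 490],
}

today = {"hr-laptop-07": 340, "build-server": 505}

for host, counts in history.items():
    mu, sigma = mean(counts), stdev(counts)
    z = (today[host] - mu) / sigma
    if abs(z) > 3:  # flag only behavior far outside this host's own baseline
        print(f"ALERT {host}: {today[host]} connections (baseline {mu:.0f}±{sigma:.0f}, z={z:.1f})")
    else:
        print(f"ok    {host}: within its own normal range")
```

Note the contrast with a generic rule: a single static threshold of, say, 400 connections would page on the build server every single day and never notice the laptop’s roughly 30x spike; the per-host baseline does the opposite.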

AI and ML systems can be designed to provide maximum benefit to their human partners. They should automate typical analyst workloads and explain their results in a way that builds trust and, over time, accelerates the skill and experience development of the humans who use them. This also creates a virtuous cycle where the algorithms learn from the analysts’ actions.
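
Here is a minimal sketch of that virtuous cycle (the detector names, verdicts, and scoring scheme are all hypothetical, not a real product’s API): analyst verdicts on closed alerts feed back into how future alerts are ranked, so the queue gradually reflects what the team has actually confirmed.

```python
# Sketch of a human-in-the-loop feedback cycle: analyst verdicts on closed
# alerts re-weight how future alerts from the same detector are ranked.
# Detector names and verdicts are invented for illustration.
from collections import defaultdict

verdicts = [  # (detector_that_fired, analyst_verdict: True = real threat)
    ("dns-tunnel-model", True), ("dns-tunnel-model", True),
    ("generic-port-scan", False), ("generic-port-scan", False),
    ("generic-port-scan", False), ("dns-tunnel-model", False),
]

stats = defaultdict(lambda: {"tp": 0, "fp": 0})
for detector, is_real in verdicts:
    stats[detector]["tp" if is_real else "fp"] += 1

def trust(detector):
    """Laplace-smoothed precision learned from analyst feedback."""
    s = stats[detector]
    return (s["tp"] + 1) / (s["tp"] + s["fp"] + 2)

new_alerts = ["generic-port-scan", "dns-tunnel-model"]
for detector in sorted(new_alerts, key=trust, reverse=True):
    print(f"{detector}: trust={trust(detector):.2f}")
# dns-tunnel-model (trust 0.60) now outranks generic-port-scan (0.20),
# so analysts see their historically reliable detector's alerts first.
```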

Most organizations will readily admit to a shortage of security talent or budget, so it is even more important that when AI and ML are used, they benefit security teams both tactically and strategically. These new tools must help fill skills gaps with automation. They must also provide interpretability and situational awareness that grow the skills of security teams while making daily operations more efficient and impactful. Effective AI and ML deployments must help teams separate real alarms from false ones and focus on what matters.

Before jumping on the AI and ML bandwagon, I would take a look at Netwrix, our parent company. Its product addresses the problems explored above by providing required, useful information to security teams, delivering the ability to foil an intrusion before it becomes a breach. No mumbo jumbo. Straightforward, usable information. You can download a free trial version. Also offered is a version with our Concept Searching data classification built into the product, the Data Discovery and Classification Edition. Actually, it is selling like hotcakes. You will probably want to look at that too if you are going down the AI and ML route, to cleanse and organize your data so decisions are made with accuracy, not in the dark.

Good luck finding your Holy Grail. If you have already found it, let us know what it is.