By: Matthew McKenna, VP International Operations
The hype around artificial intelligence and predictive security has risen rapidly over the last two years. The promise of AI, and the value it can deliver to enterprises on their journey toward intelligent, defensible networks, is unquestionable. But hype breeds confusion, and with so many vendors touting artificial intelligence in the cybersecurity space these days, it is important to know the right questions to ask to get a true sense of their capabilities.
When assessing a vendor’s AI capabilities, a great place to start is the company’s data scientists and its experience in the space. Does the solution have a heritage of use in practical, real-life settings prior to its entrance into the cybersecurity space? Building tried-and-tested algorithms is not something that happens overnight.
A great article by Ronald Ashri, Tech Strategy Director at @DeesonAgency, UK, sums it up in a few questions:
How is the solution proactive?
How does it learn?
How autonomous is it?
How creative is it?
How connected is it to the internal and external environments surrounding it?
True AI will be able to gain understanding without manual input, programming, or configuration; it will essentially learn from the data it is working on. Secondly, it will be able to tell the difference between success and failure and adjust itself accordingly, adapting for future use.
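To make that feedback loop concrete, here is a minimal sketch (a hypothetical illustration, not any vendor's actual method) of an online classifier that nudges its weights whenever a prediction turns out to be a failure:

```python
# Minimal sketch of "learning from success or failure": an online
# perceptron-style learner that adjusts its weights only when a
# prediction disagrees with the observed outcome.
# All features, labels, and events below are made-up illustrations.

def predict(weights, features):
    """Score an event; a positive score means flag it as suspicious."""
    return sum(w * f for w, f in zip(weights, features))

def update(weights, features, label, lr=0.1):
    """Adapt the weights only when the guess was wrong (a failure)."""
    guess = 1 if predict(weights, features) > 0 else -1
    if guess != label:  # failure: adjust so future events score better
        weights = [w + lr * label * f for w, f in zip(weights, features)]
    return weights

# Toy event stream of (features, true label): 1 = malicious, -1 = benign
events = [([1.0, 0.2], 1), ([0.1, 1.0], -1), ([0.9, 0.3], 1)]
weights = [0.0, 0.0]
for features, label in events:
    weights = update(weights, features, label)
```

The point of the sketch is the shape of the loop, not the algorithm: each outcome feeds back into the model without anyone reprogramming or reconfiguring it.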
It is important to ask vendors the hard questions, such as:
What algorithms and what machine learning, deep learning, and machine reasoning techniques are used within your AI approach, and how do you apply them?
How does your AI predict a cyberthreat actor’s next moves, and, more importantly, how does it learn to prioritize threats by their potential severity of impact on the environment?
Ask why certain learning techniques are used and others are not. This serves as an excellent indicator of how well thought through the AI is and how mature the algorithms are.
How do you protect your algorithms from being tricked? In other words, how is adversarial AI taken into account?
How do you use external data sources to enrich your data sets and understanding of the overall environment?
What techniques are used to ensure a minimal number of false positives?
When looking at AI-driven security solutions that aim to detect unknown threats before they turn into breaches, don’t get distracted by a vendor’s beautiful GUI or cool workflow. Yes, these matter when assessing the user-friendliness of a tool; however, the true beef of what you are buying is the reliability of the data science that is going to help you accurately identify and mitigate a threat.