Why Do We Need to Change Our Current Vulnerability Scanning Practice?

As part of a vulnerability management program, we depend heavily on vulnerability scanners to discover the assets on our networks and identify the potential vulnerabilities that exist on them. Additionally, vulnerability scanners help us prioritize the detected vulnerabilities based on traditional CVSS metrics and determine remediations or mitigations where possible.
However, there are some critical issues we need to reconsider in our existing practice of vulnerability management with respect to vulnerability scanning. For this reason, we should adopt a new perspective in place of our current one, which is faulty and misleading.
Issue 1: Asset Identification Malpractice
The first and most critical malpractice is depending on vulnerability scanners to identify the assets on our networks (whether the machines themselves or the hardware and software running on them) so that the individual vulnerabilities existing on those assets can be determined.
The question is: as a best security practice, shouldn't we already know what assets we have on our networks, with the help of an established asset and configuration management program? If we depend solely and blindly on vulnerability scanners to discover and inform us about our assets rather than practicing asset and configuration management, then we are in serious trouble and our systems are at critical risk.
This situation reminds me of the popular saying by Sun Tzu, since not being aware of our current assets and their configurations simply equals not knowing ourselves.
If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle. – Sun Tzu, The Art of War
The problem with vulnerability scanners is that they are notoriously deficient at correctly identifying the assets on networks, and thus they end up reporting many non-existent vulnerabilities (false positives) or, even worse, missing many of the existing ones (false negatives).
In particular, unauthenticated scans (conducted without providing credentials) often report incorrect or unspecified operating system or application versions and cannot detect some of the installed software at all. Authenticated scans improve these results, but the correctness remains highly questionable.
One can argue that this is what malicious actors outside our networks do, and that we should adopt the same hacker mindset to learn what is exposed to outsiders and what is not. This sounds like a reasonable argument, and there is even some justification in it. It is correct that amateur hackers and script kiddies usually conduct vulnerability scanning to fingerprint networks from the outside.
But there are two counterarguments to this proposition. Firstly, professional hackers try to conduct their activities in stealth, and for this reason they usually rely on social engineering methods more heavily than on technical exploits.
Amateurs hack systems, professionals hack people. – Bruce Schneier
Social engineering has become about 75% of an average hacker's toolkit, and for the most successful hackers, it reaches 90% or more. – John McAfee
Social engineering bypasses all technologies, including firewalls. – Kevin Mitnick
Secondly, this proposition does not take into account the malicious insiders who are already well aware of all the software being used and the control measures implemented to defend against attackers.
Thus, we should clear our minds of the misconception that threats come solely from outside our network perimeters and that hackers can identify only a partial list of our hardware, operating systems, and software by scanning our networks from the outside. Rather, we should assume that they already have full knowledge of our assets and their configurations.
Actually, this concept of openness, which assumes attackers can gain full knowledge of our systems, is a fundamental security principle established back in 1975 by Saltzer and Schroeder. There are other security principles that date back even earlier, such as Kerckhoffs' Principle and Shannon's Maxim, which advise that secrecy in itself should not be our means of guaranteeing security; rather, we should assume that attackers already have full knowledge. However, our current practice of vulnerability scanning runs contrary to these established security principles. Instead, it favors the rotten practice of security through obscurity and fosters a false sense of security.
One ought to design systems under the assumption that the enemy will immediately gain full familiarity with them. – Claude Shannon
Issue 2: Outdated Vulnerability Results Problem
The second issue with our current practice is the outdated vulnerability results problem that we face when using the active vulnerability scanning methodology. Most organizations conduct periodic vulnerability scans, whether monthly or weekly. Actually, for most organizations with medium to large networks, it wouldn't be possible to conduct vulnerability scans more frequently even if they wanted to. This is because active vulnerability scans usually take too long to complete and cause a lot of undesirable traffic on the networks, in some cases even rendering critical functionality non-responsive.
But new vulnerabilities are discovered and published frequently for the assets we already have installed and running, and new vulnerabilities can be introduced with new hardware or software installations. This reality renders the periodic vulnerability scanning approach ineffective, since it is fundamentally at odds with the dynamic nature of vulnerabilities.
Another factor contributing to the outdated vulnerability results problem is that vulnerability detection scripts are updated only periodically by the product vendors. As a matter of fact, it usually takes weeks, if not months, for vendors to prepare and publish detection scripts after a vulnerability is first disclosed. This delay leaves systems exposed to attack during the update interval, since attackers start taking advantage of publicly disclosed vulnerabilities immediately.
The Remedy: Passive Vulnerability Scanning
Given all these concerns and undesired side effects, the question is: do we have to put up with the active vulnerability scanning approach? Not really. With the passive vulnerability scanning (a.k.a. scanless vulnerability assessment) approach, vulnerability detection can be simpler, faster, more accurate, and non-disruptive.
Passive vulnerability scanning is a technique that leverages already available asset knowledge to identify the vulnerabilities existing on our networks. It simply compares the installed software and hardware data against a vulnerability database that keeps records of known vulnerabilities for each software and hardware version.
Passive vulnerability scanning is incomparably faster, since all it takes to identify vulnerabilities on a network is a simple database lookup, given the installed software and hardware names and versions. It is more accurate and detects more vulnerabilities (due to fewer false negatives), as long as database queries are conducted with the correct software and hardware versions. Last but not least, this method is non-disruptive, since database queries do not cause any traffic load on the network.
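The lookup described above can be sketched in a few lines of Python. This is a minimal illustration, not a production tool: the inventory, the hostnames, and the contents of the vulnerability database are hypothetical, and a real implementation would synchronize its database from a feed such as the NVD.

```python
# Minimal sketch of passive (scanless) vulnerability detection.
# The inventory and database contents below are hypothetical examples.

# Asset inventory, e.g. exported from an asset/configuration management system
inventory = [
    {"host": "web01", "product": "nginx", "version": "1.18.0"},
    {"host": "db01", "product": "postgresql", "version": "13.3"},
]

# Known-vulnerability database keyed by (product, version); in practice this
# would be kept up to date from a vulnerability feed.
vuln_db = {
    ("nginx", "1.18.0"): ["CVE-2021-23017"],
    ("postgresql", "13.2"): ["CVE-2021-32027"],
}

def passive_scan(inventory, vuln_db):
    """Report known CVEs for each inventoried asset via a simple lookup."""
    findings = []
    for asset in inventory:
        key = (asset["product"], asset["version"])
        for cve in vuln_db.get(key, []):
            findings.append({"host": asset["host"], "cve": cve})
    return findings

print(passive_scan(inventory, vuln_db))
```

Note that no packet ever touches the network: the scan is a pure data comparison, which is why it is fast, repeatable on every inventory change, and non-disruptive.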
As a matter of fact, passive vulnerability scanning is not new, and it is already implemented in some vulnerability assessment tools. However, it is not utilized as much as the active scanning approach. This could be because we are traditionally accustomed to active scanning, which is what we have been taught (and misled by) for years. Or it could be because we don't have accurate and complete data on the software and hardware installed in our systems, since no asset and configuration management practice is in place.
Given all these advantages, which significantly improve our security posture through more vulnerability detections and higher accuracy while avoiding the unbearable side effects that come with active vulnerability scanning, shouldn't we embrace passive vulnerability scanning more as a community?
Disadvantages of Passive Vulnerability Scanning
That being said, the passive vulnerability scanning approach has its own shortcomings, too.
The first disadvantage is that passive vulnerability scanners are incapable of detecting vulnerabilities that arise from misconfigurations. In other words, they can only detect vulnerabilities with CVE IDs and cannot check for configuration weaknesses. For example, cleartext credentials embedded in a piece of software constitute a vulnerability with a registered CVE ID and can be detected by passive vulnerability scanning. The use of weak passwords, however, is a configuration choice made by users and affects only the systems on which those passwords are used. Because a weakness like this can only be identified by methods such as brute forcing, active vulnerability scanning is required in such a circumstance.
But note that configuration vulnerabilities constitute only a very small portion of the vulnerabilities detected in our current practice of active vulnerability scanning. By passing the major workload of detecting CVE-based vulnerabilities to passive vulnerability scanners, active vulnerability scanners can identify configuration vulnerabilities more quickly and with less traffic load inflicted on networks.
As a second issue, although the accuracy of passive vulnerability scanning is comparatively very high, it could be improved further. When newly discovered vulnerabilities are registered in the National Vulnerability Database (NVD), more attention and care could be paid to the affected software and hardware versions, so that naming ambiguities and omissions of vulnerable versions are prevented.
As a final remark, we should change our traditional, defective, and misleading approach of depending solely and blindly on active vulnerability scanning and take advantage of the passive vulnerability scanning approach, which comes with significant advantages. It is already implemented as a feature in some vulnerability assessment tools and products. All we need to do is embrace the good old best security practice of asset and configuration management before taking advantage of passive vulnerability scanning.