The Null Hypothesis & Vulnerability Scanning

August 18, 2025

The Null Hypothesis: A Wrench in the Works of Vulnerability Scanning

How a core scientific principle (The Null Hypothesis in Cybersecurity) exposes flaws in traditional security & points toward a smarter, data-driven defense.

What is the Null Hypothesis? 🧐

In scientific research, the Null Hypothesis (H₀) is the default assumption that there is no relationship between two measured phenomena. It’s the “statement of no effect.” For a new drug to be deemed effective, researchers must first disprove the null hypothesis that the drug has no effect. This principle is akin to the legal concept of “innocent until proven guilty.” You start with the assumption of no guilt (or no effect) and then gather evidence to the contrary.
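To make the idea concrete, here is a minimal sketch of testing a null hypothesis with a permutation test. The recovery scores and group sizes are invented for illustration; the point is the logic: assume H₀ ("no effect"), then measure how often chance alone could produce a difference as large as the one observed.

```python
import random

random.seed(42)

# Hypothetical data: recovery scores for a control group and a treated group.
# H0 (null hypothesis): the treatment has no effect, so any difference in
# group means is due to chance alone.
control = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 4.7, 5.1]
treated = [5.9, 5.6, 6.1, 5.8, 5.7, 6.0, 5.5, 5.9]

observed_diff = sum(treated) / len(treated) - sum(control) / len(control)

# Permutation test: if H0 were true, the group labels would be interchangeable.
# Shuffle the labels many times and count how often chance alone produces a
# difference at least as large as the one actually observed.
pooled = control + treated
n_extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[8:]) / 8 - sum(pooled[:8]) / 8
    if diff >= observed_diff:
        n_extreme += 1

p_value = n_extreme / trials
print(f"observed difference: {observed_diff:.2f}, p-value: {p_value:.4f}")
# A small p-value is evidence against H0: "no effect" becomes hard to defend.
```

Only when the p-value is small enough do researchers reject H₀, which is exactly the "innocent until proven guilty" posture described above.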

In cybersecurity, the process of gathering evidence is critically important. The ISC2 emphasizes that without robust digital forensics practices, many cyberattacks would go unnoticed or remain misunderstood. While cybersecurity professionals commonly engage in this work, too often the diagnosis of a cybersecurity issue begins and ends at the first step: the vulnerability scan.

The Vulnerability Scan: A Flawed Premise?

Vulnerability scanning operates on a principle that is, in many ways, the opposite of the Null Hypothesis. It essentially assumes guilt. A scanner probes a network or application and reports on potential weaknesses. The problem is that these scans often lack context and can be rife with false positives. A scanner might flag a vulnerability that, in the specific context of the system, is not actually exploitable. This creates a “guilty until proven innocent” scenario, where security teams are sent scrambling to patch issues that may not pose a genuine threat.

  • Example: a scan flags a patching vulnerability on a service (SSH), reporting that a device is unpatched. In reality, the fix may well have been applied through an automated update while the vendor never bumped the documented release number (v1.153). This is a common occurrence, and it shows how easily scanning alone can fail to provide the full picture.
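The failure mode in the example above can be sketched in a few lines. The scanner's only evidence is the advertised version string, while the host itself can prove the patch is present. Version numbers and the patch-detection flag below are illustrative assumptions, not real advisories.

```python
# Hypothetical sketch: why version-string checks cause false positives.
# A scanner compares the advertised banner version against a "fixed in"
# version, but vendors often backport security fixes without bumping the
# banner that the scanner sees.

def banner_says_vulnerable(banner_version: str, fixed_in: str) -> bool:
    """Naive scanner logic: flag anything whose banner predates the fix."""
    def parse(v: str) -> tuple:  # "1.152" -> (1, 152)
        return tuple(int(part) for part in v.split("."))
    return parse(banner_version) < parse(fixed_in)

# The scanner only sees the banner...
scanner_verdict = banner_says_vulnerable("1.152", "1.153")

# ...but an inside-out check can confirm the backported patch is present,
# e.g. via the package manager's changelog or patch metadata. This flag
# stands in for that validated, host-level evidence.
patch_applied_locally = True

false_positive = scanner_verdict and patch_applied_locally
print(f"scanner flags host: {scanner_verdict}, actually patched: {patch_applied_locally}")
```

The scanner's verdict is "vulnerable" even though the host is patched, which is precisely the Type I error discussed later in this post.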

 


 

Analogy: What if Bob Marley Came Back For A One-Night Only Show?

 

Any large public venue hosting a popular concert or sporting event is a potent example. Imagine that Bob Marley & The Wailers all returned to us to play one massive concert. It would be an unforgettable event, and it would sell out at any price. Now imagine that the security and venue management teams settled on a strategy for managing the show built on a single, sizable assumption: that everyone entering or present within the stadium had a valid ticket. The absurdity of this is obvious.

 

You Just Can’t Assume That

We all know that in any large crowd, there will be individuals who have not paid for entry. In fact, you should anticipate that fake tickets will be sold and presented at the gate, that people will climb walls, that perimeter breaches will happen, and more. That's why at any popular show tickets are checked and rechecked, scanners are used throughout the venue, badges and credentials are worn, and large men in matching security jackets guard access points. All of them are there to challenge the hypothesis that everyone present is legit.

 

The naive assumption that everyone has a ticket is the null hypothesis, and the reality observed by anyone who has attended a big show is that this assumption must always be challenged before it can be accepted, sometimes against many different possible alternative scenarios, before reaching a reliable level of confidence.

 

Similarly, in cybersecurity, the "legit purchased ticket" is a validated, secure configuration. A vulnerability scan, in its typical form, is like a cursory glance at the perimeter. It might easily spot some obvious issues, but it doesn't come close to validating the security of each individual component from the inside out. An expert taking a quick glance at the crowd can make some educated assumptions, but that is nowhere close to validating the entire audience or the validity of each ticket presented. In fact, communicating those observations may even create additional risk by fostering a false sense of security. Is this how your business identifies risk?

 

you can fool some people some time,
but you can’t fool all the people all the time. 

 

What kind of reliability am I getting from this?

You may wonder about this, and you absolutely should. After testing a null hypothesis (H₀), there are basically four possible outcomes based on the reality of the hypothesis and the decision(s) made from the test results.

These four basic outcomes are as follows:
  1. Correctly Accept H₀: The null hypothesis is true, and the statistical test correctly leads to its acceptance (or failure to reject). This is a correct decision.
      • Example: The scan reports no critical findings, and an independent, inside-out review confirms the environment really is properly configured.
  2. Type I Error (False Positive): The null hypothesis is true, but the statistical test leads to its rejection. An example is concluding a treatment has an effect when it does not (see the patching example above).
  3. Type II Error (False Negative): The null hypothesis is false, but the statistical test leads to its acceptance (or failure to reject). An example is concluding a treatment has no effect when it does.
      • Example: The scan results appear clean, but you are not protecting your business in any useful way (no MFA, EDR, encryption, etc.), and without evidence to the contrary you conclude there is no vulnerability. The same failure occurs when a clean report misses the absence of a password policy or password management tools, a major weakness that makes your business an easy target. This is very common, and it leads to complacency and overconfidence.
  4. Correctly Reject H₀: The null hypothesis is false, and the statistical test correctly leads to its rejection. This is also a correct decision.
      • Example: A scan shows that Telnet (TCP port 23) is open. This finding is usually reliable, because Telnet should never be open.
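The four outcomes form a simple two-by-two matrix: ground truth (is the system actually fine, i.e. is H₀ true?) crossed with the scan's decision. A minimal sketch, with invented labels, makes the mapping explicit:

```python
# Sketch of the four outcomes as a confusion matrix. Each record pairs the
# ground truth ("is the system actually fine?", i.e. is H0 true) with the
# scan's decision ("did the scan report it clean?"). Data is invented for
# illustration only.
findings = [
    ("actually_fine", "reported_clean"),    # 1. correctly accept H0
    ("actually_fine", "reported_flagged"),  # 2. Type I error (false positive)
    ("vulnerable",    "reported_clean"),    # 3. Type II error (false negative)
    ("vulnerable",    "reported_flagged"),  # 4. correctly reject H0
]

def classify(truth: str, decision: str) -> str:
    """Map a (ground truth, scan decision) pair to its statistical outcome."""
    if truth == "actually_fine":
        return ("correct acceptance" if decision == "reported_clean"
                else "Type I (false positive)")
    return ("Type II (false negative)" if decision == "reported_clean"
            else "correct rejection")

for truth, decision in findings:
    print(f"{truth:13s} + {decision:16s} -> {classify(truth, decision)}")
```

Only two of the four cells are good outcomes, and a scan report alone cannot tell you which cell you are in; that requires independent, validated evidence.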

Why Is This Method So Pervasive?

You may wonder why so many well-trained, highly knowledgeable cybersecurity professionals and business executives lean so heavily on the results of regular vulnerability scanning. While ongoing scanning will always be one piece of the puzzle, it is far from a reliable standalone solution. If it isn't challenged at each measurement with baseline data for comparison, it may be as little as 25% reliable, if not completely irrelevant. It's mostly used as a quick indicator, and it is popular for two main reasons:

  • Speed: a vulnerability scan can be initiated remotely, and its results can often be shared instantly. This is highly dependent on the vendor and the methods of distribution, but scans are frequently used to make general decisions like insurability, supply chain reliance, etc.
  • Cost: it is quite inexpensive and can be managed electronically, so no real cybersecurity expertise is required on a case-by-case basis. Many vendors offer this with different reporting packages, but the technique leverages the same underlying technology in almost all cases.

The Missing Piece: Inside-Out Validation

 

So, how do we challenge the “null hypothesis” of a secure system? The answer lies in validated data, obtained from the inside out. Instead of just looking for known vulnerabilities from an external perspective, a more effective approach is to continuously validate the security posture of systems from within.
This “inside-out” approach involves several things that represent a proper culture of cyber wellness:

 

CONTINUOUS MONITORING

Instead of relying exclusively on scans, which provide only a snapshot in time, continuous monitoring provides a real-time view of your security landscape. Large companies use systems like a Security Operations Center (SOC) or a Security Information & Event Management (SIEM) system. These can be sizable investments that require your own in-house team to run them properly.
For small to medium-sized businesses (SMBs), these may be overkill. Solutions like Remote Monitoring & Management (RMM), Endpoint Detection & Response (EDR), and Posture Management tools are low-cost, pragmatic ways to continuously monitor your business. 

 

CONTEXTUAL ANALYSIS

Understanding the business context of an asset is crucial. A vulnerability on a non-critical server is not the same as one on a production database. However, if you’re an SMB and everything is production, there are pragmatic things you can do to make simple, inexpensive changes. A fractional CISO service can help you spot these issues and make some quick shifts to better protect your business. 

 

CONDUCT PERIODIC RISK ASSESSMENTS

Not all vulnerabilities are created equal. An inside-out cyber risk assessment performed periodically can be fundamental. Using this approach helps determine which risks are actually exploitable, allowing for effective prioritization and cost-effective remediation. Learn how our TEKCHEK® 30-minute assessments bring scale and low cost to the practice of assessing risk. 

 

 

DATA-DRIVEN DECISIONS

By gathering and analyzing data from within your systems, you can make informed, risk-based decisions, rather than reacting to scan reports or the hearsay of a vendor who doesn't take the time to understand your business and its unique risks.
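As one illustration of combining scan output with inside-out context, here is a hypothetical prioritization sketch. The weights, fields, and findings are invented assumptions, not a standard scoring formula; the point is that validated context, not raw scanner severity, should drive the ranking.

```python
# Hypothetical sketch of data-driven prioritization: combine the scanner's
# severity with inside-out context (real exploitability, asset criticality)
# instead of acting on scanner output alone. All weights are illustrative.

def contextual_risk(finding: dict) -> float:
    score = finding["scanner_severity"]       # 0-10, from the scan report
    if finding["exploitable_in_context"]:     # validated from the inside
        score *= 1.5
    else:
        score *= 0.25                         # likely a false positive
    if finding["asset_is_production"]:
        score *= 2.0
    return round(score, 1)

findings = [
    {"name": "outdated SSH banner", "scanner_severity": 7.0,
     "exploitable_in_context": False, "asset_is_production": True},
    {"name": "open Telnet port", "scanner_severity": 6.0,
     "exploitable_in_context": True, "asset_is_production": True},
]

for f in sorted(findings, key=contextual_risk, reverse=True):
    print(f["name"], contextual_risk(f))
```

Note how the ranking inverts the raw scanner severities: the backported-patch false positive drops in priority, while the genuinely exploitable Telnet exposure rises to the top.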

The Path Forward is Clear

The over-reliance on vulnerability scanning is a deeply ingrained habit. By embracing a data-driven, inside-out validation model, you can build a more resilient and intelligent defense against the ever-evolving threat landscape. Let tekrisq provide a quick cyber risk assessment to help you create a clear path forward.

 

 
