Why Cognitive Biases and Heuristics Lead to an Under-investment in Cybersecurity

Darren Ting
10 min read · Jan 16, 2021

In recent years, technology and big data have become increasingly central to companies and businesses. According to a digital marketing study by Smart Insights, 31% of companies have undergone a digital transformation and another 34% are currently in the process of one. The growing importance of user data was highlighted in a 2014 Accenture study in which 82% of executives across different corporations agreed that big data provides a substantial source of value to their companies.

Many companies seem to agree that better integration of technology and better use of data are important to increasing profits. These companies should therefore value the data they obtain as well as the new technology they develop; however, the increasing relevance of data and technology across different verticals has led to a rise in attacks in the form of data and security breaches.

According to the Identity Theft Resource Center, the number of data breaches has been steadily increasing. It is important to note that breach counts alone do not capture the number of records compromised or how much these breaches cost businesses; those variables can also serve as metrics for how secure data has been in recent years.

When breaches occur, whether of systems or of data, a company will generally not attribute the event to faulty management decisions, but rather to errors in technical departments; however, these mishaps may have been indirectly caused by managerial decisions that did not take security as seriously as they should have. Cognitive biases that are well documented in psychology and behavioral economics can leave a company vulnerable to security breaches. Here, I will delve into some of the psychological factors in decision making that can lead to weak cybersecurity among companies.


Why would psychology help cybersecurity?

In a 2019 Harvard Business Review podcast, Thomas Parenty and Jack Domet, cofounders of Archefact Group, argue that organizations' current approach to cybersecurity is faulty in that it focuses solely on the technical aspects of security. Placing the weight of all security on the IT department alone has strained the cybersecurity workforce and widened the gap in security understanding between cybersecurity professionals and the rest of the company.

An article from 2010 lamented that the emphasis on cybersecurity training was far too low and that a rising market demand for security employees could not be met. A shortage of cybersecurity workers means that current professionals shoulder a far greater burden than they were prepared for, resulting in considerable job fatigue that has continued to the present.

Research conducted by ESG/ISSA in 2018 found not only that 63% of cybersecurity professionals say the "cybersecurity skills shortage has increased the workload on existing staff", but also that 38% believe "the cybersecurity skills shortage has led to high burnout rates and staff attrition". If these professionals are expected to do far more work for the same pay, unhappy employees and high attrition rates are almost guaranteed.

The clear and continuing frustration within the cybersecurity community suggests that a different approach is needed when evaluating cybersecurity decisions and that this ongoing issue should be analyzed through different lenses. Perhaps the problem lies not only in hiring more cybersecurity professionals but also in educating everyone within a company about security and about the faulty logic humans apply when making security decisions without training. Understanding how cognitive biases affect decision making will spread awareness that cybersecurity is not just a technical field, and that improving it will require a multitude of diverse perspectives.

An Interesting Study


The ways cognitive biases and heuristics affect decision making in cybersecurity discussed in this article draw on the research paper Decision-Making and Biases in Cybersecurity Capability Development: Evidence from a Simulation Game Experiment by Mohammad S. Jalali, PhD. The experiment consisted of a simulation game in which players make investment decisions regarding the cybersecurity capabilities of a company and then see the impact of those decisions over the course of five years within the game. The experiment included a group of cybersecurity professionals, who had worked in IT and cybersecurity for an average of 15 years, and a control group of inexperienced graduate students.

The game itself had two levels: a deterministic level where the impacts and timings were fixed, and a second level where events occurred at random. The study tracked each player's profits as well as three security investment variables: prevention, detection, and response. The results showed that the two groups performed similarly, and that experience did not make as much of an impact on performance as one would expect. I'll delve deeper into the meaning of these results in later sections.
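
To make the setup concrete, here is a minimal sketch of a game in this spirit. The monthly revenue, attack probabilities, and capability effects below are invented for illustration; they are not the actual parameters of Jalali's simulation.

```python
import random

def run_game(prevention, detection, response, months=60, seed=0):
    """Toy five-year security-investment game (illustrative numbers only)."""
    rng = random.Random(seed)
    profit = 0.0
    for month in range(months):
        # Security spend comes straight out of monthly operating profit.
        profit += 100 - (prevention + detection + response)
        # More prevention -> lower chance an attack succeeds this month.
        p_breach = 0.30 / (1 + 0.2 * prevention)
        if rng.random() < p_breach:
            # More detection -> the breach is found sooner...
            undetected_months = 7 / (1 + 0.5 * detection)
            # ...and more response -> each month of exposure costs less.
            profit -= undetected_months * 40 / (1 + 0.3 * response)
    return profit

# A player who invests early versus one who invests nothing at all.
print("invests early:", run_game(prevention=5, detection=5, response=5))
print("invests nothing:", run_game(prevention=0, detection=0, response=0))
```

Even in this toy version, the tension is visible: security spend drains profit every single month, while its benefits show up only probabilistically and with delay.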

A Lack of Counterfactual Thinking


In cybersecurity, there are many possible reasons why a company has not been hacked yet. An executive may attribute the absence of breaches to good decision making in hiring and fund allocation; however, correlation does not always equal causation. Another reading of the situation is that hackers went undetected, or that the company was simply lucky that year. A Harvard Business Review article attributes this type of thinking to "wrong mental models to help [decision makers] determine how much investment is necessary and where to invest…they may assume that complying with a security framework like NIST or FISMA is sufficient security". The notion that one can meet a cybersecurity standard and then halt improvements shows that executives may view cybersecurity spending as a temporary problem rather than an ongoing danger. Dismissing the absence of a breach as a positive result directly attributable to current security spending perpetuates an under-investment in the field.

A data breach cost study by the Ponemon Institute revealed that in 2017 the average time to detect a data breach was 206 days, an increase from the previous year. This creates a feedback delay in decision making. The study by Jalali (referenced earlier) noted that in the non-deterministic level of the game, participants struggled to adapt to the time delays, especially when the time gap was significant. Unfortunately, this is closer to reality, as the best hackers will not attack a company in a predictable fashion, and the struggle to understand feedback delay can be traced to cognitive biases and faulty heuristics.
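
A toy illustration of why such a lag distorts feedback (the breach dates and review schedule below are made up): if breaches surface roughly 206 days after they occur, early reviews see no breach costs at all and can easily be read as evidence that security spending is wasted.

```python
# A breach on day 0 only appears in the books ~206 days later, so a
# decision maker reviewing early sees "no breach costs" even though
# breaches have already happened.
DETECTION_LAG = 206          # average days to detect a breach (Ponemon, 2017)
breach_days = [0, 150, 300]  # hypothetical breach dates

def breaches_visible(review_day):
    """Breaches a decision maker can actually see on `review_day`."""
    return [d for d in breach_days if d + DETECTION_LAG <= review_day]

for review_day in (90, 180, 270, 360, 540):
    seen = len(breaches_visible(review_day))
    print(f"day {review_day}: sees {seen} of {len(breach_days)} breaches")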

This delay between security decisions and their consequences, paired with overconfidence born of an absence of detected breaches, creates a dangerous tendency to ignore cybersecurity until it is too late. This cycle of executives ignoring cybersecurity can be explained by the faulty logic of the availability heuristic as well as by biases such as confirmation bias and sampling bias.


The availability heuristic is the tendency to evaluate a subject based on the examples that most readily come to mind. Because of this heuristic, managers of companies that have not detected breaches in recent years may not believe the risk of hacks to be very high, even though a breach may already have occurred undetected, or the company may simply not have been targeted yet.

The lack of counterfactual thinking by managers and executives in the absence of breaches is aggravated by confirmation bias, the tendency for an individual to interpret events in a way that confirms his or her existing beliefs. An absence of detected breaches and vulnerabilities can thus confirm an executive's belief that the company is secure, mistaking correlation for causation. Another bias exacerbating this belief is sampling bias.

Sampling bias is familiar from studies and statistical analyses: a small or poorly chosen sample skews the results toward inaccurate conclusions. If a company executive bases security spending decisions on a small sample made up of companies following similar standards, plus the security breaches that make the news, then those decisions may not rest on accurate data. One potential explanation for the recent increase in breaches is that many executives in previous years did not invest enough in cybersecurity because of the lack of breaches in the news and among their peers.
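
A quick numeric sketch of the problem (the population size and true breach rate are invented): polling a handful of peer firms yields a noisy, often reassuring estimate of breach risk, while a large sample recovers the true rate.

```python
import random

rng = random.Random(42)
TRUE_BREACH_RATE = 0.25  # assumed population-wide annual breach rate

# Simulate 10,000 firms: 1 = breached this year, 0 = not breached.
population = [1 if rng.random() < TRUE_BREACH_RATE else 0 for _ in range(10_000)]

# An executive polling only five peer firms gets a noisy estimate
# (about a quarter of the time it comes back as exactly zero);
# a large sample recovers the true rate.
small = rng.sample(population, 5)
large = rng.sample(population, 2_000)
print("5-firm estimate:   ", sum(small) / len(small))
print("2000-firm estimate:", sum(large) / len(large))
```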


Through this unpredictable feedback delay, the lack of investment across different companies has resulted in weaker security postures that are now being exploited by hackers, leading to a substantial increase in security breaches. While this is only a conjecture, it highlights a danger in the way executives have approached cybersecurity in the past, and how psychologically difficult decision making in security is.

Risk Mitigation vs Risk Management

Weaknesses in security and vulnerabilities are viewed as risks. Security requires trade-offs: by investing more in security, a company misses out on potential profit, so it is impossible to be a profitable company and have perfect security. Inevitably, companies will carry security risks that need to be managed by preparing responses to them. In the study by Jalali, runs of the simulation can be divided into two types: proactive and reactive.

In proactive runs, a player invests in the three capabilities early on and makes lower profits in the beginning; however, a proactive player sustains profits in the long run. A reactive player makes more profit at first by not investing in security capabilities at all, but then starts losing money once attacks occur.
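
The trade-off can be sketched with a deterministic toy model (all numbers invented): the proactive player pays a steady security cost from month one, while the reactive player pays nothing up front but absorbs growing breach losses after the first year.

```python
# Deterministic toy comparison of the two strategies (numbers invented).
def cumulative_profit(security_cost, breach_cost, months=60):
    profit, series = 0.0, []
    for m in range(months):
        profit += 100 - security_cost(m) - breach_cost(m)
        series.append(profit)
    return series

proactive = cumulative_profit(lambda m: 15, lambda m: 5)
reactive = cumulative_profit(lambda m: 0, lambda m: 0 if m < 12 else 30)

crossover = next(m for m, (p, r) in enumerate(zip(proactive, reactive)) if p > r)
print(f"proactive overtakes reactive at month {crossover}")
```

With these particular parameters the proactive strategy overtakes the reactive one around month 36; the point is not the exact number but the shape of the two curves.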

Worryingly, the results of the study found that the professional subjects did not act more proactively than the inexperienced subjects. In addition, under "Limitations and future related directions", the author states that "observations show that many organizations that develop cybersecurity capabilities seem to take prevention and detection capabilities into consideration while ignoring response capabilities". This highlights the issue that cybersecurity professionals tend to focus too heavily on risk mitigation when risk management through response is equally important.

In the Harvard Business Review article mentioned previously, the vice president of a behavioral science consulting firm states that cybersecurity has been treated “as a finite problem that can be solved, rather than as the ongoing process that it is. No matter how fortified a firm may be, hackers, much like water, will find the cracks in the wall”.

The bias toward prevention and risk mitigation can be explained by the zero-risk bias: the tendency to prefer certainty and opt for zero-risk solutions, even when this produces a less favorable outcome overall. Preventing security breaches sounds appealing because it implies that breaches become impossible when prevention is done correctly. It remains a pessimistic truth that vulnerabilities are inevitable, but it is one that executives must understand in order to manage risk and lower the chances of expensive breaches. An absence of risk management, paired with rapid innovation, can magnify the damage done to users.
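
A small expected-value comparison shows why zero-risk framing misleads (the probabilities and breach cost here are invented): eliminating a small risk entirely feels safer than halving a large one, yet the latter averts far more expected loss.

```python
# Option A eliminates a small risk entirely; Option B halves a large one.
# A feels safer ("zero risk!"), but B averts far more expected loss.
BREACH_COST = 4_000_000  # assumed cost of a single breach, in dollars

averted_a = (0.02 - 0.00) * BREACH_COST  # 2% -> 0%
averted_b = (0.20 - 0.10) * BREACH_COST  # 20% -> 10%
print(f"Option A averts ${averted_a:,.0f} in expected loss")
print(f"Option B averts ${averted_b:,.0f} in expected loss")
```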

How Can We Fix This?


Since cybersecurity is an ongoing issue a company must manage, there is always room for improvement. Many of the issues mentioned here stem from cognitive errors in evaluating information and from the decisions that follow. Awareness that these cognitive biases exist is a first step toward avoiding them. An article in the Cybersecurity Review suggests that teaching employees about cognitive biases and logical constructs, learning how to give and receive feedback, and evaluating information objectively are some ways to combat cognitive bias.

Changing the current perspective on cyber risks would also alleviate some of the issues raised in this article. An article written by a former senior advisor for public affairs at the Department of Homeland Security called for viewing cybersecurity as more than a technical issue and compared it to disease management in that it "requires ongoing care, not one-time intervention". Viewing cybersecurity as a risk management process instead of a risk mitigation process paves the way to better responses to breaches.

To Conclude…


Because technology has become ever more pervasive, an understanding of its security needs to matter in more than just technical roles. Faults in a company's security, in the form of breaches, are usually attributed to technical vulnerabilities; however, security issues can also stem from executives' decisions about the allocation of funds. Since humans make these choices, cognitive biases must be taken into account. These biases have led to a lack of counterfactual thinking as well as to a view of cybersecurity as a problem to be solved rather than a risk to be managed. People also need to be wary of new technology and its undiscovered vulnerabilities before buying into its promise, in order to counter the bandwagon effect and the pro-innovation bias. Approaching the field of cybersecurity through different lenses, including a behavioral one, is vital if the field is to grow and thrive.

Thanks for reading!
