In the wake of the Colonial Pipeline cyberattack and the ensuing Executive Order, renewed attention is being paid to the strength and resilience of our infrastructure, underscoring two truths that have long haunted the cybersecurity industry.

First, the growing roster of infrastructures we deem critical was not designed, built or integrated with cybersecurity as a priority. And second, the software and operating systems supporting those infrastructures were not designed, built or integrated with the capabilities necessary to thwart the sophistication and volume of today's cyberattacks.

Hindsight is always 20/20. Take, for example, the lye-poisoning cyberattack in Florida. Given what we know today, it's tempting to declare that we, as a cybersecurity community, should have anticipated that adversaries would attempt to poison our drinking water by breaching an industrial control system and manipulating the water's chemical composition, which is precisely what happened in that attack. There is no shortage of other examples, such as the 2016 attack on Ukraine's power grid and the 2010 Stuxnet attack.

In each of these instances, the defense circumvented was what is commonly referred to as "air gapping," meaning the physical target and its supporting systems are kept disconnected from the internet. While considerable progress and no small investment have gone into preparing for artificial intelligence (AI)- and machine learning (ML)-supported attacks and quantum-based cyberattacks, defense against cyberattacks targeting physical assets continues to lag. It's an issue that has plagued the cybersecurity community for decades.

The Asymmetry Effect

Asymmetric warfare typically refers to situations in which one side, generally an advanced nation-state, has invested heavily in sophisticated defensive and offensive capabilities, only to discover that an adversary (often a lesser-developed country, criminal enterprise or terrorist organization) can nullify that advantage with little or no comparable investment. In the cyber world, this imbalance has been notably chronicled by David E. Sanger in his book "The Perfect Weapon."

Building sufficient cyber resilience and security to defend industrial control, supervisory control and data acquisition (SCADA), and Internet of Things (IoT) systems has historically come with a high price tag and introduced intractable technological complexity. At the same time, the cost and difficulty of executing attacks on these systems have fallen as the required tools have become ever easier to acquire through conduits on the dark web. This asymmetry undercuts the advantage advanced nations would otherwise command, leveling the battlespace for terrorists, criminals and less advanced nation-states.


The potential significance of this asymmetry is exacerbated when it comes to critical infrastructure, as many nation-states we'd consider adversarial, e.g., North Korea, have yet to digitize much of their own infrastructure, leaving them largely immune to the very cyberattacks they can mount against their more advanced adversaries. Disconcertingly, the number of entities positioned to exploit this asymmetry is growing, and it has put the cybersecurity community in an extremely precarious position.

Today, a single application may harbor an enormous number of potential vulnerabilities; a threat actor needs to find only one to execute a damaging attack. Cybersecurity operators must wrap their defensive arms around an immense, ever more porous terrain in their efforts to ensure viable security. And if that terrain didn't pose a daunting enough challenge, our adversaries are also quick to exploit the one thing we can never escape: our humanity. While hardware and software are usually the end targets, attackers pursue them through what they often perceive as the weakest link along the way: us.

This human variable manifests in our reliance on heuristics. These cognitive shortcuts, which our brains take to lighten the burden of complex decision-making and help us solve problems more efficiently and learn more quickly, can give rise to cognitive biases, which in turn create vulnerabilities that cyberattackers can exploit.

More than 99% of cyberattacks rely on some form of human interaction at a critical juncture. We are forced to appreciate that cybercriminals target people as much as, if not more than, the systems underlying an infrastructure. This is why the trusted-insider conundrum is attracting renewed attention: in most instances, people represent a cheaper and more accessible conduit for achieving an attacker's objective.

The fact that we cannot shed our humanity forces us to come to grips with the stark reality that, despite our best efforts, we make mistakes. Making mistakes is a core part of the human experience; it is how we grow and learn. Unfortunately, this fundamental aspect of being human is often demonized in the world of cybersecurity. Our adversaries, meanwhile, are keenly aware of it and recognize it as an opportunity. Almost every cybercriminal appreciates that they don't need to defeat our world-class technology; they need only defeat us.

As I have intimated in earlier columns, the solution to this problem does not lie in removing the human; such a goal should never be championed or adopted. There are, however, ways we can shore up what has historically been our weakest link: 1) reducing opportunity, 2) integrating AI and other emerging technologies into security advancements and 3) continuing the never-ending task of educating users. Through these efforts, we can cast humans in a different light, as partners with technology in the battles ahead.