What can be said with certainty is that Artificial Intelligence machines are difference engines,
capable of making specific types of mathematical calculations, at far higher speeds and volumes³
than the human brain is capable of doing. Due to relatively recent developments in microchip
technology, and with the creative use of standard Boolean algebra, various types of algorithms
and multivariate statistical analysis, machines capable of quickly performing large volumes of
calculations can now be taught to perform tasks that might look like intelligence, prediction
among them. This is the essence of Machine Learning (ML) – a more accurate term for the current,
commonly applied misnomer Artificial Intelligence (AI). These are machines using programmed
tools to teach themselves to identify relationships and correlations, at speeds faster than any
human being can achieve. The machine is not really intelligent; it just might look that way.
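By way of illustration only (the sketch below is not drawn from any system cited in this paper, and the data are synthetic), a few lines of Python are enough to have a machine "teach itself" such a relationship with a standard multivariate method:

    # Illustrative sketch only: a machine "learning" a statistical
    # relationship from synthetic examples.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # 1,000 synthetic observations of two measured variables.
    X = rng.normal(size=(1000, 2))
    # The built-in relationship: the outcome depends only on variable 0.
    y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

    model = LogisticRegression().fit(X, y)

    # The fitted weights are the "learned" correlation: a large weight
    # on variable 0 and a near-zero weight on variable 1.
    print(model.coef_)        # e.g. [[ 4.1  0.05]]
    print(model.score(X, y))  # in-sample accuracy, e.g. ~0.9

The fitted weights are simply the product of many fast, repetitive calculations over examples – efficient, but not mysterious.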
ML, therefore, is no different from any other kind of analysis – it is just more efficient. Whether
from ML or traditional analysis, the results depend on the accuracy and creativity of the
researcher/programmer, using the relevant mathematics and statistics to solve the particular
problem at hand. In this way, the analysis is still subject to both statistical Type 1 and Type 2
errors, and researcher bias.
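A minimal, hypothetical sketch makes the point concrete; the base rate, risk scores and decision threshold below are invented purely for illustration:

    # Hypothetical numbers throughout: a 5% base rate of genuine threats,
    # a noisy risk score, and an arbitrary decision threshold.
    import numpy as np

    rng = np.random.default_rng(1)

    truth = rng.random(10_000) < 0.05  # genuine threats
    score = truth.astype(float) + rng.normal(scale=0.7, size=10_000)
    flagged = score > 0.5              # the decision threshold

    type_1 = np.sum(flagged & ~truth)  # false positives: innocents flagged
    type_2 = np.sum(~flagged & truth)  # false negatives: threats missed
    print(f"Type 1 errors: {type_1}, Type 2 errors: {type_2}")

Raising the threshold reduces Type 1 errors at the cost of more Type 2 errors, and vice versa; neither ML nor traditional analysis escapes that trade-off, and the choice of threshold is itself a researcher decision, and so a vector for bias.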
In other words, ML has the same potential to yield bad results and/or ever-increasing reams
of meaningless data as traditional analysis. Recent examples include issues of accuracy and
racism relating to facial recognition software, increasingly used by police forces (BBC, 2019);
sexism relating to the human-resource ML deployed at Amazon (The Guardian, 2018); and the
general trend towards vast volumes of meaningless data that has become the bane of modern
organisations (Baker, 2014). Particularly worrisome is the potential for a machine to appear to
have calculated a meaningful result when, in reality, that result is an error that remains
unrecognised by its human overlords. These issues are developed further towards the end of the paper.
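One well-known way this can happen is the so-called accuracy paradox. The sketch below (an invented, deliberately extreme example) shows a "model" that detects nothing at all yet reports a headline accuracy of about 99%:

    # Deliberately extreme, invented example: 1% genuine positives and a
    # "model" that never flags anything at all.
    import numpy as np

    rng = np.random.default_rng(2)
    truth = rng.random(10_000) < 0.01           # rare genuine positives
    predictions = np.zeros(10_000, dtype=bool)  # flags nothing, ever

    print(f"Accuracy: {np.mean(predictions == truth):.1%}")    # ~99.0%
    print(f"Threats detected: {np.sum(predictions & truth)}")  # 0

The headline figure looks meaningful; only a human who asks what it conceals will recognise the result as an error.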
In the nuclear industry, how can we teach a machine to understand something as irrational and
unpredictable as human behaviour, in order to identify and predict potential threats? When
faced with such a problem, it is often advantageous to return to first-principles thinking. If
we wanted to identify threats from humans within the nuclear industry, how would we do it,
using legacy tools?
3 Monitoring Stress with a Wrist Device
As a general rule of thumb, the psychological, sociological and philosophical literatures all
separate the basis of human behaviour into biological, cognitive and, sometimes, conative
components (e.g. Engel et al., 1993; Franken, 1988; Kinnear and Taylor, 1991). The precise
definitions and descriptions of these components (and the theories contained within them)
are beyond the scope of this paper. What is relevant is that, to date, research
within the security industry has focused mostly, it seems, on biological theories, because these
are the easiest to reliably measure. Arousal, in particular, has yielded interesting results, and
has been the basis of, for example, lie-detector testing since the mid-20th century.
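To ground the idea, a rolling z-score over a wrist device's heart-rate stream is perhaps the simplest arousal detector one could build. The sketch below is a hypothetical illustration only: the sampling, the window, the threshold, and indeed the assumption that heart rate proxies arousal are all invented for the example, and do not describe any real product:

    # Hedged sketch: flag spikes in a wrist device's heart-rate stream
    # with a simple rolling z-score. All parameters are assumptions.
    import numpy as np

    def arousal_flags(heart_rate: np.ndarray, window: int = 60,
                      threshold: float = 3.0) -> np.ndarray:
        """Return True where heart rate deviates sharply from its recent
        baseline – a crude proxy for physiological arousal."""
        flags = np.zeros(len(heart_rate), dtype=bool)
        for i in range(window, len(heart_rate)):
            baseline = heart_rate[i - window:i]
            z = (heart_rate[i] - baseline.mean()) / (baseline.std() + 1e-9)
            flags[i] = abs(z) > threshold
        return flags

    # Synthetic example: a resting rate of ~70 bpm with one induced spike.
    rng = np.random.default_rng(3)
    hr = rng.normal(70, 2, size=600)
    hr[300:315] += 25  # a sudden arousal episode
    print(np.nonzero(arousal_flags(hr))[0])  # indices around 300-314

Even this trivial detector illustrates the attraction of biological measures: the signal is numeric, continuous and cheap to threshold, in a way that cognitive and conative states are not.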
3 NB: The microchip is at its technological limit. Other, more efficient developments in the pipeline include
biomorphic circuitry and, more distantly, quantum computing. Such developments might negate the necessity
for Boolean algebra, and even for statistical analysis as we currently understand it. They will, undoubtedly,
improve the speed and efficiency of calculations – looking ever more like intelligence – and make better
predictions.