As is common with the consumer purchase decision-making process, the traveller will evaluate their decision after consumption, and this evaluation then provides further information that the individual can draw on in future travel decisions.
Although a marketing example, the principles of decision-making – the integration of social, biological and psychological cues – remain the same when establishing threats within a work environment, such as nuclear power stations. From this example, one can see that, with considerable care and attention, it is relatively straightforward to programme a machine learning (ML) tool, using either an established psychometric model of stress (and/or of deviance, for that matter) or a new, purpose-built stress model that has first been created and tested, and then to combine these cognitive results with the biological data. With ample time and budget, such an ML tool would undoubtedly yield a significant improvement on the work of Gjoreski et al. The problem with this approach, however, was alluded to at the very beginning of this paper: it rests on the current assumptions of the psychology and sociology literature. The weakness lies not in the theory (although the theory will not be perfect), but in the very fabric of the philosophical approach underpinning the scientific process itself.
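For illustration only, a minimal sketch of the kind of tool described above – combining composite scores from a psychometric stress instrument with physiological measurements – might look as follows. The feature names, the synthetic data and the model choice are assumptions made purely for illustration; this is not a description of Gjoreski et al.'s system or of any published implementation.

```python
# Minimal illustrative sketch: composite scores from a (hypothetical) 10-item
# psychometric stress questionnaire are combined with simplified physiological
# features and fed to a classifier. All values are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 400  # number of synthetic observation windows

# "Cognitive" input: summed Likert responses from the psychometric scale.
questionnaire = rng.integers(1, 6, size=(n, 10))        # 10 items, scored 1-5
scale_score = questionnaire.sum(axis=1, keepdims=True)  # composite stress score

# "Biological" input: simplified physiological measurements per window.
heart_rate = rng.normal(75, 12, size=(n, 1))       # beats per minute
skin_conductance = rng.normal(5, 2, size=(n, 1))   # microsiemens

X = np.hstack([scale_score, heart_rate, skin_conductance])

# Toy ground truth: a window counts as "stressed" when both the psychometric
# score and the physiological arousal are elevated (purely for illustration).
y = ((scale_score[:, 0] > 32) & (heart_rate[:, 0] > 78)).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression())
print("Cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
```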
4.1 The Inherent Weakness of Quantitative Methods – Methodological Issues When
Researching Human Subjects
There are some very specific academic considerations when researching human subjects rather than inanimate objects. Research methods, especially the quantitative methods found in computer science, involve re-modelling a theory into a smaller, manageable component (an analogy), then devising variables with corresponding questions and limiting the answers to a scale. This allows the scientist/programmer to fit the variables, questions and answers to a scientific theory, in a methodological process termed functional unity (Fletcher, 1974). Statistical techniques are then used to measure how much these variables conform to, or deviate from, the given theory.
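As a hedged illustration of this measurement step, the following sketch treats Likert-scale answers as variables and uses a statistic (here Cronbach's alpha, together with item-total correlations) to gauge how far the items conform to, or deviate from, a hypothesised single-construct theory. The data are synthetic and the item count, scale and statistics are assumptions made for illustration, not drawn from the works cited above.

```python
# Illustrative sketch: do synthetic Likert-scale items behave as if they
# measure one underlying construct, as the hypothesised theory assumes?
import numpy as np

rng = np.random.default_rng(1)
n_respondents, n_items = 200, 8

# Simulate respondents whose answers are driven by one latent trait plus noise,
# i.e. data that roughly obey the theoretical model being tested.
latent = rng.normal(0, 1, size=(n_respondents, 1))
items = np.clip(np.round(3 + latent + rng.normal(0, 0.8, size=(n_respondents, n_items))), 1, 5)

def cronbach_alpha(data: np.ndarray) -> float:
    """Internal-consistency estimate for a set of scale items."""
    k = data.shape[1]
    item_variances = data.var(axis=0, ddof=1)
    total_variance = data.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

total = items.sum(axis=1)
item_total_r = [np.corrcoef(items[:, j], total - items[:, j])[0, 1] for j in range(n_items)]

print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
print("Item-total correlations:", np.round(item_total_r, 2))
```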
The aim of the quantitative researcher, therefore, is to gauge the truth of part of this analogy,
rather than to examine the whole issue. Once this is achieved, the research results are published
and re-tested by others in a process known as falsification (Hammersley, 1989). In this process,
the theory is only rejected once it has been falsified a number of times and in different ways.
This process represents a fundamental weakness in research methodologies relating to human
subjects (Fletcher, 1974), and is an impediment to any research on human behaviour because
the method cannot reflect the true character of the social world (Hammersley, 1989). Until
the development of ML, however, it was the only methodology that could be employed in
computer science.
The alternative, in the social sciences, is normally to engage in some form of qualitative
research methodology, such as participant observation or ethnography (Hammersley, 1989),
but this, too, has its weaknesses – usually relating to researcher bias and the difficulty of
verifying the accuracy of the results. Blumer (1989) and others argue, however, that qualitative
methods, employing a process of symbolic interaction and recognising the fundamental
idiosyncrasies of human interaction, yield results on human subjects that aid and enhance
understanding, rather than simply identifying trends.
The meticulous and creative use of various types of algorithm, combined with multivariate
statistical analysis, however, does yield the very real possibility of being able to replicate the