Information systems with social autonomy

The first computer-based IS were essentially data processing systems designed to assist organisations with the processing and storage of the vast amounts of data generated in the course of contemporary business activity (Somogyi and Galliers, 1987). The extension of the scope of IS implementations to encompass the installation of systems capable of acting as autonomous organisational agents has been so gradual as to be almost imperceptible. It has, however, been the case at least since the introduction of ATMs that some IS directly substitute for, rather than simply support, humans in selected interactive organisational roles. The extent to which this type of substitution has occurred is probably much greater than is obvious on the surface: in many organisations it is the front-of-house staff who are supporting the systems that actually take the decisions, rather than the reverse. As might be expected during the early stages of what is in effect a quiet revolution, many situations in which systems and humans share decision-making powers can be somewhat ambiguous. This ambiguity can have awkward consequences, as in the case of Australia’s Centrelink, an agency responsible for managing unemployment matters, where, during a recent furore over errors, some were blamed on systems taking decisions and others on systems not taking decisions (McKinnon, 2004).

An autonomous IS is postulated as showing three behavioural characteristics that have traditionally been associated with the possession of intelligence and the capacity to use language effectively: it can understand meaningful input, it can respond meaningfully to that input, and it can take socially significant decisions that are responsive to the meanings developed in the interaction. The fact that one party may believe that the other party does not in any sense ‘understand’ what is going on does not seem to invalidate this perspective, since the interaction occurs despite it. The effect is the same as if the transaction had involved two people, a meaningful conversation had taken place, and a mutually satisfactory outcome had been achieved.
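To fix ideas, the following is a deliberately minimal sketch (in Python, with entirely hypothetical names, rules and thresholds) of a toy ATM-like agent that exhibits the three characteristics in miniature: it parses a request, responds meaningfully to it, and takes a decision that is binding on both parties.

```python
# Purely illustrative sketch: a toy ATM-like agent exhibiting, in miniature,
# the three behavioural characteristics discussed above. All names, rules and
# thresholds are hypothetical, chosen only to make the example runnable.

from dataclasses import dataclass


@dataclass
class Account:
    holder: str
    balance: float


def handle_request(account: Account, request: str) -> str:
    """Accept a structured request, respond to it, and take a decision
    that is binding on both parties (debiting the account)."""
    verb, _, amount_text = request.partition(" ")
    if verb == "BALANCE":
        # (1) the input is 'understood' only in the sense that it is parsed;
        # (2) the response is nonetheless meaningful to the customer.
        return f"Balance for {account.holder}: {account.balance:.2f}"
    if verb == "WITHDRAW":
        amount = float(amount_text)
        # (3) a socially significant decision: value actually changes hands.
        if amount <= account.balance:
            account.balance -= amount
            return f"Dispensed {amount:.2f}; new balance {account.balance:.2f}"
        return "Declined: insufficient funds"
    return "Request not recognised"


if __name__ == "__main__":
    acct = Account(holder="A. Customer", balance=120.0)
    print(handle_request(acct, "BALANCE"))       # meaningful response
    print(handle_request(acct, "WITHDRAW 50"))   # binding decision taken
    print(handle_request(acct, "WITHDRAW 500"))  # decision to decline
```

Trivial though it is, the sketch makes the point that follows: once the exchange is complete, it is far from obvious who or what within the organisation can be said to have understood it.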

The issue of meaning needs to be explicitly addressed, if only because it is difficult to see who or what within the organisation ‘knows’ what has happened once the transaction has been completed. To argue that the organisation itself is what ‘understands’ in effect simply shifts the problem up a level. Yet the conclusion that the actions taken have been socially meaningful seems inescapable: value has been exchanged as a consequence of entering into the transaction, and the outcomes are fully binding on both parties. Clearly the original system designers would once have understood the process, and company accountants and others will certainly understand the nature of the relevant transactions in general terms, but this is not the same as having direct knowledge or an unmediated understanding of actual events. In a contemporary organisation it is in any case quite possible that the original programmers will have left, or will have forgotten the details of the system. In a downsizing world there is no guarantee that anybody still working in the company will know any more about an autonomous system’s activities than they would about those of any other colleague carrying out responsible work. The possibility that an autonomous system will carry on doing business on behalf of an organisation long after the last person to leave has turned off the lights is a real one.

It is notable that it is precisely the issue of understanding, or rather of what or who understands, that has been at the heart of many an esoteric, acerbic debate in the cognitive science and artificial intelligence (AI) arenas (Rey, 1997). In his now famous ‘Chinese Room’ thought experiment, the philosopher John Searle postulates a system in which a person who speaks no Chinese follows a set of formal rules for manipulating Chinese symbols, and thereby produces replies indistinguishable from those of someone who does understand the language; Searle asks who or what it is that understands Chinese (Searle, 1980, p. 422). No definitive answer (i.e. one satisfactory to both proponents and critics of AI) has been forthcoming (Rey, 1997, p. 271). Alan Turing fell back on a purely behaviourist perspective when proposing the ‘Turing test’ (Turing, 1950), taking the view that if a system is able to fool its interlocutors about whether or not it is a person, then it should be taken as being able to think; that approach, however, has caused more debate than it has resolved (Rey, 1997, p. 153).
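For illustration only, the following sketch (again with entirely hypothetical names and rules, and no pretence of fidelity to Searle’s original scenario) captures the rule-following character of the Chinese Room: a lookup table maps incoming symbol strings to outgoing ones, so that apparently meaningful replies are produced by a process in which no component understands the exchange.

```python
# A deliberately trivial, hypothetical sketch of Searle-style rule following.
# The 'room' maps incoming symbol strings to outgoing ones via a lookup table;
# nothing in the program 'knows' Chinese, yet the replies look appropriate.

RULE_BOOK = {
    "你好": "你好，有什么可以帮你？",      # greeting -> greeting plus offer of help
    "账户余额？": "请输入您的账号。",      # balance query -> request for account number
}


def chinese_room(symbols: str) -> str:
    """Return the reply prescribed by the rule book, or a stock response."""
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."


if __name__ == "__main__":
    for message in ["你好", "账户余额？", "天气怎么样？"]:
        print(message, "->", chinese_room(message))
```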

Autonomous IS are small fry in comparison with the kind of complex and often threatening entity that is usually postulated when AI is discussed (Crevier, 1993). Yet the issue seems to be the same in principle, a view that is strengthened by the clear possibility that the interactional capacities of autonomous systems will continue to increase. With this in mind, the theoretical approach followed here is to adhere to a precedent from cognitive science, Dennett’s ‘intentional stance’, and for analytical purposes to ascribe the capacity to understand to the system: ‘[this] does not say that intentional systems really have beliefs and desires, but that one can explain and predict their behaviour by ascribing beliefs and desires to them … the decision to adopt [this] strategy is pragmatic and not intrinsically right or wrong’ (Dennett, 1978, p. 7, emphases in original). The surrounding context makes it quite clear that ATMs were not the type of entity that Dennett had in mind when making his argument, but the logic seems equally applicable.