Understanding information system failures from the complexity perspective.

Author: Mukherjee, Indranil
 

INTRODUCTION

Human beings have long tried to ease the uncertainty so widely prevalent in their lives by predicting the future. This effort has been bolstered by the epoch-making progress in science and technology over the last century. In fact, it is the way in which hazards are handled that distinguishes modern 'man' from 'his' predecessors, as Peter Bernstein (1) argues in his much-acclaimed work Against the Gods. For modern man, hazards have to be controlled and the consequent problems overcome by himself, through the systematic application of science, technology and, most of all, mathematics. God, in the modern era, has gradually been replaced by equations.

The domain of computer-based information systems is one where this systematic, engineering rationale of regulating and managing risks has seen its greatest manifestation. Paradoxical as it may sound, however, the risks associated with computer-based information systems, and their failure rates, have proved to be of significant concern in this modern technological era.

Major disasters have often hit the world due to the failures of the associated information systems, not to mention the large failure rates of small to medium-sized software projects. Some prominent examples of such failures are: (a) the London Ambulance Service Computer Aided Dispatch system in 1992, which cost the city of London approximately 2.3 million dollars and led to the loss of some twenty-odd lives (2); (b) the Denver airport baggage-handling system in 1995, which ran nearly 24 months behind schedule; (c) the eBay site failure on June 10-11, 1999; (d) the hardware flaw undermining the accuracy of Intel Pentium chips in 1994, which eventually forced Intel to take a $475 million write-off to account for replacing defective chips (3). A very large number of projects are delivered with missing functionality (promised for delivery in later versions). Between 30 and 40% of all software projects are "runaway" projects that far exceed original schedule and budget predictions and fail to perform as originally specified (4).

It is no wonder, then, that a substantial body of Information Systems research over the past three decades has remained focused primarily on exploring the possible causes of IS failures, ranging from the failure to fulfil users' hopes and designers' promises (5) to operational problems and other unanticipated difficulties. This research ranges from purely technological explanations of success and failure to rich and complex analyses of human organizational systems. Friedman and Cornford (6), in their seminal account of the history of the development of computer-based information systems, perceive failures as playing a pivotal part in shaping the dynamics of information and communication technologies (ICTs). Characterized as persistent, pervasive and pernicious (7), IS failures have continued to attract the attention of academic scholars and practitioners who have tried to analyze and understand the problem from different perspectives. The multidimensional character of the subject area is reflected in the range of issues it raises, which the different approaches continue to address: (a) conceptual issues, concerned with the very nature of the subject; (b) empirical issues, attempting to establish causal relationships between different factors and probing into their effects; and (c) normative issues, developing tools and techniques for building and maintaining successful IS applications (8).

There is probably no unified framework for understanding failures of information systems. Whatever research, conceptual, empirical or normative, has been carried out in the past and is still being undertaken can broadly be seen as representative of different schools of thought, with fundamentally different epistemologies inherent to them. In this paper, an attempt is made to apply the complexity framework to Information System failures. The paper is organized as follows. The theoretical background necessary to discuss information system failures is first developed. Different research frameworks currently being utilized in the study of information system failures are introduced next. Fundamental elements of complexity are then introduced and discussed. The next section takes a first look at information systems through the lens of complexity, emphasizing the structural and functional similarities between information systems on the one hand and complex systems on the other. The last section probes information system failures from the complexity perspective, spells out useful lessons for the system designer, discusses the issue of recurrent failures, points out the inadequacy of systems theory in explaining IS failures, and concludes by analyzing the London Ambulance Service case.

THEORETICAL BACKGROUND

IS failure is not a well-defined concept; it covers different types of experiences and outcomes. Three types of IS failure were initially characterized by Lyytinen and Hirschheim (9): process, correspondence and interaction failure. Process failure refers to outcomes of the system development process such as project abandonment, schedule slippages, budget blowouts or financial crises during the implementation stage. In correspondence failure, IS projects are completed but fail to meet specified objectives such as monetary savings, improved efficiency of resource allocation or greater productivity. Interaction failure occurs during or after project completion and refers to the failure to use an IS owing to users' non-acceptance, stemming from low levels of user involvement or user satisfaction. Each of these three concepts was criticized as being limited, in that each takes no account of the forms of failure defined by the other two. Lyytinen and Hirschheim thus proposed a fourth concept, expectation failure: the inability of an information system to meet a specific stakeholder group's expectations. Their work stressed that IS failures occur because of a gap between an existing situation and a desired situation for a particular stakeholder group in an organization.
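To make the taxonomy concrete, the minimal sketch below models the four failure notions as a simple data structure. It is an illustration added here, not something drawn from Lyytinen and Hirschheim's own work, and all names (FailureType, FailureAssessment, stakeholder_group and so on) are assumptions introduced purely for exposition.

```python
from dataclasses import dataclass
from enum import Enum, auto


class FailureType(Enum):
    """The four failure notions of Lyytinen and Hirschheim (9)."""
    PROCESS = auto()         # development process breaks down (abandonment, overruns)
    CORRESPONDENCE = auto()  # delivered system misses its specified objectives
    INTERACTION = auto()     # delivered system is rejected or left unused by users
    EXPECTATION = auto()     # system falls short of a stakeholder group's expectations


@dataclass
class FailureAssessment:
    """One stakeholder group's reading of a project outcome (hypothetical fields)."""
    stakeholder_group: str
    desired_situation: str
    existing_situation: str

    def is_expectation_failure(self) -> bool:
        # Expectation failure: a gap between the desired and the existing
        # situation for this particular stakeholder group.
        return self.desired_situation != self.existing_situation


if __name__ == "__main__":
    assessment = FailureAssessment(
        stakeholder_group="ambulance dispatchers",
        desired_situation="calls allocated within agreed response times",
        existing_situation="allocation delays and lost calls",
    )
    print(FailureType.EXPECTATION.name, assessment.is_expectation_failure())  # -> EXPECTATION True
```

The point of the sketch is simply that expectation failure is defined relative to a stakeholder group, whereas the first three notions each describe the system or its development process in isolation.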

Given the rather heterogeneous background of the area of IS failures, research into such failures has been diverse in nature. This variety becomes clearly manifest as the focal point of the analysis shifts from technological understanding to a systems-level approach that considers individuals, stakeholders and organizations as the entities of interest and probes into organizational culture, unique situational contexts and the power conflicts germane to each situation. Each of these outlooks stems from a different school of thought, and these schools are at the origin of the different epistemologies used in research into IS failures and, in a more general context, in any kind of IS research.

Currently, research into IS failures is concentrating on the different types of failure outlined above, different varieties of systems, different sectors and a number of different organizational change initiatives. The chief problem in formulating feasible theoretical models of IS failures is the high degree of complexity arising out of the intricate combination of technical, human and organizational characteristics of any information system. It is extremely difficult, or even impossible, to explain, let alone predict, the behaviour of such systems. Another related problem is that previous studies of IS failure generally emphasized a single notion of failure common to all failure contexts. This single-failure concept was elucidated through the perspectives of organizational politics, organizational culture, institutional theory and organizational learning (10). However, failure is not truly a single phenomenon but rather a diverse set of phenomena, including certain failures that are recurrent in nature. New kinds of failure, such as those related to BPR and BPO, have surfaced in recent years and have thereby led to further complications (11), (12). Researchers are trying to grapple with the problem of containing IS failures by looking at both risk containment and risk control strategies, and also by giving up the holistic approach to risk management in favour of more specialized techniques.

Researchers are exploring different methodologies in order to analyze and understand IS failures more effectively. In general, a research methodology may be regarded as a philosophical framework or "point of view" within which a set of methods can be systematically applied. There are three broad philosophical perspectives in relation to qualitative research in the domain of Information Systems (13), viz. positivist, interpretive and critical research. The stance taken by positivist researchers in Information Systems is based upon the assumption that reality is objectively given and can be described by reference to measurable properties that are independent of the researcher. Interpretive IS researchers, on the other hand, consider that reality can only be accessed through social constructions such as language, consciousness and shared meanings (14). Critical research relies on the assumption that social reality is historically constituted and that people are constrained in their actions by different forms of cultural and political domination (15). Efforts are being made to extend the existing research approaches in the area of information systems. Brey (16) proposes that technological change, which is at the heart of all information system projects, can be most easily understood in the context of the technological controversies, disagreements and difficulties with which the actors involved in the change are concerned. Brey adopts an approach based on a form of social constructivism in which the researcher does not need to evaluate claims made by different groups about any "real" properties of the technology being studied. Brey classifies social constructivist approaches...
