Steering the Reverberations of Technology Change on Fields of Practice:

Laws that Govern Cognitive Work

 


 

David D. Woods (woods.2@osu.edu)

Institute for Ergonomics

The Ohio State University

1971 Neil Ave

Columbus, OH 43210 USA


"Now all scientific prediction consists in discovering in the data of the distant past and of the immediate past (which we incorrectly call the present), laws or formulae which apply also to the future, so that if we act in accordance with those laws our behavior will be appropriate to the future when it becomes the present.”  Craik, 1947, p. 59

Abstract

Research on cognitive work in context has abstracted a set of common patterns about cognitive work and about the relationship of people and computers. I offer four families of Laws that Govern Cognitive Work plus Norbert’s Contrast as a synthesis of these findings to guide future development of human-computer cooperation. These Laws are one prong of a general strategy to avoid repeats of past "automation surprises".

1. Patterns of Reverberations

Observational studies of cognitive work in context have built a body of work that describes how technological and organizational change transforms work in systems. Points of technology change push cycles of transformation and adaptation (e.g., Carroll’s task-artifact cycle; Carroll and Rosson, 1992; Winograd and Flores, 1986; Flores, Graves, Hartfield, and Winograd, 1988). A review of the impact of new technology in one operational world effectively summarizes the general pattern (Cordesman and Wagner, 1996, p. 25):

Much of the equipment deployed ... was designed to ease the burden on the operator, reduce fatigue, and simplify the tasks involved in operations.  Instead, these advances were used to demand more from the operator. Almost without exception, technology did not meet the goal of unencumbering the personnel operating the equipment
... systems often required exceptional human expertise, commitment, and endurance.
... there is a natural synergy between tactics, technology, and human factors ... effective leaders will exploit every new advance to the limit.  As a result, virtually every advance in ergonomics was exploited to ask personnel to do more, do it faster and do it in more complex ways.
... one very real lesson is that new tactics and technology simply result in altering the pattern of human stress to achieve a new intensity and tempo of operations. [edited to rephrase domain referents generically]

This statement could have come from studies of the impact of technological and organizational change in health care or air traffic management or many other areas undergoing change today (see Billings, 1997, and Sarter and Amalberti, 2000, for the case of cockpit automation). Overall, the studies show that when “black box” new technology (and accompanying organizational change) hits an ongoing field of practice, the pattern of reverberation includes (Woods and Dekker, 2000):

New capabilities, which increase demands and create new complexities such as increased coupling across parts of the system and higher tempo of operations,

New complexities when technological possibilities are used clumsily,

Adaptations by practitioners to exploit capabilities or work around complexities because they are responsible for meeting operational goals,

The complexities and adaptations are surprising, unintended side effects of the design intent,

Failures occasionally break through these adaptations because of the inherent demands or because the adaptations are incomplete, poor, or brittle,

The adaptations by practitioners hide the complexities from designers and from reviewers after the fact, who judge failures to be due to human error.

The pattern illustrates a more general law of adaptive systems that has been noted by many researchers (e.g., Rasmussen, 1986; Hirschhorn, 1997):

The law of stretched systems:
every system is stretched to operate at its capacity; as soon as there is some improvement, for example in the form of new technology, it will be exploited to achieve a new intensity and tempo of activity.

Under pressure from performance and efficiency demands, advances are consumed to ask operational personnel “to do more, do it faster or do it in more complex ways” (see NASA’s Mars Climate Orbiter Mishap Investigation Board report, 2000, for an example).
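The dynamic behind the law of stretched systems can be sketched, purely as an illustration and not as part of the original argument, with a toy model in which operational demand drifts toward whatever capacity is available, so that each capacity gain from new technology is absorbed rather than retained as slack. The simulate function, its parameters, and the numbers below are hypothetical choices made only for this sketch.

# Toy sketch (illustrative assumption, not from the paper): demand adapts
# toward available capacity, so capacity added by "new technology" is
# consumed to reach a new intensity and tempo of activity.

def simulate(steps=40, adaptation_rate=0.3, tech_boosts=None):
    """Return (step, capacity, demand, slack) tuples for a stretched system."""
    if tech_boosts is None:
        tech_boosts = {10: 5.0, 25: 5.0}   # hypothetical points of technology change
    capacity = 10.0                        # initial operational capacity (arbitrary units)
    demand = 9.0                           # initial operational demand
    history = []
    for t in range(steps):
        if t in tech_boosts:
            capacity += tech_boosts[t]     # a technology change adds capacity
        # Demand drifts toward full capacity, so slack shrinks back toward zero.
        demand += adaptation_rate * (capacity - demand)
        history.append((t, capacity, demand, capacity - demand))
    return history

if __name__ == "__main__":
    for t, cap, dem, slack in simulate():
        print(f"t={t:2d}  capacity={cap:5.1f}  demand={dem:5.1f}  slack={slack:4.1f}")

Running the sketch shows slack collapsing back toward zero after each capacity boost, which is the qualitative signature the law describes: improvements are exploited to achieve a new intensity and tempo of activity rather than to reduce load.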

2. Watching People Engineer Cognitive Work: Claims and Myths

People, as advocates for investment in and adoption of new technology, make claims about how these changes will affect cognitive work and the processes and products of practice. Claims about the future of practice, if objects-to-be-realized are deployed, represent hypotheses about the dynamics of people, technology and work (Woods, 1998). Observations at points of technology change find that these hypotheses can be, and often are, quite wrong, a kind of second-order automation surprise (Sarter, Woods, and Billings, 1997). Envisioning the future of operations, given the dynamic and adaptive nature of the process, is quite fragile.

What patterns emerge from observations of people engineering cognitive work, or from people’s claims about how various in-progress advances will enable the re-engineering of cognitive work? Remarkably consistently, we observe over-simplifications (Feltovich et al., 1997) claiming that the introduction of new technology and systems into a field of practice substitutes one agent for another, essentially, computer capabilities as a substitute for erratic human performance. Yes, the claims opposing human and machine come cloaked in different and often quite sophisticated forms, yet underneath, inter-substitutability or Fitts’ List remains the core: people and machines are or can be equivalent, so that new technology (with the right capabilities) can be introduced as a simple substitution of machines for people, preserving the system while improving the results. This oversimplification fallacy is so persistent that it is best understood as a cultural myth, the Substitution Myth (Woods and Tinapple, 1999).

The myth creates difficulties because it is empirically wrong: adding or expanding the machine’s role changes the cooperative architecture and changes human roles, introducing capabilities and complexities that become part of multiple adaptive cycles as human actors and stakeholders jostle in pursuit of their goals. Moreover, the myth is unproductive: it locks us into cumbersome trial-and-error processes of development, blocks understanding of the demands of cognitive work in context and of how people in various roles and groups adapt to those demands, and channels energy away from innovating new uses of the continually expanding power of machine information processing.

How can we better calibrate and ground claims about the future of cognitive work to avoid past cycles where change exacerbated clumsy use of technology and limited adaptations by people responsible for meeting system goals? One possible tactic is to develop, from the empirical base, generalizations or ‘laws’ that govern cognitive work by any cognitive agent or any set of cognitive agents. Such Laws could serve as a guide to enhance the use of information processing technology in a practice-centered R&D process (Woods and Christoffersen, in press).

3. Predicting and Steering Change in Cognitive Work

Based on patterns about cognitive work and about the relationship of people and computers abstracted from research on cognitive work in context, I offer four families of Laws that Govern Cognitive Work as a synthesis to guide future development of human-computer cooperation (the approach is a deliberate play on Conant’s 1976 laws of information which govern systems). I also offer Norbert’s Contrast (Wiener, 1950) as an alternative conception of the relationship between people and computers. The current draft set of Laws is available from the author.

These laws are built on a foundation of agent-environment mutuality. Agents' activities are understandable only in relation to the properties of the environment within which they function, and an environment is understood in terms of what it demands and affords to potential actors in that world. Each is mutually adapted to the other.

The Laws fall into four families plus Norbert's Contrast. First, Laws of Adaptation build on the original insights of cybernetics and control (Ashby, 1957; Conant, 1976). The driving force here is how cognitive systems adapt to the potential for surprise in the worlds of work, i.e., the foundational slogan for Cognitive Systems Engineering from Jens Rasmussen: adaptations directed at coping with complexity and surprise (Rasmussen and Lind, 1981; Woods, 1988; Woods and Christoffersen, in press).

Laws of Models are concerned with how we understand and represent the processes we control and the agents we interact with. The driving force here is the mystery of how expertise is tuned to the future, while, paradoxically, the data available is about the past.

Laws of Collaboration address how cognitive work is distributed over multiple agents and artifacts. The driving force here is the fact that cognitive work always occurs in the context of multiple parties and interests, as moments of private cognition punctuate flows of interaction and coordination. The idea that cognition is fundamentally social and interactive, not private, radically shifts the basis for analyzing and designing cognitive work and for reconsidering the relationship between people and computers.

Quite surprisingly, Laws of Responsibility are the fourth family, driving home the point that in cognition at work, whatever the artifacts and however autonomous they are under some conditions, people create, operate, and modify these artifacts in human systems for human purposes.

Fifth, based on these Laws, Norbert's Contrast goes behind our fascination with increasing the power of the computer to remind us of the limits of literal-minded agents and the unique competences of human cognition in handling the tradeoffs and dilemmas of a changing, finite-resource, uncertain world (Wiener, 1950).

Norbert’s Contrast

Artificial agents are literal-minded and disconnected from the world, while human agents are context-sensitive and have a stake in outcomes.

The key is that people and computers start from these different, opposite points and tend to fall back, or default, to those points without the continued investment of effort and energy from outside the system.

Each of these families of Laws and Norbert's Contrast is quite surprising, even shocking, given conventional beliefs about cognition, organizations, and computers. The Laws allow us to see past these conventional beliefs to reconsider the relationships across people, computers, the goals of various stakeholders, and the complexities and variations in the worlds of human activity as we envision and create the future of operations.

Laws that Govern Cognitive Work have an odd quality: they appear optional. Designers of systems that perform cognitive work do not have to follow them. In fact, we notice these laws through the consequences that have followed repeatedly when design breaks them in varying episodes of technology change. The statements are law-like in that they capture regularities of control and adaptation in cognitive work, and they determine the dynamic response, resilience, stability or instability of the distributed cognitive system in question. While developers may find following the laws optional, what is not optional are the consequences that accrue predictably from breaking these laws, consequences that block achieving the performance goals that developers and practitioners, technologists and stakeholders set.

Respect for the Laws is essential, for in the final analysis, in design we either hobble or support people's natural ability to express forms of expertise.

Acknowledgments

This piece is a companion and follow-up to a previous address to the Cognitive Science Society in 1994, Observations from Studying Cognitive Systems in Context.

 

Many thanks to the various colleagues who in one way or another helped identify how generalizations like these operate in cognitive work.

Prepared in part through participation in the Advanced Decision Architectures Collaborative Technology Alliance sponsored by the Army Research Laboratory under Cooperative Agreement DAAD 19-01-2-0009.

References

Ashby, W. R. (1957). An Introduction to Cybernetics. London: Chapman and Hall.

Billings, C. E. (1997). Aviation Automation: The Search For A Human-Centered Approach. Hillsdale, N.J.: Lawrence Erlbaum Associates.

Carroll, J.M. & Rosson, M. B.  (1992).  Getting around the task-artifact cycle: How to make claims and design by scenario.  ACM Transactions on Information Systems. 10, 181-212.

Conant, R. C. (1976). Laws of information which govern systems. IEEE Transactions on Systems, Man, and Cybernetics, SMC-6, 240-255.

Cordesman, A. H. & Wagner, A. R. (1996). The Lessons of Modern War, Vol. 4: The Gulf War. Boulder, CO: Westview Press.

Craik, K. J. W. (1947). Theory of the operator in control systems: I. The operator as an engineering system. British Journal of Psychology, 38, 56-61.

Feltovich, P. J., Spiro, R. J., & Coulson, R. L. (1997). Issues of expert flexibility in contexts characterized by complexity and change. In P. J. Feltovich, K. M. Ford, & R. R. Hoffman (Eds.), Expertise in context: Human and machine. Menlo Park, CA: AAAI/MIT Press.

Flores, F., Graves, M., Hartfield, B. & Winograd, T. (1988). Computer systems and the design of organizational interaction. ACM Transactions on Office Information Systems, 6, 153-172.

Hirschhorn, L. (1997).  Quoted in Cook, R. I., Woods, D. D. and Miller, C.  (1998). A Tale of Two Stories:  Contrasting Views on Patient Safety.  National Patient Safety Foundation, Chicago IL, April 1998 (available at  www.npsf.org).

NASA, Mars Climate Orbiter Mishap Investigation Board. (2000).  Report on Project Management at NASA, March 13, 2000.

Rasmussen, J. (1986). Information processing and human-machine interaction:  An approach to cognitive engineering. Amsterdam: North-Holland.

Rasmussen, J. & Lind M. (1981). Coping with complexity (Risø-M-2293). Risø National Laboratory, Roskilde, Denmark: Electronics Department.

Roesler, A., Feil, M. & Woods, D. D. (2002). Design is Telling (Sharing) Stories about the Future. Draft Working MediaPaper at http://csel.eng.ohio-state.edu/animock

Sarter, N. & Amalberti, R. (Eds.) (2000). Cognitive Engineering in the Aviation Domain. Mahwah, NJ: Erlbaum.

Sarter, N., Woods, D. D. & Billings, C. E. (1997). Automation Surprises. In G. Salvendy (Ed.), Handbook of Human Factors/Ergonomics (2nd ed.). New York: Wiley.

Wiener, N. (1950). The Human Use of Human Beings: Cybernetics and Society. New York: Doubleday.

Winograd, T. & Flores, F. (1986). Understanding Computers and Cognition. Norwood, NJ: Ablex.

Woods, D.D. (1988). Coping with complexity: The psychology of human behavior in complex systems. In L.P. Goodstein, H.B. Andersen, and S.E. Olsen, editors, Mental Models, Tasks and Errors, Taylor & Francis, London, (p. 128-148).

Woods, D. D. (1998). Designs are hypotheses about how artifacts shape cognition and collaboration. Ergonomics, 41, 168-173.

Woods, D. D. & Christoffersen, K. (in press). Balancing Practice-Centered Research and Design.  In M. McNeese and M. A. Vidulich (editors), Cognitive Systems Engineering in Military Aviation Domains.  Wright-Patterson AFB, OH: Human Systems Information Analysis Center.

Woods, D. D. & Dekker, S. W. A. (2000). Anticipating the Effects of Technological Change: A New Era of Dynamics for Human Factors. Theoretical Issues in Ergonomics Science, 1(3).

Woods, D. D. & Tinapple, D. (1999).  W3:  Watching Human Factors Watch People at Work.  Presidential Address, 43rd Annual Meeting of the Human Factors and Ergonomics Society, September 28, 1999. Multimedia Production at http://csel.eng.ohio-state.edu/hf99/