
"Getting Started" Guide to Cybernetics

CYBERNETICS — A Definition

Artificial Intelligence and cybernetics: Aren't they the same thing? Or, isn't one about computers and the other about robots? The answer to these questions is emphatically no.

Researchers in Artificial Intelligence (AI) use computer technology to build intelligent machines; they consider implementation (that is, working examples) as the most important result. Practitioners of cybernetics use models of organizations, feedback, goals, and conversation to understand the capacity and limits of any system (technological, biological, or social); they consider powerful descriptions as the most important result.

The field of AI first flourished in the 1960s as the concept of universal computation [Minsky 1967], the cultural view of the brain as a computer, and the availability of digital computing machines came together to paint a future where computers were at least as smart as humans. The field of cybernetics came into being in the late 1940s when concepts of information, feedback, and regulation [Wiener 1948] were generalized from specific applications in engineering to systems in general, including systems of living organisms, abstract intelligent processes, and language.

Origins of "cybernetics"

The term itself began its rise to popularity in 1947 when Norbert Wiener used it to name a discipline apart from, but touching upon, such established disciplines as electrical engineering, mathematics, biology, neurophysiology, anthropology, and psychology. Wiener, Arturo Rosenblueth, and Julian Bigelow needed a name for their new discipline, and they adapted a Greek word meaning "the art of steering" to evoke the rich interaction of goals, predictions, actions, feedback, and response in systems of all kinds (the term "governor" derives from the same root) [Wiener 1948]. Early applications in the control of physical systems (aiming artillery, designing electrical circuits, and maneuvering simple robots) clarified the fundamental roles of these concepts in engineering; but the relevance to social systems and the softer sciences was also clear from the start. Many researchers from the 1940s through the 1960s worked solidly within the tradition of cybernetics without necessarily using the term, some closely associated with it (R. Buckminster Fuller), others less obviously so (Gregory Bateson, Margaret Mead).

Limits to knowing

In working to derive functional models common to all systems, early cybernetic researchers quickly realized that their "science of observed systems" cannot be divorced from "a science of observing systems" — because it is we who observe [von Foerster 1974]. The cybernetic approach is centrally concerned with this unavoidable limitation of what we can know: our own subjectivity. In this way cybernetics is aptly called "applied epistemology". At minimum, its utility is the production of useful descriptions, and, specifically, descriptions that include the observer in the description. The shift of interest in cybernetics from "observed systems" — physical systems such as thermostats or complex auto-pilots — to "observing systems" — language-oriented systems such as science or social systems — explicitly incorporates the observer into the description, while maintaining a foundation in feedback, goals, and information. It applies the cybernetic frame to the process of cybernetics itself. This shift is often characterized as a transition from "first-order cybernetics" to "second-order cybernetics". Cybernetic descriptions of psychology, language, arts, performance, or intelligence (to name a few) may be quite different from more conventional, hard "scientific" views — although cybernetics can be rigorous too. Implementation may then follow in software and/or hardware, or in the design of social, managerial, and other classes of interpersonal systems.
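The thermostat cited above as an archetypal "observed system" can be written as a minimal feedback loop — a goal (setpoint), an observation (measured temperature), and a corrective action that closes the loop. This is an illustrative sketch, not code from the article; the particular numbers (heater power, heat-loss rate, exterior temperature) are arbitrary assumptions.

```python
# Illustrative sketch: a thermostat as a first-order cybernetic
# "observed system" -- goal, feedback, and corrective action.

def thermostat_step(setpoint, measured):
    """Return heater power based on the error between goal and observation."""
    error = setpoint - measured           # feedback: goal minus observation
    return 1.0 if error > 0 else 0.0      # act only while too cold

def simulate(setpoint=20.0, temp=15.0, steps=30):
    """Close the loop: each action changes the room, which feeds back."""
    for _ in range(steps):
        heat = thermostat_step(setpoint, temp)
        temp += heat - 0.1 * (temp - 10.0)  # heating minus loss to a 10-degree exterior
    return temp
```

Run repeatedly, the loop steers the room temperature toward the goal and holds it there; the interesting cybernetic point is that "regulation" lives in the circular relation between system and environment, not in either component alone.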

Origins of AI in cybernetics

Ironically but logically, AI and cybernetics have each gone in and out of fashion and influence in the search for machine intelligence. Cybernetics started in advance of AI, but AI dominated between 1960 and 1985, when repeated failures to achieve its claim of building "intelligent machines" finally caught up with it. These difficulties in AI led to a renewed search for solutions that mirror prior approaches of cybernetics. Warren McCulloch and Walter Pitts were the first to propose a synthesis of neurophysiology and logic that tied the capabilities of brains to the limits of Turing computability [McCulloch & Pitts 1965]. The euphoria that followed spawned the field of AI [Lettvin 1989], along with early work on computation in neural nets, or, as they were then called, perceptrons. However, the fashion of symbolic computing rose to squelch perceptron research in the 1960s, and the approach did not resurge until the late 1980s. This is not to say that the current fashion in neural nets is a return to where cybernetics has been: much of the modern work in neural nets rests in the philosophical tradition of AI, not that of cybernetics.
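A McCulloch-Pitts unit is simple enough to sketch directly (an illustrative reconstruction, not code from the article): a binary threshold neuron that fires when its weighted inputs reach a threshold. Networks of such units realize finite logical functions — the bridge McCulloch and Pitts drew between neurons and logic.

```python
# Illustrative sketch: McCulloch-Pitts threshold neurons computing logic.

def mp_neuron(inputs, weights, threshold):
    """Fire (1) iff the weighted sum of binary inputs meets the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Basic logic gates as single units:
AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], 1)
NOT = lambda a:    mp_neuron([a],    [-1],   0)

# XOR requires a small network -- no single threshold unit can compute it,
# the very limitation that later fueled the 1960s critique of perceptrons:
XOR = lambda a, b: AND(OR(a, b), NOT(AND(a, b)))
```

The XOR case is historically pointed: the inability of a single-layer unit to compute it figured in the symbolic-AI critique that squelched perceptron research, while multi-layer networks — which can compute it — underpinned the 1980s resurgence.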

Philosophy of cybernetics

AI is predicated on the presumption that knowledge is a commodity that can be stored inside of a machine, and that the application of such stored knowledge to the real world constitutes intelligence [Minsky 1968]. Only within such a "realist" view of the world can, for example, semantic networks and rule-based expert systems appear to be a route to intelligent machines. Cybernetics in contrast has evolved from a "constructivist" view of the world [von Glasersfeld 1987] where objectivity derives from shared agreement about meaning, and where information (or intelligence for that matter) is an attribute of an interaction rather than a commodity stored in a computer [Winograd & Flores 1986]. These differences are not merely semantic in character, but rather determine fundamentally the source and direction of research performed from a cybernetic, versus an AI, stance.

(c) Paul Pangaro 1990
