THE CHALLENGE OF SOCIOCYBERNETICS

 by
 FELIX GEYER

NON-LINEAR SOCIO-DYNAMICS: Explications, Implications, Applications


Paper prepared for Symposium VI: "Challenges to Sociological Knowledge",  Session 04: "Challenges from Other Disciplines", 13th World Congress of Sociology, Bielefeld, July 18-24, 1994


1. Introduction:
The term cybernetics derives from the Greek word for steersman, and could thus roughly be translated
as the art of steering. In the forthcoming half hour, you will hopefully become convinced that the 
cybernetic approach can offer some interesting concepts and models to sociology, and may also in that
sense have a steering function. There are three difficulties, however:
1) We are comparing two fields which both have a large degree of internal differentiation and fuzzy
boundaries: cybernetics and sociology. Both involve many different approaches, schools of thought,
paradigms, etc. You will all undoubtedly be aware of the many different approaches in sociology. As
to cybernetics, it is used here rather loosely as referring to a set of related approaches: general systems
theory, information theory, catastrophe theory, some forms of model-building by means of simulation,
and lately chaos theory.
2) Over the last few decades, cybernetics and the social sciences have started influencing one another,
although still to a limited extent. We therefore certainly cannot and do not claim that cybernetics as
a whole forms a challenge to sociology as a whole, but do argue that it would be an intellectually
stimulating and profitable experience for many sociologists to get acquainted with some of the more
recent developments in cybernetics. 
3) Sociocybernetic studies have generally appeared in cybernetic journals rather than sociological ones,
which may be one of the reasons why up till now cybernetics has had relatively little influence on
mainstream sociology.1) Since the 1960s several social scientists have nevertheless been inspired by
cybernetics  and have applied it to the social sciences: Deutsch2) and Easton3) in political science;
Buckley4), and later Baumgartner, Burns and DeVillé5) in sociology and economics, to name but a few.
Nevertheless, and in spite of the above caveats, it seems worthwhile to reconnoiter what recent
developments in cybernetics could mean for sociology. In sociological theorizing, the focus has slowly
shifted, over the last few decades, from trying to explain the structure and stability of social systems
to analyzing the processes that cause them to change and evolve towards greater levels of complexity,
from trying to help maintain homeostasis in a top-down fashion to explaining morphogenesis as a result
of interpenetrating bottom-up processes. Cybernetics has always concentrated on both: the results of
input-output transformation processes may be explained by the structure of the system, while that
structure can in turn be conceived as the resultant of previous processes. Recent developments in
cybernetics, however, have increasingly concentrated on the analysis of interacting processes, including
even the observers of these processes, and thus the possibility of a potentially fertile theory transfer
should certainly not be excluded. 
Within cybernetics, it is customary to distinguish first- and second-order cybernetics, and to prevent
misunderstanding we will keep to this usage, although classical vs. modern cybernetics, or first-generation
vs. second-generation cybernetics, might be a preferable terminology. The issue here is
that second-order cybernetics originated in reaction to what were seen as the deficiencies of first-order
cybernetics, and has the tendency - as often happens - to create its own niche by overstressing the
differences with first-order cybernetics. In order to clarify the differences between the two, we will
do the same, with the caveat that they are largely a matter of relative stress, and that much of what
is now known as second-order cybernetics was already adhered to by first-order cyberneticians.
_________________________________________________________
*) The author is especially indebted to Johannes van der Zouwen of the Vrije Universiteit, Amsterdam,
who has commented in great detail on a first draft of this paper, and with whom a paper on a related
subject was recently completed (see ref. 34). Our intensive collaboration over nearly two decades in
organizing sociocybernetics sections at different international cybernetics congresses has resulted
in a number of co-edited books, and a certain amount of scientific symbiosis. Responsibility for the
contents of the present paper, however, is all mine. Thanks are also due to Kitty Verrips of SISWO,
who commented specifically on the potential usefulness of second-order cybernetics for sociological
theorizing, and to the members of SISWO's Working Group on Sociocybernetics where this paper
was first presented, especially to discussants Cor van Dijkum and Loet Leydesdorff.

2. Classical (first-order) cybernetics:
First-order cybernetics originated in the 1940s6), and indeed tried to steer observer-external systems.
Although it had an interdisciplinary orientation, it might be called an engineering approach, and
focussed on studying feedback loops and control systems, and on constructing intelligent machines7).
A few examples: Norbert Wiener8), often considered the father of cybernetics, was engaged in
automating the operation of anti-aircraft batteries, which led to the construction of ILLIAC, the world's
first computer. Shannon and Weaver9), working at Bell labs in the late 1940s, were confronted with
the problem of how to reduce noise in telephone lines, and developed information theory. MIT's Marvin
Minsky10) constructed, among other things, M. Speculatrix, a small robot that could find its way out of a
dark room, dexterously moving around objects towards the light; he initiated the now flourishing field
of Artificial Intelligence, or AI for short.
With its stress on steering technological devices in particular, and on developing all kinds of control
systems, it is not surprising that first-order cybernetics was especially interested in negative feedback
loops, rather than positive ones. When a negative feedback loop either occurs naturally or is
constructed, the performance or output of a system is compared with a preset goal, and corrective
action is taken whenever there is a deviation from that goal. The thermostat of a central heating system
may serve as an example: there is a feedback loop from the thermostat to the heater, whenever room
temperature rises above a certain maximum, or falls below a certain minimum. It is noteworthy that
even in this simple example, although it clearly is a control system, there is no specific controlling
agent; control is dispersed through the system, and any part of it could be said to control the rest of
the system.
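The logic of such a dispersed control loop is compact enough to sketch in a few lines of code; the
setpoint, drift and heater values below are made up for illustration:

    # A minimal sketch of a negative feedback loop: a thermostat compares
    # the system's output (room temperature) with a preset goal and takes
    # corrective action whenever the deviation leaves a tolerance band.
    # All numbers (setpoint, drift, heater power) are illustrative.

    setpoint = 20.0      # preset goal (degrees C)
    tolerance = 0.5      # allowed deviation before correction kicks in
    temperature = 15.0   # initial room temperature
    heater_on = False

    for minute in range(60):
        temperature -= 0.3          # the environment disturbs the system
        if heater_on:
            temperature += 0.8      # corrective action pushes output back
        # the feedback proper: output is compared with the preset goal
        if temperature < setpoint - tolerance:
            heater_on = True        # deviation below goal: start correcting
        elif temperature > setpoint + tolerance:
            heater_on = False       # deviation above goal: stop correcting
    # temperature ends up fluctuating within a margin around the setpoint

Note that nothing in this sketch is a dedicated controlling agent: the thermostat rule, the heater
and the room jointly keep the deviation bounded.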
As a result of the above, first-order cybernetics - with its engineering approach and the corresponding
stress on constructing control systems, and with its predilection for negative rather than positive
feedback phenomena - was interested primarily in homeostasis or equilibrium-maintenance, or at least
in restoring a system's equilibrium whenever it was disturbed by external influences impinging on that
system. As is still the case in much of science, mastery of the environment was an implicit goal, based on
the Newtonian conception of an in principle orderly universe: admittedly complex, but knowable by
means of a continuing and cumulative effort to discover its basic laws. Positive feedback loops, which
cause morphogenesis rather than homeostasis and are the motor behind change, received much less
attention. A simple example of a positive feedback loop is compound interest, or to put it more
esoterically: "the devil shits on the big heap", recently formulated in economics11) as the law of
increasing returns.
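In its simplest form the deviation feeds back on itself multiplicatively; writing r for the interest
rate per period (notation chosen here for illustration):

    x(t+1) = (1 + r) x(t),   hence   x(t) = (1 + r)^t x(0)

Instead of being damped back towards an equilibrium, the quantity grows exponentially: each period's
output enlarges the next period's input.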
Early efforts to apply this homeostasis-oriented type of cybernetics or systems theory to the field of
the social sciences, as for example those of Parsons12), met with the resistance of a social science
community which by then had turned overwhelmingly liberal, and considered the systems or cybernetic
approach to be not only conservative, but also too simplistic, mechanistic and linear to be really
applicable to the world of human interaction.  
Leaving aside for the moment the question to what extent this left-wing or liberal criticism is
correct, one can certainly say that some applications of first-order cybernetics like system dynamics - a
simulation procedure originally developed by Forrester and Meadows13) to simulate the behavior of
systems with several feedback loops - have made remarkable inroads in the general scientific
community. One need only think of the enormous popularity of the Club of Rome Report, even among
laymen14), where systems modeling was applied to an extremely complicated problem area, with many
interacting variables.
The liberal criticism is understandable - the stress of first-order cybernetics was indeed largely, though
not exclusively, on constructing mechanical control systems - but not quite correct. In the Club of Rome
report, for example, positive feedbacks were certainly important, and the same goes for many other
technological systems: perhaps one of the world's most impressive technological achievements, the
atom bomb, would be unthinkable without positive feedbacks. As Van der Zouwen15) put it succinctly:
without negative feedback loops the organism cannot maintain itself in its environment, and without
positive feedback loops it has no chance to survive as a species in view of environmental changes to
which it has to adapt by setting new goals.
3. The potential usefulness of the classical (first-order) cybernetic approach for sociology:
While the concepts and procedures of second-order cybernetics, to be discussed later, have some
specific advantages for sociology, dealing as they do with the interactions of self-organizing, self-
referential systems, and thus with the continuous emergence of new levels of complexity, the principles
of first-order cybernetics can certainly also be fruitfully applied to the field of the social sciences.
3.1 System boundaries: 
Cybernetics or systems theory - we use these terms interchangeably, as they pertain to virtually the
same fields of inquiry - holds that one can mentally and arbitrarily carve out any part of the universe16)
and call it a system. However, once one has thus delineated the boundaries of the system, one should
keep to them, at least for a while. This follows from the so-called black box approach, named after
the early metal boxes containing electrical wiring and circuitry, which were invariably painted black.
The black box approach presupposes that the external observer can never really observe the system
from within, but can only determine what goes in (the input) and what comes out (the output). From
the differences between the two, inferences can then be made about the way the system works,
depending of course on the mindset of the observer. 
The way system boundaries are drawn is obviously observer-dependent, time-dependent and most
importantly also problem-dependent. In other words: two observers may be inclined to draw slightly
different boundaries when talking about the same problem; and the same observer may draw the
boundaries of a system to be studied differently tomorrow than today. And finally, even when the
boundaries are not drawn differently as a result of time- or observer-dependence, they may be drawn
in a different way because a different problem needs to be studied. It is necessary to be fully aware
of this when one has to determine what falls inside and what falls outside one's field of enquiry, and
when one has to formulate a research design.
For example, one can define an individual as a system confined by his or her body. But when one
is looking at that individual from a medical perspective, it may be relevant to include the input: all
the food, drugs and alcohol that have been ingested lately, not to mention the output. When looking
at that individual as a scientist, it may be relevant to include at the very least his word processor, if
not the entire support structure of his institute or department. And when looking at his emotional
problems, one may have to enlarge the system by including his family members, as is done in systems-oriented
family therapy17). An extreme example of problem-dependence occurred at a cybernetic conference,
where the speaker produced a saw, convinced everyone it was indeed a normal saw, and then started
using it quite passably as a fiddle. 
Even when a problem has been more or less specified - for example the adaptation of ethnic minorities
- one still has to decide how wide the system boundaries should be drawn in order to yield the most
interesting research results. One might focus on the interactions between individuals and their
environment if one wants to determine what Sennett18) called "the hidden injuries of class", between
families and their environment if one wants to focus on defensive or adaptive joint strategies, or
between the group as a whole and its environment if one wants to compare the adaptation problems
of different ethnic groups.
3.2 Systems, subsystems and suprasystems:
Since one can arbitrarily draw the boundaries of a system when developing a research design, one
can not only decide how one wants to define the system under consideration, but one is also free to
decide how one wants to define the subsystems - i.e., component parts that should be especially looked
at - and the suprasystem(s) it forms part of. In the above case, for example, and obviously depending
on one's research goals, one might select the ethnic family as the system under consideration, the family
members as the subsystems, and the ethnic minority group and the nation in which it lives as the
suprasystems. One obviously needs to be extremely careful how one defines such a hierarchically nested
set of systems, as this will determine the kind of research results one will obtain.
3.3 Circular causality:
Many of us have been educated to consider circular reasoning as wrong: a mistake known
in logic as the "circulus vitiosus". Something cannot cause itself; but cybernetics says it indeed can.
One of the important contributions of first-order cybernetics has been to increase the awareness of
ubiquitous circular processes, in technology, in nature, and in society. The circular causal cycle may
be short - like A causes B and B causes A - or it may be long and cycle through the entire alphabet or
more, in which case it will be harder to discover.
It may be interesting to speculate on what has caused the narrowing of this circular model of causal
thinking during the cumulative buildup of the Newtonian-Laplacean world image, with its clockwork
model of the universe, its mechanistic rather than organic bias, and its stress on linear causal chains
unfolding through time, from past to future19). This resistance against circular causality may itself be
part of a circular process where every success of the developing natural sciences since the 17th century
strengthened the conviction that hard work could lay bare the implicit rules, the hidden clockwork,
of the universe, which in turn attracted new scientists to engage in this prestigious area, whose research
results then further strengthened the Newtonian image of the world.
For whatever reason, much of empirical sociology still follows the linear model: admittedly,
multivariate analysis may demonstrate that a phenomenon has many different causes, and many different
consequences, but this is still a far cry from concentrating on finding circular causal chains, whereby
the phenomenon in question helps to create itself. It barely needs comment, of course, that such a
concentration on unraveling circular causal chains would enormously complicate empirical research.
The idea of circular causality has a certain intuitive attractiveness, and a logical one as well, but to
prove its existence empirically is quite another matter. 
3.4 Positive and negative feedback loops:
Both positive and negative feedback loops are examples of circular causality. They can either occur
spontaneously, in nature as well as in society, or they can be engineered. As was mentioned before,
negative feedback loops are of special interest to first-order cybernetics, since its purpose generally
was to steer systems by keeping them on a certain course, rather than have them change direction,
i.e., to let them fluctuate within a specified margin around an equilibrium. Positive feedback
loops were certainly already recognized in the engineering efforts of first-order cybernetics, but it was
the originally biology-based second-order cybernetics that gave them special attention - logically so,
as they cause change rather than stability.
As an aside, it should be noted here that Magoroh Maruyama20) spoke as early as 1963 of the "second
cybernetics" (not: second-order cybernetics), thus designating the cybernetics which concentrates on
deviation-amplifying mutual causal systems, where positive rather than negative feedback, and
morphogenesis rather than morphostasis, are at issue.
Often, negative feedback loops will spontaneously emerge in human interaction when that interaction
continues over a certain period of time. A famous example is the well-known prisoner's
dilemma, which changes character when played over several cycles: at first, both prisoners tend
to betray one another to maximize their own profit. Rapoport21)
discovered that both partners start empathizing with the other's position after a while, and then both
converge to what he calls a tit-for-tat strategy: an honest move will be rewarded by an honest counter-
move, and a dishonest one will be punished by a dishonest counter-move.
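The mechanics of the iterated game are easily sketched in code; the payoff numbers below are the
conventional textbook values, and the two strategies shown are illustrative:

    # A minimal sketch of the iterated prisoner's dilemma with Rapoport's
    # tit-for-tat strategy: cooperate first, then mirror the opponent's
    # previous move. C = cooperate, D = defect.

    PAYOFF = {  # (my move, their move) -> my payoff
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def tit_for_tat(history):
        # cooperate on the first move, then copy the opponent's last move
        return "C" if not history else history[-1]

    def always_defect(history):
        return "D"

    def play(strategy_a, strategy_b, rounds=10):
        history_a, history_b = [], []   # each side's record of the other's moves
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(history_a)
            move_b = strategy_b(history_b)
            score_a += PAYOFF[(move_a, move_b)]
            score_b += PAYOFF[(move_b, move_a)]
            history_a.append(move_b)    # A remembers what B did, and vice versa
            history_b.append(move_a)
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))     # mutual cooperation: (30, 30)
    print(play(tit_for_tat, always_defect))   # defection is punished after round 1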
3.5 Simulation:
While a technique rather than a concept, one can certainly say that simulation, originally a technique
of first-order cybernetics, is widely used now also in second-order cybernetics to study phenomena
of emergence, and has become a much-used tool in the social sciences as well as in most other
disciplines. With the increasing mass scale availability of high-speed computing equipment, even on
PC's, it becomes possible to realistically simulate ever more complex problems, with the possibility
to incorporate an increasing number of interacting variables in one's models. The obvious advantage
of such simulations is that one can investigate the effects of changing some of the variables without
actually changing them in reality, i.e., without engaging in policy action. Also, simulations with
complex models allow one to discover latent consequences of certain intended actions, and to forecast
the emergence and the effects of counter-intuitive behavior.
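In the style of system dynamics - though far simpler than the Forrester-Meadows world models - a
single stock with one positive and one negative feedback loop already shows how such experiments work;
all parameters below are illustrative:

    # A minimal system-dynamics-style sketch: one stock (population)
    # driven by a positive feedback loop (births) and a negative one
    # (crowding). Varying the parameters is experimentation "in the
    # model" - no policy action in reality is required.

    population = 100.0
    birth_rate = 0.05        # positive loop: more people, more births
    capacity = 10000.0       # negative loop: crowding damps growth

    for year in range(200):
        births = birth_rate * population * (1 - population / capacity)
        population += births
        if year % 50 == 0:
            print("year", year, "population", round(population))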

4. Modern (second-order) cybernetics:
Second-order cybernetics originated some thirty years later than first-order cybernetics, in the early
1970s. The term was coined by Heinz von Foerster22) in a paper for the 1970 meeting of the American
Society for Cybernetics, entitled "Cybernetics of cybernetics". He defined first-order cybernetics as
the cybernetics of observed systems, and second-order cybernetics as the cybernetics of observing
systems.  
Indeed, one of the main differences with first-order cybernetics is that second-order cybernetics
explicitly includes the observer(s) in the systems to be studied. Moreover, it generally deals with living
systems, and not with developing control systems for inanimate technological devices. These living
systems range from simple cells all the way up the evolutionary scale to human beings; while the
observers themselves are obviously also human beings. Thus, in contrast to the engineering approach
of first-order cybernetics, most of second-order cybernetics could be said to have a mainly biological
approach, or at the very
least a biological basis. As Umpleby23) states, this difference has important consequences: 
   1) Living systems, no matter how primitive, have a "will" of their own. They exhibit what
Maturana and Varela have termed autopoiesis or self-production: they not only reproduce, but also
produce their own "spare parts" whenever necessary, generally utilizing elements from their
environment. Living systems thus are organizationally closed, but informationally open.  
   2) As a result, living systems are inherently more difficult to steer; their interactions with their
environment are more difficult if not impossible to forecast more than a few moves ahead. Thus,
second-order cybernetics has become realistic about the possibilities for steering, and has concentrated
more on understanding the evolution of biological and social complexity than on controlling it. 
   3) Given this, it is understandable that second-order cybernetics is more interested in
morphogenesis and positive feedback loops, than in homeostasis and negative feedback loops. 
   4) Although first-order cybernetics certainly included important biologists - like Von
Bertalanffy24), one of the founders of General System Theory - the impetus for second-order cybernetics
came largely from biology and neurophysiology, and in a later stage also from epistemology. This
is not to say that biology does not profitably use first-order cybernetic concepts: homeostasis, for
example, remains an important concept in biology to explain different processes like hormonal balance,
maintenance of temperature, etc. However, many biological phenomena that have to do with growth,
change and emergence demand an explanation in terms of second-order cybernetics. Maturana, for
example, considers knowledge to be a biological phenomenon.25) Attention was thus focused on the
observer, and the biological basis of perception and knowledge acquisition processes. In epistemology,
second-order cyberneticians became interested in the nature of knowledge, language, cognition and
communication.
   5) It was thus logical that the concept of self-reference was developed and stressed, especially
biological and linguistic self-reference. A satisfying theory of biology should account for the existence
of theories of biology; likewise, an adequate theory of cognition should give an understanding of
understanding. The view of language changed from language as a string of symbols representing
external "reality" to language as actions for coordinating actions. Umpleby16) gives the example of
"performative utterances" like "I now pronounce you husband and wife", where the social status of
two people is transformed, while this transformation is described at the same time.
   6) Summing up: in second-order cybernetics, the system - whether an individual or a group -
is defined as having the ability to reflect on its own operations on the environment, and even on itself.
These operations generate variety in the environment, or in itself, which can reflexively be recognized
as being due to systemic variation, which makes them recursive: observations can be observed,
communications can be communicated, etc.
Apart from von Foerster, several other authors have given concise definitions of the differences
between first-order and second-order cybernetics. These differences refer to, respectively, the purpose
of a model vs. the purpose of the modeler (Pask), controlled systems vs. autonomous systems (Varela),
the interaction among variables in a system vs. interaction between the observer and the system
observed (Umpleby), and theories of social systems vs. theories of the interaction between ideas and
society (Umpleby). The latter difference seems to reflect the respective approaches of Parsons as a
first-order systems theorist interested in system stability and system maintenance, and Luhmann as
a second-order cybernetician interested more in change and morphogenesis.
4.1 Second-order cybernetics and the philosophy of science:
While second-order cybernetics agrees with many of the tenets of the mainstream Newtonian philosophy
of science - like the need to distinguish science from non-science, e.g., by means of Popper's
falsifiability criterion, the principle of verification through experimentation, the procedure of refuting
conjectures by trial and error, etc. - it goes against it in some important respects:26)
1) It disagrees with the idea that observations are independent of the characteristics of the observer.
Von Glasersfeld27) developed the philosophy of constructivism - the idea that every individual constructs
his or her own reality to fit personal experience28) - as an alternative to the realism of mainstream
philosophy of science. The knowledge built up this way fits, but does not match, the world of
experience. It is considered an advantage of this approach that it supposedly increases tolerance, by
leading to what De Bono 29) prefers to call "proto-truth", with all the relativistic implications this
entails, rather than truth in any absolute sense. Perhaps the best illustration of such an inbuilt sense
of relativity is presented in the Mel Brooks movie "History of the World", where Moses comes down
Mt. Sinai carrying three top-heavy tablets with the fifteen Commandments, stumbles and drops the last
tablet, smashing it to smithereens, then relaxes, shrugs, and says: "Well, ten left".
2) In classical philosophy of science, theories do not affect the phenomena they describe; it would be
preposterous to assume that the Second Law of Thermodynamics would speed up the decay of the
universe. But in second-order cybernetics, interaction between social theories and social systems is
taken for granted30). Economic theories do change economic systems, and often entire societies, as
any East-European in the audience can testify. And they are often developed precisely because the
theorists do want to change social systems.
3) A core point of disagreement, however, at least with some extreme proponents of second-order
cybernetics, is that they reject the necessity, claimed amongst others by Popper, of the unity of method.
The methods used for the physical sciences cannot be used for the biological and social sciences, if
only because the systems the latter study are self-organizing, self-referential and autopoietic. However,
mainstream second-order cyberneticians have a more moderate viewpoint, i.e., that there is at least
some unity of method across the sciences.
Clearly, second-order cybernetics strongly disagrees with the still neo-positivist, Newtonian mainstream
philosophy of science in the abovementioned respects31), although this disagreement is certainly not
the prerogative of second-order cybernetics: Norbert Wiener, for example, devoted a fascinating
chapter32) to the difference between Newtonian and Bergsonian time. The changes suggested by second-
order cybernetics amount to a scientific revolution in Kuhnian terms; but, as Umpleby33) suggests, the
time has perhaps come to return to a period of normal science. The way to do this is by stressing the
correspondence principle: i.e. the new theory, second-order cybernetics, should reduce to the old
theory, the mainstream philosophy of science, to which it corresponds for those cases in which the
old theory is known to hold. In other words, a new and previously neglected dimension is added. 
5. The usefulness of second-order cybernetic concepts for sociology
Before dealing with some of the main concepts of second-order cybernetics - self-organization, self-
reference, self-steering, autocatalysis, and autopoiesis - it is interesting to note that Norbert Wiener,
the father of cybernetics, was quite ambivalent himself about the applicability of cybernetics to the social
sciences and to society.34) On the one hand, Wiener was thoroughly convinced that the behavior of
humans, animals and machines could be explained by making use of the same cybernetic principles:
communication, control of entropy through learning by means of feedback, etc. This is evident already
from the titles of his two best-known books: "The Human Use of Human Beings" 35) and "Cybernetics:
or Control and Communication in the Animal and the Machine"36). On the other hand, Wiener was
quite pessimistic about the applicability of cybernetics to social systems, and this for at least two
reasons.
   1) Social science data usually take the form of short statistical time series, affected by varying
environmental conditions, while one would ideally need long runs under invariant conditions.
   2) Wiener moreover considered the social sciences to be the discipline where the coupling between
observer and observed is hardest to minimize in both directions: apart from the obvious disadvantages
of observer dependence, the researcher also inevitably influences the subjects of his research, and can
sometimes even act as a catalyst in processes of sudden change: how many strikes have broken
out just after the researchers measuring job satisfaction left the premises!
In a sense, one might say these two objections are interrelated: to the extent that the observer tends
to influence his subjects more than in the natural sciences, he thereby contributes to a disruption of
the constancy of the conditions needed for longer statistical time series.37)  This is of course not to
deny that many examples can be found of the inverse: content analysis or non-participant observation
does not influence the subjects of the research, while bombarding protons or vivisection clearly does.
The difference between first- and second-order cybernetics was described before, but it should be clear
already from the main concepts: while the concepts of first-order cybernetics do not point specifically
to either the system or its environment, the important concepts of second-order cybernetics all start
with "self", if not in English, then in Greek ("autopoiesis"). While it may be interesting to speculate 
about the extent to which this "selfishness" is related to the increasingly rapid social change taking
place since the late 1960s, this would fall outside the bounds of this paper.

5.1 Self-organization
Due to the increasing complexity of today's world, and the seeming intractability of many of its
problems, contemporary scientific interest in many disciplines has centered on the emergent, self-
organizing properties of certain complex aggregates. Especially developments in biology in this respect
have stimulated second-order cybernetics, although first-order cybernetics was certainly also aware
of self-organizing systems, but did not pursue this line of inquiry. Norbert Wiener, for example,
mentions the synchronization of the behavior of fireflies38), while Ashby also stressed self-organization
in his "Design for a Brain" 39).
Recent developments in cognitive science demonstrate the emergence of self-organization (itself a matter
of emergence) as a core concept. Cognitive science can be viewed as the result of an interdisciplinary
effort which includes neuroscience, AI, linguistics, philosophy, and cognitive psychology. Though
it is barely three decades old, one can already distinguish two schools: cognitivism and connectionism.
Cognitivism and cognitive technology gave the impetus for the development of artificial intelligence
(AI). Cognitivism conceived of mind as symbol manipulation according to the rules of deductive logic, as
localized and serial information processing of symbols which were supposed to represent an external
reality.40) In this respect, it seems to be close to first-order cybernetics, although during the Macy
conferences of the late forties41) it was already hypothesized that brains have no central logical
processor, and no firm rules or specific spots to store information, but rather operate on the principle
of distributed intelligence.
In spite of this, cognitivism was the mainstream paradigm in cognitive science until well into the 1970s,
when its drawbacks became more apparent:42)
- At some stage, symbolic information processing encounters the so-called "Von Neumann bottleneck":
it is based on rules that are sequentially applied, which obviously gives problems with large numbers
of sequential operations, as in weather forecasting or image analysis.
- Moreover, symbolic information processing is localized, rather than distributed, and local
malfunctioning of some of the symbols or rules therefore tends to result in overall malfunctioning,
without the resilience towards disturbances which distributed processing offers.
Connectionism, in clear contradistinction to cognitivism,  incorporates many of the views of second-
order cybernetics. It is explicitly bottom-up rather than top-down: it eschews abstract symbolic
descriptions, but assumes simple, "stupid", neuron-like components that develop cognitive capacities
when appropriately connected. In other words: the intelligence is in the structure, in the connections
made - hence the name connectionism - and not in the components. Thus, it stresses emergence and
self-organization.
Hebb's rule, formulated in 1949, occupies a central place in connectionism. It states that learning is
based on postulated changes in the brain as a result of correlated activity between neurons. If two
neurons tend to be active together, their connection is strengthened; if not, then it is weakened. In
this way history, which is always a history of transformations, is incorporated. Simulations of a simple
network of neurons according to Hebb's rule demonstrate that pattern recognition is possible, after
a learning phase in which some of the connections are strengthened or weakened, when the number
of patterns presented is no more than about 15% of the participating neurons.43) 
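The rule itself is simple enough to sketch. The toy network below - a Hopfield-style associative
memory with illustrative sizes, not the simulations cited in the text - strengthens connections between
correlated neurons and then recognizes a corrupted pattern:

    # A minimal sketch of Hebbian learning: connections between units
    # that are active together are strengthened; anti-correlated units
    # are weakened. Sizes and the random seed are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    n_neurons = 100
    n_patterns = 10                  # about 10% of the neurons, within
    patterns = rng.choice([-1, 1], size=(n_patterns, n_neurons))  # the ~15% bound

    # Hebb's rule: the connection w_ij grows with the correlated
    # activity of neurons i and j
    weights = np.zeros((n_neurons, n_neurons))
    for p in patterns:
        weights += np.outer(p, p)
    np.fill_diagonal(weights, 0)     # no self-connections

    # pattern recognition after learning: present a corrupted pattern
    # and let repeated local updates settle the network
    probe = patterns[0].copy()
    flipped = rng.choice(n_neurons, size=15, replace=False)
    probe[flipped] *= -1             # corrupt 15% of the components

    for _ in range(10):
        probe = np.where(weights @ probe >= 0, 1, -1)

    print("pattern recovered:", bool(np.array_equal(probe, patterns[0])))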
The components of neural networks do not need an external processing unit which guides their
operations; they work locally according to local rules, but since they are part of a network, global
cooperation emerges when all participating neurons reach a mutually satisfactory state.
This passage from local rules to global coherence is the essence of self-organization - otherwise also
denoted as emergent or global properties, network dynamics, nonlinear networks, complex systems,
etc. Once one cares to look for it, emergence abounds. In different domains, like vortices and lasers,
chemical oscillations, genetic networks, population genetics, immune networks, ecology and geophysics,
self-organizing networks give rise to new properties.
Varela describes simulation experiments with extremely simple cellular automata, whose components
are arranged in a circle, can have only two states (0 or 1), and change their states according
to simple rules (i.e., depending on the states of their two neighbouring components). These experiments
have demonstrated that such automata can be divided into four classes or attractors (a minimal code
sketch of the basic setup follows this list):
- in simple attractors all components end up either being all active (1) or being all inactive (0);
- in somewhat more complex cyclical attractors spatial periodicities emerge: some components remain
active, while others do not;
- a third type of attractor, also cyclical, runs through at least two cycles before returning to the same
state;
- finally, for a few rules the resulting dynamics give rise to the chaotic attractors, studied especially
in chaos theory, where one cannot detect any regularities either in space or time, although such systems
may unexpectedly return to perfect order, as has been Prigogine's point of departure in his theory
of complexity.
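The sketch promised above: a circular, two-state automaton whose rule number determines which class
of behavior emerges. The rule number and sizes here are illustrative choices, not Varela's experiments:

    # A circular automaton: each component looks at itself and its two
    # neighbours (a 3-bit neighbourhood, values 0..7) and takes the
    # corresponding bit of the rule number as its next state.

    def step(cells, rule):
        n = len(cells)
        return [(rule >> ((cells[i - 1] << 2) | (cells[i] << 1)
                          | cells[(i + 1) % n])) & 1 for i in range(n)]

    cells = [0] * 31
    cells[15] = 1                     # a single active component
    for _ in range(15):
        print("".join(".#"[c] for c in cells))
        cells = step(cells, rule=90)  # rule 90 yields nested, complex patterns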
When these simple cellular automata are structurally coupled to an external world, for example by
dropping them in a simulated "primal soup" of 0s and 1s, with the rule that each component takes
over the state of the environmental element it encounters, then nothing happens with the first and fourth
class of attractors: they return to their homogeneous state, or remain in their chaotic state, respectively.
The cyclical attractors, however, demonstrate an admittedly primitive kind of intelligent behavior,
for which they were definitely not programmed: for example, some types react only to double
perturbations from the environment; others - the so-called odd-sequence recognizers - only to an uneven
number of perturbations.
These experiments show that a simple system with a form of closure (the network's internal dynamical
emergences), which is structurally coupled to an environment (replacement of each component by the
state of the element it encounters) selects (or enacts) a domain of distinctions from a world of
randomness which is relevant to its structure. On the basis of its autonomy the system selects or enacts
a domain of significance, which involves some minimal interpretation. While admittedly a far cry from 
describing anything like human intelligence, these experiments present a minimal example of how
autonomous systems can draw significance from a random background, which may not be totally unlike
the way humans draw significance from what is, after all, a neutral universe: i.e., by being autonomous, or
having operational closure, and by being structurally coupled.
The implications of these developments in cognitive science for sociology are potentially interesting
because of the possible analogies that may exist with the many cases of self-organization in human
societies:
1) autonomous systems, though in this case simulated on a computer whose coupling with the
environment is specified by input-output relations, and thus by an outside source, give meaning to their
interactions on the basis of their own history, rather than on the basis of the intentions of the
programmer - or should one say a manipulating environment?
2) neural networks, and possibly other networks like human networks as well, produce emergent
phenomena as a result of both simultaneous processes (the emergent pattern itself arises as a whole)
and sequential ones (participating components have to engage in back-and-forth activity to produce
the pattern).44)
5.2 Self-reference
The phenomenon of self-reference is assumed to be typical of human beings, both on the individual
and the group level, although recent work with apes seems to open up the possibility that they too may
have some degree of self-reference. Nevertheless, self-reference - at least in the sense used here - 
is not a concept in first-order cybernetics, which - as Norbert Wiener so explicitly stressed - concerns
itself with the commonalities between man, animals and machines, rather than with the differences
between them. 
Three meanings of self-reference may be distinguished in this respect:
1) the "neutral" meaning, which is used also and especially in first-order cybernetics, and is also
applicable to non-biological systems, where "self-referencing control" indicates that any changes in
the state of a system are dependent upon the state of that system at a previous moment, like birth rate
being dependent upon population size;
2) the "biological" meaning, where senses and a memory are the minimum requirements, and where
a self-referential system can be defined as a system that contains information and knowledge about
itself, that is, its own state, structure, and processes; like, e.g., human beings45);
3) the "stronger" second-order cybernetics meaning used here, where the system - whether an individual
or a social system - collects information about its own functioning, which in turn can influence that
functioning; minimal requirements in this case are self-observation, self-reflection and some degree
of freedom of action.
One of the main characteristics of social systems, distinguishing them from many other systems, is indeed
their potential for self-referentiality in the latter sense. This means not only that the knowledge
accumulated by the system about itself in turn affects both the structure and the operation of that
system, but it also implies that in self-referential systems like social systems, feedback loops exist
between parts of external reality on the one hand, and models and theories about these parts of reality
on the other hand.
Concretely, whenever social scientists systematically accumulate new knowledge about the structure
and functions of their society, or about subgroups within that society, and when they subsequently
make that knowledge known, through their publications or sometimes even through the mass media -
in principle also to those to whom that knowledge pertains - the consequence often is that such knowledge will
be invalidated, because the research subjects may react to this knowledge in such a way that the
analyses or forecasts made by the social scientists are falsified. In this respect, social systems are
different from many other systems, including (most?) biological ones. There is a clearly two-sided
relationship between self-knowledge of the system on the one hand, and the behavior and structure of
that system on the other hand.
Biological systems, like social systems, admittedly do show goal-oriented behavior of actors, self-
organization, self-reproduction, adaptation and learning. But it is only psychological and social systems
that arrive systematically, by means of experiment and reflection, at knowledge about their own
structure and operating procedures, with the obvious aim of improving them. This holds true on the
micro-level of the individual, as in psychoanalysis or other self-referential activities, and on the macro-
level of world society, as in planning international trade or optimal distribution of available resources. 
For social scientists, the consequences of self-referentiality are interesting not only for gaining
insight into the functioning of social systems, but also for the methodology and epistemology used to
study them. There is a paradox here: the accumulation of knowledge often leads to a utilization of
that knowledge -both by the social scientists and the objects of their research - which may change the
validity of that knowledge.46), 47) 
The usual examples of self-referential behavior in social science consist of self-fulfilling and self-
defeating prophecies. Henshel46), for example, has studied serial self-fulfilling prophecies, where the
accuracy of earlier predictions, themselves influenced by the self-fulfilling mechanism, impacts upon
the accuracy of the subsequent predictions. In much of empirical social science research, however,
self-referential behavior does not loom large - which is rather amazing in view of its supposedly being
an essential characteristic of individual human functioning. However, if one does not see UFOs, this
means neither that they are not there, nor that one is blind. In this case the research methodology used
may be an issue: survey research, where people are asked what they think or feel, offers little
opportunity to bring out self-referential behavior, while depth interviews, which concentrate more on
the "why" than the "what" of people's opinions, have a better chance to elicit self-referential remarks
in this respect.
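The serial mechanism can be caricatured in a few lines. The model below is an illustrative
construction, not Henshel's own: each published forecast pulls the outcome toward itself with a
strength given by the forecaster's current credibility, and the accuracy achieved feeds back into
that credibility:

    # A toy serial self-fulfilling prophecy. All numbers are assumptions.
    base_outcome = 0.40       # what would happen without any forecast
    credibility = 0.20        # how strongly people respond to the forecast

    for round_ in range(1, 6):
        prediction = 0.60     # the forecaster keeps predicting 60%
        # the published prediction shifts behavior toward itself:
        outcome = base_outcome + credibility * (prediction - base_outcome)
        error = abs(prediction - outcome)
        # accurate forecasts breed credibility, which strengthens the
        # self-fulfilling mechanism in the next round:
        credibility = min(1.0, credibility + (0.2 - error))
        print(f"round {round_}: outcome={outcome:.2f}, credibility={credibility:.2f}")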
           
5.3 Self-steering
The futility of large-scale and detailed planning efforts has led to the increasing realization that both
individuals and organizations are to a large extent self-steering. After all, perfect planning would imply
perfect knowledge of the future, which in turn would imply a totally deterministic universe in which
planning would not make any difference. While recognizing the usefulness of efforts to steer societies,
a cost-benefit analysis, especially in the case of intensive steering efforts, will often turn out to be
negative: intensive steering implies intensive social change, i.e. a long and uncertainty-increasing time
period over which such change takes place, and also an increased chance for changing planning
preferences and for conflicts between different emerging planning paradigms during such a period.
Nevertheless, given a few human cognitive predispositions, there unfortunately seems to exist a bias
for oversteering rather than understeering.48)
A historical overview of planning efforts concludes that, in spite of intensified theorizing and energetic
attempts to create a thoroughly planned society during the last two centuries, the different answers
given so far regarding the possibility of planning cancel each other out. There is not even consensus
about a formal definition, though usually planning is seen as more comprehensive, detailed, direct,
imperative or expedient when compared with other steering activities that are not defined as planning.
Increased knowledge about human (i.e. self-referential) systems often does not help us to improve our
planning of such systems.
Aulin49) tried to answer two basic questions in this respect:
   1) Should one opt for the "katascopic" or the "anascopic" view of society; in other words, should
the behavior of individuals and groups be planned from the top down, in order for a society to survive
in the long run, or should the insight of actors at every level, including the bottom one, be increased
and therewith their competence to handle their environment more effectively and engage more
successfully in goal-seeking behavior?
   2) What should be the role of science, especially the social sciences, in view of the above choice:
should it try mainly to deliver useful knowledge for an improved steering of the behavior of social
systems and individuals, or should it strive to improve the competence of actors at grass roots level,
so that these actors can steer themselves and their own environment with better results?   
To answer these questions, Aulin followed a cybernetic line of reasoning that argues for non-
hierarchical forms of steering. Ashby's Law of Requisite Variety indeed implies a Law of Requisite
Hierarchy in the case where only the survival of the system is considered, i.e. if the regulatory ability of the regulators is
assumed to remain constant. However, the need for hierarchy decreases if this regulatory ability itself improves -
which is indeed the case in advanced industrial societies, with their well-developed productive forces and
correspondingly advanced distribution apparatus (the market mechanism). Since human societies are not simply
self-regulating systems, but self-steering systems aiming at an enlargement of their domain of self-steering, there
is a possibility nowadays, at least in sufficiently advanced industrial societies, for a coexistence of societal
governability with ever less control, centralized planning and concentration of power.
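In its usual entropy rendering (a standard formulation of Ashby's law, not Aulin's own notation):

    H(E) >= H(D) - H(R)

where H(D) is the variety of the disturbances impinging on the system, H(R) the variety of the
regulator, and H(E) the variety remaining in the essential outcomes. Every unit of regulatory variety
absorbs at most one unit of disturbance variety; hierarchy is one way of supplying extra regulatory
variety, which is why improving the regulators themselves reduces the need for it.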
As the recent breakdown of the Soviet Union and its gigantic planning apparatus demonstrates, this is not only
a possibility, but even a necessity: when moving from a work-dominated society to an information-dominated one,
less centralized planning is a prerequisite for the very simple reason that the intellectual processes dealing with
information are self-steering - and not only self-regulating - and consequently cannot be steered from the outside
by definition. In other words: there should be no excessive top-down planning, and science should help individuals
in their self-steering efforts, and certainly should not get involved in the maintenance of hierarchical power systems.
Of course, this is not to deny that there is a type of system within a society that can indeed be planned, governed
and steered, but this is mainly because such systems have been designed to be of this type in the first place, i.e.
to exemplify the concept of the control paradigm of first-order cybernetics - although even in first-order cybernetics
control does not necessarily imply hierarchy, as even the simple case of the thermostat mentioned before
demonstrates. After all, in most developed countries, and even in many underdeveloped ones, the railways run
on time, in spite of self-steering employees being involved. Most armies, though also replete with self-steering
individuals, are still based on strict hierarchical control and nevertheless function reasonably well, although it has
to be admitted that modern, technologically sophisticated and information-driven armies50) seem to thrive more
on bottom-up initiative, while armies that explicitly incorporate such self-steering principles and bottom-up initiative
in their training - like the Israel Defense Forces - are among the most successful. Thus, while there certainly still
is a limited place for organizations that exemplify a more or less hierarchical control paradigm, modern, complex
multi-group society in its entirety, conceptualized as a matrix in which such systems grow and thrive, can never
be of this type.
If one investigates a certain system with a research methodology based on the control paradigm, the results are
necessarily of a conservative nature; changes of the system as such are almost prevented by definition. According
to De Zeeuw,51) a different methodological paradigm is needed if one wants to support social change of a
fundamental nature and wants to prevent "post-solution" problems; such a paradigm is based on a multiple-actor
design, does not strive towards isolation of the phenomena
to be studied, and likewise does not demand a separation between a value-dependent and a value-independent part
of the research outcomes.
5.4 Autocatalysis and cross-catalysis
Laszlo52) describes two varieties of (chemical) catalytic cycles: autocatalytic cycles, in which the product
of a reaction catalyzes its own synthesis, and cross-catalytic cycles, where two different products or
groups of products catalyze each other's synthesis. An example of a model of a cross-catalytic cycle,
developed by Prigogine and colleagues, is the Brusselator:
(1) A -> X
(2) B + X -> Y + D
(3) 2X + Y -> 3X
(4) X -> E
With X and Y as intermediate molecules, there is an overall sequence in which A and B become D
and E. Step 3 can be seen as autocatalysis, while steps 2 and 3 in combination describe cross-catalysis.
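Holding the concentrations of A and B constant reduces these four steps to the standard Brusselator
rate equations, dX/dt = A - (B+1)X + X^2*Y and dY/dt = BX - X^2*Y. The sketch below (step size and
constants chosen for illustration) integrates them; for B > 1 + A^2 the system leaves equilibrium and
settles into sustained chemical oscillations, a simple dissipative structure:

    # Brusselator rate equations, integrated by simple Euler steps -
    # crude, but sufficient for a sketch.
    A, B = 1.0, 3.0          # B > 1 + A**2, so the equilibrium is unstable
    x, y = 1.0, 1.0          # concentrations of the intermediates X and Y
    dt = 0.01

    for i in range(50000):
        dx = A - (B + 1) * x + x * x * y
        dy = B * x - x * x * y
        x, y = x + dx * dt, y + dy * dt
        if i % 10000 == 0:
            print("t =", round(i * dt, 1), " X =", round(x, 2), " Y =", round(y, 2))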
Autocatalytic sets (A -> B -> C -> ... -> Z -> A) bootstrap their own evolution, provided the
complexity of interactions is rich enough; the system then changes from a subcritical to a supercritical
state, and autocatalysis follows. Kauffman53) even uses the concept to explain the origins of life from
a "primal soup" of simple chemical elements as an inevitable production of order, rather than as a
unique and extremely unlikely historical accident. Simple chemical laws coupled with the presence
of a sufficient number of frequently interacting elements produce ever more complex elements, with
new characteristics, that often turn out to be part of new catalytic processes at higher levels of
molecular complexity - processes which in turn boost the emergence of still higher levels of complexity.
Along similar lines, Swenson54) likewise maintains that - in spite of the Second Law of Thermodynamics
- "the world is in the order production business". 
The economist Arthur55), collaborating closely with Kauffman at the Santa Fe Institute, applied the
concept of autocatalytic sets to the economy: the economy too bootstraps its own evolution, as it grows
more complex over time. Beyond a certain critical threshold, phase transitions occur; stagnant
developing countries can enter the take-off stage when their economy has diversified sufficiently.
Increased trade between two countries in a subcritical state can similarly produce a more complex and
interwoven economy which becomes supercritical and explodes outward. Catalytic effects might also
operate in phase transitions that are considered negative, where critical thresholds of violence are passed
as in Northern Ireland or Bosnia.
5.5 Autopoiesis
Autopoiesis, or "self-production", is a concept introduced in the 1970s by the biologists Maturana and
Varela,56) with the aim of differentiating the living from the non-living. An autopoietic system was defined
as a network of interrelated component-producing processes such that the components in interaction
generate the same network that produced them.
Although Maturana and Varela considered the concept applicable only in biology, and not in the social
sciences, an interesting "theory transfer" was made by Luhmann.57) He defended the quite novel thesis
here that, while social systems are self-organizing and self-reproducing systems, they do not consist
of individuals or roles or even acts, as commonly conceptualized, but of communications. It should not
be forgotten that the concept of autopoiesis was developed while studying living systems. When one tries to
generalize the usages of this concept to make it also truly applicable to social systems, the biology-based theory
of autopoiesis should therefore be expanded into a more general theory of self-referential autopoietic systems.
It should be realized that social and psychic systems are based upon a different type of autopoietic
organization than living systems: namely on communication and consciousness, respectively, as
modes of meaning-based reproduction.
While communications rather than actions are thus viewed as the elementary unit of social systems,  the concept
of action admittedly remains necessary to ascribe certain communications to certain actors. The chain of
communications can thus be viewed as a chain of actions - which enables social systems to communicate about
their own communications and to choose their new communications, i.e. to be active in an autopoietic way. Such
a general theory of autopoiesis has important consequences for the epistemology of the social sciences: it draws
a clear distinction between autopoiesis and observation, but also acknowledges that observing systems are themselves
autopoietic systems, subject to the same conditions of autopoietic self-reproduction as the systems they are studying.
The theory of autopoiesis thus belongs to the class of global theories, i.e. theories that point to a collection of
objects to which they themselves belong. Classical logic cannot really deal with this problem, and it will therefore
be the task of a new systems-oriented epistemology to develop and combine two fundamental distinctions: between
autopoiesis and observation, and between external and internal (self-)observation. Classical epistemology searches
for the conditions under which external observers arrive at the same results, and does not deal with self-observation.
Consequently, societies cannot be viewed, in this perspective, as either observing or observable. Within a society,
all observations are by definition self-observations.
One of the first efforts to apply the concepts of both autopoiesis and autocatalysis to the social sciences was made
by Gierer.58) He demonstrated "empirically" - by computer simulation - that inequality can be explained as resulting
from the cumulative interaction over time of the auto-catalytic, self-enhancing effects of certain initial advantages
(e.g. generalized wealth, including education) with depletion of scarce resources. It then turns out that striking
inequalities can be generated from nearly equal initial distributions, where slight initial advantages tend to be self-
perpetuating within the boundary conditions of depleting resources.
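A toy version of such a simulation is easy to construct; the model below is an illustrative caricature
in the spirit of Gierer's result, not his actual program. Agents with nearly equal endowments divide
a depleting resource, and each agent's share is a self-enhancing (superlinear) function of its current
wealth:

    # Cumulative advantage under resource depletion. Functional forms
    # and parameters are illustrative assumptions.
    import random

    random.seed(1)
    wealth = [1.0 + random.uniform(-0.02, 0.02) for _ in range(10)]
    resources = 1000.0

    while resources > 0:
        harvest = min(10.0, resources)
        resources -= harvest                  # the common pool depletes
        shares = [w ** 2 for w in wealth]     # autocatalysis: advantage
        total = sum(shares)                   # breeds further advantage
        for i, s in enumerate(shares):
            wealth[i] += harvest * s / total

    print([round(w, 1) for w in sorted(wealth)])
    # nearly equal starting points end in strikingly unequal outcomes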

6. The science of complexity: a convergence of paradigms
Since complex modern societies - as compared to simpler ones - are highly dynamic and interactive,
and thus change at accelerated rates, they are generally in a far-from-equilibrium situation. According
to Prigogine and Stengers59) - who distinguish systems in equilibrium, systems fluctuating near
equilibrium through feedback, and systems far from equilibrium - non-linear relationships obtain in
systems that are far from equilibrium, where relatively small inputs can trigger massive consequences.
At such "revolutionary moments" or bifurcation points, chance influences determinism, but does not take
over from it, and the direction of change is inherently impossible to predict: either a disintegration into
chaos, or a "spontaneous" leap to a higher level of order or organization - a so-called "dissipative structure",
because it requires more energy to sustain it, compared with the simpler structure it replaces.
 
In stressing this possibility for self-organization, for "order out of chaos", Prigogine comes close to
the concept of autopoiesis. In modern societies, the mechanistic and deterministic Newtonian world
view - emphasizing stability, order, uniformity, equilibrium, and linear relationships between or within closed
systems - is being replaced by a new paradigm. This new paradigm is more in line with today's accelerated social
change, and stresses disorder, instability, diversity, disequilibrium, non-linear relationships between open systems,
morphogenesis and temporality. Prigogine indeed calls it the science of complexity.60) It is exemplified amongst
others by Prigogine himself, Maturana and Varela,47) Laszlo,43)
and "second-order cybernetics" in general: i.e. the (non-mechanistic) study of open systems in
interaction with their observers.
Social scientists, often still thinking in terms of linear causality, would be well advised to study
Prigogine's theoretical approach seriously and to try out the explanatory power of his conceptual vocabulary -
fluctuations, feedback amplification, dissipative structures, bifurcations, (ir)reversibility, auto- and
cross-catalysis, self-organization, etc. - on the phenomena they study. This holds true as well for the
concepts and methods of second-order cybernetics in general, as discussed in the foregoing. However,
it is already quite difficult to apply first-order cybernetics - which also fully recognizes non-linearities -
to social science data sets, and it may seem virtually impossible to do the same with second-order
cybernetics; we will return later to the reasons why this is the case. But indeed, second-order
cybernetics is a paradigm that does more justice to the constantly emerging novel complexities of
ongoing human interaction, and does not postulate simplistic assumptions about the constancy of human
behavior. 
What name one gives to this paradigm - or rather this convergence of paradigms over the last two
decades - is a matter of secondary importance. What is reassuring in this novel and therefore risky field
of research is that there does indeed seem to be a convergence of paradigms: all the blind men seem to
have their hands on the same elephant. We have generally called this field second-order
cybernetics here; but it might also be designated by other names like cognitive science, general systems
theory, artificial intelligence, artificial life, or perhaps indeed most aptly the science of complexity.
Its main points should be clear by now:
1) Complexity is in the software, not in the hardware; it is in the structure rather than in the elements
making up the structure, in the way simple building blocks are organized as a result of simple laws,
and not in the building blocks themselves.
2) The emergence of complexity is a bottom-up process, without any central controller leading it, rather
than a top-down one; it is a matter of local units, acting according to local laws, producing new levels
of complexity by interacting.
  
The interesting new field of AL (Artificial Life)61) demonstrates these points by means of computer
simulation. The flocking behavior of birds, for example, has been simulated with amazing accuracy
by computer "boids" following three simple rules: 1) maintain a minimum distance from other "boids",
2) match velocities with other boids, 3) move towards the center of mass of the flock.
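These three rules translate almost directly into a short program. The sketch below is a minimal two-dimensional version in which the weights, radii and step count are illustrative assumptions, and rules 2 and 3 use flock-wide averages for brevity:

import random

random.seed(0)
N = 30
pos = [[random.uniform(0, 100), random.uniform(0, 100)] for _ in range(N)]
vel = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(N)]

for _ in range(100):
    cx = sum(p[0] for p in pos) / N      # flock center of mass (rule 3)
    cy = sum(p[1] for p in pos) / N
    ax = sum(v[0] for v in vel) / N      # average flock velocity (rule 2)
    ay = sum(v[1] for v in vel) / N
    for i in range(N):
        sx = sy = 0.0                    # rule 1: keep a minimum distance
        for j in range(N):
            dx, dy = pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]
            if i != j and dx * dx + dy * dy < 25.0:
                sx, sy = sx + dx, sy + dy
        vel[i][0] += 0.05 * sx + 0.05 * (ax - vel[i][0]) + 0.01 * (cx - pos[i][0])
        vel[i][1] += 0.05 * sy + 0.05 * (ay - vel[i][1]) + 0.01 * (cy - pos[i][1])
    for i in range(N):
        pos[i][0] += vel[i][0]
        pos[i][1] += vel[i][1]

# order parameter: 1.0 means all boids fly perfectly parallel
speed = sum((v[0] ** 2 + v[1] ** 2) ** 0.5 for v in vel) / N
mx, my = sum(v[0] for v in vel) / N, sum(v[1] for v in vel) / N
print(f"alignment after 100 steps: {(mx ** 2 + my ** 2) ** 0.5 / speed:.2f}")

No rule mentions flocking; the coherence of the flock, visible in the rising alignment, emerges from the local interactions alone.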
Artificial life is the opposite of conventional biology: it tries to understand life by means of synthesis
rather than analysis. It assumes, as stated above, that life is not a property of matter, but of organization
of matter. Living systems are viewed as machines, with one difference from other machines: that they
are constructed from the bottom up. Complex behavior does not need to have complex roots. On the
contrary, top-down systems are forever running into combinations of events they do not know how
to handle. Lindenmayer62) simulated the leaves of totally different plants by only slightly changing the
bottom-up rules for their construction (see the sketch below). There is no ghost in the machine: a population of simple
elements following equally simple rules of interaction can behave in endlessly surprising ways. The AL
people are convinced that life is not merely like a computation, but that it is a computation.
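Lindenmayer's technique can be shown in miniature. An L-system rewrites every symbol of a string in parallel according to local production rules, and changing a single production slightly yields a globally different branching form. The two rule sets below are invented for illustration, not taken from his plant models:

# 'F' = grow a segment, '[' / ']' = open / close a branch, '+' / '-' = turn
def grow(axiom, rules, generations):
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)   # rewrite all symbols in parallel
    return s

rule_a = {"F": "F[+F]F[-F]F"}   # yields a dense, grass-like form
rule_b = {"F": "F[+F][-F]"}     # one small change: a sparse, forking form

print(grow("F", rule_a, 2))
print(grow("F", rule_b, 2))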
3) What can easily be demonstrated in computer simulations of neural networks goes for human
networks as well: the more densely they are interconnected, the less likely they are to cycle through
a limited number of states, or ever to repeat the same state. The more interdependence grows, the less
likely it becomes that history will ever repeat itself, and hence that it can be more or less forecast on
the basis of previous experience. 
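This can be checked with a toy experiment in the spirit of Kauffman's random Boolean networks (all parameters invented): each of n binary nodes computes a random Boolean function of K randomly chosen inputs, and one counts the steps until the network revisits any earlier state.

import random

def steps_until_repeat(n, k, trials=20):
    total = 0
    for _ in range(trials):
        # random wiring and a random Boolean rule table for every node
        inputs = [[random.randrange(n) for _ in range(k)] for _ in range(n)]
        rules = [[random.randrange(2) for _ in range(2 ** k)] for _ in range(n)]
        state = tuple(random.randrange(2) for _ in range(n))
        seen, t = {state}, 0
        while True:
            t += 1
            state = tuple(
                rules[i][sum(state[inputs[i][j]] << j for j in range(k))]
                for i in range(n)
            )
            if state in seen:
                break
            seen.add(state)
        total += t
    return total / trials

random.seed(4)
for k in (1, 2, 4, 8):
    print(f"K = {k}: a state repeats after ~{steps_until_repeat(12, k):.0f} steps")
# Sparsely wired nets (low K) freeze quickly into short cycles; densely
# wired nets wander far longer before any state ever recurs.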
Growing interdependence implies increasing communication. As Leydesdorff63) has stated,
"Communication systems change by communicating information to related communication systems;
co-variation among systems, if repeated over time, can lead to co-evolution (rather than evolution per
se). Conditions for stabilization of higher-order systems are specifiable: segmentation, stratification,
differentiation, reflection, and self-organization can be distinguished in terms of developmental stages
of increasingly complex networks."
Starting from Luhmann's conception of society as consisting of communications rather than the actions
of participating actors, and commenting on Giddens's structuration theory, Leydesdorff64) cogently
argues that the mutually conditioning relationship between structure and action can best be empirically
studied by using the model of parallel distributed processing as employed in Artificial Intelligence.
"The network networks, and the actor acts", i.e., the network performs its own self-referential loops,
independent of the specific actors involved. 
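Leydesdorff's slogan can be given a minimal computational form with the simplest parallel-distributed-processing device, a Hopfield-style network - an illustrative stand-in, not his actual model. The stored pattern resides in the weight structure rather than in any single unit, so the network restores it no matter which individual units ("actors") are perturbed:

import random

pattern = [1, -1, 1, 1, -1, -1, 1, -1]    # an arbitrary illustrative pattern
n = len(pattern)
# Hebbian weights: every connection stores the pattern locally
w = [[pattern[i] * pattern[j] if i != j else 0 for j in range(n)]
     for i in range(n)]

random.seed(3)
state = pattern[:]
for i in random.sample(range(n), 3):      # perturb three arbitrary "actors"
    state[i] = -state[i]

for _ in range(5):                        # parallel (synchronous) relaxation
    state = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
             for i in range(n)]

print(state == pattern)    # True: the network "networks" back to its structure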
4) It should be clear by now that we have not been talking merely about complex systems in isolation
- which probably do not even exist - but about complex adaptive systems, interacting with an
environment. They are everywhere one cares to look: brains, immune systems, ecologies, cells,
developing embryos, but also sociocultural systems like political parties, economic systems, and even
scientific communities. Holland,65) who was one of the first to simulate neuronal networks in 1951,
mentions the following characteristics:66)
1) They have many agents acting in parallel, and their control is highly dispersed, with any coherent
behavior resulting from competition and cooperation among the agents themselves.
2) They have many levels of organization, with agents at one level serving as building blocks for the
agents at the next higher level.
3) These building blocks are constantly rearranged as a result of what one might call either learning,
experience, evolution, or adaptation.
4) They all anticipate the future to some degree, making "predictions" on the basis of "mental" models
of their environment that act like computer subroutines: under certain triggering conditions they
execute certain behaviors - no matter how simple, as in the case of bacteria (see the sketch after this list).
5) They all have many niches they can exploit, whereby filling one niche often opens up new ones
that can be filled; complex adaptive systems, in other words, always create new opportunities.
6) As a consequence, it is meaningless to talk about complex adaptive systems being in equilibrium:
they can never get there, but are always in transition. If they ever got there, they would be dead.
7) Likewise, the agents in complex adaptive systems cannot optimize their fitness, utility, etc.: the
space of possibilities is simply too vast in an environment which is also complex and rapidly changing;
they can at best improve on some dimensions, but never optimize.
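Point 4 of this list can be made concrete with a minimal sketch in the spirit of Holland's classifier systems; the rules and observations below are invented for illustration:

# a "mental model" as condition/action rules that fire like subroutines
def make_agent(rules, default):
    def act(observation):
        for condition, action in rules:
            if condition(observation):    # triggering condition satisfied
                return action             # execute the associated behavior
        return default
    return act

# Even a bacterium-simple agent "predicts" that swimming up a nutrient
# gradient will pay off:
bacterium = make_agent(
    rules=[(lambda obs: obs["gradient"] > 0, "swim ahead"),
           (lambda obs: obs["gradient"] < 0, "tumble")],
    default="drift",
)
print(bacterium({"gradient": 0.4}))     # swim ahead
print(bacterium({"gradient": -0.2}))    # tumble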

7. Second-order cybernetics: a bridge too far?
If one accepts these criteria as being valid for complex adaptive systems, and realizes that the social
sciences indeed mainly study those systems - self-organizing, self-referential, autopoietic, and thus
with their own strategies and expectations, with intertwining processes of emergence and adaptation -
then one is confronted with one of the core problems of sociology, economics, and other social
sciences: how to make a science out of studying a bunch of imperfectly smart agents exploring their
way into an essentially infinite space of possibilities of which they - let alone the social scientists
researching them - are not even fully aware. 
There is indeed quite a methodological problem here. It is already very difficult to apply the principles 
and methods (e.g., feedbacks and non-linearities) of first-order cybernetics to empirical social research,
much more so than to sociological theory, and nearly impossible to incorporate a second-order
cybernetics approach in one's research design. Indeed, as far as empirical research is concerned,
second-order cybernetics may be a bridge too far, given the research methodology and the mathematics
presently available.
Applying the principles of first-order cybernetics in empirical research already poses heavy demands
on the data sets and the methods of analysis: every feedback (Xt → Y → Xt+1), every interaction between
variables [Z → (X → Y)], and every non-linear equation (Y = cX² + bX + a), let alone non-linear
differential equation (Y' = cY² + bY + a), demands extra parameters to be estimated, and quickly
exhausts the information embedded in the data set. Admitting on top of that the second-order notion
that research subjects may change because they are being investigated, let alone that these
subjects may reorganize themselves on the basis of knowledge acquired during the research,
exceeds the powers of analysis and imagination of even the most sophisticated methodologists: it is
like trying to solve a single equation with at least three unknowns.
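The arithmetic behind this complaint is easily exhibited. In the sketch below - synthetic data, with an assumed sample of eight observations - every additional parameter of the polynomial family Y = a + bX + cX² + ... consumes one residual degree of freedom, until the model reproduces the noise perfectly and nothing is left with which to test it:

import numpy as np

rng = np.random.default_rng(0)
n = 8                                    # a small, social-science-sized sample
x = np.linspace(0.0, 1.0, n)
y = 2.0 * x + rng.normal(0.0, 0.3, n)    # a truly linear process plus noise

for degree in (1, 2, 4, 7):
    coeffs = np.polyfit(x, y, degree)    # estimates degree + 1 parameters
    resid = y - np.polyval(coeffs, x)
    df = n - (degree + 1)                # residual degrees of freedom
    print(f"{degree + 1} parameters: df = {df}, residual SS = {resid @ resid:.4f}")
# With 8 parameters df = 0: the curve passes through every data point,
# and no information is left with which to test the model.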
In the case of second-order cybernetics these problems indeed multiply: how does one obtain reliable
data within such a framework, where nothing is constant and everything is on the move, let alone base
policy-relevant decisions on such data? How can one still forecast developments when at best
retrospective analysis of how a new level of complexity has emerged seems possible? Certainly, these
are problems that are far from solved, and a lot of work lies ahead before hypotheses derivable from
second-order cybernetics will be fully testable. Nevertheless, the opportunities offered by this paradigm
to present a truly realistic analysis of the complex adaptive behavior of interacting groups of agents
seem too good to pass up.
But the inherent problem remains: the more realistic - and therefore less parsimonious - a theory, the
more complex it becomes, and the more difficult to test the hypotheses and subhypotheses derived
from it which are used in collecting and interpreting the data. If one accepts that social systems have
a high degree of complexity, cybernetic theories become more relevant and fitting, but less testable
as they grow more complex themselves, as is the case with second-order cybernetics as compared to
first-order cybernetics. There is certainly a challenge here, for theorists and methodologists alike.
For the time being, sociology should perhaps model itself more on meteorology than on the natural
sciences, and force itself to give up the ambition to make accurate medium- and long-term predictions,
except in delimited areas of research where complexity is still manageable or can be more or less
contained. Ex post facto explanation of how things have come to be as they are is already difficult
enough for social scientists nowadays. The best they may do at the turn of the millennium is to get
a grip on the underlying laws of change, perhaps by a theory transfer from those subfields within
biology where second-order cybernetics was developed, and consequently to further develop the
theories, the non-linear mathematics and the simulation techniques required to investigate the growth
of complexity of human society.
This might ultimately result in adequate and empirically falsifiable models of self-referential, self-
steering and self-organizing actors on individual and supra-individual levels, interacting with each other
in ever more intricate networks to develop new and unforeseen higher levels of complexity, with new
actors engaging in new activities, speeding up the growth of complexity even more. The best
sociologists can do under these circumstances seems to be to accept that there is no single desirable and
sustainable state for society - only near-continuous transition, often coupled with the impossibility of
forecasting even the near future - and that consequently one can at best engage in some degree of damage
control, by pointing out the probability of future catastrophes to those who might be able to help
avert them.
Unfortunately, interdisciplinary and international-comparative research centers to study complexity-
related problems, though sorely needed, barely exist as yet, although the systemic rather than piecemeal
approach they could provide is required by the sheer complexity and interdependence of present-day
societal problems. This is partly due to a lack of political commitment to finance them on the part of
politicians who have to be re-elected every four years, but partly it is also caused by the fact that most
researchers are not yet educated to work in truly interdisciplinary teams that presuppose an open mind,
and at least a reasonable knowledge of the other disciplines involved.
Ashby's "Law of Requisite Variety" states that only variety within a system can force down the variety
due to the system's environment - at least if the system is to make fully sense of the latter and to be
able to steer it. Both the international political system and the international scientific system still have
a long way to go in this respect, and are barely organized yet as reasonably integrated systems that
are aware of their policy options. Grass roots pressure from below, such as is now visible regarding
environmental pollution problems in most of the Western "television-democracies", is probably required
to force politicians and scientists alike to finally get their act together and to start tackling other
complexity-related problems as well.
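For readers who want the law in arithmetical form: in Ashby's game formulation, a regulator choosing among R distinct responses can reduce the variety of outcomes produced by D distinct disturbances at most by a factor R, so at least D / R outcome states remain - in logarithmic terms, V(O) >= V(D) - V(R). A minimal sketch with invented numbers:

from math import ceil, log2

D = 8    # distinct disturbances the environment can produce
R = 2    # distinct responses the regulator commands

print(ceil(D / R))            # 4: the best achievable outcome variety
print(log2(D) - log2(R))      # 2.0: the same bound, in bits

# Only more variety inside the regulator buys tighter control:
print(ceil(D / 8))            # 1: with 8 responses, full regulation is possible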


NOTES AND REFERENCES
1) Geyer, Felix, and Johannes van der Zouwen, "Cybernetics and Social Science: Theories and 
	Research in  Sociocybernetics", Kybernetes, Vol. 20, No. 6, 1991, pp. 81-92
2) Deutsch, Karl W., The Nerves of Government: Models of Political Communication and Control.
	New York: The  Free Press of Glencoe, 1963

3) Easton, David, A Framework for Political Analysis. Englewood Cliffs, NJ: Prentice Hall, 1965
4) Buckley, Walter, Sociology and Modern Systems Theory. Englewood Cliffs, NJ: Prentice Hall, 1967
   Buckley, Walter (ed.), Modern Systems Research for the Behavioral Scientist: A Sourcebook. Chicago: Aldine, 1968

5) Burns, Tom R., and Walter Buckley, Power and Control: Social Structures and their Transformation.
	London: Sage,   1976;
	Burns, Tom R., Thomas Baumgartner, and Philippe DeVillé, Man, Decisions, Society: The Theory 
	of Actor System  Dynamics for Social Scientists. New York: Gordon and Breach, 1985
	Burns, Tom R., and Helena Flam, The Shaping of Social Organization - Social Rule System Theory 
	with Applications.   London: Sage, 1987

Baumgartner, Thomas, and Atle Midttun, The Politics of Energy Forecasting: A Comparative Study of
	 Energy  Forecasting in Western Europe and North America. Oxford: Clarendon, 1987

6) Rosenblueth, Arturo, Norbert Wiener, and Julian Bigelow, "Behavior, Purpose, and Teleology", 
	Philosophy of Science,   vol. 10, pp. 18-24, 1943; reprinted in Modern Systems Theory for the 
	Behavioral Scientist (W. Buckley, ed.). New  York: Aldine, 1968.

   and also: McCulloch, Warren, and Walter Pitts, "A logical calculus of the ideas immanent in
	nervous activity", Bulletin of Mathematical Biophysics, vol. 5, pp. 115-133, 1943

7) Ashby, W. Ross, Design for a Brain - The Origin of Adaptive Behavior. New York: Wiley and London: 
	Chapman  and Hall, 1952. 

8) Wiener, Norbert, The Human Use of Human Beings - Cybernetics and Society. Garden City, NY: 
	Doubleday, 1956.
9) Shannon, Claude E., and Warren Weaver, The Mathematical Theory of Communication (5th ed.). Urbana:
   University of Illinois Press, 1963
10)     Minsky, Marvin, and Seymour Papert, Perceptrons: An Introduction to Computational Geometry
	 (expanded ed.).    Cambridge, MA, MIT Press, 1988; and also:
	  Minsky, Marvin, The Society of Mind. New York: Simon and Schuster (Touchstone), 2nd ed., 1988

11)     Arthur, W. Brian, "Positive Feedbacks in the Economy", Scientific American, February 1990, 
		pp. 92-99
12)     Parsons, Talcott, Robert F. Bales and Edward A. Shils, Working Papers in the Theory of Action.
	  Glencoe, IL: The Free Press, 1953 
13)     Forrester, Jay W., World Dynamics (2nd ed.). Cambridge, MA: Wright-Allen Press, 1973; and:
	 Meadows, Donella H. and others, The Limits to Growth: A Report to the Club of Rome's 
	Project on the Predicament of Mankind, New York: Universe Books, 1972
14)     The Dutch translation sold no less than a quarter of a million copies.
15)     Van der Zouwen, Johannes, in comments on this paper
16)     Except for the universe itself, as Luhmann has noted, since it presumably has no boundaries.
17)     One of the first to develop this now flourishing field was Watzlawick. See: Watzlawick, 
	Paul, Janet Beavin and Don Jackson, The Pragmatics of Human Communication. New York: 
	Norton, 1967. 
18)     Sennett, Richard, and Jonathan Cobb, The Hidden Injuries of Class. Cambridge: 
		Cambridge University Press, 1977 
19)     To prevent misunderstanding, the following should be noted: in circular causality, 
	the causal feedback may be either linear or non-linear; some authors conceive of linear thinking
	 as merely implying that the later elements of the causal chain are not the cause of previous
	elements; in other words,  no causal loop is postulated. In chaos theory, however, non-linearity 
	means that the feedback in the differential equation is non-linear.
	Newton's differential equations are not all necessarily linear, as in the case of the movement of
	three masses; these analytically unsolvable equations led Poincaré to his chaos theory
	avant la lettre.
20)   Maruyama, Magoroh, "The Second Cybernetics: Deviation-Amplifying Mutual Causal Processes",
	 pp. 304-316 in  Modern Systems Research for the Behavioral Scientist: A Sourcebook 
	(Walter Buckley, ed.), op. cit.
21)     Rapoport, Anatol, and Albert M. Chammah, Prisoner's Dilemma: A Study in Conflict and 
	Cooperation. Ann Arbor:  University of Michigan Press, 1965 
22)     Von Foerster, Heinz, "Cybernetics of Cybernetics", paper delivered at 1970 annual meeting 
	of the American Society for Cybernetics 
23)     Umpleby, Stuart A., "The cybernetics of conceptual systems", paper prepared for the Institute 
	of Advanced Studies,  Vienna, March 8, 1993.
24)  Von Bertalanffy, Ludwig, General System Theory: Foundations, Development, Applications. 
	New York: Braziller,  1975
25)     Maturana, Humberto, "Neurophysiology of cognition", pp. 3-24 in: Cognition: A Multiple View 
	(Paul Garvin, ed.).   New York: Spartan Books, 1970. See also:
	Maturana, Humberto, and Francisco Varela, Autopoiesis and Cognition: The Realization of the
	Living.   Dordrecht/Boston: Reidel, 1980
	Maturana, Humberto, and Francisco Varela, The Tree of Knowledge: The Biological Roots of 
	Human Understanding.   Boston: New Science Library, 1988
26)     Umpleby, Stuart A., "The science of cybernetics and the cybernetics of science", Cybernetics 
	and Systems, 21:109-121, 1990.
27)     Von Glasersfeld, Ernst, The Construction of Knowledge. Salinas, CA: Intersystems Publications.
28)     Von Foerster, Heinz, "On constructing a reality", originally published in 1974, reprinted in 
	Observing Systems (H.  von Foerster, ed.). Salinas, CA: Intersystems Publications.
29)     De Bono, Edward, The Happiness Purpose. Harmondsworth: Penguin Books, 1977, ch. 6 
30)     Soros, George, The Alchemy of Finance. New York: Simon and Schuster, 1988.
31)     Weber, Bruce, "Implications of the application of complex systems theory to ecosystems", 
	pp. 21-30 in The  Cybernetics    of Complex Systems - Self-organization, Evolution,
	Social Change (R.F. Geyer, ed.). Salinas, CA:  Intersystems   Publications.
32)     Wiener, Norbert, Cybernetics, or Control and Communication in the Animal and the Machine.
	 Cambridge, MA:        MIT Press, 1948/1961, ch. 1: "Newtonian and Bergsonian Time"

33)     Umpleby, Stuart A., "Strategies for winning acceptance of second-order cybernetics", paper
	 presented at the  International Symposium on Systems Research, Informatics and Cybernetics, 
	Baden-Baden, Germany, August 12-18,   1991.
34)     Geyer, Felix, and Van der Zouwen, Johannes, "Norbert Wiener and the Social Sciences", 
	Kybernetes, vol. 23, No.7,  1994 (in press)
35)     Wiener, Norbert, The Human Use of Human Beings; Cybernetics and Society, Houghton Mifflin, 
	Boston, 1950/1954;    second edition. New York: Da Capo, 1988
36)     Wiener, Norbert, Cybernetics, or Control and Communication in the Animal and the Machine. 
	Cambridge, MA:   MIT Press, 1948/1961 
37)     Wiener, Norbert, op.cit. (ref. 29), pp. 24-25
38)     Wiener, Norbert, (reference to fireflies)
39)     Ashby, W. Ross, Design for a Brain - The Origin of Adaptive Behavior. New York: 
	Wiley and London: Chapman  and Hall, 1952. 
	Ashby, W. Ross, An Introduction to Cybernetics. London: Chapman & Hall, 1956
40)     Varela, Francisco, Evan Thompson, and Eleanor Rosch, The Embodied Mind - Cognitive Science
	and Human  Experience (3rd ed.). Cambridge, MA: MIT Press, 1993, pp. 4-5 
41)     Heims, Steven J., The Cybernetic Group. Cambridge, MA: MIT Press, 1991

42)     Varela, Francisco, a.o., The Embodied Mind, op. cit., pp.85-86 

43)     Varela, Francisco, a.o., The Embodied Mind, op. cit., pp.87-88

44)     Varela, Francisco, a.o., The Embodied Mind, op. cit., p. 98

45)     Geyer, Felix, and Johannes van der Zouwen (eds.), Sociocybernetic Paradoxes: Observation, 
	Control and Evolution of Self-steering Systems. London: Sage, 1988

46)     Henshel, Richard L., "Credibility and Confidence Loops in Social Prediction", pp. 31-58 in 
	F. Geyer and J. van der Zouwen (eds.) Self-referencing in Social Systems. Salinas, CA:
	 Intersystems Publications, 1990
47)     Van der Zouwen, Johannes, "The Impact of Self-referentiality of Social Systems on Research
	Methodology", pp.59-68 in Self-referencing in Social Systems, op. cit.
48)     Masuch, M., "The Planning Paradox", pp. 89-99 in Sociocybernetic Paradoxes, op. cit.

49)     Aulin, Arvid, "Notes on the Concept of Self-steering", pp. 100-118 in Sociocybernetic Paradoxes, 
		op. cit.; see also:
	Aulin, Arvid, The Cybernetic Laws of Social Progress: Towards a Critical Social Philosophy and a 
	Criticism of Marxism, Pergamon Press, Oxford, 1982

50)     Toffler, Alvin, and Heidi Toffler, War and Anti-war: Survival at the Dawn of the 21st Century.
	Boston: Little, Brown  and Company, 1993,
51)     De Zeeuw, Gerard, "Social Change and the Design of Enquiry", pp. 131-144 in Sociocybernetic 
	Paradoxes, op. cit.

52)     Laszlo, Ervin, Evolution - The Grand Synthesis. Boston: Shambhala, The New Science Library, 
	1987; and also:
	 Laszlo, Ervin, "Systems and Societies: The Basic Cybernetics of Social Evolution",
	pp. 145-172 in Sociocybernetic    Paradoxes, op. cit.
53)     Kauffman, Stuart A., "Antichaos and Adaptation", Scientific American, August 1991, pp. 78-84
   Kauffman, Stuart A., The Origins of Order: Self-Organization and Selection in Evolution. Oxford: Oxford University
   Press, 1992
54)     Swenson, Rod, "End-directed Physics and Evolutionary Ordering: Obviating the Problem of the Population of One",
   pp. 41-60 in The Cybernetics of Complex Systems, op. cit.
55)     Arthur, W. Brian, "Positive Feedbacks in the Economy", Scientific American, February 1990, pp. 92-99
56)     See for example:
   Maturana, Humberto R. (1980), Man and Society, pp. 11-31 in: Autopoiesis, Communication and Society: The Theory
   of Autopoietic Systems in the Social Sciences (F. Benseler, P.M. Hejl and W. Köck, eds.). Frankfurt: Campus.
   Maturana, Humberto R. (1981), Autopoiesis, pp. 21-30 in: Autopoiesis: A Theory of Living Organization. New York:
   North Holland.
   Varela, Francisco J. (1979) Principles of Biological Autonomy. New York: North Holland.
57)     Luhmann, Niklas, "The Autopoiesis of Social Systems", pp. 172-192 in Sociocybernetic Paradoxes, op. cit.
58)     Gierer, A., "Systems Aspects of Socio-economic Inequalities in Relation to Developmental Strategies", pp. 23-34
        in R.F. Geyer and J. van der Zouwen (eds.), Dependence and Inequality - A Systems Approach to the Problems
        of Mexico and Other Developing Countries. Oxford: Pergamon Press, 1982
59)     Prigogine, I. and Stengers, I., Order out of Chaos - Man's New Dialogue with Nature. London: Flamingo, 1984
60)     Prigogine, I. and Stengers, I., Order out of Chaos, op. cit., p. 209  
61)     Langton, Christopher G. (ed.), Artificial Life. Santa Fe Institute Studies in the Sciences of Complexity, Proceedings,
        vol. 6 (Proceedings of first Artificial Life workshop, 1987). Redwood City, CA: Addison-Wesley, 1989
   Langton, Christopher G., Taylor, C., Farmer, J.D., and Rasmussen, S. (eds.), Artificial Life II. Santa Fe Institute Studies in the
   Sciences of Complexity, Proceedings, vol. 10 (Proceedings of second Artificial Life workshop, 1990). Redwood City,
   CA: Addison-Wesley, 1992
62)     Lindenmayer, Aristid, and Grzegorz Rozenberg (eds.), Automata, Languages, Development: At the Crossroads of
        Biology, Mathematics and Computer Science. Amsterdam: North Holland, 1976
   Prusinkiewicz, Przemyslav, and James S. Hanan, Lindenmayer Systems, Fractals and Plants. New York: Springer,
   1989
   Prusinkiewicz, Przemyslav, and Aristid Lindenmayer, The Algorithmic Beauty of Plants. New York: Springer, 1990
63)     Leydesdorff, Loet, "The Evolution of Communication Systems", International Journal for Systems Research and
        Communication Science, 6:219-230, 1994
64)     Leydesdorff, Loet, " 'Structure'/'Action' Contingencies and the Model of Parallel Distributed Processing", Journal
        for the Theory of Social Behavior, 23(1):47-77, 1993
65)     Holland, John H., Adaptation in Natural and Artificial Systems. Ann Arbor: University of Michigan Press, 1975
   Holland, John H., Holyoak, K.J., Nisbett, R.E., and Thagard, P.R., Induction: Processes of Inference, Learning,
   and Discovery. Cambridge, MA: MIT Press, 1986     
66)     Waldrop, M. Mitchell, Complexity - The Emerging Science at the Edge of Order and Chaos. New York: Simon and
        Schuster (Touchstone), 1992; see pp. 145-147



 Copies obtainable by writing, phoning, faxing or e-mailing to the author:
Felix Geyer, SISWO, Plantage Muidergracht 4, 1018 TV Amsterdam, The Netherlands
Phone: 3120 527 0652 / 3120 527 0600   Fax: 3120 622 9430   E-mail: SISWO at SARA.NL
                ABSTRACT: THE CHALLENGE OF SOCIOCYBERNETICS
This paper summarizes some of the important concepts and developments in cybernetics and
general systems theory, especially during the last two decades. Its purpose is to show
how they can indeed be a challenge to sociological thinking. Cybernetics is used here as an
umbrella term for a great variety of related disciplines: general systems theory, information
theory, system dynamics, dynamic systems theory, including catastrophe theory, chaos theory,
etc.
A distinction is made between first-order and second-order cybernetics. First-order cybernetics
originated in the 1940s, exemplified an engineering approach, and was interested in system
stability, and thus in feedback processes in automata and other machines which further
equilibrium conditions and make them amenable to steering efforts. Second-order cybernetics
originated in the 1970s, was based on biological discoveries, especially in neuroscience, and
was interested more in the interaction between observer and observed than in the observed
per se. It has led to a re-evaluation of many of the tenets of mainstream philosophy of science,
which was implicitly based on a rather mechanistic and Newtonian clockwork image of the
universe, stressed linear causality, and had a preference for order rather than disorder. 
Many of the concepts and procedures of first-order cybernetics admittedly seem useful for
sociology: system boundaries; the distinction between systems, subsystems and suprasystems;
the stress on circular causality; feedback and feedforward processes; auto- and cross-catalysis,
etc. However, second-order cybernetics is more likely to influence sociological thinking in
the future.
This is due, first of all, to its insistence that the interactions between the observer and his
subject matter should be included in the system to be studied, which leads to increased
attention for phenomena like self-reference. Second, its basis in biology furthers its predilection
for change rather than stability, for morphogenesis rather than homeostasis, and this may lead
to an increasing stress on self-organization, and to a realistic awareness that sociological
phenomena often cannot be forecast, but at best understood. Third, there is
autopoiesis (Greek for self-production): the recognition that all living organisms
are self-steering within certain limits, and that their behavior can therefore be steered from
the outside only to a very moderate extent. Fourth, this leads to the continuous emergence
of new levels of organized complexity within society, at which new behavior can be
demonstrated and new interactions with the environment become possible.
Finally, attention is devoted to the emerging "science of complexity" - including neural
networks, artificial intelligence, artificial life, etc. - while the methodological drawbacks of
especially second-order cybernetics are discussed.