SOCIOCYBERNETICS



Felix Geyer and Johannes van der Zouwen





Published in Handbook of Cybernetics (C.V. Negoita, ed.). New York: Marcel Dekker, 1992, pp. 95-124




1. Introduction

Sociocybernetics can be defined as the application of concepts, methods, and ideas of the so-called new cybernetics or second-order cybernetics to the study of social and sociocultural systems, but also vice versa: second-order cybernetics is certainly enriched by the often unexpected results of social science studies in which the concepts of second-order cybernetics are applied.

This chapter cannot give more than a rough impression of the wide variety of innovative theoretical and empirical research that is carried out within the sociocybernetics paradigm, and an overview of the main trends and developments since the late seventies. It will be based on the work of what is conceivably the most important international forum devoted explicitly to sociocybernetics, and to a relatively sustained critical assessment of issues, priorities and directions for further work in the field: the Sociocybernetics Sections at the triennial International Congresses of Cybernetics and Systems of the WOGSC (World Organization of General Systems and Cybernetics), co-organized by the authors.

After each of these congresses, the organizers co-edited a volume with a selection of papers; for reasons of space, only some of these papers can be discussed in more detail. They are selected both to give the reader an impression of the widely divergent subject matter covered by sociocybernetics, and also to show the direction in which sociocybernetics is developing. The present contribution is an adapted version of a recent overview of the field (Geyer and Van der Zouwen, 1990b).


2. Sociocybernetics: an overview of developments during the last decade

Quite a lot has happened in the field of sociocybernetics since the first two volumes with that title appeared (Geyer and Van der Zouwen, 1978). An effort will be made to sketch these developments, and show where we started, and where the frontiers of the field are now.

Buckley, one of the first pioneers to apply systems concepts to the social sciences in a way that reckons with the specific nature of social systems, stressed already in the mid-sixties (Buckley, 1967) the hardly surprising fact that social systems are essentially different from biological and technical ones - the most frequently studied systems up till then, and studied largely with the aid of classical first-order cybernetics. Even so, it took almost another decade for systems concepts to gain a real foothold in the social sciences.

In the introduction to the above volumes, the term sociocybernetics was chosen to refer to the interpenetration of general systems theory and the social sciences - and not merely to the one-way traffic of applying concepts from general systems theory without further reflection to the social sciences. The authors then were, and still are, convinced that the emergence of the so-called second-order cybernetics was largely due to this increasing focus, within general systems theory, on the social sciences - a field where the inapplicability of first-order cybernetics soon became evident. These were intellectually exciting days, although the systems movement within the social sciences still had to gather steam, and pronouncements still had a defensive ring - towards the social science community rather than towards the colleagues in systems theory.

Indeed, the themes in these 1978 volumes could still be described as refutations of the frequently voiced objections against the application of systems theory to the social sciences: for example, the reproach of implicit conservatism that was largely caused by the fact that the Parsonian systems approach, with its stress on homeostasis rather than morphogenesis, was virtually the only one known in social science (cf. Buckley, 1967). Other objections voiced against the systems approach were technocratic bias and unwarranted reductionism; in view of the prevalence of the rather mechanistic type of first-order cybernetics then in fashion, these perhaps somewhat stereotypical objections among social scientists only superficially acquainted with the systems approach were certainly understandable (Lilienfeld, 1978).

Less defensively and more positively, we tried to define the main themes of sociocybernetics as aspects of the emerging "new cybernetics", known in the meantime as second-order cybernetics:

1) Sociocybernetics stresses and gives an epistemological foundation for science as an observer-observed system. Feedback and feedforward loops are not only constructed between the objects that are observed, but also between them and the observer. The subjective and time-dependent character of knowledge is emphasized by this approach: information, in the broadest sense of the word, is neither seen as inherently "out there", waiting to be discovered by sharp analytical minds, nor is it entirely viewed as a figment of the observer's own imagination, or as an environment-independent automatic end-result of his own inner cognitive processes. Knowledge is constructed - and continually reconstructed - by the individual in open interaction with his environment.

2) The transition from classical, rather mechanistic, first-order cybernetics to modern, second-order cybernetics is characterized by a number of interrelated problem shifts:
a) One shift is from the system that is being controlled to the actively steering system, and consequently:

- to the nature and genesis of the norms on which steering decisions are based;
- to the information transformations, based on both observations and norms, that are necessary to arrive at steering decisions;
- to the learning processes behind repeated decision-making.

b) Especially when several systems try to steer each other, or an outside system, attention is focussed on the nature of, and the possibilities for, communication or dialogue between these systems.

c) When the behavior of a system has been explained in the classical way, through environmental influences and systemic structure, the problem is raised of the "why" of this structure itself, qua origin and development, and the "why" of its autonomy with regard to the environment. In systems terminology: the questions of morphogenesis and autopoiesis.

3) These problem shifts in cybernetics involve an extremely thorough reconceptualization of many all too easily accepted and taken for granted concepts - which yields new notions of stability, temporality, independence, structure vs. behavior, and many other concepts.

4) The actor-oriented systems approach, promulgated in 1978 as part of sociocybernetics, makes it possible to bridge the "micro-macro" gap - the gap in social science thinking between the individual and society, between freedom and determinism, between "anascopic" explanations of society that depart from the activities of individuals conceived as goal-seeking, self-regulating systems, and "katascopic" explanations that view society "from the top down" and see individuals as subservient to system-level criteria for system stability.


In 1982, a volume was published containing largely empirical work applying systems concepts to an integrated, cross-disciplinary study of the problems of underdeveloped countries (Geyer and Van der Zouwen, 1982). A general theme here was: how can general systems theory and general systems methodology contribute towards an improved understanding of the problems of social systems in transition, particularly those of developing countries? Does the application of the systems approach in this area indeed lead to new insights, and especially to new solution alternatives, over and above those of the traditional disciplines?

However, what concerns us here is the conceptual-theoretical advances that were made. For example, the actor-oriented approach to dependency theory employed in this volume not only elucidated the individual vs. society problem by concretely demonstrating the existing links between motivations and actions of individual actors and large-scale societal processes, thus explaining how a certain historical process has led to a certain result: the present-day dependency of Third World nations. This approach also differentiates hierarchically between the game itself and the "meta-game": i.e. the capability of certain actors to determine the rules of the game, and therewith largely its contents (unequal exchange) and its outcome (perpetuation of inequality).

Another interesting theoretical development, demonstrated "empirically" by computer simulation (Gierer, 1982), lies in the explanation of inequality as resulting from the cumulative interaction over time of the auto-catalytic, self-enhancing effects of certain initial advantages (e.g. generalized wealth, including education) with depletion of scarce resources. It then turns out that striking inequalities can be generated from nearly equal initial distributions, where slight initial advantages tend to be self-perpetuating within the boundary conditions of depleting resources; it is here that the concept of autopoiesis, developed in the mid-seventies in cellular biology by Maturana and Varela (1980), finds one of its first applications in social science.
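
The mechanism can be illustrated with a minimal toy simulation. The sketch below is only a hedged rendering of the general idea - cumulative self-enhancement under a depleting resource - and not a reconstruction of Gierer's own model; the quadratic weighting used to represent auto-catalysis and all parameter values are assumptions chosen purely for illustration.

import random

# A minimal toy sketch, NOT Gierer's (1982) actual model: each round a slice of a
# finite resource pool is handed out in proportion to the square of each agent's
# current wealth (auto-catalytic self-enhancement), until the pool runs dry.
# All parameter values are illustrative assumptions.
N_AGENTS = 20
ROUNDS = 400
SLICE = 0.02        # fraction of the remaining pool released per round
pool = 2000.0       # finite stock of resources that fuels all growth

random.seed(1)
# nearly equal starting positions: 1.0 plus a very small random perturbation
wealth = [1.0 + random.uniform(-0.01, 0.01) for _ in range(N_AGENTS)]

for _ in range(ROUNDS):
    release = SLICE * pool                   # resources made available this round
    weights = [w * w for w in wealth]        # advantage feeds on itself (self-enhancement)
    total_weight = sum(weights)
    for i in range(N_AGENTS):
        wealth[i] += release * weights[i] / total_weight
    pool -= release                          # depletion of the scarce resource

richest_share = max(wealth) / sum(wealth)
print(f"Share of total wealth held by the single richest agent: {richest_share:.0%}")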

It became increasingly clear around this time, the early 1980's, that it is precisely general systems theory, paradoxically, that does not recognize the existence of systems, at least not as immutable and objectively existing entities with fixed boundaries. Unlike many of the traditional disciplines, modern systems theory is in this sense explicitly opposed to reification: the tendency to ascribe a static "thing" character to what really are dynamic processes. Especially when trying to apply second-order cybernetics to the investigation of social systems - in this case developing countries - the way in which one can analytically distinguish systems turns out to be problem-dependent (and hence implies relativism), observer-dependent (and hence ultimately subjective or intersubjective) and time-dependent (and hence implying a dynamic rather than static character).

Already implicit in the themes of this 1982 volume were the main concerns of our third volume (Geyer and Van der Zouwen, 1986): the sociocybernetic paradoxes inherent in the observation, control and evolution of self-steering systems - especially the paradox important to policy-makers worldwide: how can one steer systems that are basically autopoietic and hence self-referential as well as self-steering? The authors of a number of empirical studies in this volume were rather pessimistic about the possibilities of planning and steering a number of specific social systems, while a theoretical study by Masuch drew attention to the planning paradox:

Perfect planning would imply perfect knowledge of the future, which in turn would imply a totally deterministic universe in which planning would not make any difference. While recognizing the usefulness of efforts to steer societies, a cost-benefit analysis, especially in the case of intensive steering efforts, will often turn out to be negative: intensive steering implies intensive social change, i.e. a long and uncertainty-increasing time period over which such change takes place, and also an increased chance for changing planning preferences and for conflicts between different emerging planning paradigms during such a period. Nevertheless, given a few human cognitive predispositions, there unfortunately seems to exist a bias for oversteering rather than understeering.

A historical overview of planning efforts concludes that - in spite of intensified theorizing and energetic attempts to create a thoroughly planned society during the last two centuries - the different answers given so far regarding the possibility of planning cancel each other out. There is even no consensus about a formal definition, though usually planning is seen as more comprehensive, detailed, direct, imperative or expedient when compared with other steering activities that are not defined as planning. In our most recent volume (Geyer and Van der Zouwen, 1990) we have gone further into the reasons why increased knowledge about human (i.e. self-referential) systems often does not help us to improve our planning of such systems.

In our 1986 volume, apart from discussing the possibilities of planning, we tried to answer two other important questions:

- Should one opt for the "katascopic" or the "anascopic" view of society; in other words, should the behavior of individuals and groups be planned from the top down, in order for a society to survive in the long run, or should the insight of actors at every level, including the bottom one, be increased and therewith their competence to handle their environment more effectively and engage more successfully in goal-seeking behavior?

- What should be the role of science, especially the social sciences, in view of the above choice: should it try mainly to deliver useful knowledge for an improved steering of the behavior of social systems and individuals, or should it strive to improve the competence of actors at grass roots level, so that these actors can steer themselves and their own environment with better results?

To answer these questions, Aulin followed a cybernetic line of reasoning that argues for non-hierarchical forms of steering. Ashby's Law of Requisite Variety indeed implies a Law of Requisite Hierarchy in the case where only the survival of the system is considered, i.e. if the regulatory ability of the regulators is assumed to remain constant. However, the need for hierarchy decreases if this regulatory ability itself improves - which is indeed the case in advanced industrial societies, with their well-developed productive forces and correspondingly advanced distribution apparatus (the market mechanism). Since human societies are not simply self-regulating systems, but self-steering systems aiming at an enlargement of their domain of self-steering, there is a possibility nowadays, at least in sufficiently advanced industrial societies, for a coexistence of societal governability with ever less control, centralized planning and concentration of power.
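
Aulin's argument can be made slightly more tangible with a hedged sketch (not his own derivation). In its entropy form (Ashby, 1956), the Law of Requisite Variety states that the uncertainty remaining in the essential variables E of a system can be reduced below the uncertainty of the disturbances D only by the regulatory capacity of the regulator R:

\[ H(E) \;\ge\; H(D) - H(R) \]

If one further assumes, purely for illustration, that k stacked levels of regulation contribute their capacities additively, then keeping E within a tolerated uncertainty H* requires roughly

\[ k \;\ge\; \frac{H(D) - H^{*}}{H(R)} \]

levels of hierarchy. On this reading, the requisite amount of hierarchy k shrinks as the regulatory ability H(R) of each level improves - which is the core of Aulin's case against the inevitability of centralized, hierarchical steering.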

As the recent history of the Soviet Union demonstrates, this is not only a possibility, but even a necessity: when moving from a work-dominated society to an information-dominated one, less centralized planning is a prerequisite for the very simple reason that the intellectual processes dealing with information are self-steering - and not only self-regulating - and consequently cannot be steered from the outside by definition. Our answer to the above questions, in other words, was quite straightforward: there should be no excessive top-down planning, and science should help individuals in their self-steering efforts, and certainly should not get involved in the maintenance of hierarchical power systems.

Of course, this is not to deny that there is a type of system within a society that can indeed be planned, governed and steered, but this is mainly because such systems have been designed to be of this type in the first place, i.e. to exemplify the concept of the control paradigm. Modern, complex multi-group society in its entirety, conceptualized as a matrix in which such systems grow and thrive, can never be of this type.

If one investigates a certain system with a research methodology based on the control paradigm, the results are necessarily of a conservative nature; changes of the system as such are almost prevented by definition. According to De Zeeuw (1986), a different methodological paradigm is needed if one wants to support social change of a fundamental nature and wants to prevent "post-solution" problems; such a paradigm is based on a multiple-actor design, does not strive towards isolation of the phenomena to be studied, and likewise does not demand a separation between a value-dependent and a value-independent part of the research outcomes.

Our 1986 volume also analysed the emerging broader context of the steering problematique, and thus contributed to the development of a systems epistemology for the social sciences, the necessity of which we argued already in 1978. Two interesting "theory transfers" from other disciplines should be mentioned in this respect:
In a fascinating contribution by Laszlo, comparing the evolution of social systems with the wider context of the basic cybernetics of evolution per se, use was made of Prigogine's (1984) theoretical framework. The thesis was defended here that, while evolution admittedly may follow widely divergent paths in different fields of enquiry, there are unitary principles underlying the concrete course of evolution in different domains - i.e. basic invariances in dynamics rather than accidental similarities in morphology - and that discovering them has a survival value in highly complex modern societies with their uncertain futures. Contradictory theories of evolution (e.g. classical thermodynamics based on particles in or near equilibrium vs. Darwin's theory of the origin of species) have uneasily co-existed for more than a century.

With the development of non-equilibrium thermodynamics - which also considers particles far from equilibrium, and can therefore deal with cross-catalytical chemical oscillators - by Prigogine and others, these contradictions turn out to be only apparent. Evolution occurs when open systems are exposed to massive and enduring energy flows. It now turns out that evolution - in physics, biology and the social sciences - goes together with increasing size and complexity and decreasing bonding energy. Strongly bonded, but relatively simple particles - whether atomic nuclei, cells, or human individuals - act as building blocks for more weakly bonded, but larger and more complex entities.

Another interesting "theory transfer" was the reconceptualization of the autopoiesis concept developed by Maturana and Varela (1980) to make it applicable to the field of the social sciences. Luhmann (1986) defended the quite novel thesis here that, while social systems are self-organizing and self-reproducing systems, they do not consist of individuals or roles or even acts, as commonly conceptualized, but of communications. It should not be forgotten that the concept of autopoiesis was developed while studying living systems. When one tries to generalize the usages of this concept to make it also truly applicable to social systems, the biology-based theory of autopoiesis should therefore be expanded into a more general theory of self-referential autopoietic systems. It should be realized that social and psychic systems are based upon another type of autopoietic organization than living systems: namely on communication and consciousness, respectively, as modes of meaning-based reproduction.

While communications rather than actions are thus viewed as the elementary unit of social systems, the concept of action is admittedly necessary to ascribe certain communications to certain actors. The chain of communications can thus be viewed as a chain of actions - which enables social systems to communicate about their own communications and to choose their new communications, i.e. to be active in an autopoietic way. Such a general theory of autopoiesis has important consequences for the epistemology of the social sciences: it draws a clear distinction between autopoiesis and observation, but also acknowledges that observing systems are themselves autopoietic systems, subject to the same conditions of autopoietic self-reproduction as the systems they are studying.

The theory of autopoiesis thus belongs to the class of global theories, i.e. theories that point to a collection of objects to which they themselves belong. Classical logic cannot really deal with this problem, and it will therefore be the task of a new systems-oriented epistemology to develop and combine two fundamental distinctions: between autopoiesis and observation, and between external and internal (self-)observation. Classical epistemology searches for the conditions under which external observers arrive at the same results, and does not deal with self-observation. Consequently, societies cannot be viewed, in this perspective, as either observing or observable. Within a society, all observations are by definition self-observations.


3. Self-referencing

It is in our most recent volume (Geyer and Van der Zouwen, 1990) that we have concentrated on this emerging problem area: the often unexpected consequences of the fact that all observations within a society are self-observations. One of the main characteristics of social systems, distinguishing them from many other systems, is their potential for self-referentiality. This means that the knowledge accumulated by the system itself about itself in turn affects the structure and operation of that system. This is the case because, in self-referential systems like social systems, feedback loops exist between parts of reality on the one hand, and models and theories about these parts of reality on the other hand.

Concretely, whenever social scientists systematically accumulate new knowledge about the structure and functions of their society, or about subgroups within that society, and subsequently make that knowledge known - through their publications or sometimes even through the mass media, and in principle also to those to whom that knowledge pertains - the consequence often is that such knowledge will be invalidated: the research subjects may react to it in such a way that the analyses or forecasts made by the social scientists are falsified.

In this respect, social systems are different from many other systems, including biological ones. There is a clearly two-sided relationship between knowledge about the system on the one hand, and the behavior and structure of that system on the other hand. Biological systems, like social systems, admittedly do show goal-oriented behavior of actors, self-organization, self-reproduction, adaptation and learning. But it is only social systems that arrive systematically, by means of experiment and reflection, at knowledge about their own structure and operating procedures, with the obvious aim to improve these.

In our 1986 volume, we already dealt in detail with several aspects of the specific character of social systems. The accent then, however, was rather on the degree of governability of those systems: our core area of interest there was the paradox of steering self-steering systems. Our 1990 volume, on the other hand, reflects a shift to the present preoccupations of sociocyberneticians. The accent in this case lies on the consequences of self-referentiality, in the sense of self-observation, both for the functioning of social systems and for the methodology and epistemology used to study them. We do have a paradox here too: the accumulation of knowledge often leads to a utilization of that knowledge - both by the social scientists and the objects of their research - which may change the validity of that knowledge.


3.1 Self-referencing and prediction

This trend is illustrated for example by Henshel, who analyzes what he terms credibility and confidence loops in social prediction. Self-fulfilling prophecies have of course been studied before. Merton (1948) defined the self-fulfilling prophecy as "an unconditional prediction or expectation about a future situation such that, first, had it not been made, the future situation envisaged would not have occurred, but because it is made, alterations in behavior are produced which bring about that envisaged situation, or bring that envisaged situation to pass." The notion of a self-fulfilling prophecy was later supplemented by its mirror opposite: the self-defeating prophecy.

The novelty of Henshel's approach lies in the fact that he extends the notion of self-fulfilling prophecies to serial self-fulfilling prophecies, where the accuracy of the earlier predictions, themselves influenced by the self-fulfilling mechanism, impacts upon the accuracy of the subsequent predictions. He distinguishes credibility loops and confidence loops.

In credibility loops, source credibility, i.e. the credibility of the forecaster, becomes significant, because it is the same forecaster who is issuing repeated predictions. There is a deviation-amplifying positive feedback loop here between: 1) a self-fulfilling mechanism, 2) the accuracy of the prediction, and 3) the credibility of the forecaster. Several examples are given, in widely varying fields like pre-election polling, stock market predictions, intelligence testing, etc.

Confidence loops have certain features in common with credibility loops; the critical difference between the two lies precisely in what is held constant, or uniform, across the repeated prediction iterations. In the case of the credibility loop it is the person of the predictor which must remain the same, in order for the associated credibility to rise or fall. In the confidence loop, continuity across predictive iterations in the prediction itself is at issue. The prediction in the confidence loop must exhibit constancy in either rank-order or direction on successive pronouncements. Such uniformity in the direction of the prediction, together with the postulated self-fulfilling mechanism, produces increased accuracy, which in turn produces increased confidence in the prediction as iterations of the loop unfold. Examples are given here in fields like inflationary spirals, validation of criminality theories, attribution theory, etc.

Of course, feedback loops involving a self-defeating mechanism lower rather than increase predictive accuracy over several iterations. When inserting a self-defeating dynamic in the system, an oscillating system is created in which the time paths of the key variables now oscillate instead of assuming a monotonic form. The so-called cobweb cycle is a good example here.
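
A hedged toy rendering of such a loop may clarify the dynamics. The sketch below is not Henshel's formal model; the functional forms and constants are invented for illustration. In it, a forecaster's credibility rises with the accuracy of previous predictions, while the public reacts to the forecast in proportion to that credibility. With a positive (self-fulfilling) reaction the loop pumps accuracy and credibility upward over successive iterations; reversing the sign of the reaction (self-defeating) erodes accuracy instead.

# Hedged toy rendering of a credibility loop (not Henshel's formal model).
# mechanism > 0: self-fulfilling public reaction; mechanism < 0: self-defeating.
def run(mechanism, iterations=12):
    credibility = 0.3                # initial trust in the forecaster (assumed)
    history = []
    for _ in range(iterations):
        forecast = 0.5               # e.g. predicted turnout (assumed)
        baseline = 0.4               # what would happen without any forecast (assumed)
        # the public shifts behavior toward (or away from) the forecast,
        # the more strongly the more credible the forecaster is
        outcome = baseline + mechanism * credibility * (forecast - baseline)
        accuracy = 1.0 - abs(forecast - outcome) / forecast
        # credibility adapts toward the accuracy just observed
        credibility = 0.5 * credibility + 0.5 * max(0.0, min(1.0, accuracy))
        history.append(accuracy)
    return history

print("self-fulfilling:", [round(a, 2) for a in run(+1.0)])
print("self-defeating :", [round(a, 2) for a in run(-1.0)])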


Henshel's analysis has fascinating implications in two different areas:

1) He demonstrates two "nested" differences between the natural world and the social world: self-fulfilling or self-defeating prophecies exist only within the social world, while moreover these self-fulfilling or -defeating tendencies are magnified by the feedback loops in which they are embedded, and impact directly upon the accuracy of the predictions made.

2) He also demonstrates differences between prediction in the natural vs. the social sciences: The existence of credibility loops and confidence loops suggests that, on certain occasions at least, the social sciences can pull themselves up by their own bootstraps, in terms of improving their predictive accuracy. Such a "bootstrap" enhancement of accuracy is not possible for prediction in the natural sciences. The social sciences appear to be aided especially with respect to the accuracy of directional and ordinal predictions, in ways which are impossible for natural phenomena. If a social scientist issues a directional or ordinal prediction, he may be aided by self-fulfilling dynamics. On the other hand, if the same social scientist issues a quantified prediction, he may be damaged in ways which do not apply to the natural science world. That is, for quantified prediction his accuracy may be damaged by the same self-fulfilling dynamics.

Self-defeating tendencies necessarily reduce rather than increase the accuracy of directional and ordinal predictions, and again have equivocal but usually damaging effects on quantified accuracy. Considering both tendencies, self-fulfilling and self-defeating, we find that the weaker forms of prediction (directional and ordinal) are sometimes aided, sometimes damaged. Quantified predictions, long taken as the hallmark of mature science, are ordinarily damaged. In terms of obtaining precision and high accuracy in quantified forecasts, the social sciences are therefore uniquely disadvantaged as a result of the existence of self-fulfilling and self-defeating tendencies in the social world as opposed to the natural world.


3.2 Self-referencing and methodological research

Van der Zouwen (1990) addresses a similar problematique, looking at the consequences of self-referentiality for research methodology in the social sciences. Methodological research is defined here as research aimed specifically at the evaluation and improvement of the performance of research methods. This contribution deals with the following questions, especially within the area of survey research:

1) Can a feedback loop be observed between the available, and presumably valid, knowledge about the quality of particular methods of data collection on the one hand, and the way in which these methods are used on the other hand?

2) To what degree do public opinion researchers anticipate the outcomes of their research when choosing and implementing their research methods?

3) What are the consequences of this anticipation for the possibilities to conduct methodological research? What are the consequences of the self-referentiality of the social system called "the survey industry" for methodological research aimed at improving the operation of that system?

In order to answer these questions, Van der Zouwen deals with a subset of methodological research: methods research, i.e. the development of particular types of justification for research methods, conceived as prescriptions and recommendations for the activities of researchers.

In experimental methods research, research efforts are focussed on problems like: the effect of a personal vs. a formal interviewing style on the accuracy and amount of information obtained from the interviewees; the effects of question wording on responses obtained: e.g. open vs. closed questions; adding "don't know" as a separate category; the order in which response categories are presented, etc. Usually, there is a "split ballot" design here, with respondents randomly assigned to the experimental conditions relating to either the questionnaire or the interviewers. This experimental design has optimal internal validity: differences on the dependent variables, i.e. the response distributions, can unequivocally be attributed to differences in the experimental conditions. However, the drawback of this type of research is that it excludes feedback from the dependent variables to the independent ones; in other words, self-referentiality cannot be observed with this type of research design. Moreover, the results are difficult to generalize: experiments with the wording of a specific question cannot be generalized to the wording of questions in general.

Non-experimental methods research deals, ex post facto, with the statistical relationships in current opinion research between the ways in which questions are formulated and the characteristics of the response distributions obtained; or with the correlations between characteristics of the interview situation and the behavior of interviewers and respondents. This type of research demonstrated, for example, that variance increases with the number of response categories, while the proportion of "don't know" responses increases when this response category is explicitly offered by the interviewer.

While such results sound rather obvious, there is a tricky problem here: this type of methods research has to assume that the designers of the questionnaires, i.e. the public opinion pollers, were not aware of these wording effects while formulating the questions, or at least did not reckon with them. In other words: that they did not formulate their questions in such a way that the response distribution obtained would meet certain criteria, like being not too skewed. If this assumption is invalid, then the causal interpretation of the correlations found becomes dubious: is it the question wording that has produced this particular response distribution, or are there feedback loops from (previous) research results to the researcher involved - i.e. is it rather the researcher's need for a particular kind of response distribution that has led to a specific formulation of the question?

Van der Zouwen tried to find an answer to this question in a research project where verbatim transcripts of six quite different survey projects were analyzed, making use of a cybernetic model of interviewer-respondent interaction. The hypotheses themselves are not at issue here. The interesting point is that, when testing these hypotheses, a number of unexpected statistical relations between variables turned up, which can best be typified as consequences of anticipation: those researchers who expect problems regarding the task-related behavior of their interviewers, or regarding the quality of the information to be gained from the respondents, will understandably take countermeasures. They design the questionnaire more carefully and in more detail than usual, spend much time selecting and instructing their interviewers, decrease the "distance" between them and the interviewers by intensive monitoring of the fieldwork, etc.

While such counter-control measures look plausible and quite rational at first sight, their effect is that the correlations between the independent variables (the above points: complexity, distance, experience and difficulty) and the dependent ones (i.e. interviewer behavior and response quality) cannot be interpreted anymore in terms of one-way causality. The sophisticated, anticipating researcher actually reduces the interpretability and thus the utility of non-experimental methods research.

Similar results were obtained in an ex post facto meta-analysis of some 20 research projects regarding the effects of the mode of data collection (mail survey, telephone interview, face to face interview) on the information obtained.

Van der Zouwen's conclusion from all these different research projects is not only relevant for all social science research - as opposed to research in the exact sciences - but deals also with the core problematique of the 1990 volume: to what extent is the accumulation of valid knowledge about social systems possible, given the fact that they are self-referential, for researchers who, either as individuals or as a group, are themselves self-referential systems?

Van der Zouwen stresses the paradox that it is precisely methods research which hampers its own further development, in two different ways: by an increasing standardization of research practice, and by the anticipatory behavior of survey researchers. Standardization reduces the variance in the data collection procedures used, which results in unreliable estimates of the effects of the methods on the research outcomes. And when there are differences with respect to the methods used, these are largely caused by the researchers' anticipation of the effects of their methodological decisions. As these anticipations become more frequent and more adequate, the relations between the characteristics of the methods used and the data obtained increasingly become artefacts of these anticipations.
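
The statistical side of the standardization argument can be illustrated numerically. The sketch below is only a hedged illustration with invented data-generating assumptions, not Van der Zouwen's own analysis: when the method characteristic hardly varies across survey projects, the estimated effect of that characteristic on data quality becomes highly unstable.

import random
import statistics

# Hedged numerical illustration (not Van der Zouwen's analysis): a true method
# effect of 0.5 is assumed; what varies is how much the "method" characteristic
# itself varies across projects. All data-generating choices are invented.
random.seed(2)

def estimated_method_effect(method_values, true_effect=0.5, noise=1.0):
    """OLS slope of data quality on the method characteristic for one batch of projects."""
    x = method_values
    y = [true_effect * xi + random.gauss(0.0, noise) for xi in x]
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def spread_of_estimates(method_sd, n_projects=30, replications=500):
    """Standard deviation of the estimated effect across many replications."""
    estimates = [
        estimated_method_effect([random.gauss(0.0, method_sd) for _ in range(n_projects)])
        for _ in range(replications)
    ]
    return statistics.stdev(estimates)

print("varied methods       (sd = 1.0): spread of estimates =", round(spread_of_estimates(1.0), 2))
print("standardized methods (sd = 0.1): spread of estimates =", round(spread_of_estimates(0.1), 2))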


3.3 Self-referencing and political systems

Anderson (1990) concentrates on political systems, and he stresses their intelligence rather than their self-referentiality. This enables him to draw parallels with developments in artificial intelligence (AI). He considers present theory about complex organizations and political structures rather weak, and feels this to be the case because it focusses on stasis rather than on change and dynamics. New intellectual tools need to be developed for theories of social processes, but they should be formal and should fit the substantive domains of application. Artificial intelligence, unlike other formal tools borrowed by sociology from other disciplines, has developed techniques specifically geared to the study of human action and human capacities.

Political systems or "polities", i.e. subsystems of societies that specialize in solving certain kinds of problems, conceptualized as sets of roles rather than individuals, have goals, beliefs and knowledge about themselves and their environment, and inference rules. Roles are conceptualized, in AI-terminology, as frames, i.e. hierarchical structures in which objects at each lower level are related to objects at the next higher level by the transitive "is a (class inclusion)" relation. In frames, the objects are described through declarative rules, expressing their properties and rules for action. Roles have memories; the role-specific memories are abstracted events connected to present action options, including defaults, through so-called Minsky C-lines (Chomsky and Miller, 1963). Although each role implies a unique perspective on the system itself and its environments, roles are organized into role sets, within which one may expect to find shared subclasses of beliefs.

Political systems obviously need the relevant and correct facts to solve their specific problems. However, they perceive selectively and in any case filter information - even apart from the possibility of distorting it. Their problem, also one of the central problems in AI, is how to select among large sets of possibly relevant facts that subset which can be used to reason about or solve the problem concerned, and do this in the limited amount of time available.

Three processes determine how a problem-relevant subset of facts gets determined:
- insulation: the environments of political systems are stratified systems, where especially power and social distance are relevant variables;
- learning: generally not too fast, since politicians have a tendency to scan their environments for facts that fit with the repertoire of familiar issues and problems;
- self-reflection: political systems are self-referential and through self-reflection try to develop new insights.

While it is relatively arbitrary where one draws the boundaries of political systems, it seems useful in any case to distinguish the decision-making inner political apparatus, those engaged in implementing the decisions, and the clientele to whom the decisions pertain. Politics is obviously always competitive, and even within authoritarian political systems there is always internal competition for influence and power, which forces the actors to engage in self-referencing, by taking note of the different points of view of allies and adversaries alike. This capacity for self-referencing is what constitutes the intelligence of a polity; the necessity to learn to shift between multiple perspectives (Bråten, 1986) helps to explore the hidden potential of the system's goals, rules, beliefs and capacities.

Anderson then analyzes the role concept and political goal structures, conceptualizing roles and goals in political systems as hierarchical structures of production rules. At the top of the hierarchy are those rules that express the system's ideology or belief regime, and the elite's strategic rules for the reproduction and extension of its arena of power. The next level contains the rules of policy formation and communication, and the strategic rules followed by the incumbents of the different roles in the structure. At the bottom of the hierarchy we find rules for the selection of actions on the system's environments.

A frame can now be more clearly defined as a hierarchical structure that consists of rules, relations, and abstract objects. A national political system can be viewed as a frame, with a hierarchy going down from the nation state via sectors and sector-objects to agents and agencies. Properties defined by rules at the higher levels are inherited by the lower levels, and thus the "top rules" provide default assignments to the actions at the bottom level, although they can obviously be overridden by other instructions.
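
The frame idea itself can be sketched compactly. The toy code below is a hedged illustration of default inheritance along an "is a" hierarchy, not Anderson's actual model; the frame names, properties and rules are invented for illustration.

# Hedged sketch of the frame idea borrowed from AI, not Anderson's actual model:
# frames form an "is a" hierarchy, lower frames inherit rule-defined defaults from
# higher ones, and any frame can override an inherited default with a rule of its own.
class Frame:
    def __init__(self, name, parent=None, **rules):
        self.name = name
        self.parent = parent          # the "is a" (class inclusion) link
        self.rules = rules            # declarative rules local to this frame

    def lookup(self, prop):
        """Return the most specific rule for prop, walking up the hierarchy."""
        frame = self
        while frame is not None:
            if prop in frame.rules:
                return frame.rules[prop]
            frame = frame.parent
        raise KeyError(prop)

# a toy national political system as a frame hierarchy (all names invented)
nation = Frame("nation-state", belief_regime="market liberalism",
               default_action="maintain status quo")
sector = Frame("health sector", parent=nation, policy_rule="contain costs")
agency = Frame("regional agency", parent=sector,
               default_action="negotiate with providers")   # overrides the top-level default

print(agency.lookup("belief_regime"))    # inherited from the nation-state frame
print(agency.lookup("default_action"))   # overridden locally at the agency level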

One of the problems with this type of modelling, as with all modelling, is that models of political systems require a high degree of resolution to be realistic - which makes them hard to comprehend and analyze. Moreover, frames were originally developed to model knowledge systems, where the rules at the different hierarchical levels are made to be consistent. However, it is typical for political systems that inconsistencies occur in the rule systems at all levels, while moreover these inconsistencies are strategically exploited by political actors. Also, rules in social science theory are context-dependent and undergo interpretation; they are not like the "if-then" statements in frame methodology, which carry precise and unambiguous instructions. An as yet unsolved problem is therefore how the frame model should be modified to fit the contextuality of human social rule use.

One should not forget that it is only events that occur in the environment; these events then give rise to a political problem through conceptualization. However, as Luhmann (1986) has also stressed, a political system can only recognize those problems that it is "programmed" to recognize. Problems sometimes become important because the means for their solution exist. The way of defining the problem, the choice of alternative solutions, and the means to implement these, may be different in different political systems, with the result that national styles of problem-solving may develop.

The self-referentiality of the political system comes out clearly in the fact that a successful solution of a high-priority problem or the failure to solve such a problem will strengthen or weaken the relevant part of the political system. When a new problem makes itself felt, previous successes or failures will have caused a change in the system's state. Detailed case studies are necessary to demonstrate how precisely such successes and failures in problem-solving affect the internal structure of polities. We see a variant here of the problem analyzed by Van der Zouwen; the successes as well as the failures of previous efforts at solving specific political problems feed back on present-day efforts and tend to produce a standardization of the solutions deemed possible in certain cases, while political decision-making obviously thrives on anticipatory behavior. If a politician has learned his lessons well, he knows what manipulative stimuli to give in order to elicit specific reactions from the public, not unlike the methodologist who more or less determines the answer distribution of his respondents by using certain methods.


3.4 Self-referencing and participatory democracy

Robinson (1990) reports an interesting experiment designed to improve the effective organization of participatory democracy in a cooperative organization. Especially during the last decade, participation problems have appeared - and been documented - in socialist as well as capitalist economies. In general, these problems fall into two broad categories:
- how can ordinary members exercise control over management?
- how can ordinary members exercise control with management?

It turns out that concern with control over management leads to a concentration on cooperative structure, while concern with control with management leads to a concentration on means. While some structures and means (techniques) certainly have been successful in some instances, co-ops that fulfil democratic criteria, and are felt by their members to do so, are generally small, with less than 20 members. Larger cooperatives usually find techniques like frequent general meetings, job rotation, etc. impracticable. Their formal structures may be sophisticated, but fail to instill feelings of involvement on the part of their members.

Member participation in decision-making at all levels of an enterprise - requiring both control over, and control with management - is problematic. The effective managerial monopoly on information excludes the majority of cooperators from anything but token supervision of decision-making. Control with management is likewise almost impossible; the co-op members are not immersed in the information and value flows, but have other jobs to do. This managerial information monopoly is a self-reproducing process; the more information and power is centralized already, the greater becomes the ability of management to monopolize information and power even more. The result is the re-appearance of alienation, strikes, and management-labor conflict - even where ownership-labor conflict has been eliminated.

Robinson does see a way out of this dilemma. It is to recognize that agents, and especially collective agents, are only constituted and reproduced in relation to objects which they influence and control. If "the membership" is to become an agent, it must do so in relation to specific issues or projects. The problem now, under a primary management-worker role division, is that workers are not in a control relationship. Consequently, they do not produce and reproduce themselves collectively as "an agent". "Agents" have to engage in learning if they want to be effective. Now, the advantage of Robinson's approach is that this is fully recognized; member participation and worker control can be exploratory, experimental and partial. No one has the impossible task of knowing about everything; the objects of control can be changed. Yet, control is quite clearly there, and immediately so; this is not a partial or gradual process.

Basing himself on Bernstein's theory of economic return, and modifying its shortcomings - especially its demand that economic return should be directly related to what the workers themselves have produced - on the basis of Ashby's Law of Requisite Variety, Robinson then first discusses a production problem in one of the departments of a cooperative. He comes to the conclusion that control is only interesting when it is partial. Strategies of the department concerned have to reckon with the strategies of (inputs from) the other departments. Thus, a nested set of choices and outcomes emerges that can give rise to "meta-strategies": "if they do this, we'll do that" - and in this way an immersion in dialogue occurs that is characteristic of management. Clearly, this dialogue needs an object, or there would be nothing to discuss, and there would also be no basis on which an otherwise unstructured group, such as the membership at large of a cooperative, could form itself as an agent.

Robinson, recognizing the inherent limitations of general meetings in larger cooperatives, now developed a computer model that can serve as the forum of discussion in cases where such meetings are not practical. He implemented Ashby's definition of control on a number of computers, removing some restrictive assumptions from Ashby's account of control: the actions that determine the outcome are now themselves determined in the course of dialogue. This derestricted account of control is consistent with Howard's (1971) metagame theory and Pask's (1976, 1978) conversation theory, and is termed Ashby mapping.

Robinson illustrates his concepts with an experiment about wage negotiations between two different departments of a cooperative. In Ashby's original formulation (Ashby, 1956), two players made choices in a given order of play, whereby the intersection of their choices determined the outcome. In Ashby mapping, the players select strategies rather than choosing options, while moreover the rules governing the order of play are relaxed. The nature of the original game is changed by making the outcome conditional on acceptance by the players. Thus, the situation moves from the original context of regulation and disturbance to a realistic imitation of a bargaining process. Reacting to each other's strategies, the players may now develop symmetrical meta-strategies, even though their basic strategies are not symmetrical. Both strategies and meta-strategies are stated by making moves that lead to conditional outcomes, and thus are public events; both the moves and the responses to these moves are known to both players. An Ashby map can thus be seen as a form of representation in which restrictions on moves are relaxed to allow strategies, and restrictions on order of play relaxed so that outcome is conditional on symmetrical strategy or meta-strategy. Using Ashby mapping, one can therefore move from an objective to a subjective control formalism; the outcome is no longer determined by the facts of the moves and the table of outcomes, but by the ability of the players to reach agreement.
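
The step from Ashby's original formulation to Ashby mapping can be sketched in a few lines of toy code. The example below is a hedged illustration only - the outcome table, the published strategies and the acceptance rules are invented, not Robinson's wage-negotiation material - but it shows the two relaxations at work: players state conditional strategies rather than single moves, and an outcome becomes binding only when it is acceptable to both sides.

# Hedged toy illustration of an "Ashby mapping" style negotiation (all content invented).
# Ashby's original setting: the intersection of the players' choices fixes the outcome.
OUTCOMES = {
    ("raise wages", "cut hours"):  "A gains, B loses",
    ("raise wages", "keep hours"): "budget overrun",
    ("hold wages",  "cut hours"):  "both lose",
    ("hold wages",  "keep hours"): "status quo",
}

# Derestricted play: each department publishes a strategy, i.e. a reply to every
# possible move of the other side; these statements are public events.
strategy_a = {"cut hours": "hold wages", "keep hours": "hold wages"}   # department A
strategy_b = {"raise wages": "cut hours", "hold wages": "keep hours"}  # department B

def accepts(player, outcome):
    """Toy acceptance rules standing in for each department's (emerging) goals."""
    if player == "A":
        return outcome in ("A gains, B loses", "status quo", "budget overrun")
    return outcome in ("status quo", "budget overrun")

# Search for a pair of moves that is consistent with both published strategies
# and acceptable to both players; only such a pair becomes a binding outcome.
agreed = None
for move_a in ("raise wages", "hold wages"):
    move_b = strategy_b[move_a]
    if strategy_a[move_b] != move_a:
        continue                      # the pair is not mutually consistent
    outcome = OUTCOMES[(move_a, move_b)]
    if accepts("A", outcome) and accepts("B", outcome):
        agreed = (move_a, move_b, outcome)
        break

print("binding outcome:", agreed if agreed else "none - players must revise strategies")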

Ashby mapping is thus a very useful technique to analyze self-referentiality, both of others and of oneself, in the dynamic context of an ongoing process of negotiation where implicit goals and values continually emerge; it is not primarily a way of representing an "objective reality", but rather an interpretation of the world by those who create it.


3.5 Self-referencing and health care planning

Hornung (1990) describes the construction of knowledge based systems for the analysis of development problems in health care planning, both at national and at regional and even local levels. We encounter here the same problem mentioned before: decision-making and planning in health care systems take place in between the extremes of spontaneous, intuitive decisions on the one hand, and decisions based on costly, time-consuming quantitative computer-assisted studies and operations research on the other hand. Cognitive systems analysis as understood here mediates between these two extremes; it integrates general theoretical knowledge about the structure of health care systems with the available empirical knowledge of experts and decision-makers about specific problems in specific countries or areas.

Health care systems are viewed as autopoietic (i.e. self-organizing and self-referential) sociotechnical systems, located at the intersection of social interaction systems, economic systems and natural (biological) systems. The fact that they are autopoietic implies, in a planning context, the need for an active and effective participation of all members of the system for which planning is done.

Self-reference enters at several levels:
1) The level of individual learning, exemplified by the interaction of the modeller with his cognitive model;
2) The level of generating group expertise about a problem, by an interaction between the modeller, the model, and other participants in a modelling or planning group;
3) The level of self-organization in the scientific subsystem, i.e. the interaction between the modeller or modelling group and the scientific community;
4) The level of management and policy-making in the health subsystem, a national system or even the international system, consisting of the interaction between modellers and decision makers at the corresponding levels.

In the context of self-reference and self-organization, computer-assisted tools of policy-making and development planning obviously have to meet two basic challenges, as has already become clear from Robinson, quoted above:

- they should allow for a participative planning process that takes into account the views and opinions (i.e. cognitive domains) of all the groups concerned;
- they certainly should not remain the exclusive domains of technical specialists who merely present the results to the decision makers, but should rather promote, and even require, interaction and feedback between the computer, the planner and the decision maker.

This is all the more important, as Hornung stresses, if one agrees with the thesis of Maturana and Varela (1980) that in any strict sense there is no flux of thought from one person to another, and that denotative functions of messages lie only in the cognitive domains of the observer. In this view, understanding results from cooperative behavior of two persons, and the participative interactive planning process envisaged here indeed implies such cooperation.

Hornung's qualitative systems analysis tries to utilize the advantages of both expert systems and simulation models, without the disadvantages of either. Expert systems can store a large quantity of knowledge about well defined problems. They can provide propositions for decision making, and explain how they arrive at them. The universe of possible answers is known beforehand; what is not known, however, is the answer in a particular case. Therefore, expert systems are very suitable tools for routine decision making, but not for policy planning and development planning which are concerned with non-routine decision making.

In simulation models, only the system itself and the principles of its dynamics are known, not the universe of possible events, i.e. the entire state space of the system. This becomes gradually known only when running the model and experimenting with it. Simulation models are excellent tools for communication, since models permit information transfer by making the other person do and experience things, instead of interacting by questions and answers. However, conventional simulation models do not provide a knowledge base in the detailed way that expert systems do, while on the other hand expert systems are usually not suitable for experimentation. Within the framework of his cognitive systems analysis, Hornung developed the so-called DEDUC-methodology for qualitative modelling, which distinguishes classificatory concepts (object structures), "if-then" statements (implications) and premises, and which differentiates between an "orientor module" containing normative knowledge like the objectives, goals and values of the planner, and a "knowledge module" containing factual knowledge about the problem area, i.e. the internal model of the planner and of the experts, respectively.
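
The flavor of such qualitative modelling can be conveyed with a very small sketch. The code below is a hedged, much-simplified illustration of the division between a knowledge module and an orientor module and of forward-chaining over "if-then" implications; it is not Hornung's DEDUC software, and all concepts, rules and premises are invented for illustration.

# Hedged, much-simplified sketch in the spirit of qualitative modelling, not DEDUC itself.
KNOWLEDGE_MODULE = [   # factual implications: if all antecedents hold, conclude the consequent
    ({"rural clinics underfunded"}, "low vaccination coverage"),
    ({"low vaccination coverage"}, "rising infant morbidity"),
    ({"user fees introduced", "low household income"}, "care-seeking delayed"),
    ({"care-seeking delayed"}, "rising infant morbidity"),
]

ORIENTOR_MODULE = [    # normative knowledge: goals and values of the planner
    ({"rising infant morbidity"}, "VIOLATES goal: improve child health"),
]

def deduce(premises, rules):
    """Forward-chain over if-then rules until no new statements can be derived."""
    facts = set(premises)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

# the planner experiments by changing these premises and re-running the deduction
premises = {"rural clinics underfunded", "user fees introduced", "low household income"}
derived = deduce(premises, KNOWLEDGE_MODULE)
assessment = deduce(derived, ORIENTOR_MODULE)

print("derived facts:  ", sorted(derived - premises))
print("normative flags:", sorted(f for f in assessment if f.startswith("VIOLATES")))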

Usually, cognitive domains imply both knowledge about reality and a normative assessment of facts. DEDUC models such cognitive systems and externalizes them in the form of computer models, such that the user is able to investigate his own externalized and objectified cognitive domain carefully and systematically. He can experiment self-referentially with a subset of his own cognitive domain turned into a computer model in order to resolve planning and policy problems. One of the advantages of Hornung's method is that models can be iteratively refined, so that construction can be started with a very simple model (rapid prototyping) which moreover can be constructed very quickly, since there is a hierarchical set of models such that the basic outline of systems models at lower hierarchical levels roughly follows from the models at the higher levels.

Contrary to classical cybernetics, which has stressed the importance of selecting the essential variables when engaging in model building, the autopoietic concept with its emphasis on dynamics insists on the importance of what Maturana and Varela have termed the essential relations.

Hornung then illustrates his modelling technique with a detailed example of the national system of Mexico and its different subsystems.

Like Van der Zouwen, he concludes that self-reference is at work on different levels. The science subsystem of a society has brought forth cognitive systems modelling, by means of which scientific knowledge is changing itself.



3.6 Self-referencing and psychological research

Hirsig (1990) and his colleagues set off for a journey into space. They developed the hardware and software for an extremely interesting experiment to collect reliable empirical data about emotionally-motivationally determined behavior, in which subjects had to operate a space craft simulator. Self-reports about emotions and motives are notoriously unreliable, affected as they are by factors like social desirability and moral-ethical value judgments. Projective tests admittedly do yield empirical material, but it is hard to code and evaluate, and the reliability and validity of the data remain doubtful. Research by means of interviews, questionnaires, etc. usually taps imaginary emotional or motivational situations, and gives ample opportunity for cognitive distortions. The simplest methods attempt to tap emotional states on the basis of physiological measures; but here too, interpretation of the data remains too unspecific for subtly differentiated research questions. Summarizing: the classical test-methodological conditions yield emotions only in cognitively processed form, while the direct manifestation of emotional and motivational behavior determinants presents problems in coding and evaluation.

Recent developments within the field of interactive TV and computer games suggest a way out of this dilemma; the high degree of ego-involvement observed in children and adults alike when playing these games suggests that interactive, computer-run experimental apparatus can be used in the empirical investigation of the emotional and motivational aspects of behavior.

Hirsig and his colleagues then set out to develop a systemic conceptual model for computer-aided, interactive experimental designs. This model contains the following elements and the interrelations between them: the objective and the subjectively perceived stimulus situation; the intended and actual actions of the subject and the subjectively experienced success in performing these actions; the reference values for the variables of experience, the difference between these reference values and actual experience, and the possibility of modifying these reference values; and the behavioral goals and the possibility of modifying one's behavioral strategies as well as one's perceptual filters.

This model is an operationalization of the basic hypothesis, derived from stability theory, that subjects dynamically regulate their actual experience by means of their actions, in such a way that it conforms to their reference values regarding this experience. As in Robinson's Ashby mapping, a basic premise is that subjects have a sufficient amount of internal variety - in this case variety in their behavioral repertoire and behavioral strategies - to influence the stimulus situation through their actions, thus stabilizing their experience by bringing the actual experience closer to the reference values for this experience.

The construct variable "subjectively experienced success" monitors the individual's longer term stabilization behavior; if it remains subjectively too low, the individual has three options to still stabilize his experience at a point near his reference values:
- modification of behavioral strategies, i.e. learning;
- modification of reference values assigned to the perceived situation, i.e. adaptation of expectations;
- modification of the subject's perception, i.e. modification of the input filter in such a way that no component of the subjectively perceived situation will continue to be significant for the critical experience dimension.
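The underlying regulation hypothesis can be sketched, purely for illustration, as a simple feedback loop in which actions reduce the discrepancy between actual experience and the reference values, with the three options above as fallbacks when stabilization fails. The variables and update rules below are assumptions of this sketch, not Hirsig's actual model.

    # A minimal sketch, under assumed functional forms, of the regulation hypothesis:
    # the subject acts so that experienced security and arousal approach individual
    # reference values.

    reference  = {"security": 0.7, "arousal": 0.4}   # individual reference values
    experience = {"security": 0.2, "arousal": 0.9}   # actual experience in the situation

    def act(experience, reference, gain=0.5):
        """One regulation step: the action reduces the experienced discrepancy."""
        return {d: e + gain * (reference[d] - e) for d, e in experience.items()}

    def success(experience, reference, tolerance=0.1):
        """Subjectively experienced success: all discrepancies within tolerance."""
        return all(abs(reference[d] - experience[d]) <= tolerance for d in reference)

    for step in range(10):
        experience = act(experience, reference)

    if not success(experience, reference):
        # If stabilization keeps failing, the subject has the three options above:
        # 1) change the behavioral strategy (e.g. a larger gain)   - learning
        # 2) adapt the reference values to what is attainable      - expectations
        # 3) change the perceptual filter so the dimension no longer registers
        reference["arousal"] = experience["arousal"]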

In setting up their experiment, Hirsig and his colleagues took as their point of departure the classic investigative paradigm in the field of attachment research: the opposed needs for security and exploration of small children, within the context of the conflict between their familiar, security-providing mothers on the one hand and frightening but fascinating strangers on the other. The distances the children maintain to these two actors serve as indicators for the construct variables "security" and "arousal". For the present experiment, mother and stranger were of course replaced by more age-appropriate interaction partners. Subjects were trained to operate a spacecraft simulator, and were told the experiment was intended to test its efficiency. A home base with which radio contact was maintained served as a friendly, helpful partner, while a menacing, but stationary UFO took the role of a fascinating, but dangerous object. The cockpit was realistically designed with numerous instruments and control lights and gave a view into space by real-time computer-generated graphics.

Acoustically too, the situation was made as realistic as possible: home base became barely audible on the radio with increasing distance, while the roar of the UFO became deafening as it was approached. Warning signals from the on-board computer became louder whenever a meteorite approached the spacecraft. With meteorites, the subject could take several courses of action:
- home base offered unconditional help in any crisis situation, but in that case took over control of the spacecraft (supervision); with greater distance to home base, help took longer to arrive, but the subject's autonomy was greater;
- by changing course meteorites could be avoided;
- also, they could be blasted with the cannon.

When the subject had developed his own behavioral strategy in this respect (e.g. with individually different average distances kept to home base), he was confronted with a UFO of whose existence he had not previously been informed. Here again, individually characteristic behavior patterns developed, ranging from careful approach to outright flight. To check the core premise - that the stimulus situation, as measured by distance from home base and from the unknown object, stands in a close and unambiguous relation to experienced security and experienced arousal - two physiological variables were measured during the flight: heart rate and galvanic skin response, while a hidden video camera recorded the subjects' facial expressions. Individual reference values for security and arousal were indicated by the mean distance to home base and to the unknown object, respectively. A projective test administered after the experiment tapped the subjects' need for security and arousal; the motivational scores on this projective test correlated highly with the results of the adventure experiment, thus indicating the high external validity of this experimental approach.


3.7 Self-referencing and economic theory

DeVillé (1990) lands us firmly with our feet on the earth again. His contribution, entitled "Equilibrium versus reproduction", criticizes general equilibrium theory, considered by most economists to be an adequate theoretical description of a market-decentralized economy. Three important lessons can be learned from this critical analysis:

1) The effort to develop a theory of society which relies exclusively on methodological individualism presents unsolvable difficulties;
2) Once such methodological exclusiveness is abandoned, the sharp demarcation between economics and the other social sciences becomes untenable;
3) The construction of any adequate theory of society requires the elaboration of a dynamic theory of reproduction and transformation, combining human freedom and agency with structural constraints.

Economists often feel that neoclassical economics, since it is based on a well-developed and formalized rational choice theory, is the appropriate theoretical framework to deal with issues traditionally studied by sociologists; and some sociologists, especially those propounding rational choice theory, support this expansionist view. Reactions have come from economists and sociologists alike. Some economists, most notably the French "Regulation School", question whether a convincing macro-economic theory, adequately representing the global functioning of a decentralized market economy, can be elaborated on the basis of the neoclassical framework. Sociologists have criticized the weaknesses of traditional economic models, in which many sociological variables are either left out or kept exogenous.

DeVillé has developed, with amongst others Burns and Baumgartner (1982, 1986), the "actor-oriented systems approach", which is based on two key ideas:
1) that the behavior of individual (or collectively structured) actors is fundamentally strategic, and therefore does not take its environment as given, as parametric;
2) that society can be conceived as a multi-level, hierarchically structured system; it can be viewed this way because of the existence of a complex set of rules (i.e. institutions as "rules of the games") that dominate each other according to, among other things, the power relations between social actors.

It then follows, paradoxically, that economic actors truly compete against each other precisely by trying to escape from the state of affairs defined by economists as "perfect competition". They do so also by engaging in socio-political competition, in ways that might even contradict the standard behavioral assumption of profit maximization. Competitive struggles are thereby structured as multi-level games. Assuming that a more dynamic approach to competition indeed requires quite different behavioral assumptions, it becomes difficult to maintain the present sharp dichotomy between economics and sociology.

The key issue in economic theory is to provide an adequate theoretical description of the global functioning of a decentralized market economy. Since Adam Smith, economic theory has tried to answer the question: can the pursuit of self-interest by free and independent agents, through voluntary and not a priori coordinated exchanges, result in order rather than anarchy? Neoclassical general equilibrium theory (NCGET) claims to be able to provide an affirmative answer. However, as DeVillé stresses, this claim rests on a number of unrealistic assumptions and on an equilibrium method which fails to clarify how the behavior of individuals in a non-equilibrium state will spontaneously bring the system into an equilibrium state; the resulting ideal state is no more than a thought experiment with a potential normative implication: if and only if the world were like the one implied by these unrealistic assumptions would it be characterized by "order".

NCGET started by postulating a price adjustment rule determined by the market excess demand functions: if, at a certain price level, there is an excess demand, prices will rise until an equilibrium is reached. However, since perfect competition is assumed, prices are parametric for the agents operating in the market, i.e. they cannot be influenced by any of them. To answer the question of who then determines the price if no single actor can, an "auctioneer" (some centralized device) therefore had to be postulated. This auctioneer announces prices for the various commodities, calculates the excess demands at these prices from the answers received, and then adjusts the prices. During this process, however, no effective trading, consumption or production can take place.
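The tâtonnement logic of this auctioneer can be made concrete with a minimal sketch under invented demand and supply functions; it is meant only to illustrate the adjustment rule, not any particular NCGET model.

    # A minimal sketch, with invented functional forms, of the auctioneer's
    # price-adjustment rule: prices move with excess demand, and no trading
    # takes place until the announced price (approximately) clears the market.

    def excess_demand(price):
        """Illustrative excess demand: demand falls and supply rises with price."""
        return 10.0 / price - 2.0 * price

    price, step = 1.0, 0.05
    for _ in range(1000):
        z = excess_demand(price)
        if abs(z) < 1e-6:       # (approximate) equilibrium: excess demand is zero
            break
        price += step * z       # raise the price under excess demand, lower it otherwise

    print(round(price, 3))      # about 2.236, where 10/p equals 2p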

Later theoretical developments relaxed these rather absurd and unrealistic assumptions; however, when the auctioneer and his actions are removed from the theory, no convergence from a non-equilibrium state can be proved, and even the dynamic interactions among agents become difficult to conceptualize. If one keeps the auctioneer as part of the theory, his task becomes more complicated: apart from implementing the price adjustment rule, he additionally has to decide upon and enforce a rationing scheme, allocating among buyers the commodities in excess demand, and among sellers the commodities in excess supply. Thus, on a more general level, NCGET demonstrates the difficulties inherent in the construction of a truly dynamic theory of social systems.

In the actor-oriented systems approach, the price system is a meta-level structure of the highest possible order, acting as a constraint imposed upon all individual agents, and beyond their reach. The adjustment principle is the "system need", since it is the necessary structural requirement for the system's reproduction. In realistic models of dynamic social systems, one has to be careful not to attribute knowledge to agents within the model (e.g. about prices in equilibrium states) that can only be acquired by the model designer. The rationality of actors also has to be defined differently than in the NCGET models; Simon's concept of bounded rationality, based on the recognition of the limited availability of information and the equally limited computational abilities of human agents, comes closer to the mark here.

DeVillé then sketches the outlines of an alternative research program, roughly defined as an actor-oriented evolutionary theory of a decentralized market economy. No a priori equilibrium assumptions are made here. Economic agents operate in a complex environment, characterized by "radical uncertainty", and their behavior could be described as "strategic decision-making based on bounded rationality". In other words, there is no optimal strategy; "satisfactory" strategies are determined according to multi-level, hierarchized criteria: e.g. "at least survive, possibly expand, or even diversify". Transactions occur through the confrontation of these strategic behaviors with bargaining procedures or rules. These rules, again hierarchically structured and with varying degrees of generality (from micro to macro), can take the form of explicit rules or institutions when they are beyond the range of individual decision-making.
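Such satisficing under hierarchized criteria can be illustrated, purely hypothetically, as a lexicographic search over aspiration levels rather than an optimization; the strategies and criteria below are invented for the example and are not DeVillé's formalism.

    # A minimal sketch of satisficing under hierarchized criteria
    # ("at least survive, possibly expand, or even diversify"):
    # the agent takes a satisfactory option, not the optimum.

    strategies = [
        {"name": "cut costs",        "survive": True,  "expand": False, "diversify": False},
        {"name": "enter new market", "survive": True,  "expand": True,  "diversify": False},
        {"name": "price war",        "survive": False, "expand": True,  "diversify": False},
    ]

    def satisficing_choice(strategies):
        """Try the most ambitious aspiration level first; settle for less if needed."""
        for required in (("survive", "expand", "diversify"),
                         ("survive", "expand"),
                         ("survive",)):
            for s in strategies:
                if all(s[c] for c in required):
                    return s        # first satisfactory strategy at this level
        return None                 # no strategy even secures survival

    print(satisficing_choice(strategies)["name"])   # 'enter new market'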

NCGET makes a clear-cut distinction between a theory of the existence of equilibria and a theory of convergence towards those equilibria, the latter being treated as secondary both conceptually and in terms of the sequence of research tasks to be performed. However, there is no reason to limit oneself to the study of economic processes that converge towards NCGET equilibria. Stability of economic systems - and of social systems in general - could also be conceptualized as states of the system in which its core structure and processes reproduce themselves, although micro-units like economic agents might find themselves in non-optimal situations.

What is needed is an economic theory of institutions, explaining which minimal set of institutional mechanisms is necessary for the theoretical description of the dynamic processes of a capitalist, decentralized market economy. Institutions should not be dealt with as exogenous to the system; it should be recognized that they emerge from interactive processes among agents, while at the same time posing enduring constraints on individual behaviors, and thus shaping and structuring the interactive processes between these individuals.

The neo-classical equilibrium method should be abandoned; not because it is not valid in itself, but because it imposes an untenable dichotomy between static theories and dynamic theories, between the theory of equilibrium states and the theory of processes.

Institutions can be conceived as "equilibrium solutions" of coordination problems that cannot be solved through market processes. However, in the reproduction method advocated by DeVillé, an equivalent has to be found for the equilibrium conditions of the equilibrium method. Such an equivalent might be described as follows: a system is in a process of reproduction when the institutional framework and the selection processes it entails guarantee that possibly non-optimal but satisficing situations prevail for the "boundedly rational" individual (or collectively organized) agents - in such a way that they will not be induced to engage in strategic behavior in an attempt to change this institutional framework, but on the contrary will accept bearing the burden of its maintenance costs.


3.8 Self-referencing and economic models

Midttun (1990) also deals with the inadequacy of much of economic theory, but with more stress on the necessity for policy-makers to select sufficiently sophisticated, yet workable economic models to guide their policy decisions. The problem is that economic theory has developed ideal-typical constructs with a high degree of internal consistency and strong normative power, which, however, is bought largely at the expense of realism. Existing economic models often have a limited scope, while their actor and structure assumptions make for considerable deviation from the "messy" real world. Advice sought from such idealized and limited models of the economy may be right within their limited scope, but nevertheless gives wrong guidance for the political economy as a whole.

Here, the paradox of self-referential systems, as analyzed specifically by Van der Zouwen for the case of methods research, comes to the fore again: one of the reasons for the above is that models may indeed be corroborated or falsified by the very policies based on policy advice derived from them. Applying Ashby's Law of Requisite Variety, one can argue that the political governance system must have models of the political economy that are sufficiently rich to map the relevant variety found in it, at least if it is to exercise successful control.
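The point can be illustrated with a deliberately artificial sketch of the Law of Requisite Variety: if the governance system's internal model distinguishes fewer states than the economy can exhibit, the variety of outcomes it can hold down remains large. The numbers below are invented and do not come from Midttun.

    # A minimal sketch: a regulator whose model distinguishes fewer states than
    # the environment cannot reduce outcomes to a single goal state.

    disturbances = range(8)                     # distinct states the economy can produce

    def regulate(disturbance, model_variety):
        """The response can only depend on what the internal model distinguishes."""
        perceived = disturbance % model_variety # a coarse internal model
        return (disturbance - perceived) % 8    # attempted compensation

    for model_variety in (2, 4, 8):
        outcomes = {regulate(d, model_variety) for d in disturbances}
        print(model_variety, len(outcomes))     # outcome variety shrinks only as model variety grows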

For pragmatic purposes, economic modelling is therefore faced with a difficult tradeoff between realism and analytical simplicity. The more extensive the policy ambitions, the stronger the need for comprehensive models of the political economy to guide policy decisions in a "collectively rational" way. But the more comprehensive the models become in terms of including the multidimensional complexity of interacting political and economic processes, the less founded is the policy advice that can be derived from them. Midttun then devotes his contribution to discussing this dilemma of the tradeoff between realism and analytical simplicity in three models of political economy:

- neoclassical marginalism with its paradigm of the self-regulating market (roughly DeVillé's NCGET);
- Keynesian macro-economics, with its paradigm of the planned mixed economy;
- negotiated political economy, which contains a number of post-Keynesian political science and political sociology critiques (approximately DeVillé's reproduction method).

The neoclassical paradigm of the self-regulating market is essentially a model of the parametric self-governance of the economic system, where a stable state is reached unintentionally through the interaction of economic actors within a given set of market rules. The Keynesian and later macro-economic models are generally a combination of the neoclassical model and a model of economic governance through rational state intervention. The negotiated political economy perspective, finally, displays models of competitive multiple-centered governance, where economic actors engage in economic transactions, but also deliberately organize to reshape market conditions and transaction rules.

These three models can be seen as successive steps in increasing systemic complexity, ranging from single-level transaction systems to multi-leveled and multiple-centered systems. The self-referential character of complex systems poses severe limitations on the possibility of comprehensively modelling the political economy; moving from simplistic to complex, realistic modelling also implies a move from fully specified optimal solutions to conditionally specified sets of alternatives. Each of these three successive models implies a widening of systemic boundaries or field of reference; neoclassical economics tends to restrict itself to pure market processes, Keynesianism shifted from a micro- to a macro-orientation and included a rational state playing an important role as an external regulator of the socio-economic system, while the negotiated political economy perspective broadens economic analysis to encompass a number of both political and administrative elements, thus creating fuzzier boundaries between economic and other social systems.

By making different extensions of their field of reference and by varying the "tightness" of their a priori analytical assumptions, neoclassical economics, Keynesian macro-economics and negotiated political economy delineate different aspects of the political economy and are faced with different sets of methodological problems. Neoclassical economics, for example, is based to a large extent on assumptions of closed systems with fixed causal structures; as in much of social science, the constancy of causal relations is taken for granted, and one searches for laws of social behavior. However, as theories of the political economy become more inclusive - through the widening of the systemic boundaries or the field of reference, as well as through the loosening of analytical assumptions or the inclusion of a greater degree of system multi-dimensionality - the analyst is increasingly faced with the complex and morphogenetic character of social systems, which precludes predictions about social processes and events in any strong sense. Keynesianism assumed a multiplier effect of the state's role in stimulating consumer demand and supplementing private investment.

This assumption was based on another implicit assumption: i.e. that the state has sufficient internal control over its own implementation process, and sufficient protection against encroaching particularistic interests. The problems of economic governance in the last two decades -characterized by stagflation, stagnant economies and expectation crises - have served to underline the unrealistic nature of the above assumptions. As the public sector became large enough to influence the economy, welfare policy developed as well and turned out to be subject to particularistic claims; the close coupling of the partially contradictory goals of macro-economic stabilization and welfare policy thus served to tie up the freedom of the state to efficiently pursue a macroeconomically motivated policy.

The negotiated political economy perspective contests the Keynesian assumption of a collectively rational state, unbound by interest conflicts within the economy in its internal decision-making. In reality, such interest conflicts abound: the regulatory state apparatus is likely to act suboptimally from the viewpoint of collective rationality of society as a whole, as a result of biased political interest aggregation on the input side and implementation problems on the output side. Governance should be viewed therefore as a multiple-centered and only partially coordinated system, where the state has to govern the economy through negotiations with other interests. In order to become more realistic, the Keynesian model therefore has to be enlarged with a set of organizations representing market actors, and a negotiating arena of competing regulatory agents supplementing and/or contesting state governance.

A problem in this respect is that such a negotiated political economy tends to over-allocate support to well-organized groups representing particularistic interests. The reasons should be clear:

- While the interest groups have large gains and relatively small costs of mobilization, the inverse situation holds for society as a whole. Targeted public support paid out of public funds results in a considerable increase in welfare for each member of the target group at a relatively modest total cost for the system as a whole. On the other hand, to mobilize efficient support for a collectively optimal allocation of resources, if possible at all, is relatively costly. Consequently, both the distribution of payoffs and the costs of efficient mobilization favor particularistic interests (a small worked example with illustrative numbers follows this list).

- On the implementation side, organizational inertia, the autonomy of bureaucracy and the penetration of political interests invalidate the macroeconomic assumption of a neutral and rational state. Specialized bureaucracies and private sector representatives share common assumptions, priorities and procedures, such that the implementation of political decisions may serve to further underline the bias towards particularism created on the political input side.
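The asymmetry described in the first point can be made concrete with a small worked example; the figures are invented and serve only to show the order-of-magnitude logic of concentrated benefits and diffuse costs.

    # A small worked example with invented numbers: mobilization is rational
    # for the target group but not for society as a whole.

    group_size, transfer_per_member = 1_000, 10_000.0    # a well-organized target group
    taxpayers = 10_000_000                                # society bearing the total cost

    total_transfer    = group_size * transfer_per_member  # 10,000,000 in targeted support
    cost_per_taxpayer = total_transfer / taxpayers        # 1.00 each: barely noticeable

    mobilization_cost_per_person = 100.0                   # organizing, lobbying, lost time
    print(transfer_per_member > mobilization_cost_per_person)   # True: the group organizes
    print(cost_per_taxpayer   > mobilization_cost_per_person)   # False: resistance does not pay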

For Midttun, then, as for the authors discussed above, self-referentiality poses limits to modelling. In the negotiated political economy model, predictions may severely affect the very operations of the economic behavior that is being modelled. When politico-economic modelling becomes closely coupled to political decision-making, and particularly when the policy process itself is incorporated in the model, modellers face the problem that they have to make pronouncements about the actors' expected behavior in a situation where the actual behavior of those same actors may be heavily influenced by the cognition gained through the model and its forecasts.

This problem of self-reactivity refers to a chain of linkages between information, cognition, organization and action. If information resulting from the forecast is fed back into the cognitive models of actors who participate in the system that is being forecasted, and if moreover those actors are organized in such a way that they can act on the basis of this information, they will then potentially be able to alter their behavior as a result of the forecast. Consequently, the mapping of the system must now also include mapping of self-reactive properties - and the reactions to these reactions, etc. - and all of these must be handled within the model, which logically ends up in an infinite regress, and necessitates an increasingly complex model that is vulnerable to validity and reliability problems.
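This regress can be sketched, under an assumed and deliberately simple reaction function, as a forecast that is repeatedly revised to take account of the reactions it provokes; in this toy case the revisions happen to converge, but nothing guarantees this in general.

    # A minimal sketch of self-reactivity: a published forecast feeds back into
    # the behavior of the actors being forecasted, so the modeller must also model
    # reactions to the forecast, reactions to those reactions, and so on.

    def reaction(forecast):
        """Organized actors partially counteract whatever growth is announced."""
        baseline = 3.0                  # growth if no forecast were published
        return baseline - 0.5 * forecast

    forecast = reaction(0.0)            # naive forecast: ignore self-reactivity
    for _ in range(20):                 # model the reactions, and reactions to reactions...
        forecast = reaction(forecast)

    print(round(forecast, 3))           # settles at 2.0 only because the assumed reaction
                                        # is simple; in general the regress need not converge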

Compared to these problems inherent in efforts to model a negotiated political economy, neoclassical economics - with its strong actor and structure assumptions and its restrictive boundary specification - certainly has the virtue of simplicity, and maintains an objective and logical basis for predictive knowledge. However, this simplicity is bought at the expense of realism and of the ability to cope with multi-level complexity. Keynesian macro-economics was already able to deal with a richer set of properties of the real world by giving up the strong thesis of self-regulatory optimization. The negotiated political economy paradigm does not even assume rational optimization at the regulatory level, and thus makes for more realistic insight, though fewer possibilities for prediction.

While admittedly the cost of this analytical richness has been the loss of the "shortcut to predictive knowledge", the strength of the more complex models of the negotiated political economy perspective lies in their heuristic function. Outcomes may not be specified in unambiguous optimality criteria, but will have the character of probabilistic, or even possibilistic or conditional statements, dependent on rationality criteria, structural assumptions, assumed goals and values of different actor segments.



4. Epilogue

In the preceding section, eight recent examples of innovative sociocybernetic research have been discussed. These examples clearly demonstrate the applicability of sociocybernetics to a wide variety of subjects within the social sciences. Moreover, they stress its specific characteristics as mentioned in section 2 - characteristics which set it apart from most social science research, more often than not in a positive way. Finally, they clarify the direction in which sociocybernetics has been developing: from originally rather mechanistic, first-order cybernetics to an increasingly sophisticated second-order cybernetics, with all its implications like autopoiesis and self-reference, which make it eminently suitable for the subject matter of the social sciences: human individuals and groups.


Nevertheless, much still remains to be desired: most research in the field is still done by cyberneticians and systems theorists rather than by social scientists, while it is generally of a theoretical nature. Consequently, the authors consider it desirable to stimulate more empirical research, especially by social scientists. Up till now, the sociocybernetic approach unfortunately has gained few adherents in the mainstream social science community, which also barely makes use of its results. Perhaps this is the case because, on the one hand, it is still relatively unknown, while on the other hand it is rarely a part of social science curricula. Another reason may be the unwarranted reproach of implicit conservatism, made by generally liberal social scientists, discussed in the beginning of section 2.

Whatever the cause, however, there is a clear task for sociocyberneticians: to convince the social science community of the value of their approach.







References

1. Ashby, W.R., An Introduction to Cybernetics. London: Chapman & Hall, 1956.
2. Baumgartner, Thomas, and Tom R. Burns, "Wealth and poverty among nations: a social systems perspective on inequality, uneven development and dependence in the world economy". Pp. 3-22 in: Dependence and Inequality, op. cit.
3. Baumgartner, Thomas, Tom R. Burns, Philippe DeVillé and Bernard Gauci, "Inflation, politics, and social change: actor-oriented systems analysis applied to explain the roots of inflation in modern society". Pp. 59-88 in: Dependence and Inequality, op. cit.
4. Baumgartner, Thomas, "Actors, models and limits to societal self-steering". Pp. 9-25 in: Sociocybernetic Paradoxes, op. cit.
5. Bråten, S., "The third position: beyond artificial and autopoietic reduction". Pp. 193-205 in: Sociocybernetic Paradoxes, op. cit.
6. Buckley, W., Sociology and Modern Systems Theory. Englewood Cliffs, NJ: Prentice-Hall, 1967.
7. Chomsky, N., and Miller, G., "Introduction to the formal analysis of natural languages". In: R.D. Luce, R.R. Bush and E. Galanter (eds.), Handbook of Mathematical Psychology, Vol. II. New York: Wiley, 1963.
8. Geyer, R.F., and van der Zouwen, J. (eds.), Sociocybernetics: an actor-oriented systems approach. Two volumes. Leiden: Martinus Nijhoff, 1978.
9. Geyer, R.F., and van der Zouwen, J. (eds.), Dependence and Inequality: A Systems Approach to the Problems of Mexico and Other Developing Countries. Oxford: Pergamon, 1982.
10. Geyer, F., and van der Zouwen, J. (eds.), Sociocybernetic Paradoxes: Observation, Control and Evolution of Self-steering Systems. London: SAGE, 1986.
11. Geyer, F., and van der Zouwen, J. (eds.), Self-referencing in Social Systems. Salinas, CA: Intersystems Publications, 1990 (a).
12. Geyer, F., and van der Zouwen, J., "Self-referencing in social systems", pp. 1-29 in: Self-referencing in Social Systems, op. cit. (1990b)
13. Gierer, A., "Systems aspects of socio-economic inequalities in relation to developmental strategies". Pp. 23-34 in: Dependence and Inequality, op. cit.
14. Howard, N., Paradoxes of Rationality: Theory of Metagames and Political Behavior. Cambridge, Mass.: MIT Press, 1971.
15. Lilienfeld, R., The Rise of Systems Theory: An Ideological Analysis. New York: Wiley, 1978.
16. Luhmann, N. "The autopoiesis of social systems". Pp. 172-192 in: Sociocybernetic Paradoxes, op. cit.
17. Maturana, H.R., and Varela, F.J., Autopoiesis and Cognition: The Realization of the Living. Dordrecht: Reidel, 1980.
18. Merton, R.K., "The self-fulfilling prophecy". Antioch Review, 1948, Vol. 8, 193-210.
19. Pask, G., Conversation Theory. Amsterdam: Elsevier, 1976.
20. Pask, G., "A conversation theoretic approach to social systems". Pp. 15-26, vol. I in: Sociocybernetics, op. cit.
21. Prigogine, I., and Stengers, I., Order out of Chaos. New York: Bantam, 1984.
22. Zeeuw, Gerard de, "Social change and the design of enquiry", pp. 131-144 in: Sociocybernetic Paradoxes, op. cit.



