VIRTUAL ENVIRONMENTS
AND THE EMERGENCE OF SYNTHETIC REASON

by MANUEL DE LANDA

    At the end of World War II, Stanislaw Ulam and other scientists previously involved in weapons research at Los Alamos discovered the huge potential of computers to create artificial worlds, where simulated experiments could be conducted and new hypotheses framed and tested. The physical sciences were the first to tap into this "epistemological reservoir", thanks to the fact that much of their accumulated knowledge had already been given a mathematical form. Among the less mathematized disciplines, those already taking advantage of virtual environments are psychology and biology (e.g. Artificial Intelligence and Artificial Life), although other fields such as economics and linguistics could soon begin to profit from the new research strategies made possible by computer simulations.

    Yet, before a given scientific discipline can begin to gain from the use of virtual environments, more than just casting old assumptions into mathematical form is necessary. In many cases the assumptions themselves need to be modified. This is clear in the case of Artificial Intelligence research, much of which is still caught up in older paradigms of what a symbol-manipulating "mind" should be, and hence has not benefited as much as it could from the simulation capabilities of computers. Artificial Life, on the other hand, has the advantage that the evolutionary biologist's conceptual base has been purged of classical notions of what living creatures and evolution are supposed to be, and this has placed the discipline in an excellent position to profit from the new research tool represented by these abstract spaces. Since this is a crucial point, let's take a careful look at just what this "purging" has involved.

    The first classical notion that had to be eliminated from biology was the Aristotelian concept of an "ideal type", and this was achieved by the development of what came to be known in the 1930's as "population thinking". In the old tradition that dominated biological thought for over two thousand years, a given population of animals was conceived as being the more or less imperfect incarnation of an ideal essence. Thus, for example, in the case of zebras, there would exist an ideal zebra, embodying all the attributes which together make for "zebrahood" (being striped, having hoofs, etc.). The existence of this essence would be obscured by the fact that in any given population of zebras the ideal type would be subjected to a multiplicity of accidents (of embryological development, for instance), yielding as end result a variety of imperfect realizations. In short, in this view, only the ideal essence is real, with the variations being but mere shadows.

    When the ideas of Darwin on the role of natural selection and those of Mendel on the dynamics of genetic inheritance were brought together six decades ago, the domination of the Aristotelian paradigm came to an end. It became clear, for instance, that there was no such thing as a preexistent collection of traits defining "zebrahood". Each of the particular adaptive traits which we observe in real zebras developed along different ancestral lineages and accumulated in the population under the action of different selection pressures, in a process that was completely dependent on specific (and contingent) historical details. In other words, just as these traits (camouflage, running speed and so on) happened to come together in zebras, they may not have, had the actual history of those populations been any different.

    Moreover, the engine driving this process is the genetic variability of zebra populations. Only if zebra genes replicate with enough variability can selection pressures have raw materials to work with. Only if enough variant traits arise spontaneously, can the sorting process of natural selection bring together those features which today define what it is to be a zebra. In short, for population thinkers, only the variation is real, and the ideal type (e.g. the average zebra) is a mere shadow. Thus we have a complete inversion of the classical paradigm. 1

    Further refinement of these notions has resulted in the more general idea that the coupling of any kind of spontaneous variation to any kind of selection pressure results in a sort of "searching device". This "device" spontaneously explores a space of possibilities (i.e. possible combinations of traits), and is capable of finding, over many generations, more or less stable combinations of features, more or less stable solutions to problems posed by the environment. This "device" has today been implemented in populations that are not biological. This is the so-called "genetic algorithm" (developed by John Holland), in which a population of computer programs is allowed to replicate in variable form, and after each generation a test is performed to select those programs that most closely approximate the desired performance. It has been found that this method is capable of zeroing in on the best solutions to a given programming task. In essence, this method allows computer scientists to breed new solutions to problems, instead of directly programming those solutions. 2
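
    To make the logic concrete, here is a minimal sketch of a genetic algorithm in Python. It is an illustration of the general technique, not Holland's own implementation: the bit-string genomes, the parameter values and the "count the ones" fitness test are assumptions chosen only for brevity, and crossover, which Holland's algorithm also employs, is omitted.

    import random

    def evolve(fitness, genome_len=20, pop_size=50, generations=100, mutation_rate=0.02):
        # Start from a population of random bit-string "genomes".
        pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
        for _ in range(generations):
            # Selection: rank the variants by how well they perform the task.
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]
            # Variable replication: copy the survivors with occasional mutations.
            offspring = [[1 - bit if random.random() < mutation_rate else bit for bit in parent]
                         for parent in parents]
            pop = parents + offspring
        return max(pop, key=fitness)

    # Illustrative fitness test: the number of 1-bits in the genome, standing in
    # for "how closely the program approximates the desired performance".
    best = evolve(fitness=sum)
    print(sum(best), "out of", len(best))

    Over a hundred generations this loop typically "breeds" genomes at or very near the maximum score, without anyone having programmed the solution directly.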

    The difference between the genetic algorithm and the more ambitious goals of Artificial Life is the same as that between the action of human breeding techniques on domesticated plants and animals, and the spontaneous evolution of the ancestors of those plants and animals. Whereas in the first case the animal or plant breeder determines the criterion of fitness, in the second there is no outside agency determining what counts as fit. In a way, what is fit is simply that which survives, and this has led to the criticism that Darwinism's central formula (i.e. "survival of the fittest") is a mere tautology ("survival of the survivor"). Partly to avoid this criticism, this formula is today being replaced by another one: survival of the stable. 3

    The central idea, the notion of an "evolutionarily stable strategy", was formulated with respect to behavioural strategies (such as those involved in territorial or courtship behaviour in animals) but it can be extended to apply to the "engineering strategies" involved in putting together camouflage, locomotive speed and the other traits which come together to form the zebras of the example above. The essence of this approach is that the "searching device" constituted by variation and selection can find the optimal solution to a given problem posed by the environment, and that once the optimal solution has been found, any mutant strategy arising in the population is bound to be defeated. The strategy will be, in this sense, stable against invasion. To put it in visual terms, it is as if the space of possibilities explored by the "searching device" included mountains and valleys, with the mountain peaks representing points of optimal performance. Selection pressures allow the gene pool of a reproductive population to slowly climb those peaks, and once a peak has been reached, natural selection keeps the population there.
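
    The "stable against invasion" criterion can be written down as a simple payoff comparison: a strategy S is evolutionarily stable if, for every rare mutant M, either S does strictly better than M when both play against S, or they do equally well against S and S does better against M than M does against itself. The sketch below (a Python illustration with a made-up two-strategy payoff table, not an example from the biological literature) just encodes that test:

    def is_ess(payoff, s, strategies):
        """Maynard Smith's condition: s is evolutionarily stable if no rare
        mutant m can invade a population whose members all play s."""
        for m in strategies:
            if m == s:
                continue
            e_ss, e_ms = payoff[(s, s)], payoff[(m, s)]
            if e_ms > e_ss:
                return False            # the mutant does strictly better against residents
            if e_ms == e_ss and payoff[(m, m)] >= payoff[(s, m)]:
                return False            # a tie against residents, and the mutant holds its own
        return True

    # Hypothetical payoff table: entry (X, Y) is X's payoff when playing against Y.
    payoff = {("A", "A"): 3, ("A", "B"): 0,
              ("B", "A"): 1, ("B", "B"): 2}
    print(is_ess(payoff, "A", ["A", "B"]))   # True: B cannot invade a population of A-players
    print(is_ess(payoff, "B", ["A", "B"]))   # True: this toy game has two stable "peaks", not one

    Notice that even in this two-line example the stable strategy is not unique, a point that becomes important below.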

    One may wonder just what has been achieved by switching from the concept of a "fittest mutant" to that of an "optimal" one, except, perhaps, that the latter can be defined contextually as "optimal given existing constraints". However, the very idea that selection pressures are strong enough to pin populations down to "adaptive peaks" has itself come under intense criticism. One line of argument says that any given population is subjected to many different pressures, some of them favoring different optimal results. For example, the beautiful feathers of a peacock are thought to arise due to the selection pressure exerted by "choosy" females, who will only mate with those males exhibiting the most attractive plumage. Yet, those same vivid colors which seduce the females also attract predators. Hence, the male peacock's feathers will come under conflicting selection pressures. In these circumstances, it is highly improbable that the peacock's solution will be optimal, and much more likely that it will represent a compromise. Several such sub-optimal compromises may be possible, and thus the idea that the solution arrived at by the "searching device" is unique needs to be abandoned. 4 But if unique and optimal solutions are not the source of stability in biology, then what is?

    The answer to this question represents the second key idea around which the field of Artificial Life revolves. It is also crucial for understanding the potential application of virtual environments to fields such as economics. The old conceptions of stability (in terms of either optimality or principles of least effort) derive from nineteenth century equilibrium thermodynamics. It is well known that philosophers like Auguste Comte and Herbert Spencer (author of the formula "survival of the fittest") introduced thermodynamic concepts into social science. However, some contemporary observers complain that what was so introduced (in economics, for example) represents "more heat than light". 5

    In other words, equilibrium thermodynamics, dealing as it does with systems that are closed to their environment, postulates that stability can only be reached when all useful energy has been transformed into heat. At this point, a static and unique state of equilibrium is reached (heat death). It was this concept of a static equilibrium that late nineteenth century economists used to systematize the classical notion of an "invisible hand", according to which the forces of demand and supply tend to balance each other out at a point which is optimal from the point of view of society's utilization of resources. It was partly John von Neumann's work on Game Theory and economics that helped entrench this notion of stability outside of physics, and from there it found its way into evolutionary biology, through the work of John Maynard Smith. 6

    This static conception of stability was the second classical idea that needed to be eliminated before the full potential of virtual environments could be unleashed. Like population thinking, the fields that provided the needed new insights (far-from-equilibrium thermodynamics and nonlinear mathematics) are relatively recent developments, associated with the name of Ilya Prigogine, among others. Unlike the "conservative" systems dealt with by the old science of heat, systems which are totally isolated from their surroundings, the new science deals with systems that are subjected to a constant flow of matter and energy from the outside. Because this flow must also exit the system in question, that is, because the waste products need to be dissipated, these systems are called "dissipative". 7

    For our purposes here, what matters is that once a continuous flow of matter-energy is included in the model, a wider range of forms of dynamic equilibrium becomes possible. The old static stability is still one possibility, except that now these equilibrium points are neither unique nor optimal (and yet they are more robust than the old equilibria). Non-static equilibria also exist, in the form of cycles, for instance. Perhaps the most novel type of stability is that represented by "deterministic chaos", in which a given population can be pinned down to a stable, yet inherently variable, dynamical state. These new forms of stability have received the name of "attractors", and the transitions which transform one type of attractor into another have been named "bifurcations". Let's refer to the cluster of concepts making up this new paradigm of stability as "nonlinear dynamics". 8
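
    A standard toy example (not drawn from the essay's own sources) makes these three kinds of stability tangible: the logistic map, a one-line model of a population with nonlinear growth, x(t+1) = r x(t)(1 - x(t)). Raising the control parameter r pushes the system through bifurcations, from a point attractor to a cyclic one and finally to deterministic chaos:

    def logistic_orbit(r, x0=0.3, transient=500, keep=8):
        """Iterate the logistic map and return a sample of its long-run behavior."""
        x = x0
        for _ in range(transient):        # discard the transient so only the attractor remains
            x = r * x * (1 - x)
        orbit = []
        for _ in range(keep):
            x = r * x * (1 - x)
            orbit.append(round(x, 4))
        return orbit

    print(logistic_orbit(2.8))   # point attractor: the population settles on a single value
    print(logistic_orbit(3.2))   # cyclic attractor: it alternates forever between two values
    print(logistic_orbit(3.9))   # deterministic chaos: bounded, stable, yet never-repeating variation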

    One of the most striking consequences of nonlinear dynamics is that any population (of atoms, molecules, cells, animals, humans) which is stabilized via attractors, will exhibit "emergent properties", that is, properties of the population as a whole not displayed by its individual members in isolation. The notion of an emergent or synergistic property is a rather old one, but for a long time it was not taken very seriously by scientists, as it was associated with quasi-mystical schools of thought such as "vitalism". Today, emergent properties are perfectly legitimate dynamical outcomes for populations stabilized by attractors. A population of molecules in certain chemical reactions, for instance, can suddenly and spontaneously begin to pulsate in perfect synchrony, constituting a veritable "chemical clock". A population of insects (termites, for instance) can spontaneously become a "nest-building machine", when their activities are stabilized nonlinearly.

    Thus, the "searching-device" constituted by variation coupled to selection, does not explore an unstructured space of possibilities, but a space "pre-organized" by attractors and bifurcations. In a way, evolutionary processes simply follow these changing distributions of attractors, slowly climbing from one dynamically stable state to another. For example, since in this space one possible outcome is a chemical clock, the searching-device could have stumbled upon this possibility which in essence constitutes a primitive form of a metabolism. The same point applies to other evolutionary stable strategies, such as the nest-building strategy of the termites.

    After this rather long introduction, we are finally in a position to understand enterprises such as Artificial Life. The basic point is that emergent properties do not lend themselves to an analytical approach, that is, an approach which dissects a population into its components. Once we perform this dissection, once the individuals become isolated from each other, any properties due to their interactions will disappear. What virtual environments provide is a tool to replace (or rather, complement) analysis with synthesis, allowing researchers to exploit the complementary insights of population thinking and nonlinear dynamics. In the words of Artificial Life pioneer Chris Langton:

    "Biology has traditionally started at the top, viewing a living organism as a complex biochemical machine, and worked analytically downwards from there - through organs, tissues, cells, organelles, membranes, and finally molecules - in its pursuit of the mechanisms of life. Artificial Life starts at the bottom, viewing an organism as a large population of simple machines, and works upwards synthetically from there, constructing large aggregates of simple, rule-governed objects which interact with one another nonlinearly in the support of life-like, global dynamics. The 'key' concept in Artificial Life is emergent behavior. Natural life emerges out of the organized interactions of a great number of nonliving molecules, with no global controller responsible for the behavior of every part. It is this bottom-up, distributed, local determination of behavior that Artificial Life employs in its primary methodological approach to the generation of life-like behaviors ". 9

    The typical Artificial Life experiment involves first the design of a simplified version of an individual animal, which must possess the equivalent of a set of genetic instructions, used both to create its offspring and to be transmitted to that offspring. This transmission must also be "imperfect" enough that variation can be generated. Then, whole populations of these "virtual animals" are unleashed, and their evolution under a variety of selection pressures is observed. The exercise will be considered successful if novel properties, unthought of by the designer, spontaneously emerge from this process.
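
    A caricature of such an experiment is sketched below: it is the same variation-plus-selection loop as the genetic algorithm above, now read as a population of virtual creatures rather than programs. Every specific (the one-number "genome", the mutation noise, the environmental optimum) is invented purely for illustration, and nothing emergent happens in so impoverished a world; the sketch only shows the bookkeeping of heredity, imperfect transmission and selection on which a richer Artificial Life system builds.

    import random

    def run_world(generations=200, pop_size=100, optimum=0.7, noise=0.05):
        # Each "creature" is reduced to a single heritable trait between 0 and 1.
        population = [random.random() for _ in range(pop_size)]
        for _ in range(generations):
            # Selection pressure: creatures far from the environmental optimum die off.
            population.sort(key=lambda trait: abs(trait - optimum))
            survivors = population[: pop_size // 2]
            # Imperfect transmission: offspring inherit the parental trait plus a small error.
            offspring = [min(1.0, max(0.0, parent + random.gauss(0, noise)))
                         for parent in survivors]
            population = survivors + offspring
        return population

    final = run_world()
    print(round(sum(final) / len(final), 3))   # the population drifts toward the optimum of 0.7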

    Depending on the designer's point of view, these emergent properties may or may not need to match those observed in reality. That is, a current theme in this field is that one does not have to be exclusively concerned with biological evolution as it has occurred on planet Earth, since this may have been limited by the contingencies of biological history, and that there is much to be learned from evolutionary paths that were not tried out on this planet. In any event, the goal of the simulation is simply to help "synthesize intuitions" in the designer, insights that can then be used to create more realistic simulations. The key point is that the whole process must be bottom-up: only the local properties of the virtual creatures are predesigned, never the global, population-wide ones.

    Unlike Artificial Life, the approach of Artificial Intelligence researchers remained (at least until the 1980's) largely top-down and analytical. Instead of treating the symbolic properties they study as the emergent outcome of a dynamical process, these researchers explicitly put symbols (labels, rules, recipes) and symbol-manipulating skills into the computer. When it was realized that logic alone was not enough to manipulate these symbols in a significantly "intelligent" way, they began to extract the rules of thumb, tricks of the trade and other non-formal heuristic knowledge from human experts, and to put these into the machine, again as fully formed symbolic structures. In other words, in this approach one begins at the top, the global behavior of human brains, instead of at the bottom, the local behavior of neurons. Some successes have been scored by this approach, notably in simulating skills such as those involved in playing chess or proving theorems, both of which are evolutionarily rather late developments. Yet the symbolic paradigm of Artificial Intelligence has failed to capture the dynamics of evolutionarily more elementary skills such as face recognition or sensory-motor control. 10

    Although a few attempts had been made during the 1960's to take a more bottom-up approach to modeling intelligence (e.g. the perceptron), the defenders of the symbolic paradigm practically killed their rivals in the battle for government research funds. And so the analytical approach dominated the scene until the 1980's, when a spectacular rebirth of a synthetic design philosophy occurred. This is the new school of Artificial Intelligence known as "connectionism". Here, instead of one large, powerful computer serving as a repository for explicit symbols, we find a large number of small, rather simple computing devices (in which all that matters is their state of activation), interacting with one another to excite or inhibit each other's degree of activation. These simple processors are linked together through a pattern of interconnections which can vary in strength.

    No explicit symbol is ever programmed into the machine since all the information needed to perform a given cognitive task is coded in the interconnection patterns as well as the relative strengths of these interconnections. All computing activity is carried out by the dynamical activity of the simple processors as they interact with one another (i.e. as excitations and inhibitions propagate through the network), and the processors arrive at the solution to a problem by settling into a dynamical state of equilibrium. (So far point attractors are most commonly used, although some designs using cyclic attractors are beginning to appear.) 11
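
    A concrete, much-simplified example of this settling process is the Hopfield-style network sketched below (one common connectionist design, offered here only as an illustration, with an arbitrary eight-unit pattern): a pattern is stored in the interconnection weights, and the simple processors then update their activations, exciting and inhibiting one another, until the whole network relaxes into a point attractor, which in this case reconstructs the stored pattern from a corrupted input.

    import random

    # One stored pattern, written into the weights by a Hebbian outer-product rule.
    pattern = [1, -1, 1, -1, 1, -1, 1, -1]
    n = len(pattern)
    weights = [[0 if i == j else pattern[i] * pattern[j] for j in range(n)] for i in range(n)]

    def settle(state, max_sweeps=10):
        """Let excitation and inhibition propagate until no unit changes its activation."""
        state = state[:]
        for _ in range(max_sweeps):
            changed = False
            for i in random.sample(range(n), n):        # update the units in random order
                net_input = sum(weights[i][j] * state[j] for j in range(n))
                new_activation = 1 if net_input >= 0 else -1
                if new_activation != state[i]:
                    state[i], changed = new_activation, True
            if not changed:                             # a point attractor has been reached
                break
        return state

    noisy = pattern[:]
    noisy[0], noisy[3] = -noisy[0], -noisy[3]           # corrupt two of the eight units
    print(settle(noisy) == pattern)                     # True: the net settles back onto the stored pattern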

    If there is ever such a thing as a "symbol" here, or rather symbol-using (rule-following) behavior, it is as an emergent result of these dynamics. This fact is sometimes expressed by saying that a connectionist device (also called a "neural net") is not programmed by humans, but trained by them, much as a living creature would be. In the simplest kind of networks the only cognitive task that can be performed is pattern association. The human trainer presents to the network both patterns to be associated, and after repeated presentations the network "learns" to associate them by modifying the strength of the interconnections. At that point the network can respond with the second pattern whenever the first one is presented to it.

    At the other end of the spectrum of complexity, multilayered networks exhibit emergent cognitive behavior as they are trained. While in the simple case of pattern association much of the thinking is done by the trainer, complex networks (i.e. those using "hidden units") perform their own extraction of regularities from the input pattern, concentrating on microfeatures of the input which are often not at all obvious to the human trainer. (In other words, the network itself "decides" what traits of the pattern it considers salient or relevant.)

    These networks also have the ability to generalize from the patterns they have learned, and so will be able to recognize a new pattern that is only vaguely related to one they have previously been exposed to. In other words, the ability to perform simple inductive inferences emerges in the network without the need to explicitly code into it the rules of a logical calculus. These designs are also resilient against damage, unlike their symbolic counterparts, which are inherently brittle. But perhaps the main advantage of the bottom-up approach is that its devices can exhibit a degree of "intentionality".

    The term "intentionality" is the technical term used by philosophers to describe the relation between a believer and the states of affairs his beliefs are about. That is, an important feature of the mental states of human beings and other animals (their beliefs and desires) is that they are about phenomena that lie outside their minds. The top-down, symbolic approach to Artificial Intelligence sacrifices this connection by limiting its modeling efforts to relations between symbols. In other words, in the analytical approach only the syntactic or formal relations between symbols matter (with the exception of an "internal semantics" involving reference to memory addresses and the like). Hence, these designs must later try to reconnect the cognitive device to the world where it must function, and it is here that the main bottleneck lies (unless the "world" in question is a severely restricted domain of the real world, such as the domain of chess). Not so in the synthetic approach:

    "The connectionist approach to modeling cognition thus offers a promise in explaining the aboutness or intentionality of mental states. Representational states, especially those of hidden units, constitute the system's own learned response to inputs. Since they constitute the system's adaptation to the input, there is a clear respect in which they would be about objects or events in the environment if the system were connected, via sensory-motor organs, to that environment. The fact that these representations are also sensitive to context, both external and internal to the system, enhances the plausibility of this claim that the representations are representations of particular states. " 12

    So far, the abstract living creatures inhabiting the virtual environments of Artificial Life have been restricted to rather inflexible kinds of behavior. One may say that the only kinds of behavior that have been modelled are of the genetically "hard-wired" type, as displayed by ants or termites. Yet adding connectionist intelligence to these creatures could endow them with enough intentionality to allow researchers to model more flexible, "multiple-choice" behavior, as displayed by mammals and birds. We could then expect more complex behavioral patterns (such as territorial or courtship behavior) to emerge in these virtual worlds. Artificial Intelligence could also benefit from such a partnership, by tapping the potential of the evolutionary "searching-device" in the exploration of the space of possible network designs. The genetic algorithm, which exploits this possibility, has so far been restricted to searching for better symbolic designs (e.g. production rules).

    Furthermore, having a virtual space where groups of intentional creatures interact can also benefit other disciplines such as economics or political science. A good example of this is Robert Axelrod's use of a virtual environment to study the evolution of cooperation. His work also exemplifies the complementary use of synthesis (to generate intuitions) and analysis (to formally ground those intuitions). In the words of Douglas Hofstadter:

    "Can totally selfish and unconscious organisms living in a common environment come to evolve reliable cooperative strategies? Can cooperation evolve in a world of pure egoists? Well, as it happens, it has now been demonstrated rigorously and definitively that such cooperation can emerge, and it was done through a computer tournament conducted by political scientist Robert Axelrod. More accurately, Axelrod first studied the ways that cooperation evolved by means of a computer tournament, and when general trends emerged, he was able to spot the underlying principles and prove theorems that established the facts and conditions of cooperation's rise from nowhere." 13

    The creatures that Axelrod placed in a virtual environment to conduct this round-robin tournament were not full-fledged intentional entities of the type envisioned above. Rather, the motivations and options of the creatures were narrowly circumscribed by using the formalism of Game Theory, which studies the dynamics of situations involving conflicts of interest. In particular, Axelrod's entities were computer programs, each written by a different programmer, playing a version of the game called the "Prisoner's Dilemma". In this imaginary situation, two accomplices in a crime are captured by the police and separately offered the following deal: if one accuses his accomplice while the other does not, the "betrayer" walks out free, while the "sucker" gets the stiffest sentence. If, on the other hand, both claim innocence and avoid betrayal, they both get a small sentence. Finally, if both betray each other, they both get a long sentence. The dilemma here arises from the fact that even though the best overall outcome is not to betray one's partner, neither one can trust that his accomplice won't try to get the best individual outcome (to walk out free), leaving the other with the "sucker payoff". And because both prisoners reason in a similar way, they both choose betrayal and the long sentence that comes with it, instead of loyalty and its short sentence.
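
    The trap can be made explicit with the conventional point values used throughout this literature (5, 3, 1 and 0, the same values Axelrod used; here they simply stand for how well each prisoner does). Whatever the partner chooses, betrayal pays more in a single encounter, even though mutual betrayal leaves both worse off than mutual loyalty:

    # Payoff to "me" for each (my_move, partner_move); C = stay loyal, D = betray.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0,   # mutual loyalty vs. the "sucker payoff"
              ("D", "C"): 5, ("D", "D"): 1}   # walking out free vs. mutual betrayal

    for partner_move in ("C", "D"):
        print(partner_move, PAYOFF[("D", partner_move)] > PAYOFF[("C", partner_move)])
    # Prints True twice: against either choice by the partner, betraying scores higher,
    # and yet the (D, D) outcome both "rational" players land on is worse than (C, C).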

    In the real world we find realizations of this dilemma in, for example, the phenomenon known as "bank runs". When news that a bank is in trouble first comes out, each individual depositor has two options: either to rush to the bank and withdraw his savings, or to stay home and allow the bank to recover. Each individual also knows that the best outcome for the community is for all to leave their savings in the bank and so allow it to survive. But no one can afford to be the one who loses his savings, so all rush to withdraw their money, ruining the institution in the process. Hofstadter offers a host of other examples, including one in which the choice to betray or cooperate is faced by the participants not once, but repeatedly. For instance, imagine two "jungle traders" with a rather primitive system of trade: each simply leaves a bag of goods at a predefined place, and comes back later to pick up another bag, without ever seeing the trading partner. The idea is that on every transaction one is faced with a dilemma, since one can profit the most by leaving an empty bag and sticking the other with the "sucker payoff". Yet the difference is that doing this endangers the trading relationship, and hence there is more to lose in case of betrayal. (This is called the "iterated Prisoner's Dilemma".)

    Axelrod's creatures played such an iterated version of the game with one another. What matters to us here is that after several decades of applying analytical techniques to study these situations, the idea that "good guys finish last" (i.e. that the most rational strategy is to betray one's partner) had become entrenched in academic (and think tank) circles. For example, when Axelrod first requested entries for his virtual tournament, most of the programs he received were "betrayers". Yet the winner was not. It was "nice" (it always cooperated in the first encounter, so as to give a sign of good faith and begin the trading relationship), "retaliatory" (if betrayed it would respond with betrayal in the next encounter) yet "forgiving" (after retaliating it was willing to reestablish a partnership). As mentioned above, these were not truly intentional creatures, so the properties of being "nice", "retaliatory" and "forgiving" were emergent properties of a much simpler design. Its name was "TIT-FOR-TAT", and its actual strategy was simply to cooperate in the first move and thereafter do what the other player did in the previous move. This program won because the criterion of success was not how many partners one beats, but how much overall trade one achieves.
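
    A stripped-down reconstruction of such a round-robin (with a tiny invented field of entries, not Axelrod's actual programs) shows why that criterion matters. In this illustrative field ALWAYS DEFECT "beats" every partner it meets and still finishes last in total score, while the nice, retaliatory, forgiving strategies, TIT-FOR-TAT among them, cluster at the top; in Axelrod's much larger tournaments TIT-FOR-TAT won outright.

    # Conventional payoffs: (my_move, partner_move) -> (my_score, partner_score).
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    # Each strategy maps the opponent's past moves to its next move, "C" or "D".
    def tit_for_tat(opp):
        return "C" if not opp else opp[-1]

    def always_defect(opp):
        return "D"

    def grudger(opp):
        return "D" if "D" in opp else "C"                 # retaliates but never forgives

    def tit_for_two_tats(opp):
        return "D" if opp[-2:] == ["D", "D"] else "C"     # even more forgiving than TIT-FOR-TAT

    def match(a, b, rounds=200):
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            move_a, move_b = a(hist_b), b(hist_a)
            pay_a, pay_b = PAYOFF[(move_a, move_b)]
            score_a, score_b = score_a + pay_a, score_b + pay_b
            hist_a.append(move_a)
            hist_b.append(move_b)
        return score_a, score_b

    entries = {"TIT-FOR-TAT": tit_for_tat, "ALWAYS DEFECT": always_defect,
               "GRUDGER": grudger, "TIT-FOR-TWO-TATS": tit_for_two_tats}
    totals = {name: 0 for name in entries}
    names = list(entries)
    for i, x in enumerate(names):                         # round-robin: every entry meets every other
        for y in names[i + 1:]:
            sx, sy = match(entries[x], entries[y])
            totals[x] += sx
            totals[y] += sy
    print(totals)   # the betrayer wins every match yet accumulates the least overall trade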

    Because the idea that "good guys finish last" had become entrenched, further analysis of the situation (which could have uncovered the fact that this principle does not apply to the "iterated" version of the game) was blocked. What was needed was to unblock this path by using a virtual environment to "synthesize" a fresh intuition. And in a sense, that is just what Axelrod did. He then went further and used more elaborate simulations (including one in which the creatures replicated, with the number of progeny being related to the trading success of the parent) to generate further intuitions as to how cooperative strategies could evolve in an ecological environment, how robust and stable these strategies were, and a host of other questions. Evolutionary biologists, armed with these fresh insights, have now discovered that apes in their natural habitats play a version of TIT-FOR-TAT. 14

    Thus, while some of the uses of virtual environments presuppose that old and entrenched ideas (about essences or optimality) have been superseded, these abstract worlds can also be used to synthesize the intuitions needed to dislodge other ideas blocking the way to a better understanding of the dynamics of reality.

    Population thinking seems to have banished "essences" from the world of philosophy once and for all. Nonlinear dynamics, and more specifically the notion of an "emergent property", would seem to signal the death of the philosophical position known as "reductionism" (basically, the view that all phenomena can in principle be reduced to those of physics). It is clear now that at every level of complexity there will be emergent properties that are irreducible to the lower levels, simply because when one switches to an examination of lower-level entities, the properties which emerge due to their interactions disappear. Connectionism, in turn, offers a completely new understanding of the way in which rule-following behavior can emerge from a system in which there are no explicit rules or symbols whatsoever. This would seem destined to end the domination of a conception of language based on syntactical entities and their formal relations (Saussure's signifiers or Chomsky's rules). This conception (let's call it "formalism") has entirely dominated this century, leading in some cases to extreme forms of linguistic relativism, that is, the idea that every culture partitions the world of experience in a different way simply because it uses different linguistic devices to organize this experience. If connectionism is correct, then humanity does indeed have a large portion of shared experience (the basic intentional machinery linking humans to the world) even if some of this experience can be cast in different linguistic forms. 15

    Furthermore, once linguists become population thinkers and users of virtual environments, we could witness the emergence of an entirely different type of science of language. For instance, about a millennium ago the population of Anglo-Saxon peasants inhabiting England suffered the imposition of French as the official language of their land by the Norman invaders. In about two hundred years, and in order to resist this form of linguistic colonialism, this peasant population transformed what was basically a soup of Germanic dialects (with added Scandinavian spices) into something that we could recognize as English. No doubt, in order to arrive at modern English another few centuries of transformation would be needed, but the backbone of this language had already emerged from the spontaneous labor of a population under the pressure of an invading language. 16 Perhaps one day linguists will be required to test their theories in a virtual environment of interacting intentional entities, so that the rules of grammar they postulate for a language can be shown to emerge spontaneously from the dynamics of a population of speakers (instead of existing in a "synchronic" world, isolated from the actual interactions of several generations of speakers).

    Virtual environments may not only allow us to capture the fluid and changing nature of real languages, they could also be used to gain insights into the processes that tend to "freeze" languages, such as the processes of standardization which many European languages underwent beginning in the seventeenth century. Unlike the cases of Spanish, Italian and French, where the fixing of the rules and vocabulary of the language was enforced by an institution (e.g. an Academy), in England the process of standardization was carried out via the mass publication of authoritative dictionaries, grammars and orthographies. Just how these "linguistic engineering" devices achieved the relative freezing of what was formerly a fluid "linguistic matter" may be revealed through a computer simulation. Similarly, whenever a language becomes standardized we witness the political conquest of many "minority" dialects by the dialect of the urban capital (London's dialect in the case of English). Virtual environments could allow us to model dynamically the spread of the dominant dialect across cultural and geographical barriers, and how technologies such as the railroad or the radio (e.g. the BBC) allowed it to surmount such barriers. 17

    Future linguists may one day look back with curiosity at our twentieth century linguistics, and wonder whether our fascination with a static (synchronic) view of language could not be due to the fact that the languages in which these views were first formulated (French and English) had lost their fluid nature by being artificially frozen a few centuries earlier. These future investigators may also wonder how we thought the stability of linguistic structures could be explained without the concept of an attractor. How, for instance, could the prevalence of certain patterns of sentence structure (e.g. subject-verb-object, or "SVO") be explained, or how could bifurcations from one pattern to another be modeled, without some form of nonlinear stabilization? (English, for example, may have switched over a millennium from SOV to SVO.) 18 Tomorrow's linguists will also realize that, because these dynamic processes depend on the existence of heterogeneities and other nonlinearities, the reason we could not capture them in our models was the entrenchment of the Chomskian idea of a homogeneous speech community of monolinguals, in which each speaker has equal mastery of the language.

    Real linguistic communities are not homogeneous in the distribution of linguistic competence, and they are not closed to linguistic flows from the outside (English, for instance, was subjected to large flows of French vocabulary at several points in its evolution). Many communities are in fact bilingual or multilingual, and constructive as well as destructive interference between languages creates nonlinearities which may be crucial to the overall dynamics. As an example of this we may take the case of creole languages. They have all evolved from the pidgins created on slave plantations, veritable "linguistic laboratories" where the language of the plantation master was stripped of its flourishes and combined with particles coming from a variety of slave dialects. It is possible that one day virtual environments will allow us to map the dynamical attractors around which these rapidly developing creole languages stabilized. 19

    The discipline of sociolinguistics (associated with the work of linguists like William Labov) has made many of the important contributions needed to purge the science of language of the classical assumptions leading to "formalism", and to move it closer to true population thinking. Indeed, the central concern of sociolinguistics has been the study of stylistic variation in speech communities. This is a mechanism for generating diversity at the level of speakers, and as such it could be dismissed as being exogenous to language. Labov, however, has also discovered that some of the rules of language (he calls them "variable rules") can generate systematic, endogenous variation. 20 This provides us with one of the elements needed for our evolutionary "searching device".

    Sociolinguists have also tackled the study of the second element: selection pressures. The latter can take a variety of forms. In small communities, where language style serves as a badge of identity, peer pressure in social networks can act as a filtering device, promoting the accumulation of those forms and structures that maintain the integrity of the local dialect. On the other hand, stigmatization of certain forms by the speakers of the standard language (particularly when reinforced by a system of compulsory education) can furnish selection pressures leading to the elimination of local styles. Despite these efforts, formalism is still well entrenched in linguistics, and so this discipline cannot currently benefit from the full potential of virtual environments. (Which does not mean, of course, that computers are not used in linguistic investigations, but this use remains analytical and top-down instead of synthetic and bottom-up.)

    Just as linguistics inherited the homogeneous, closed space of classical thermodynamics, as well as its static conception of stability, so did mathematical economics. Here too, a population of producers and consumers is assumed to be homogeneous in its distribution of rationality and of market power. That is, all agents are endowed with perfect foresight and unlimited computational skill, and no agent is supposed to exercise any kind of influence over prices. Perfect rationality and perfect competition result in a kind of society-wide computer, where prices transmit information (as well as incentive to buy or sell), and where demand instantly adjusts to supply to achieve an optimal equilibrium. And much as sociolinguists are providing antidotes for the classical assumptions holding back their field, students of organizations and of organizational ecology are doing the same for the study of the economy. 21

    Not only are economic agents now viewed as severely limited in their computational skills, but this bounded rationality is being located in the context of the specific organizations where it operates and where it is further constrained by the daily routines that make up an "organizational memory". In other words, not only is decision-making within organizations performed on the basis of adaptive beliefs and action rules (rather than optimizing rationality), but much of it is guided by routine procedures for producing objects, for hiring and firing employees, for investing in research and development and so on. Because these procedures are imperfectly copied whenever a firm opens up a new plant, this process gives us the equivalent of variable reproduction. 22 A changing climate for investment, following the ups and downs of boom years and recessions, provides some of the selection pressures that operate on populations of organizations. Other pressures come from other organizations, as in natural ecosystems, where other species (predators, parasites) are also agents of natural selection. Here giant corporations, which have control over their prices (and hence are not subjected to supply/demand pressures), play the role of predators, dividing their markets along well-defined territories (market shares).

    As in linguistic research, computer simulation techniques have been used in economics (e.g. econometrics), but in many cases the approach has remained analytic (i.e. top-down, taking macroeconomic principles as its point of departure). On the other hand, and unlike the situation in linguistics, a bottom-up approach, combining populations of organizations and nonlinear dynamics, is already making rapid progress. A notable example of this is the System Dynamics National Model at M.I.T. As in the case of Artificial Life, one measure of success here is the ability of these models to synthesize emergent behavior not planned in advance by the model's designers. One dramatic example is the spontaneous emergence of cyclic equilibria in this model, with a period matching that of the famous Kondratieff cycle.

    It has been well known, at least since the work of Joseph Schumpeter, that data from several economic indicators (G.N.P., unemployment rate, aggregate prices, interest rates), beginning in the early nineteenth century, display an unequivocal periodic motion of approximately fifty years' duration. Several possible mechanisms to explain this cyclic behavior have been offered since then, but none has gained complete acceptance. What matters to us here is that the M.I.T. model endogenously generates this periodic oscillation, and that this behavior emerged spontaneously from the interaction of populations of organizations, to the surprise of the designers, who were in fact unaware of the literature on Kondratieff cycles. 23

    The key ingredient which allows this and other models to generate spontaneous oscillations is that they must operate far from equilibrium. In traditional economic models, the only dynamical processes that are included are those that keep the system near equilibrium (such as "diminishing returns" acting as negative feedback). The effects of explosive positive feedback processes (such as "economies of scale") are typically minimized. But it is such self-reinforcing processes that drive systems away from equilibrium, and this, together with the nonlinearities generated by imperfect competition and bounded rationality, is what generates the possibility of dynamical stabilization. 24

    In the M.I.T. model, it is precisely a positive feedback loop that pushes the system towards a bifurcation, where a point attractor suddenly becomes a cyclic one. Specifically, the sector of the economy which creates the productive machinery used by the rest of the firms (the capital goods sector), is prone to the effects of positive feedback because whenever the demand for machines grows, this sector must order from itself. In other words, when any one firm in this sector needs to expand its capacity to meet growing demand, the machines used to create machines come from other firms in the same sector. Delays and other nonlinearities can then be amplified by this feedback loop, giving rise to stable yet periodic behavior. 25
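
    A toy version of this mechanism (invented for illustration; it is not the M.I.T. model, and all parameter values are arbitrary) already produces the qualitative behavior: the sector's own orders add to the demand it must meet, capacity is adjusted toward that demand, and machines are delivered only after a lag. Instead of settling at a static equilibrium, the simulated capacity keeps swinging through booms and slumps.

    from collections import deque

    def capital_goods_cycle(periods=120, external_demand=100.0, adjust=0.5,
                            depreciation=0.05, delivery_delay=6):
        """Toy stock-adjustment loop for a sector that orders machines from itself."""
        capital = 60.0                                    # current productive capacity
        prev_orders = depreciation * capital              # machines ordered last period
        pipeline = deque([prev_orders] * delivery_delay)  # orders already in production
        trajectory = []
        for _ in range(periods):
            delivered = pipeline.popleft()
            # Positive feedback: the sector's own orders add to the demand it faces.
            demand = external_demand + prev_orders
            # Order replacements plus a fraction of the gap between demand and capacity.
            orders = max(0.0, depreciation * capital + adjust * (demand - capital))
            pipeline.append(orders)
            prev_orders = orders
            capital = capital * (1 - depreciation) + delivered
            trajectory.append(capital)
        return trajectory

    print([round(k) for k in capital_goods_cycle()[::5]])
    # With these illustrative parameters capacity does not converge; the delayed,
    # self-reinforcing ordering loop keeps generating oscillations (a cyclic
    # rather than a point attractor).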

    As we have seen, tapping the potential of the "epistemological reservoir" constituted by virtual environments requires that many old philosophical doctrines be eradicated. Essentialism, reductionism and formalism are the first ones that need to go. Our intellectual habit of thinking linearly, where the interaction of different causes is seen as additive, and hence global properties that are more than the sum of the parts are not a possibility, also needs to be eliminated. So does our habit of thinking in terms of conservative systems, isolated from energy and matter flows from the outside. Only dissipative, nonlinear systems generate the full spectrum of dynamical forms of stabilization (attractors) and of diversification (bifurcations).

    In turn, thinking in terms of attractors and bifurcations will lead to a radical alteration of the philosophical doctrine known as "determinism". Attractors are fully deterministic, that is, if the dynamics of a given population are governed by an attractor, the population in question will be strongly bound to behave in a particular way. Yet this is not to go back to the clockwork determinism of classical physics. For one thing, attractors come in bunches, and so at any particular time a population that is trapped in one stable state may be pushed to another stable state by an external shock (or even by its own internal devices). In a way this means that populations have "choices" between different "local destinies".

    Moreover, certain attractors (called "strange attractors" or "deterministic chaos") bind populations to an inherently "creative" state. That is, a population whose dynamics are governed by a strange attractor is bound to permanently explore a limited region of the space of its possible states. If a chaotic attractor is "small" relative to the size of this space, then it effectively pins down the dynamics of a system to a relatively small set of possible states, so that the resulting behavior is far from random and yet intrinsically variable. Finally, as if this were not enough to subvert classical determinism, there are also bifurcations, critical points at which one distribution of attractors is transformed into another. At the moment this transformation occurs, relatively insignificant fluctuations in the environment can have disproportionately large effects on the distribution of attractors that results. In the words of Prigogine and Stengers:

    "From the physicist's point of view this involves a distinction between states of the system in which all individual initiative is doomed to insignificance on one hand, and on the other, bifurcation regions in which an individual, an idea, or a new behavior can upset the global state. Even in those regions, amplification obviously does not occur with just any individual, idea, or behavior, but only with those that are 'dangerous' - that is, those that can exploit to their advantage the nonlinear relations guaranteeing the stability of the preceding regime. Thus we are led to conclude that the same nonlinearities may produce an order out of the chaos of elementary processes and still, under different circumstances, be responsible for the destruction of this same order, eventually producing a new coherence beyond another bifurcation.". 26 P> This new view of the nature of determinism may also have consequences for yet another philosophical school of thought: the doctrine of "free will". If the dynamical population one is considering is one whose members are human beings (for example, a given human society), then the insignificant fluctuation that can become "dangerous"in the neighborhood of a bifurcation is indeed a human individual (an so, this would seem to guarantee us a modicum of free will). However, if the population in question is one of neurons (of which the global, emergent state is the conscious state of an individual) this would seem to subvert free will, since here a micro-cognitive event may decide what the new global outcome may be.

    At any rate, the crucial point is to recognize the existence, in all spheres of reality, of the reservoir of possibilities represented by nonlinear stabilization and diversification (a reservoir I have elsewhere called "the machinic phylum"). 27
    We must also recognize that, by their very nature, systems governed by nonlinear dynamics resist absolute control, and that sometimes the machinic phylum can only be tracked, or followed. For this task even our modicum of free will may suffice. The "searching device" constituted by genetic variation and natural selection does in fact track the machinic phylum. That is, biological evolution has no foresight, and it must grope in the dark, climbing from one attractor to another, from one stable engineering strategy to another. And yet it has produced the wonderfully diverse and robust ecosystems we observe today. Perhaps one day virtual environments will become the tools we need to map attractors and bifurcations, so that we too can track the machinic phylum in search of a better destiny for humanity.


    MANUEL DE LANDA

    NOTES:

    1)

    Elliott Sober. The Nature of Selection. (MIT Press, 1987), pages 157-161.

    2)

    Steven Levy. Artificial Life. (Pantheon Books, NY 1992), pages 155-187.

    3)

    Richard Dawkins. The Selfish Gene. (Oxford University Press, Oxford 1989), page 12.

    4)

    Stuart A. Kauffman. Adaptation on Rugged Fitness Landscapes. In Daniel Stein ed. Lectures in the Sciences of Complexity. (Addison-Wesley 1989).

    5)

    Cynthia Eagle Russett. The Concept of Equilibrium in American Social Thought. (Yale University Press, New Haven 1968), pages 28-54.

    6)

    John Maynard Smith. Evolution and the Theory of Games. In Did Darwin Get it Right?: Essays on Games, Sex and Evolution.(Chapman and Hall, NY 1989).

    7)

    Ilya Prigogine and Isabelle Stengers. Order Out of Chaos. (Bantam Books, NY 1984).

    8)

    Ian Stewart. Does God Play Dice? The Mathematics of Chaos. (Basil Blackwell, Oxford 1989), pages 95-110.

    9)

    Christopher G. Langton. Artificial Life. In Christopher G. Langton ed. Artificial Life. (Addison-Wesley, 1988), page 2.

    10)

    Andy Clark. Microcognition: Philosophy, Cognitive Science and Parallel Distributed Processing. (MIT Press, 1990), pages 61-75.

    11)

    J.A. Sepulchre and A. Babloyantz. Spatio-Temporal Patterns and Network Computation. In A. Babloyantz ed. Self-Organization, Emergent Properties, and Learning. (Plenum Press, NY 1991).

    12)

    William Bechtel and Adele Abrahamsen. Connectionism and the Mind.(Basil Blackwell, Cambridge Mass. 1991). page 129.

    13)

    Douglas R. Hofstadter. The Prisoner's Dilemma and the Evolution of Cooperation. In Metamagical Themas (Basic Books, NY 1985), page 720.

    14)

    James L. Gould and Carol Grant Gould. Sexual Selection. (Scientific American Library, NY 1989), pages 244-247.

    15)

    Donald E. Brown. Human Universals. (McGraw-Hill, NY 1991).

    16)

    John Nist. A Structural History of English. (St. Martin's Press, NY 1966), chapter 3.

    17)

    Tony Crowley. Standard English and the Politics of Language. (University of Illinois Press, 1989).

    18)

    Winfred P. Lehmann. The Great Underlying Ground-Plans. In Winfred P. Lehmann ed. Syntactic Typology. (Harvester Press, Sussex UK, 1978), page 37.

    19)

    David DeCamp. The Study of Pidgin and Creole Languages. In Dell Hymes ed. Pidginization and Creolization of Languages. (Cambridge University Press, Oxford UK 1971).

    20)

    William Labov. Sociolinguistic Patterns. (University of Pennsylvania Press, Philadelphia 1971), pages 271-273.

    21)

    Michael T. Hannan and John Freeman. Organizational Ecology. (Harvard University Press, 1989).

    22)

    Richard R. Nelson and Sidney G. Winter. An Evolutionary Theory of Economic Change. (Belknap Press, Cambridge Mass. 1982), page 14.

    23)

    Jay W. Forrester. Innovation and Economic Change. In Christopher Freeman ed. Long Waves in the World Economy. (Butterworth, 1983), page 128.

    24)

    W. Brian Arthur. Self-Reinforcing Mechanisms in Economics. In Philip W. Anderson, Kenneth J. Arrow and David Pines eds. The Economy as an Evolving Complex System. (Addison-Wesley, 1988).

    25)

    J.D. Sterman. Nonlinear Dynamics in the World Economy: the Economic Long Wave. In Peter L. Christiansen and R.D. Parmentier eds. Structure, Coherence and Chaos in Dynamical Systems. (Manchester University Press, Manchester UK, 1989).

    26)

    Ilya Prigogine and Isabelle Stengers. op. cit. page 190.

    27)

    Manuel De Landa. War in the Age of Intelligent Machines. (Zone Books, NY 1991).