

someone without a bent toward conspiracy theories (without a strong C0), the cost of supplying
the concept "conspiracy" would be sufficiently great that C1 would not be a pattern in a handful of
cases of apparent food poisoning. But for Jane, I(C4|C1,C0) < I(C4|C0). Relative to the background
information C0, C1 simplifies C4.

Clearly, C2 and C3 may be treated in a manner similar to C1.

Production of Actual Events

Now let us turn to the last three belief-processes. What about C5, the belief that her co-workers
are acting unpleasantly toward her? First of all, it is plain that the belief C2 works to produce the
belief C5. If one believes that one's co-workers are conspiring against one, one is far more likely
to interpret their behavior as being unpleasant.

And furthermore, given C2, the more unpleasant her co-workers are, the simpler the form C2
can take. If the co-workers are acting pleasantly, then C2 has the task of explaining how this
pleasantry is actually false, and is a form of conspiracy. But if the co-workers are acting
unpleasantly, then C2 can be vastly simpler. So, in this sense, it may be said that C5 is a pattern in C2.

By similar reasoning, it may be seen that C4 and C6 are both produced by other beliefs in the
list, and patterns in or among other beliefs in the list.

Get any book for free on: www.Abika.com
CHAOTIC LOGIC

Jane's Conspiracy as a "Structural Conspiracy"

The arguments of the past few paragraphs are somewhat reminiscent of R.D. Laing's Knots
(1972), which describes various self-perpetuating interpersonal and intrapersonal dynamics.
Some of Laing's "knots" have been cast in mathematical form by Francisco Varela (1978).
However, Laing's "knots" rather glibly treat self-referential dynamics in terms of
propositional logic, which as we have seen is of dubious psychological value. The present
treatment draws on a far more carefully refined model of the mind.

It follows from the above arguments that Jane's conspiratorial belief system is in fact a
structural conspiracy. It is approximately a fixed point for the "cognitive law of motion." A
more precise statement, however, must take into account the fact that the specific contents of the
belief-processes Ci are constantly shifting. So the belief system is not exactly fixed: it is subject
to change, but only within certain narrow bounds. It is a strange attractor for the law of motion.

Whether it is a chaotic attractor is not obvious from first principles. However, this question
could easily be resolved by computer simulations. One would need to assume particular
probabilities for the creation of a given belief from the combination of a certain group of
beliefs, taking into account the variety of possible belief-processes falling under each general
label Ci. Then one could simulate the equation of motion and see what occurred. My strong
suspicion is that there is indeed chaos here. The specific beliefs and their strengths most likely
fluctuate pseudorandomly, while the overall conspiratorial structure remains the same.
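Such a simulation can be sketched in miniature. Everything below is an illustrative assumption rather than part of the theory: the production graph linking the belief-processes, the averaging update rule, and the noise level all stand in for the "particular probabilities" just mentioned.

```python
import random

# Illustrative production graph: which beliefs feed each belief.
# (An assumption for this sketch; the text only says such links exist.)
PRODUCERS = {
    0: [1, 2, 3],      # C0 (general conspiratorial outlook) fed by specific theories
    1: [0, 4],         # C1 (food conspiracy) fed by C0 and C4
    2: [0, 5],         # C2 (work conspiracy) fed by C0 and C5
    3: [0, 6],         # C3 (leg conspiracy) fed by C0 and C6
    4: [1, 0],
    5: [2, 0],
    6: [3, 0],
}

def step(strengths, noise=0.1, rng=random):
    """One tick of a toy 'cognitive law of motion': each belief's new
    strength is the mean strength of its producers, perturbed by
    zero-mean noise and clipped to [0, 1]."""
    new = {}
    for i, parents in PRODUCERS.items():
        support = sum(strengths[p] for p in parents) / len(parents)
        new[i] = min(1.0, max(0.0, support + rng.uniform(-noise, noise)))
    return new

def simulate(steps=1000, seed=0):
    """Iterate the toy dynamic from uniform initial strengths and
    return the full trajectory."""
    rng = random.Random(seed)
    s = {i: 0.5 for i in PRODUCERS}
    history = [s]
    for _ in range(steps):
        s = step(s, rng=rng)
        history.append(s)
    return history
```

Under this toy dynamic the individual strengths wander pseudorandomly while the mutual-support structure keeps them bounded, which is the qualitative picture suggested above; diagnosing genuine chaos would of course require a less trivial model and estimates of its Lyapunov exponents.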

9.3.3. Implication and Conspiracy (*)

As an aside, it is interesting to relate the self-production of Jane's belief system to the notion
of informational implication introduced in Chapter Four. Recall that A significantly implies B,
with respect to a given deductive system, if there is some chain of deductions leading from A to
B, which uses A in a fundamental way, and which is at least as simple as other, related chains of
deductions. What interests us here is how it is possible for two entities to significantly imply
each other.

Formally, "A implies B to degree K" was written as A -->K B, where K was defined as the
minimum of cL + (1-c)M, for any sequence Y of deductions leading from A to B (any sequence
of expressions

A=B0,B1,...,Bn=B, where Bi+1 follows from Bi according to one of the transformation rules of the
deductive system in question). L was the ratio |B|/|Y|, and M was a conceptually simple but
formally messy measure of how much additional simplicity Y provides over those other proofs
that are very similar to it. Finally, c was some number between 0 and 1, inserted to put the
quantities L and M on a comparable "scale."
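To make the definition concrete, the toy sketch below measures the complexity |x| of a belief by the length of its description, and takes M as an externally supplied number for each candidate sequence, since its full definition is, as noted, formally messy. All names and numbers here are illustrative assumptions, not part of the formal theory.

```python
def complexity(x):
    """Toy stand-in for the pattern-theoretic complexity |x|:
    just the length of a belief's description string."""
    return len(x)

def degree_of_implication(b, sequences, m_values, c=0.5):
    """K for A -->K B: the minimum, over candidate deduction
    sequences Y, of c*L + (1-c)*M, where L = |B| / |Y| and M is
    supplied externally for each sequence."""
    best = None
    for seq, m in zip(sequences, m_values):
        size_y = sum(complexity(expr) for expr in seq)
        l = complexity(b) / size_y
        k = c * l + (1 - c) * m
        best = k if best is None else min(best, k)
    return best

# One hypothetical candidate sequence from "L and W" to "F", written
# as the expressions B0, ..., Bn it passes through:
seq = ["leg+work conspiracies", "a conspiracy is general", "food conspiracy"]
k_f = degree_of_implication("food conspiracy", [seq], [0.4])
```

The minimum over sequences means a single short, M-favored chain of deductions is enough to make the implication strong, which is exactly the property exploited in the circular-implication discussion below.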

For the sake of simplicity, let us rechristen the beliefs C1, C2 and C3 as "F," "W," and "L"
respectively. In other words, L denotes the hypothesis that the leg pain is due to a conspiracy, W
denotes the hypothesis that the work and social problems are due to a conspiracy, and F denotes
the hypothesis that the food problems are due to a conspiracy.


Phrased in terms of implication, the self-generating dynamics of Jane's belief system would
seem to suggest

(L and W) -->K(F) F

(F and W) -->K(L) L

(F and L) -->K(W) W

where the degrees K(F), K(L) and K(W) are all non-negligible. But how is this possible?

Let Y(F) denote the "optimal sequence" used in the computation of K(F); define Y(L) and
Y(W) similarly. One need not worry about exactly what form these optimal sequences take; it is
enough to state that the "deductive system" involved has to do with Jane's personal belief system.
Her belief system clearly includes an analogical transformation rule based on the idea that if one
thing is caused by a conspiracy, then it is likely that another thing is too, which transforms
statements of the form "A is likely caused by a conspiracy" into other statements of the form "A
and ___ are likely caused by a conspiracy."

Then, it is clear that L(Y) cannot be large for all of these Y, perhaps not for any of them. For
one has

L[Y(F)] = |F|/|Y(F)| < |F|/[|L|+|W|]

L[Y(W)] = |W|/|Y(W)| < |W|/[|L|+|F|]

L[Y(L)] = |L|/|Y(L)| < |L|/[|F|+|W|]

For example, if each of the conspiracy theories is of equal intuitive simplicity to Jane, then all
these L(Y)'s are less than 1/3. Or if, say, the work theory is twice as simple as the others, then
L[Y(W)] may be close to 1, but L[Y(F)] and L[Y(L)] are less than 1/4. In any case, perhaps
sometimes the most "a priori" plausible of the beliefs may attain a fairly large K by having a
fairly large L, but for the others a large K must be explained in terms of a large M.
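The arithmetic behind these bounds is easy to check. The sketch below assumes that |Y| counts the conclusion as well as the premises, as the sequence form A = B0, ..., Bn = B suggests; with equal toy complexities this yields exactly the 1/3 figure, and any intermediate deduction steps only push L lower.

```python
def l_ratio(conclusion_size, expr_sizes):
    """L = |B| / |Y| for a deduction sequence Y whose expressions
    have the given toy complexities (conclusion included, since
    Y runs A = B0, ..., Bn = B)."""
    return conclusion_size / sum(expr_sizes)

u = 5                                  # equal intuitive simplicity for F, W and L
shortest = l_ratio(u, [u, u, u])       # the two premises plus the conclusion: exactly 1/3
with_steps = l_ratio(u, [u, u, u, u])  # one intermediate deduction step: strictly lower
```

So a large degree K for any of these circular implications cannot, in general, come from L; as the text argues next, it must come from M.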

So, recall how the constant M involved in determining the degree K in A -->K B was defined --
as the weighted sum, over all proofs Z of B, of L(Z). The weight attached to Z was determined
by I(Z|Y), i.e. by how similar Z is to Y. A power p was introduced into the weight functions, in
order to control how little those Z that are extremely similar to Y are counted.

If M[Y(W)] is large, this means that the theory that a conspiracy is responsible for Jane's work
problems is much simpler than other theories similar to it. This can be taken in two ways. If p is
very large, then M basically deals only with proofs that are virtually identical to Y. On the other
hand, if p is moderate in size, then M will incorporate a comparison of the simplicity granted by
Y(W) with the simplicity of true alternatives, such as the theory that Jane herself is responsible
for her work problems. Now, to almost any other person, it would be very simple indeed to
deduce Jane's work problems from Jane's personality. But to Jane herself, this deduction is not at
all intuitive.

So, formally speaking, Jane's circular implication can be seen to come from two sources. First
of all, a very large p, which corresponds to a very lenient definition of what constitutes a "natural
proof" of something. Or, alternately, a blanket negative judgement of the simplicity of all
alternative theories. Both of these alternatives amount to the same thing: excessive self-trust,
non-consideration of alternative hypotheses ... what I will call conservatism.

So, in sum, the informational-implication approach has not given us terribly much by way of
new insight into Jane's situation. What I have shown, on the other hand, is that Jane's real-life
delusional thinking fits in very nicely with the formal theory of reasoning given in Chapter Four.
This sort of correspondence between theory and everyday reality is precisely what the standard
Boolean-logic approach to reasoning lacks.


Jane's belief system is clearly, according to the standards of modern "sane" society, irrational.
It is worth asking how this irrationality is tied in with the dynamical properties of the belief
system, as discussed in the previous section. This investigation will lead toward a strikingly
general dynamical formulation of the concept of rationality.

9.4.1. Conservatism and Irrelevance

The irrationality of Jane's belief system manifests itself in two properties. First of all, Jane is
simply too glib in her generation of theories. Given any unpleasant situation, her belief system
has no problem whatsoever reeling off an explanation: the theory is always "the conspirators did
it." New events never require new explanations. No matter how different one event is from
another, the explanation never changes. Let us call this property conservatism.

To put it abstractly, let Es denote the collection of beliefs which a belief system generates in
order to explain an event s. That is, when situation s arises, Es is the set of explanatory processes
which the belief system generates. Then one undesirable property of Jane's belief system is that
the rate of change of Es with respect to s is simply too small.

The second undesirable property of Jane's belief system is, I suggest, that the theories created
to explain an event never have much to do with the specific structure of the event. Formally, the
collection of patterns which emerge between Es and s is invariably very small. Her belief system
explains an event in a way which has nothing to do with the details of the actual nature of the
event. Let us call this property irrelevance.

Of course, Jane would reject these criticisms. She might say "I don't need to change my
explanation; I've always got the right one!" A dogmatist of this sort is the exact opposite of the
prototypical skeptic, who trusts nothing. The skeptic is continually looking for holes in every
argument; whereas Jane doesn't bother to look for holes in any argument. She places absolute
trust in one postulate, and doesn't even bother to look for holes in arguments purporting to
contradict it, for she simply "knows" the holes are there.

This attitude may be most easily understood in the context of the mathematical theory of
pattern. The pattern-theoretic approach to intelligence assumes that the environment is chaotic on
the level of detailed numerical parameters, but roughly structurally predictable. In Charles S.
Peirce's phrase, it assumes that the world possesses a "tendency to take habits."

Under this assumption, it is clear that conservatism, irrelevance and reluctance to test are, in
any given case, fairly likely to be flaws. First, if change is likely, if old ideas are not
necessarily true for the future, then a belief system which does not change is undesirable. And
second, if induction is imperfect, and the mind works by induction, then one must always face
the fact that one's own conclusions may be incorrect.

9.4.2. The Genesis of Delusion

Why, exactly, is Jane's belief system conservative and irrelevant? To answer this, it is
convenient to first ask how Jane's mind ever got into the irrational attractor which I have
described.

The beginning, it seems, was an instance of C5 and C2: a professor at school was asking her
questions relating to her Overeaters Anonymous group, and she came to the conclusion that
people were talking about her behind her back. Whether or not this initial conspiracy was real is
not essential; the point is that it was nowhere near as unlikely as the conspiracies imagined by
her later.

Even if no real conspiracy was involved, I would not say that this first step was "unjustified".
It was only a guess; and there is nothing unjustified about making a wrong guess. After all, the
mind works largely by trial and error. What is important is that Jane's initial belief in a
conspiracy was not strongly incompatible with the remainder of her sane, commonsensical mind.

After this, all that were needed were a few instances of C4 or C6, and a few more instances of
C5. This caused the creation of some C0 belief-processes; then the feedback dynamics implicit in
the analysis of the previous section kicked in. The point is that only a small number of Ci are
necessary to start a cybernetic process leading to a vast proliferation. Eventually C0 became so
strong that plausible stories about conspiracies were no longer necessary; an all-purpose "them"
was sufficient.

Most of us weather unpleasant experiences without developing extravagant conspiracy
theories. In the initial stages of its growth, Jane's conspiratorial belief system depended crucially
on certain other aspects of Jane's personality; specifically, on her absolute refusal to accept any
responsibility for her misfortunes. But once this early phase was past, the spread of her belief
system may have had little to do with the remainder of her mind. It may have been a process of
isolated expansion, like the growth of a cancer.

9.4.3. Rationality and Dynamics


The lesson is that irrational belief systems are self-supporting, self-contained, integral units.
Considered as attractors, they are just as genuine and stable as the belief systems which we
consider "normal." The difference is that they gain too much of their support from internal self-
generating dynamics -- they do not draw enough on the remainder of the mental process network.

This is perhaps the most objective test of rationality one can possibly pose: how much support
is internal, and how much is external? Excessive internal support is clearly inclined to cause
conservatism and irrelevance. In this way the irrationality of a person's mind may be traced
back to the existence of overly autonomous subattractors of the cognitive equation. The mind
itself is an attractor of the cognitive equation; but small portions of the mind may also be
attractors for this same equation. When a portion of the mind survives because it is itself an
attractor, rather than because of its relations with the rest of the mind, there is a significant
danger of irrationality.

Looking ahead to Chapter Twelve, another way to put this is as follows: irrationality is a
consequence of dissociation. This formulation is particularly attractive since dissociation has
been used as an explanation for a variety of mental illnesses and strange psychological
phenomena -- schizophrenia, MPD, post-traumatic stress syndrome, cryptomnesia, hypnosis,
hysterical seizure, etc. (Van der Kolk et al, 1991). The general concept of dissociation is that of a
"split" in the network of processes that makes up the mind. Here I have shown that this sort of
split may arise due to the dynamical autonomy of certain collections of processes.


Consider once again Galileo's belief that what one sees when one points a telescope out into
space is actually there. As noted above, this seems quite reasonable from today's perspective.
After all, it is easy to check that when one points a telescope toward an earthbound object, what
one sees is indeed there. But we are accustomed to the Newtonian insight that the same natural
laws apply to the heavens and the earth; and the common intuition of Galileo's time was quite the
opposite. Hence Galileo was going against commonsense logic.

Also, it was said at the time that he was making hypotheses which could not possibly be
proven, merely dealing in speculation. Now we see that this objection is largely unfounded; we
have measured the heavens with radio waves, we have sent men and robotic probes to nearby
heavenly bodies, and the results agree with what our telescopes report. But to the common sense
of Galileo's time, the idea of sending men into space was no less preposterous than the notion of
building a time machine; no less ridiculous than the delusions of a paranoiac.

Furthermore, it is now known that Galileo's maps of the moon were drastically incorrect; so it
is not exactly true that what he saw through his primitive telescopes was actually there!

Galileo argued that the telescope gave a correct view of space because it gave a correct view of
earth; however, others argued that this analogy was incorrect, saying "when the telescope is
pointed toward earth, everyone who looks through it sees the same thing; but when it's pointed
toward space, we often see different things."


Now we know enough about lenses and the psychology of perception to make educated
guesses as to the possible causes of this phenomenon, reported by many of those who looked
through Galileo's telescopes. But at the time, the only arguments Galileo could offer were of the
form "There must be something funny going on either in your eye or in this particular lens,
because what is seen through the telescope in the absence of extraneous interference is indeed
truly, objectively there." In a way, he reasoned dogmatically and ideologically rather than
empirically.

How is Galileo's belief system intrinsically different from the paranoid belief system discussed
above? Both ignore common sense and the results of tests, and both are founded on "wild"
analogies. Was Galileo's train of thought just as crazy a speculation as Jane's, the only difference
being that Galileo was lucky enough to be "right"? Or is it more accurate to say that, whereas
both of them blatantly ignored common logic in order to pursue their intuitions, Galileo's
intuition was better than Jane's? I find the latter explanation appealing, but it begs the question:
was the superiority of Galileo's intuition somehow related to the structure of his belief system?

Whereas Jane's belief system is conservative and irrelevant, Galileo's belief system was
productive. Once you assume that what you see through the telescope is really out there, you
can look at all the different stars and planets and draw detailed maps; you can compare what you
see through different telescopes; you can construct detailed theories as to why you see what you
see. True, if it's not really out there then you're just constructing an elaborate network of theory
and experiment about the workings of a particular gadget. But at least the assumption leads to a
pursuit of some complexity: it produces new pattern. A conspiracy theory, taken to the extreme
described above, does no such thing. It gives you access to no new worlds; it merely derides as
worthless all attempts to investigate the properties of the everyday world. Why bother, if you
already know what the answer will be?

Call a belief system productive to the extent that it is correlated with the emergence of new
patterns in the mind of the system containing it. I suggest that productivity in this sense is
strongly correlated with the "reasonableness" of belief systems. The underlying goal of the next
few sections is to pursue this correlation, in the context of the dual network and the cognitive
equation.

9.5.1. Stages of Development

One often hears arguments similar to the following: "In the early stages of the development of
a theory, anything goes. At this stage, it may be advisable to ignore discouraging test results -- to
proceed counterinductively. This can lend insight into flaws in the test results or their standard
interpretations, and it can open the way to creative development of more general theories which
may incorporate the test results. And it may be advisable to think in bizarre, irrational ways -- so
as to generate original hypotheses. But once this stage of discovery is completed and the stage of
justification is embarked upon, these procedures are no longer allowed: then one must merely test
one's hypotheses against the data."


Of course, this analysis of the evolution of theories is extremely naive: science does not work
by a fragmented logic of hypothesis formation and testing, but rather by a systematic logic of
research programmes. But there is obviously some truth to it.

I have suggested that two properties characterize a dogmatic belief system:

1) the variation in the structure of the explanations offered with respect to the events being
explained is generally small (formally, d[St(Es),St(Et)]/d#[s,t] is generally small, where d and d#
denote appropriate metrics)

2) the nature of explanations offered has nothing to do with the events being explained
(formally, Em(Es,s) is generally small)

Intuitively, these conditions -- conservatism and irrelevance -- simply mean that the system is
not significantly responsive to test. In light of these criteria, I propose the following fundamental
normative rule:

During the developmental stage, a belief system may be permitted to be unresponsive to
test results (formally, to have consistently small d[St(Es),St(Et)]/d#[s,t] and/or Em(Es,s)).
However, after this initial stage has passed, this should not be considered justified.

This is a systemic rendering of the classical distinction between "context of discovery" and
"context of justification."
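These two conditions admit a minimal computational reading. In the sketch below, events and explanations are sets of features, the metric d is Jaccard distance, and the emergence term Em is approximated by bare overlap; this is a toy rendering chosen for illustration, not the book's pattern-theoretic machinery.

```python
def jaccard_distance(a, b):
    """Toy metric d between two sets of features."""
    union = a | b
    return 1 - len(a & b) / len(union) if union else 0.0

def conservatism_score(explain, events):
    """Mean variation of the explanation as the event varies.
    Values near zero mean the system says the same thing about
    everything: condition 1, conservatism."""
    pairs = [(s, t) for s in events for t in events if s is not t]
    return sum(jaccard_distance(explain(s), explain(t))
               for s, t in pairs) / len(pairs)

def relevance_score(explain, events):
    """Mean overlap between an explanation and its event. Values
    near zero mean explanations ignore the event's structure:
    condition 2, irrelevance."""
    return sum(len(explain(s) & s) / len(s) for s in events) / len(events)

# A Jane-style system: one all-purpose answer to every event.
jane = lambda event: {"the", "conspirators", "did", "it"}
# A responsive system: explanations built from the event itself.
responsive = lambda event: set(event) | {"caused", "by"}

events = [{"leg", "pain"}, {"work", "trouble"}, {"food", "illness"}]
```

On this toy measure the Jane-style system scores zero on both axes, while the responsive system varies its explanations from event to event and incorporates each event's own features.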

I will call any belief system fulfilling the conditions of non-conservatism and (sic) non-
irrelevance a dialogical system. A dialogical system is one which engages in a dialogue with its
context. The opposite of a dialogical system is a monological system, a belief system which
speaks only to itself, ignoring its context in all but the shallowest respects.

A system which is in the stage of development, but will eventually be dialogical, may be
called predialogical. In its early stage of development, a predialogical system may be
indistinguishable from a monological one. Predialogicality, almost by definition, can be
established only in retrospect. Human minds and societies deal with the problem of
distinguishing monologicality from predialogicality the same way they deal with everything else
-- by induction and analogy, by making educated guesses based on what they've seen in the past.
And, of course, these analogies draw on certain belief systems, thus completing the circle and
destroying any hope of gleaning a truly objective theory of "justification."

The terms "dialogical" and "monological" are not original; they were used by Mikhail Bakhtin
in his analysis of Dostoevsky. The reality of Dostoevsky's novels is called "dialogical," meaning
that it is the result of significant interaction between different world-views.

His path leads not from idea to idea, but from orientation to orientation. To think, for him, means
to question and to listen, to try out orientations.... Even agreement retains its dialogic character
... it never leads to a merging of voices and truths in a single impersonal truth, as in the
monologic world.


Each of Dostoevsky's major novels contains a number of conflicting belief systems -- and the
action starts when the belief systems become dialogical in the sense defined here. They test each
other, and produce creative explanations in response to the phenomena which they provide for
each other.

9.5.2. Progressive and Regressive

Lakatos has proposed that good scientific research programmes are "progressive" in that they
consistently produce new results which are surprising or dramatic. Bad research programmes are
"regressive" in that they do not. This is a valuable analysis, but I don't think it gets to the core of
the matter. "Surprising" and "dramatic" are subjective terms; so this criterion doesn't really say
much more than "a programme is good if it excites people."

However, I do think that the "monologicity/dialogicity" approach to justification is closely
related to Lakatos's notion of progressive and regressive research programmes. It is quite clear
that if a system always says the same thing in response to every test, then it is unlikely to give
consistently interesting output, and is hence unlikely to be progressive. And I suggest that the
converse is also true: that if a system is capable of incorporating sensitive responses to data into
its framework, then it is reasonably likely to say something interesting or useful about the
context which generates the data.

Another way to phrase this idea is as follows: in general, dialogicality and productivity are
roughly proportional. That is: in the real world, as a rule of thumb, any system which produces
a lot of new pattern is highly dialogical, and any system which is highly dialogical produces a lot
of new pattern.

The second of these assertions follows from the definition of dialogicality. The former,
however, does not follow immediately from the nature of belief systems, but only from the
general dynamics of mind; I will return to it in Section 9.7.

9.5.3. Circular Implication Structure

For a slightly different point of view on these issues, let us think about belief systems in terms
of implication. Recall the passage above in which I analyzed the origins of Jane's paranoid
belief system. I considered, among others, the following triad of implications:

My leg pain and my trouble at work are due to conspiracies, so my problem with food probably is too

My trouble at work and my problem with food are due to conspiracies, so my leg pain probably is too

My leg pain and my problem with food are due to conspiracies, so my trouble at work probably is too

In formulas, I let L denote the hypothesis that the leg pain is due to a conspiracy, W denote the
hypothesis that the work problems are due to a conspiracy, and F denote the hypothesis that the
food problems are due to a conspiracy, and I arrived at:

(L and W) --> F

(W and F) --> L

(L and F) --> W

(where each implication, in accordance with the theory of informational implication, had a
certain degree determined by the properties of Jane's belief system).

The same basic implication structure can be associated with any belief system, not just a
conspiratorial belief system. Suppose one has a group of phenomena, and then a group of
hypotheses of the form " this phenomenon can be explained by my belief system." These
hypotheses will support one another if a large number of implications of the form

this and this and ... this can be explained by my belief system -->
that can be explained by my belief system

hold with nontrivially high degree. Earlier I reviewed conditions under which a collection of
implications of this form can hold with nontrivially high degree. Our conclusion was that a high
degree of conservatism is required: one must, when determining what follows what, not pay too
much attention to hypotheses dissimilar to those which one has already conceived. If a high
degree of conservatism is present, then it is perfectly possible for a group of beliefs to mutually
support each other in this manner.

For a very crude and abstract example, consider the belief that the outside world is real, and
the belief that one's body is real. One believes the outside world is real because one feels it -- this
is G.E. Moore's classic argument, poor philosophy but good common sense ... to prove the
external world is really there, kick something! And, on the other hand, why does one believe
one's body is real and not an hallucination? Not solely because of one's internal kinesthetic
feelings, but rather largely because of the sensations one gets when moving one's hand through
the air, walking on the ground, and in general interacting with the outside world.

It doesn't take much acumen to see how these two phenomenological "proofs" fit together. If
the outside world were an hallucination, then moving one's body through it would be no
evidence for the reality of one's body. One has two propositions supporting one another.


According to the dynamics of the dual network, various belief systems will compete for
survival -- they will compete not to have the processes containing their component beliefs
reprogrammed. I suggest that circular support structures are an excellent survival strategy, in that
they prevent the conception of hypotheses other than those already contained in the belief
system.

But opposed to this, of course, is the fact that the conservatism needed to maintain a circular
support structure is fundamentally incompatible with dialogicality. Circular support structures
and dialogicality are both quality survival strategies, and I suggest that both strategies are in
competition in most large belief systems. Dialogicality permits the belief system to adapt to new
situations, and circular support structures permit the belief system to ignore new situations. In
order to have long-term success, a belief system must carefully balance these two contradictory
strategies -- enough dialogicality to consistently produce interesting new pattern, and enough
circular support to avoid being wiped out when trouble arises.

The history of science, as developed by Kuhn, Feyerabend, Lakatos and others, shows that in
times of crisis scientific belief systems tend to depend on circular support. In the heyday of
Newtonian science, there was a little circular support: scientists believed the Newtonian
explanation of W partly because the Newtonian explanations of X, Y and Z were so good, and
believed the Newtonian explanation X partly because the Newtonian explanations of W, Y and Z
were so good, et cetera. But toward the end of the Newtonian era, many of the actual
explanations declined in quality, so that this circular support became a larger and larger part of
the total evidence in support of each hypothesis of Newtonian explanation.

Circular implication structure is an inevitable consequence of belief systems being attractors
for the cognitive equation. But the question is, how much is this attraction relied on as the sole
source of sustenance for the belief system? If circular support, self-production, is the belief
system's main means of support, then the belief system is serving little purpose relative to the
remainder of the mind: it is monological. This point will be pursued in more detail in Chapter


So, in conclusion, a belief system is:

1) a miniature dual network structure,

2) a structured transformation system,

3) an attractor for the cognitive equation.

What does this finally say about the proposed correlation between dialogicality and
productivity?
It is, of course, conceivable that a monological system might create an abundance of new
pattern. To say that this is highly unlikely is to say that, in actuality, new pattern almost always
emerges from significant interaction, from systematic testing. But why should this be true?

The correct argument, as I have hinted above, proceeds on grounds of computational
efficiency. This may at first seem philosophically unsatisfying, but on the other hand it is very
much in the spirit of pattern philosophy -- after all, the very definition of pattern involves an
abstract kind of "computational efficiency."

A monological system, psychologically, represents a highly dissociated network of belief
processes. This network does of course interact with the remainder of the mind -- otherwise it
would have no effects. But it restricts its interactions to those in which it can play an actor role;
it resists being modified, or entering into symbiotic loops of inter-adjustment. This means that
when a monological belief system solves a problem, it must rely only, or primarily, upon its
own resources.

But the nature of thought is fundamentally interactive and parallel: intelligence is achieved
by the complex interactions of different agents. A dialogical belief system containing N
modestly-sized processes can solve problems which are of such an intrinsic computational
complexity that no excessively dissociated network of N modestly-sized processes can ever
solve them. For a dialogical system can solve problems by cooperative computation: by using
its own processes to request contributions from outside processes. A monological system, on the
other hand, cannot make a habit of interacting intensively with outside processes -- if it did, it
would not be monological.

This, I suggest, is all there is to it. Despite the abstract terminology, the idea is very simple.
Lousy, unproductive belief systems are lousy precisely because they keep to themselves; they do
not make use of the vast potential for cooperative computation that is implicit in the dual
network. This is the root of their conservatism and irrelevance. They are conservative and
irrelevant because, confronted with the difficult problems of the real world, any belief system of
their small size would necessarily be conservative and irrelevant, if it did not extensively avail
itself of the remainder of the mind.

All this leaves only one question unanswered: why do monological systems arise, if they are
unproductive and useless? The answer to this lies in the cognitive equation. Attractors can be
notoriously stubborn. And this leads us onward....

Chapter Ten


The train of thought reported in this chapter began in the fall of 1991. My father was writing
Turncoats and True Believers (Ted Goertzel, 1993), a book about political ideologies, those who
abandon them, and those who maintain them; he was collecting anecdotes from a variety of
biographies and autobiographies, and he was struck by the recurrent patterns. In some intuitively
clear but hard-to-specify sense, ideologues of all different stripes seemed to think alike.

My father has studied ideology for nearly a quarter century, and his approach is thoroughly
rationalist: he believes that ideological belief systems coincide with irrational thought, whereas
nonideological belief systems coincide with rational thought. This rationalism implies that
adherents to nonideological belief systems should all think alike -- they are all following the
same "correct" form of logical reasoning. But it says nothing about the nature of irrationality --
it does not explain why deviations from "correct" logical reasoning all seem to follow a few
simple psychological forms.

He hoped to resolve the puzzle by coming up with a "litmus test" for belief systems -- a
property, or a list of properties, distinguishing irrational, ideological reasoning from rational
thought. For example, two properties under tentative consideration for such a list were:

1) adherents to ideological belief systems tend to rely on reasoning by analogy rather than
logical deduction

2) adherents to ideological belief systems tend to answer criticism by reference to "hallowed"
texts, such as the Bible or Das Kapital.

But both of these properties were eventually rejected: the first because analogy is an essential
part of logical deduction (as shown in Chapter Four); and the second because reference to
hallowed texts is really a surface symptom, not a fundamental flaw in reasoning.

Every property that he came up with was eventually discarded, for similar reasons. Eventually
he decided that, given these serious conceptual troubles, Turncoats and True Believers would
have to do without a formal theory of justification -- a decision that probably resulted in a much
more entertaining book! The present chapter, however, came about as a result of my continued
pursuit of an explanation of the difference between "rational" and "ideological" thought.

I will not discuss political belief systems here -- that would take us too far afield from the
cognitive questions that are the center of this book. However, the same questions that arise in the
context of political belief systems, also emerge from more general psychological considerations.
For I have argued that strict adherence to formal logic does not characterize sensible, rational
thought -- first because formal logic can lead to rational absurdities; and second because useful
applications of formal logic require the assistance of "wishy-washy" analogical methods. But if
formal logic does not define rationality -- then what does?

In this chapter I approach rationality using ideas drawn from evolutionary biology and
immunology. Specifically, I suggest that old-fashioned rationalism is in some respects similar to
Neo-Darwinism, the evolutionary theory which holds the "fitness" of an organism to be a
property of the organism in itself. Today, more and more biologists are waking up to the
sensitive environment-dependence of fitness, to the fact that the properties which make an
organism fit may not even be present in the organism, but may be emergent between the
organism and its environment. And similarly, I propose, the only way to understand reason is to
turn the analogy-dependence of logic into a tool rather than an obstacle, and view rationality as
a property of the relationship between a belief system and its "psychic environment."

In order to work this idea out beyond the philosophical stage, one must turn to the dual
network model. Productivity alone does not guarantee the survival of a belief system in the dual
network. And unproductivity does not necessarily militate against the survival of a belief
system. What then, I asked, does determine survival in the complex environment that is the
dual-network psyche?

There are, I suggest, precisely two properties common to successful belief systems:

1) being an attractor for the cognitive equation

2) being productive, in the sense of creatively constructing new patterns in response to
environmental demands

A belief system cannot survive unless it meets both of these criteria. But some belief systems
will rely more on (1) for their survival, and some will rely more on (2). Those which rely mainly
on (1) tend to be monological and irrational; those which rely mainly on (2) are dialogical,
rational and useful. This is a purely structural and systemic vision of rationality: it makes no
reference to the specific contents of the belief systems involved, nor to their connection with the
external, "real" world, but only to their relationship with the rest of the mind.

In this chapter I will develop this approach to belief in more detail, using complex biological
processes as a guide. First I will explore the systematic creativity inherent in belief systems, by
analogy to the phenomenon of evolutionary innovation in ecosystems. Then, turning to the
question of how a belief system interacts with the rest of the mind, I will present the following
crucial
analogy: belief systems are to the mind as the immune system is to the body. In other words,
belief systems protect the upper levels of the mind from dealing with trivial ideas. And, just like
immune systems, they maintain themselves by a process of circular reinforcement.

In addition to their intrinsic value, these close analogies between belief systems and biological
systems are a powerful argument for the existence of nontrivial complex systems science.
Circular reinforcement, self-organizing protection and evolutionary innovation are deep ideas
with relevance transcending disciplinary bounds. The ideas of this chapter should provide new
ammunition against those who would snidely assert that "there is no general systems theory."


As suggested in the previous chapter, a complex belief system such as a scientific theory may
be modeled as a self-generating structured transformation system. The hard core beliefs are
the initials I, and the peripheral beliefs are the elements of D(I,T). The transformations T are the
processes by which peripheral beliefs are generated from hard core beliefs. And all the elements
of D(I,T) are "components," acting on one another according to the logic of self-generating
component-systems.

For example, in the belief systems of modern physics, many important beliefs may be
expressed as equational models. There are certain situation-dependent rules by which basic
equational models (Maxwell's Laws, Newton's Laws, the Schrodinger Equation) can be used to
generate more complex and specific equational models. These rules are what a physicist needs to
know but an engineer (who uses the models) or a mathematician (who develops the math used
by the models) need not. The structuredness of this transformation system is what allows
physicists to do their work: they can build a complex equational model out of simpler ones, and
predict some things about the behavior of the complex one from their knowledge about the
behavior of the simpler ones.

On the other hand, is the conspiratorial belief system presented above not also a structured
transformation system? Technically speaking, it fulfills all the requirements. Its hard core
consists of one simple conspiracy theory, and its D(I,T) consists of beliefs about psychological
and social structures and processes. Its T contains a variety of different methodologies for
generating situated conspiracy beliefs -- in fact, as a self-generating component-system, its
power of spontaneous invention can be rather impressive. And the system is structured, in the
sense required by continuous compositionality: similar phenomena correspond to similar
conspiracy theories. Yes, this belief system is an STS, though a relatively uninteresting one.

In order to rule out cases such as this, one might add to the definition of STS a requirement
stating that the set D(I,T) must meet some minimal standard of structural complexity. But there
is no pressing need to do this; it is just as well to admit simplistic STS's, and call them simplistic.
The important observation is that certain belief systems generate a high structural complexity
from applying their transformation rules to one another and their initials -- just as written and
spoken language systems generate a high structural complexity from combining their words
according to their grammatical rules.

And the meanings of the combinations formed by these productive belief systems may be
determined, to a high degree of approximation, by the principle of continuous
compositionality. As expressions become more complex, so do their meanings, but in an
approximately predictable way. These productive belief systems respond to their environments
by continually creating large quantities of new meaning.

Above it was proposed that, in order to be productive, in order to survive, a belief system
needs a generative hard core. A generative hard core is, I suggest, synonymous with a hard core
that contains an effective set of "grammatical" transformation rules -- rules that take in the
characteristics of a particular situation and put out expressions (involving hard core entities)
which are tailored to those particular situations. In other words, the way the component-system
which is a belief system works is that beliefs, using grammatical rules, act on other beliefs to
produce new beliefs. Grammatical rules are the "middleman"; they are the part of the definition
of f(g) whenever f and g are beliefs in the same belief system.

And what does it mean for an expression E to be "tailored to" a situation s? Merely that E and
s fit together, in the sense that they help give rise to significant emergent patterns in the set of
pairs {(E,s)}. That a belief system has a generative hard core means that, interpreted as a
language, it is complex in the sense introduced in the previous paragraph -- that it habitually
creates significant quantities of meaning.

The situatedness of language is largely responsible for its power. One sentence can mean a
dozen different things in a dozen different contexts. Similarly, the situatedness of hard core
"units" is responsible for the power of productive belief systems. One hard core expression can
mean a dozen different things in a dozen different situations. And depending upon the particular
situation, a given word, sentence or hard core expression will give rise to different new
expressions of possibly great complexity. To a degree, therefore, beliefs may be thought of as
triggers. When flicked by external situations, these triggers release appropriate emergent
patterns. The emergent patterns are not in the belief, nor are they in the situation; they are
fundamentally a synergetic production.

10.1.1. Evolutionary Innovation

To get a better view of the inherent creativity of belief systems, let us briefly turn to one of the
central problems of modern theoretical biology: evolutionary innovation. How is it that the
simple processes of mutation, reproduction and selection have been able to create such incredibly
complex and elegant forms as the human eye?

In The Evolving Mind two partial solutions to this problem are given. These are of interest
here because, as I will show, the problem of evolutionary innovation has a close relation with the
productivity of belief systems. This is yet another example of significant parallels among
different complex systems.

The first partial solution given in EM is the observation that sexual reproduction is a
surprisingly efficient optimization tool. Sexual reproduction, unlike asexual reproduction, is
more than just random stabbing out in the dark. It is systematic stabbing out in the dark.

And the second partial solution is the phenomenon of structural instability. Structural
instability means, for instance, that when one changes the genetic code of an organism slightly,
this can cause disproportionately large changes in the appearance and behavior of the organism.

Parallel to the biological question of evolutionary innovation is the psychological question of
evolutionary innovation. How is it that the simple processes of pattern recognition, motor control
and associative memory give rise to such incredibly complex and elegant forms as the
Fundamental Theorem of Calculus, or the English language?

One may construct a careful argument that the two resolutions of the biological problem of
evolutionary innovation also apply to the psychological case. For example, it is shown that the
multilevel (perceptual-motor) control hierarchy naturally gives rise to an abstract form of sexual
reproduction. For, suppose process A has subsidiary processes W and X, and process B has
subsidiaries X and Y. Suppose A judges W to work better than X, and reprograms X to work
like W. Then, schematically speaking, one has

A(t) = A', W, X

B(t) = B', X, Y

A(t+1) = A', W, W

B(t+1) = B', W, Y


(where A' and B' represent those parts of A and B respectively that are not contained in W, X or
Y). The new B, B(t+1), contains part of the old A and part of the old B -- it is related to the old A
and B as a child is related to its parents. This sort of reasoning can be made formal by reference
to the theory of genetic algorithms.
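This schematic crossover can be sketched in code. The representation below is my own invention (a process as a controller label plus a list of shared subsidiary names), not taken from EM; it merely illustrates how reprogramming a shared subsidiary yields a "child" process mixing material from two parents, as in genetic-algorithm crossover.

```python
# Toy sketch (representation assumed, not from the text): when A reprograms
# the shared subsidiary X to behave like W, every parent process that uses X
# inherits the change -- so B(t+1) mixes material from A and B, the way a
# genetic-algorithm child mixes material from two parents.

def reprogram(subs, old, new):
    """Return the subsidiary list with old replaced by (a copy of) new."""
    return [new if s == old else s for s in subs]

A = ("A'", ["W", "X"])          # A(t) = A', W, X
B = ("B'", ["X", "Y"])          # B(t) = B', X, Y

# A judges W better than X and reprograms the shared X to work like W.
A_next = (A[0], reprogram(A[1], "X", "W"))   # A(t+1) = A', W, W
B_next = (B[0], reprogram(B[1], "X", "W"))   # B(t+1) = B', W, Y

print(A_next)
print(B_next)
```

The new B contains part of the old A (a copy of W) and part of the old B (Y), which is exactly the parent-child relation described above.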

Sexual reproduction is an important corollary of the behavior of multilevel control networks.
Here, however, our main concern will be with structural instability. Let us begin with an
example from A. Lima de Faria's masterful polemic, Evolution Without Selection (1988). As
quoted in EM, Lima de Faria notes that

the 'conquest of the land' by the vertebrates is achieved by a tenfold increase in thyroid hormone
levels in the blood of a tadpole. This small molecule is responsible for the irreversible changes
that oblige the animal to change from an aquatic to a terrestrial mode of life. The transformation
involves the reabsorption of the tail, the change to a pulmonary respiration and other drastic
modifications of the body interior.... If the thyroid gland is removed from a developing frog
embryo, metamorphosis does not occur and the animal continues to grow, preserving the aquatic
structures and functions of the tadpole. If the thyroid hormone is injected into such a giant
tadpole it gets transformed into a frog with terrestrial characteristics....

There are species of amphibians which represent a fixation of the transition stage between the
aquatic and the terrestrial form. In them, the adult stage, characterized by reproduction, occurs
when they still have a flat tail, respire by gills and live in water. One example is... the mud-
puppy.... Another is... the Mexican axolotl.

The demonstration that these species represent transitional physiological stages was obtained
by administering the thyroid hormone to axolotls. Following this chemical signal their
metamorphosis proceeded and they acquired terrestrial characteristics (round tail and aerial
respiration). (p. 241)

This is a sort of paradigm case for the creation of new form by structural instability. The
structures inherent in water-breathing animals, if changed only a little, become adequate for the
breathing of air. And then, once a water-breathing animal comes to breathe air, it is of course
prone to obtain a huge variety of other new characteristics. A small change in a small part of a
complex network of processes can lead to a large ultimate modification of the product of the
network as a whole.

In general, consider any process that takes a certain "input" and transforms it into a certain
"output." The process is structurally unstable if changing the process a little bit, or changing its
input a little bit, can change the structure of (the set of patterns in) the output by a large amount.
This property may also be captured formally: in the following section, the first innovation ratio
is defined as the amount which changing the nature of the process changes the structure of the
output, and the second innovation ratio is defined as the amount which changing the nature of
the input changes the structure of the output.


When dealing with structures generated by structurally unstable processes, it is easy to
generate completely new forms -- one need merely "twiddle" the machinery a bit. Predicting
what these new forms will be is, of course, another matter.

The Innovation Ratios (*)

Let y and y' be any two processes, let z and z' be any two entities, and let e.g. y*z denote the
outcome of executing the process y on the entity z. For instance, in EM y and y' denote genetic
codes, z and z' are sets of environmental stimuli, and y*z and y'*z' represent the organisms
resultant from the genetic codes y and y' in the environments z and z'. Then the essential
questions regarding the creation of new form are:

1) what is the probability distribution of the "first innovation ratio"

d(S(y*z),S(y'*z)) / d#(y,y')

That is: in general, when a process is changed by a certain amount, how much is the structure of
the entities produced by the process changed? (d and d# denote appropriate metrics.)

2) what is the probability distribution of the "second innovation ratio"

d(S(y*z),S(y*z')) / d#(z,z')

That is: when an entity is changed by a certain amount, how much is the structure of the entity
which the process y transforms that entity into changed? For example, how much does the
environment affect the structure of an organism?
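These two ratios can be made concrete with a toy computation. In the sketch below, everything is an invented stand-in rather than a definition from EM: a "process" y is a character-substitution table, an "entity" z is a string, the structure functional S is the set of adjacent character pairs, and the metrics d and d# are simple difference counts.

```python
# Hypothetical illustration of the two innovation ratios. The choices of
# S, d, d#, and the process representation are all assumptions made for
# this sketch, not definitions from the source.

def S(output):
    """Crude stand-in for 'structure': the set of adjacent character pairs."""
    return {output[i:i + 2] for i in range(len(output) - 1)}

def d(s1, s2):
    """Structural distance: size of the symmetric difference of pattern sets."""
    return len(s1 ^ s2)

def apply_process(y, z):
    """y*z: apply process y (a translation table) to entity z (a string)."""
    return z.translate(str.maketrans(y))

y  = {"a": "x", "b": "y"}
y2 = {"a": "x", "b": "z"}    # slightly changed process y'
z  = "abab"
z2 = "abaa"                  # slightly changed entity z'

# d#(y,y') and d#(z,z'): count of differing table entries / characters.
d_proc = sum(y[k] != y2[k] for k in y)
d_ent  = sum(c1 != c2 for c1, c2 in zip(z, z2))

first_ratio  = d(S(apply_process(y, z)), S(apply_process(y2, z))) / d_proc
second_ratio = d(S(apply_process(y, z)), S(apply_process(y, z2))) / d_ent
print(first_ratio, second_ratio)  # → 4.0 1.0
```

Here a one-entry change to the process shifts the output's pattern set more than a one-character change to the input does; a structurally unstable process is one for which such ratios are often large.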

If these ratios were never large, then it would be essentially impossible for natural selection to
give rise to new form.

In EM it is conjectured that, where z and z' represent environments, y and y' genetic codes, and
y*z and y'*z' organisms, these ratios are large often enough that natural selection can give rise
to new form. This is not purely a mathematical conjecture. Suppose that for an arbitrary genetic
code the innovation ratios had a
small but non-negligible chance of being large. Then there may well be specific "clusters" of
codes -- specific regions in process space -- for which the innovation ratio is acceptably likely to
be large. If such clusters do exist, then, instead of a purely mathematical question, one has the
biological question of whether real organisms reside in these clusters, and how they get there and
stay there.

The structural instability of a process y may be defined as the average, over all y', of
d(S(y*z),S(y'*z))/d#(y,y') + d(S(y*z),S(y*z'))/d#(z,z') [i.e. of the sum of the first and second
innovation ratios]. In a system which evolves at least partly by natural selection, the tendency to
the creation of new form may be rephrased as the tendency to foster structurally unstable
processes.


Several mathematical examples of structurally unstable processes are discussed in EM. It has
been convincingly demonstrated that one-dimensional cellular automata can display a high
degree of structural instability. And it is well-known that nonlinear iterated function systems can
be structurally unstable; this is the principle underlying the oft-displayed Mandelbrot set.
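The cellular-automaton case is easy to demonstrate directly. The following small experiment is my own construction (not one of the EM examples): it flips a single bit of an elementary CA rule table and measures how much the resulting pattern changes.

```python
# Structural instability in one-dimensional cellular automata: changing a
# single entry of the rule table can change the generated pattern drastically.
# This demo and its parameters are illustrative assumptions.

def step(cells, rule):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule, steps=30, width=31):
    cells = [0] * width
    cells[width // 2] = 1           # single live cell in the middle
    for _ in range(steps):
        cells = step(cells, rule)
    return cells

# Rule 90 (nested Sierpinski triangles) vs. rule 90 with one table bit flipped.
a = run(90)
b = run(90 ^ 1)                     # differs in a single neighborhood entry
print(sum(x != y for x, y in zip(a, b)), "of", len(a), "cells differ")
```

A one-bit "twiddle" of the machinery yields a qualitatively different pattern, which is exactly the sense of structural instability defined above.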

10.1.2. Structural Instability of Belief Systems

Now, let us see how structural instability ties in with the concepts of monologicity and
dialogicality. One may consider the hard core of a belief system as a collection of processes y1,
y2,.... Given a relevant phenomenon z, one of the yi creates an explanation that may be denoted
yi*z. If similar phenomena can have dissimilar explanations, i.e. if yi*z can vary a lot as z varies
a little, then this means that the second innovation ratio is large; and it also fulfills half of the
definition of dialogicality -- it says that the explanation varies with the phenomenon being
explained.

The other half of the definition of dialogicality is the principle of relevance -- it says that
Em(yi*z,z) should be nontrivial; that the explanation should have something to do with the
phenomenon being explained. Part of the difficulty with maintaining a productive belief system
is the tension between creativity-promoting structural instability and the principle of relevance.

And what does the first innovation ratio have to do with belief systems? To see this, one must
delve a little deeper into the structure of belief systems. It is acceptable but coarse to refer to a
belief system as a collection of processes, individually generating explanations. In reality a
complex belief system always has a complex network structure.

Many explanation-generating procedures come with a collection of subsidiary procedures, all
related to each other. These subsidiaries "come with" the procedure in the sense that, when the
procedure is given a phenomenon to deal with, it either selects or creates (or some combination
of the two) a subsidiary procedure to deal with it. And in many cases the subsidiary procedures
come with their own subsidiary procedures -- this hierarchy may go several levels down, thus
providing a multilevel control network.

So, in a slightly less coarse approximation to this dual network structure, let us say that each
hard core process yi generates a collection of subprocesses yi1, yi2,.... For each i, let us consider
the explanations of a fixed phenomenon z generated by one of these subprocesses -- the collection
{yij*z, j=1,2,3,...}. The first innovation ratio [d(S(yij*z),S(y'*z))/d#(yij,y')] measures how much
changing the subprocess yij changes the explanation which the subprocess generates. This is a
measure of the ability of yi to come up with fundamentally new explanations by exploiting
structural instability. It is thus a measure of the creativity or flexibility of the hard core of the
belief system.

Of course, if a belief system has many levels, the first innovation ratio has the same meaning
on each level: it measures the flexibility of the processes on that level of the belief system. But
considering creativity on many different levels has an interesting consequence. It leads one to
ask of a given process, not only whether it is creative in generating subprocesses, but whether it
generates subprocesses that are themselves creative. I suggest that successful belief systems have
this property: their component processes tend to be creative in generating creative subprocesses.

This, I suggest, is one of the fundamental roles of belief systems in the dual network. Belief
systems are structured transformation systems that serve to systematically create new pattern
via multilevel structural instability.

Earlier I explained how the linguistic nature of belief systems helps make it possible for them
to generate complex explanations for novel situations. Linguistic structure allows one to
determine the form of a combination of basic building blocks, based on the meaning which one
wants that combination to have. Now I have also explained why linguistic structure is not
enough: in order to be truly successful in the unpredictable world, a belief system must be
systematically creative in its use of its linguistic structure.


A belief system is a complex self-organizing system of processes. In this section I will
introduce a crucial analogy between belief systems and a complex self-organizing physical
system: the immune system. If this analogy has any meat to it whatsoever, it is a strong new
piece of evidence in favor of the existence of a nontrivial complex systems science.

Recall that the multilevel control network is roughly "pyramidal," in the sense that each
process is connected to more processes below it in the hierarchy than above it in the hierarchy.
So, in order to achieve reasonably rapid mental action, not every input that comes into the lower
levels can be passed along to the higher levels. Only the most important things should be passed
further up.

For example, when a complex action -- say, reading -- is being learned, it engages fairly high-
level processes: consciousness, systematic deductive reasoning, analogical memory search, and
so on. But eventually, once one has had a certain amount of practice, reading becomes
"automatic" -- lower-level processes are programmed to do the job. Artful conjecture and
sophisticated deduction are no longer required in order to decode the meaning of a sentence.

An active belief about an entity s may be defined as a process in the multilevel control
hierarchy that:

1) includes a belief about s, and

2) when it gets s as input, deals with s without either

a) doing recursive virtually-serial computation regarding s, or

b) passing s up to a higher level.

In other words, an active belief about s is a process containing a belief about s that tells the
mind what to do about s in a reasonably expeditious way: it doesn't pass the buck to one of its
"bosses" on a higher level, nor does it resort to slow, ineffective serial computation.
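This buck-passing structure can be sketched schematically. The class below is an invented illustration (the names and the handled-set representation are my assumptions, not part of the model's formal definition): a low-level process handles an input itself when it holds an active belief about it, and escalates it otherwise.

```python
# Schematic sketch (structure invented for illustration) of an "active
# belief": a process in the control hierarchy that deals with an input
# itself rather than passing it up to a higher-level process.

class Process:
    def __init__(self, name, handled, parent=None):
        self.name = name
        self.handled = set(handled)   # inputs this process has active beliefs about
        self.parent = parent

    def deal_with(self, s):
        if s in self.handled:
            return f"{self.name} handles {s!r} directly"   # active belief
        if self.parent:
            return self.parent.deal_with(s)                # pass the buck upward
        return f"{s!r} reaches the top level"

top = Process("consciousness", {"novel word"})
reader = Process("reading-routine", {"the", "cat"}, parent=top)

print(reader.deal_with("cat"))         # handled by the low-level active belief
print(reader.deal_with("novel word"))  # escalated to a higher level
```

The practiced reader's routine, in this picture, is a store of active beliefs that keeps familiar words from ever reaching consciousness.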


This definition presupposes that individual "processes" in the dual network don't take a
terribly long time to run -- a noncontroversial assumption if, as in Edelman's framework, mental
processes are associated with clusters of cooperating neurons. Iterating single processes or
sequences of processes may be arbitrarily time-consuming, but that's a different matter.

All this motivates the following suggestive analogy: belief systems are to the mind as
immune systems are to the body. This metaphor, I suggest, holds up fairly well not only on the
level of purpose, but on the level of internal dynamics as well.

The central purpose of the immune system is to protect the body against foreign invaders
(antigens), by first identifying them and then destroying them. The purpose of a belief system,
on the other hand, is to protect the upper levels and virtual serial capacity of the mind against
problems, questions, inputs -- to keep as many situations as possible out of reach of the upper
levels and away from virtual serial processing, by dealing with them according to lower-level
active beliefs.

10.2.1. Immunodynamics

Let us briefly review the principles of immunodynamics. The easy part of the immune
system's task is the destruction of the antigen: this is done by big, dangerous cells well suited for
their purpose. The trickier duties fall to smaller antibody cells: determining what should be
destroyed, and grabbing onto the offending entities until the big guns can come in and destroy
them. One way the immune system deals with this problem is to keep a large reserve of different
antibody classes in store. Each antibody class matches (identifies) only a narrow class of
antigens, but by maintaining a huge number of different classes the system can recognize a wide
variety of antigens.

But this strategy is not always sufficient. When new antigens enter the bloodstream, the
immune system not only tries out its repertoire of antibody types, it creates new types and tests
them against the antigen as well. The more antigen an antibody kills, the more the antibody
reproduces -- and reproduction leads to mutation, so that newly created antibody types are likely
to cluster around those old antibody types that have been the most successful.

Burnet's (1976) theory of clonal selection likens the immune system to a population of
asexually reproducing organisms evolving by natural selection. The fittest antibodies reproduce
more, where "fitness" is defined in terms of match with antigen. But Jerne (1973) and others
showed that this process of natural selection is actually part of a web of intricate self-
organization. Each antibody is another antibody's antigen (or at least another "potential
antibody"'s antigen), so that antibodies are not only attacking foreign bodies, they are attacking
one another.

This process is kept in check by the "threshold logic" of immune response: even if antibody
type Ab1 matches antibody type Ab2, it will not attack Ab2 unless the population of Ab2 passes a
certain critical level. When the population does pass this level, though, Ab1 conducts an all-out
battle on Ab2. So, suppose an antigen which Ab2 recognizes comes onto the scene. Then Ab2 will
multiply, due to its success at killing antigen. Its numbers will cross the critical level, and Ab1
will be activated. Ab1 will multiply, due to its success at killing Ab2 -- and then anything which
matches Ab1 will be activated.

The process may go in a circle -- for instance, if Ab0 matches Ab1, whereas Ab2 matches Ab0.
Then one might potentially have a "positive feedback" situation, where the three classes mutually
stimulate one another. In this situation a number of different things can happen: any one of the
classes can be wiped out, or the three can settle down to a sub-threshold state.
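The threshold logic just described can be simulated in a few lines. All parameters below (threshold, growth and kill rates, initial populations) are invented for illustration; the point is only the qualitative behavior: Ab1 ignores Ab2 until Ab2's population crosses the critical level, then attacks and proliferates.

```python
# A toy simulation (parameters assumed, not from the immunology literature)
# of the "threshold logic" of idiotypic immune response: an antibody class
# attacks a matched class only when the target's population passes a
# critical level.

THRESHOLD = 100

def update(pops, matches, kill_rate=0.5, growth=1.2):
    """One round: each class grows, then attacks matched classes above threshold."""
    new = {name: int(p * growth) for name, p in pops.items()}
    for attacker, target in matches:
        if pops[target] > THRESHOLD:
            killed = int(kill_rate * pops[target])
            new[target] -= killed
            new[attacker] += killed // 2      # attacker proliferates on success
    return new

pops = {"Ab1": 20, "Ab2": 90}
matches = [("Ab1", "Ab2")]                    # Ab1 recognizes Ab2

for t in range(6):
    pops = update(pops, matches)
    print(t, pops)
```

Running this, Ab2 repeatedly grows past the threshold, gets knocked back down by Ab1, and recovers: the populations hover around the critical level rather than settling into total equilibrium.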

This threshold logic suggests that, in the absence of external stimuli, the immune system might
rest in total equilibrium, nothing attacking anything else. However, the computer simulations of
Alan Perelson and his colleagues at Los Alamos (Perelson 1989, 1990; de Boer and Perelson,
1990) suggest that in fact this equilibrium is only partial -- that in normal conditions there is a
large "frozen component" of temporarily inactive antibody classes, surrounded by a fluctuating
sea of interattacking antibody classes.

Finally, it is worth briefly remarking on the relation between network dynamics and immune
memory. The immune system has a very long memory -- that is why, ten years after getting a
measles vaccine, one still won't get measles. This impressive memory is carried out partly by
long-lived "memory B-cells" and partly by internal images. The latter process is what interests
us here. Suppose one introduces Ag = 1,2,3,4,5 into the bloodstream, thus provoking
proliferation of Ab1 = -1,-2,-3,-4,-5. Then, after Ag is wiped out, a lot of Ab1 will still remain.
The inherent learning power of the immune system may then result in the creation and
proliferation of Ab2 = 1,2,3,4,5. For instance, suppose that in the past there was a fairly large
population of Ab3 = 1,1,1,4,5. Then many of these Ab3 may mutate into Ab2. Ab2 is an internal
image of the antigen. It lacks the destructive power of the antigen, but it has a similar enough
shape to take the antigen's place in the idiotypic network.

Putting internal images together with immune networks leads easily to the conclusion that
immune systems are structurally associative memories. For, suppose the antibody class Ab1 is
somehow stimulated to proliferate. Then if Ab2 is approximately complementary to Ab1, Ab2 will
also be stimulated. And then, if Ab3 is approximately complementary to Ab2, Ab3 will be
stimulated -- but Ab3, being complementary to Ab2, will then be similar to Ab1. To see the value
of this, suppose

Ag = 5,0,0,0,5

Ab1 = -5,0,0,0,-5

Ab2 = 5,0,0,-6,0

Ab3 = 0,-4,0,6,0

Then the sequence of events described above is quite plausible -- even though Ab3 itself will not
be directly stimulated by Ag. The similarity between Ab3 and Ab1 refers to a different
subsequence than the similarity between Ab1 and Ag. But proliferation of Ag nonetheless leads
to proliferation of Ab3. This is the essence of analogical reasoning, of structurally associative
memory. The immune system is following a chain of association not unlike the chains of free
association that occur upon the analyst's couch. Here I have given a chain of length 3, but in
theory these chains may be arbitrarily long. The computer simulations of Perelson and de Boer,
and those of John Stewart and Francisco Varela (personal communication), suggest that the
immune systems contains chains that are quite long indeed.

One worthwhile question is: what good does this structurally associative capacity do for the
immune system? A possible answer is given by the speculations of John Stewart and his
colleagues at the Institut Pasteur (Stewart, 1992), to the effect that the immune system may
serve as a general communication line between different body systems. I have mentioned the
discovery of chains which, structurally, are analogous to chains of free association. Stewart's
conjecture is that these chains serve as communication links: one end of the chain connects
to, say, a neurotransmitter, and the other end to a certain messenger from the endocrine system.

10.2.2. Belief Dynamics

So, what does all this have to do with belief systems? The answer to this question comes in
several parts.

First of all, several researchers have argued that mental processes, just like antibodies,
reproduce differentially based on fitness. As discussed above, Gerald Edelman's version of this
idea is particularly attractive: he hypothesizes that types of neuronal clusters survive
differentially based on fitness.

Suppose one defines the fitness of a process P as the size of Em(P,N1,...,Nk) - Em(N1,...,Nk),
where the Ni are the "neighbors" of P in the dual network. And recall that the structurally
associative memory is dynamic -- it is continually moving processes
around, trying to find the "optimal" place for each one. From these two points it follows that the
probability of a process not being moved by the structurally associative memory is roughly
proportional to its fitness. For when something is in its proper place in the structurally
associative memory, its emergence with its neighbors is generally high.

This shows that, for mental processes, survival is in a sense proportional to fitness. In The
Evolving Mind it is further hypothesized that fitness in the multilevel control network
corresponds with survival: that a "supervisory" process has some power to reprogram its
"subsidiary" processes, and that a subsidiary process may even have some small power to
encourage change in its supervisor. Furthermore, it is suggested that successful mental processes
can be replicated. The brain appears to have the ability to move complex procedures from one
location to another (Blakeslee, 1991), so that even if one crudely associates ideas with regions of
the brain this is a biologically plausible hypothesis.

So, in some form, mental processes do obey "survival of the fittest." This is one similarity
between immune systems and belief systems.

Another parallel is the existence of an intricately structured network. Just as each antibody is
some other antibody's antigen, each active belief is some other active belief's problem. Each
active belief is continually putting questions to other mental processes -- looking a) to those on
the level above it for guidance, b) to those on its own level as part of structurally associative
memory search, and c) to those on lower levels for assistance with details. Any one of these
questions has the potential of requiring high-level intervention. Each active belief is continually
responding to "questions" posed by other active beliefs, thus creating a network of cybernetic
interactions.

Recall that, in our metaphor, the analogue of the "level" of an antigen or antibody population is,
roughly, "level" in the multilevel control network (or use of virtual serial computation). So the
analogue of threshold logic is that each active belief responds to a question only once that
question has reached its level, or a level not too far below.

As in the Ab1, Ab2, Ab3 cycle discussed above, beliefs can stimulate one another circularly.
One can have, say, two active beliefs B1 and B2, which mutually support one another. An
example of this was given a little earlier, in the context of Jane's paranoid belief system:
"conspiracy caused leg pain" and "conspiracy caused stomach pain."

When two beliefs support one another, both are continually active -- each one is being used to
support something. Thus, according to the "survival of the fittest" idea, each one will be
replicated or at least reinforced, and perhaps passed up to a higher level. This phenomenon,
which might be called internal conspiracy, is a consequence of what in Chapter Eight was
called structural conspiracy. Every attractor of the cognitive equation displays internal
conspiracy. But the converse is not true; internal conspiracy does not imply structural conspiracy.

Prominence in the dual network increases with intensity as a pattern (determined by the
structurally associative memory), and with importance for achieving current goals (determined
by the multilevel control network). Internal conspiracy is when prominence is achieved through
illusion -- through the conspiratorially-generated mirage of intensity and importance.

10.2.3. Chaos in Belief Systems and Immune Systems

Rob de Boer and Alan Perelson (1992) have shown mathematically that, even in an immune
system consisting of two antibody types, chaos is possible. And experiments at the Institut
Pasteur in Paris (Stewart, 1992) indicate the presence of chaotic fluctuations in the levels of
certain antibody types in mice. These chaotic fluctuations are proof of an active immune network
-- proof that the theoretical possibility of an interconnected immune system is physically realized.

Suppose that some fixed fraction of antibody types participates in the richly interconnected
network. Then these chaotic fluctuations ensure that, at any given time, a "pseudorandom"
sample of this fraction of antibody types is active. Chaotic dynamics accentuates the Darwinian
process of mutation, reproduction and selection, in the sense that it causes certain antibody types
to "pseudorandomly" reproduce far more than would be necessary to deal with external antigenic
stimulation. Then these excessively proliferating antibody types may mutate, and possibly
connect with other antibody types, forming new chains.

Of course, chaos in the narrow mathematical sense is not necessary for producing
"pseudorandom" fluctuations -- complex periodic behavior would do as well, or aperiodic
behavior which depends polynomially but not exponentially on initial conditions. But since we
know mathematically that immune chaos is possible, and we have observed experimentally what
looks like chaos, calling these fluctuations "chaos" is not exactly a leap of faith. Indeed, the very
possibility of a role for immunological chaos is pregnant with psychological suggestions. What
about chaos in the human memory network?

Chaos in the immune network may, for example, be caused by two antibody types that
partially match each other. The two continually battle it out, neither one truly prevailing; the
concentration of each one rising and falling in an apparently random way. Does this process not
occur in the psyche as well? Competing ideas, struggling against each other, neither one ever
gaining ascendancy?
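A minimal cartoon of this competition is a pair of weakly coupled chaotic maps. The equations below are not the de Boer-Perelson immune model; the logistic parameter R and the coupling eps are invented, chosen only to keep both variables fluctuating irregularly without either one prevailing:

```python
R = 3.9          # logistic parameter in the chaotic regime

def coupled_step(x, y, eps=0.1):
    """Each population grows logistically and is weakly perturbed by
    the other -- a cartoon of two competing antibody types (or two
    rival beliefs), neither of which ever gains lasting ascendancy."""
    fx, fy = R * x * (1 - x), R * y * (1 - y)
    return (1 - eps) * fx + eps * fy, (1 - eps) * fy + eps * fx

x, y = 0.3, 0.6
history = []
for _ in range(2000):
    x, y = coupled_step(x, y)
    history.append(x)
print(min(history), max(history))  # the concentration keeps rising and falling
```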

To make the most of this idea, one must recall the basics of the dual network model.
Specifically, consider the interactions between a set (say, a pair) of processes which reside on
one of the lower levels of the perceptual-motor hierarchy. These processes themselves will not
generally receive much attention from processes on higher levels -- this is implicit in the logic of
multilevel control. But, by interacting with one another in a chaotic way, the prominences of
these processes may on some occasions pseudorandomly become very large. Thus one has a
mechanism by which pseudorandom samples of lower-level processes may put themselves forth
for the attention of higher-level processes. And this mechanism is enforced, not by some
overarching global program, but by natural self-organizing dynamics.

This idea obviously needs to be refined. But even in this rough form, it has important
implications for the psychology of attention. If one views consciousness as a process residing on
the intermediate levels of the perceptual-motor hierarchy, then in chaos one has a potential
mechanism for pseudorandom changes in the focus of attention. This ties in closely with the
speculation of Terry Marks (1992) that psychological chaos is the root of much impulsive behavior.


I have been talking about beliefs "attacking" one another. By this I have meant something
rather indirect: one belief attacks another by giving the impression of being more efficient
than it, and thus depriving it of the opportunity to be selected by higher-level processes. One
way to think about this process is in terms of the "antimagician" systems of Chapter Seven.

Also, I have said that belief systems may be viewed as component-systems, in which beliefs
act on other beliefs to produce new beliefs. But I have not yet remarked that the process of
beliefs destroying other beliefs may be conceived in the same way. When beliefs B and C are
competing for the attention of the same higher-level process, then each time one "unit" of B is
produced it may be said that one "unit" of anti-C is produced. In formal terms, this might be
guaranteed by requiring that whenever f(g) = B, f(g,B) = C^. According to this rule, unless f and
g vanish immediately after producing B, they will always produce one unit of anti-C for each
unit of B.
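As a bookkeeping sketch, this rule can be simulated directly. The code below collapses the two applications f(g) = B and f(g, B) = C^ into one step, and lets each unit of anti-C annihilate a unit of C; the initial populations and the firing schedule are invented for illustration:

```python
from collections import Counter

def fire(pop):
    """While f and g persist, each firing produces one unit of B and
    one unit of anti-C; the anti-C annihilates a unit of C if any
    remains, and otherwise lingers in the population."""
    if pop["f"] and pop["g"]:
        pop["B"] += 1
        if pop["C"] > 0:
            pop["C"] -= 1          # anti-C consumes a unit of C
        else:
            pop["anti-C"] += 1     # nothing left to destroy
    return pop

pop = Counter(f=1, g=1, C=3)
for _ in range(5):
    pop = fire(pop)
print(pop["B"], pop["C"], pop["anti-C"])   # 5 0 2: C has been eliminated
```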

The relationship between C and C^ strengthens the immunological metaphor, for as I have
shown each antibody class has an exact complement. In the immune system, an antibody class
and its complement may coexist, so long as neither one is stimulated to proliferate above the
threshold level. If one of the two complements exceeds the threshold level, however, then the
other one automatically does also. And the result of this is unpredictable -- perhaps periodic
variation, perhaps disaster for one of the classes, or perhaps total chaos.

Similarly, B and C may happily coexist in different parts of the hierarchical network of mind.
The parts of the mind which know about B may not know about C, and vice versa. But then, if C
comes to the attention of a higher-level process, news about C is spread around. The processes
supervising B may consider giving C a chance instead. The result may be all-out war. The
analogue here is not precise, since there is no clear "threshold" in psychodynamics. However,
there are different levels of abstraction -- perhaps in some cases the jump from one of these
levels to the next may serve as an isomorph of the immunological threshold.

Anyhow, the immunological metaphor aside, it is clear that the concept of an "antimagician"
has some psychological merit. Inherently, the dynamics of belief systems are productive and not
destructive. It is the multilevel dynamics of the dual network which provides for destruction.
Space and time constraints dictate that some beliefs will push others out. And this fact may be
conveniently modeled by supposing that beliefs which compete for the attention of a supervisory
process are involved with creating "anti-magicians" for one another.

Indeed, recalling the idea of "mixed-up computation" mentioned in Chapter Seven, this
concept is seen to lead to an interesting view of the productive power of belief systems. Belief
systems without antimagicians cannot compute universally unless their component beliefs are
specifically configured to do so. But belief systems with antimagicians can compute universally
even if the beliefs involved are very simple and have nothing to do with computation. It appears
that, in this case, the discipline imposed by efficiency has a positive effect. It grants belief
systems the automatic power of negation, and hence it opens up to them an easy path toward the
production of arbitrary forms.

For instance, consider the following simple collection of beliefs:

A: I believe it is not a duck

B: I believe it is a duck

C: I believe it walks like a duck

D: I believe it quacks like a duck

E: I believe it is a goose

The mind may well contain the following "belief generation equations":

F(F) = F

F(C,D) = B

B(B) = B

G(G) = G

G(E) = B^

The self-perpetuating process F encodes the rule "If it walks like a duck, and quacks like a duck,
it should probably be classified as a duck." The self-perpetuating process B encodes the
information that "it" is a duck, and that if it was classified as a duck yesterday, then barring
further information it should still be a duck today. And, finally, the self-perpetuating process G
says that, if in fact it should be found out that "it" is a goose, one should not classify it as a duck,
irrespective of the fact that it walks like a duck and quacks like a duck (maybe it was a goose
raised among ducks!).
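This duck system is small enough to run. In the sketch below each rule maps a set of present beliefs to a produced belief, and "B^" annihilates B; the self-perpetuation equations F(F) = F, B(B) = B and G(G) = G are modeled implicitly, by letting beliefs and rules persist from step to step:

```python
RULES = [
    ({"C", "D"}, "B"),    # F: walks like a duck + quacks like a duck => duck
    ({"B"}, "B"),         # B: once a duck, still a duck, barring new information
    ({"E"}, "B^"),        # G: it is a goose => produce the antimagician B^
]

def step(beliefs):
    """One round of belief generation followed by annihilation."""
    produced = {out for inputs, out in RULES if inputs <= beliefs}
    beliefs = beliefs | produced
    if "B^" in beliefs:           # B^ destroys B, and is consumed doing so
        beliefs -= {"B", "B^"}
    return beliefs

s = step({"C", "D"})
print("B" in s)                   # True: the default classification fires
s = step(s | {"E"})
print("B" in s)                   # False: the goose observation overrides it
```

Running step again re-produces B and B^ together, and they annihilate once more: the override is maintained dynamically rather than stored, in the spirit of the antimagician systems of Chapter Seven.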

The entity F performs conjunction; the entity G performs negation. Despite the whimsical
wording of our example, the general message should be clear. The same type of arrangement can
model any system in which certain standard observations lead one to some "default"
classification, but more specialized observations have the potential to overrule the default
classification. The universal computation ability of antimagician systems may be rephrased in
the following form: belief systems containing conjunctive default categorization, and having the
potential to override default categorizations, are capable of computing anything whatsoever.
Belief systems themselves may in their natural course of operation perform much of the
computation required for mental process.


Now, in this final section, I will turn once again to the analysis of concrete belief systems. In
Chapter Eight I considered one example of intense internal conspiracy -- Jane's paranoid belief
system. But this may have been slightly misleading, since Jane's belief system was in fact an
explicit conspiracy theory. In this section I will consider a case of internal and structural conspiracy.

