

CRUCIAL CONNECTIONS

Everything is related to everything else; in fact, if properly perceived, any one thing can be
seen to contain everything else. This interpenetration, however, need not act as a hindrance to
thinking about the overall nature of the world. One must merely pick some concept as a starting
point, arbitrarily, and take it where it leads. The deeper one digs into one's initial concept, the
more of the interconnected web of ideas one will uncover.

Our main concerns so far have been logic, language, and their roles in the mental network. In
this chapter, the scope of the discussion will broaden, almost to the point of disorganization (but
not quite). I will consider language in its connection to deductive thought, consciousness,
evolution, and physical reality. But this does not represent a digression or a change of subject: it
is merely a matter of delving deeper into the nature of language, so deep that one encounters
these other issues as well.

The connections drawn in this chapter will be essential to the rest of the book. I will pose the
crucial question of how language, logic and consciousness conspire with memory to create self,
intuition and reality. The "final" resolution of these questions will wait until the final chapter,
when ideas regarding belief systems and cognitive dynamics can be drawn into the picture. But
with the mere posing of the question, half the work is done.

6.1. THE WHORF CONTROVERSY




I have defined communication as the use of language to mold the world. But I have not yet
probed the difficult question of just how useful language is. The "Sapir-Whorf hypothesis," also
known as the hypothesis of linguistic determinism, suggests that the influence of communication
is very great indeed. It claims that language is the main constructive force underlying the world
that we see around us.

In this section I will give a new perspective on linguistic determinism. I will argue that, when
viewed in a sufficiently abstract way, linguistic determinism is a natural consequence of the
structure of mind. This does not imply that spoken language is responsible for every aspect of the
world you see in front of you -- but it does mean that the maintenance of the belief systems
which we call "self" and "external reality" would be impossible without the aid of sophisticated
linguistic systems.

As has often been observed, the Sapir-Whorf hypothesis may be divided into two separate
parts. First, the idea that the structure of language is closely related to the structure of mind and
"subjective" reality. Second, the idea that the structural differences between the languages of
different cultures are sufficiently large to imply that these different cultures have significantly
different "subjective" realities.

The first claim is the central one. The second claim implies the first. If one demonstrates that
cultures think differently because they use language differently, then one has demonstrated a
fortiori that language determines thought. But suppose it turned out that cross-cultural
differences in language and thought were small or uncorrelated -- this would speak against the
second claim, but not the first.

Most of the criticism of Whorf's work, however, has centered on his particular arguments for
the second claim, which are less theoretical and more empirical. The statistical work of Lucy
(1987), Bloom (1981) and others shows that grammatical patterns do influence patterns of
attention, memory and classification to a certain extent. However, Whorf seems to have
exaggerated this extent somewhat. He may well have underestimated the degree of commonality
between the language, logic and world-view of an aborigine and the language, logic and world-
view of a New Yorker. For a concrete example of Whorfian thought, consider that, in English,
we call words like "lightning, spark, wave, eddy, pulsation, flame, storm, phase, cycle, spasm,
noise, emotion" nouns. Even though they refer to temporary phenomena, we tend to think of
them as definite entities, and this is probably related to the way our language treats them.

In the Hopi language, 'lightning, wave, flame, meteor, puff of smoke, pulsation' are verbs --
events of necessarily brief duration cannot be anything but verbs. 'Cloud' and 'storm' are at about
the lower limit of duration for nouns. Hopi, you see, actually has a classification of events (or
linguistic isolates) by duration type, something strange to our modes of thought.

Based on this analysis, I would bet that Whorf is correct to hypothesize that a Hopi monolingual
will tend to classify events by duration, whereas an English monolingual will only do so to a
lesser degree. This is in line with the relatively conservative quasi-Whorfism of Lakoff (1987),
Searle (1983), etc.






Thus a Hopi monolingual will be less likely than an English monolingual to think about waves
by analogy to particles, or to think about meteors as falling objects. And some of the analogies
and correspondences that come naturally to a Hopi monolingual, will take longer to come to an
English monolingual. All this does not mean that there are ideas which are forbidden to a person
by the "decree" of her language. But, as argued extensively in The Structure of Intelligence,
analogy guides the mind in its every move. It is the reason for the structure of memory. To
influence analogy is to influence cognition, memory and behavior.

6.1.1. The Trouble with Translation

Emily Schultz (1990, p. 25) has suggested that Whorf intentionally overestimated the degree
of variance between languages, and the degree of control which language exerts over thought
processes. Had he not done this, she claims, he would not have been so easily able to convince
his audience of the essential dependence of thought on language. Parts of the following analysis
of Whorf's ideas are inspired by the excellent discussion given in Schultz (1990).

To fully understand the debate over Whorf's ideas, one should really read his essays, most of
which are not at all difficult. But, to get some sense of the problem, let us listen to Au (1983,
182-183), an ardent anti-Whorfian:

Many French teachers have told their English-speaking students that "Comment allez-vous?"
which is literally "How go you?" actually means "How are you?" ... I wonder if some day an
Apache speaker will tell us that Whorf's English translation, "as water, or springs; whiteness
moves downward" actually means "It is a dripping spring"; and if a Shawnee speaker will one
day tell us that "direct a hollow moving dry spot by movement of tool" actually means "cleaning
a gun with a ramrod."

Au is obviously misleading us here: there is no way that his French example is analogous to his
Apache and Shawnee examples. "How go you?" is not that far off from "How's it going?", which
American English speakers recognize as being very similar in meaning to "How are you?" So the
difference between French and English in the instance which Au gives us is very little indeed. It
is unlikely that the difference between Hopi and English in describing a dripping spring is as
little as the difference between French and English in this given example -- after all, French and
English are closely related, and English and Apache are rather unrelated as languages go.

The "dripping spring" passage in Whorf [p.241] goes as follows:

We might isolate something in nature by saying "it is a dripping spring." Apache erects the
statement on a verb ga: "be white (including clear, uncolored, and so on)." With a prefix no- the
meaning of downward motion enters: "whiteness moves downward." Then to, meaning both
"water" and "spring" is prefixed. The result corresponds to our "dripping spring," but
synthetically it is "as water, or springs, whiteness moves downward." How utterly unlike our
way of thinking!

Hoijer (1953, p.559) has given a slightly different and very penetrating analysis of this phrase
"tonoga" or "tonoogah":




Dripping Springs, a noun phrase, names a spot in New Mexico where the water from a spring
flows over a rocky bluff and drips into a small pool below; the English name, it is evident, is
descriptive of one part of this scene, the movement of the water. The Apache term is, in contrast,
a verbal phrase and accentuates quite a different aspect of the scene. The element to, which
means "water," precedes the verb "noogah," which means, roughly, "whiteness extends
downward." Tonoogah as a whole, then, may be translated "water-whiteness extends
downward," a reference to the fact that a broad streak of white limestone deposit, laid down by
the running water, extends downward on the rock.

Note that Whorf has "moves" where Hoijer has the less active "extends". Also, note that although
Hoijer emphasizes that tonoogah refers to limestone, he does not say that it refers only to
limestone and not at all to water -- if it did not refer to the moving water at all, its classification
as a verbal phrase would need some explanation.

Hoijer's analysis is actually more interesting than Whorf's: it points out that the Apache and
the English are looking at different aspects of the same physical situation. To use the notation
introduced in Chapter Two, Dripping Springs / average-English-speaker and Dripping
Springs / average-Apache-speaker are not the same entity.

Depending on which language she uses, a person will tend to look at and to remember
different aspects of Dripping Springs. Dripping Springs is more likely to be connected to
white things in the mind of an Apache speaker than in the mind of an English speaker.

In some cases Whorf may indeed have been guilty of exaggerating the differences between
Amerindian and Indo-European languages. But the matter is not so simple as Au and the other
critics believe. Translation is always problematic, even between similar languages but especially
between dissimilar ones. Of the Tao te Ching, G. Spencer-Brown (1972) writes

I possess some half-dozen or so of the forty-odd translations into English alone. They differ
widely because the Chinese language is so powerful that any 'translation' into a western language
provides only one of the many possible interpretations of the original. Chinese is a pictorial
language, very poetical and mathematical, with no grammar and no parts of speech.

Whether or not you accept Spencer-Brown's assessment of the "power" of Chinese, it is
indisputable that a large number of Chinese scholars, mostly competent and with no particular ax
to grind, have produced rather different translations of the same very simple work. Chinese
seems to permit an ambiguity that cannot be directly translated into English; when translating,
one has to pick one of the several possible meanings. Of course, the ambiguity could be more
accurately transmitted by providing a list of possible interpretations instead of just one, but there
is a big psychological difference between a list of statements with varying meanings and a brief
statement with a variety of intrinsic meanings. The latter conveys the interconnectedness of the
various meanings in a direct way that the former cannot match.

This is not to say that the monolingual American reader of the Tao te Ching can never get a
sense for the inter-relatedness of the various meanings contained in the original Chinese. It is just
to say that she will have to work a little harder to get such a sense, that such a sense will tend to
come more naturally to someone who reads the original Chinese. And the monolingual American
reader will have an easier time getting this sense if she reads several different translations.

So, translation between disparate languages is a genuine problem. If Whorf made Hopi and
Apache sound very different from English, but someone else can provide translations that make
Hopi and Apache sound more similar to English, what does that tell us? That one of them was
right, and the other wrong? Who's to say that every Amerindian expression has one true
meaning that can be formulated in one simple English expression? In Chapter Five I presented a
semantical theory which indicates that meaning is indeed not this simple: that the meaning of
even a simple word can be complex and hard to specify precisely.

So it is hard to say whether Whorf translated "accurately" or not. His translations were never
blatantly inaccurate; they were always within the bounds of plausibility in that they maintained
the commonsense meanings of the expressions involved. But what if it were true that Whorf
overemphasized certain aspects of the meanings of Amerindian expressions -- namely those
aspects that he felt would seem most alien to average American readers? From his interpretations
he judged that Apache-, Hopi- or Shawnee-speaking Amerindians tend to think about things
differently than English-speaking Americans. From other interpretations one might not conclude
this. If both interpretations have some degree of validity, then the proper conclusion is that these
Amerindians do tend to think about things differently than Americans, but probably rather less
so than Whorf believed. For the semantic differences which Whorf pointed out are there, they
are just not as important as Whorf thought, because they do not exhaust the meanings of the
Amerindian expressions in question.

Thought is influenced by all aspects of the meanings of the words and sentences it uses; it is
not controlled by any of them. The view of meaning as a fuzzy set of patterns makes this point
particularly clear. Whorf focused on certain subsets of the meaning-sets of Amerindian words,
chosen for interest and shock value. Others claim that these subsets are not as important as
Whorf thought; they argue, in effect, that the subsets which Whorf identified have small degrees
of membership in the meaning fuzzy sets of the words and sentences he translated. But unless the
degrees involved are truly negligible, which seems highly unlikely, this sort of quibble does not
have much force against Whorf's general theory of language and mind.
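
To make this fuzzy-set picture concrete, here is a minimal sketch in Python. The pattern names and membership degrees below are invented for illustration; they stand in for the kind of meaning-sets discussed above, not for any actual linguistic data.

    # Toy model: the meaning of a word as a fuzzy set of patterns,
    # i.e. a map from patterns to degrees of membership.
    # All names and numbers here are hypothetical.
    meaning = {
        "water moving downward": 0.8,         # commonsense core
        "whiteness extending downward": 0.5,  # aspect Whorf stressed
        "limestone streak on rock": 0.4,      # aspect Hoijer stressed
    }

    def emphasis(meaning, kept):
        """Fraction of the total membership that a translation
        preserves when it keeps only the subset `kept`."""
        total = sum(meaning.values())
        return sum(meaning[p] for p in kept if p in meaning) / total

    # A translation that foregrounds only the "alien" subset captures
    # a genuine but partial share of the meaning:
    print(emphasis(meaning, ["whiteness extending downward"]))  # ~0.29

On this picture, the dispute over Whorf's translations is a dispute about the sizes of these degrees, not about whether the patterns he singled out belong to the meaning-sets at all.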

6.1.2. Chinese and Western Modes of Thought

Some of the most intriguing evidence in favor of the Sapir-Whorf hypothesis may be found in
a little book by Alfred Bloom, entitled The Linguistic Shaping of Thought (1981). This book
dispels two illusions at once: first, the idea that the Sapir-Whorf hypothesis is empirically false;
second, the idea (which one might get, for example, from Lucy (1987)) that the Sapir-Whorf
hypothesis is true, but only in ways that are philosophically and psychologically uninteresting.
For example, Bloom reports that

In 1972-73, while I was in Hong Kong working on the development of a questionnaire designed
to measure levels of abstraction in political thinking, I happened to ask Chinese-speaking
subjects questions of the form, "If the Hong-Kong government was to pass a law requiring that
all citizens born outside of Hong Kong make weekly reports of their activities to the police, how
would you react?".... Rather unexpectedly and consistently, subjects reacted "But the government
hasn't," "It can't," or "It won't." I tried to press them a little by explaining, for instance, that "I
know the government hasn't and won't, but let us imagine that it does or did...." Yet such
attempts to lead the subjects to reason about things that they knew could not be the case only
served to frustrate them and to lead to such exclamations as "We don't speak/think that way!,"
"It's unnatural," "It's unChinese!" Some subjects with substantial exposure to Western languages
and culture even branded these questions and the logic they imply as prime examples of
"Western thinking." By contrast, American and French subjects, responding to similar questions
in their native languages, never seemed to find anything unnatural about them and in fact readily
indulged in the counterfactual hypothesizing they were designed to elicit.

The unexpected reactions of the Chinese subjects were intriguing, not only because of the
cross-cultural cognitive differences they suggested, but also because the Chinese language does
not have structures equivalent to those by which English and other Indo-European languages
mark the counterfactual realm.

In giving a routine political questionnaire, Bloom stumbled upon an apparent parallel between
patterns of language and patterns of thought.

Subsequent empirical tests verified Bloom's original intuition. Given the same stories to read,
Chinese students were far less likely than American students to place a counterfactual
interpretation upon them. For example, given information of the form "The philosopher Bier, if
he had come into contact with X, would have done Y," Chinese students were far more likely to
assume that Bier had done things related to Y.

Of course, Bloom is not proposing that Chinese speakers cannot reason counterfactually. He
gives examples of counterfactual statements in Chinese. Compared to their Indo-European
counterparts, however, these are protracted and awkward. The point is that thinking
counterfactually is much easier for us than for the Chinese, because our language provides us
with ready-made schemas for doing so.

These results are surprising and tremendously important. When I first read of them, my
reaction was utter disbelief. After all, every Chinese mathematician uses reductio ad absurdum,
a theorem-proving strategy which is explicitly counterfactual in nature. Obviously Chinese
mathematicians develop a mental "schema" for applying counterfactual reasoning to
mathematical statements.

But, after putting variants of Bloom's original survey question to several Chinese
mathematicians of my acquaintance, I became a believer. My informal survey indicated that
Chinese people, even those who speak reasonable English, are simply not comfortable thinking
counterfactually about commonplace situations. Counterfactual reasoning in mathematical proofs
would seem to be, psychologically, a different "routine" from counterfactual reasoning regarding
politics and everyday life. This is an intriguing example of mental "modularization." Just as a
person who reasons logically about chess need not reason logically about her boyfriend's
activities, a person who reasons counterfactually in mathematics need not reason
counterfactually about commonplace real-world events.




Bloom also studied other, related differences between Chinese and Indo-European languages: for
instance, the use of articles, or the tendency to "entify" characteristics or acts into things
themselves by adding suffixes like "-ance," "-ity," "-ness," "-tion," "-age". In each case the result
is the same: the linguistic difference corresponds to a difference in interpreting events, as
measured by responses to simple surveys. Obviously all humans think alike to a large extent. But
there are scientifically demonstrable differences, which are not academic but rather closely
bound up with the interpretation of everyday events.

6.1.3. Contradictions and Loopholes

This brings us to another invalid argument often made against Whorf's ideas: that the very
concept of linguistic relativity is self-contradictory. After all, it is asked, if our thoughts and
perceptions are not based on objective reality but only on linguistic structures, then how can we
trust those thoughts and perceptions that led us to the concept of linguistic relativity in the first
place? Whorf is accused of asserting the objective truth of the impossibility of objective truth.

This argument is wrong for many reasons, the main one being that Whorf never actually made
such a strong statement for linguistic determinism. He always left loopholes in his statements --
using "largely" instead of "entirely," and so on.

Statements which at first glance seem very strong become, on closer consideration, somewhat
open-ended. For instance, consider Whorf's contention that

the world is presented in a kaleidoscopic flux of impressions which has to be organized by our
minds -- and this means largely by the linguistic systems of our minds. [p. 215]

Here there are two loopholes. First, "largely" -- what exactly does this imply? And then,
"linguistic systems" -- given the concept of an abstract "language of thought," and the fact that
Whorf has elsewhere called mathematics and music "quasilanguages," it is not clear exactly what
this phrase is supposed to mean.

Whorf just plain never claimed that language controls thought, unilaterally and absolutely.
And there is nothing paradoxical in the idea that linguistic structures are a big influence on our
thoughts and perceptions. Even big influences can potentially be overcome -- with hard work and
continual self-consciousness, or occasionally just by chance.

6.1.3.1. Language and Category

The misperception of Whorf as an extremist has caused many current researchers to distance
themselves from Whorf, while at the same time applying many of his ideas. Listen, for example,
to Searle (1983):

I am not saying that language creates reality. Far from it. Rather I am saying that what counts as
reality -- what counts as a glass of water or a book or a table, what counts as the same glass or a
different book or two tables -- is a matter of the linguistic categories that we impose on the
world.... And furthermore, when we experience the world, we experience it through categories
that help shape the experiences themselves. The world doesn't come to us already sliced up into
objects and experiences; what counts as an object is already a function of our system of
representation, and how we perceive the world in our experiences is influenced by that system of
representation. The mistake is to suppose that the application of language to the world consists of
attaching labels to objects that are, so to speak, self-identifying. On my view, the world divides
the way we divide it.... Our concept of reality is a matter of our linguistic categories.

Searle's emphasis on "categories" is reminiscent of Lakoff's (1987) Women, Fire and Dangerous
Things, the title of which refers to an aboriginal language that groups women, fire and dangerous
things together under one categorical name. It also recalls Hilary Putnam's formal-semantic
theorem, to the effect that

'Objects' do not exist independently of conceptual schemes. We cut up the world into objects
when we introduce one or another scheme of description....

It has become acceptable in philosophical and anthropological circles to admit that language
guides our categorization of the world. If Whorf were still around, how would he react to this? I
suspect he would observe that categorization is just the simplest kind of patternment: that
language does guide the way we group things together, but it also guides our perceptions and
cognitions in subtler ways.

And Whorf might also be a bit amused to find the claim that "our concept of reality is a matter
of our linguistic categories" in the same essay as the statement that "I am not saying language
creates reality. Far from it." It would seem that contemporary thinkers like Searle find Whorfian
ideas useful, but they want to avoid controversy by marking a sharp distinction between "our
concept of reality" and "reality." What difference does this phenomenal/noumenal distinction
make, in practice?



6.1.3.2. Whorf on Culture

So far I have defended Whorf against his critics. However, I must admit that on some issues
Whorf went too far even for me. For instance, Whorf probably would not have agreed with the
ideas about language and culture sketched in Section 2.7 above. He supposed that written and
spoken languages, along with "quasilanguages" like music and mathematics, had a special power
and coherence lacked by belief systems such as those inherent in culture. Regarding the
interconnection between linguistic, social and psychological realms, he wrote:

How does such a network of language, culture and behavior come about historically? Which
was first: the language patterns or the cultural norms? In the main they have grown up together,
constantly influencing each other. But in this partnership the nature of the language is the factor
that limits free plasticity and rigidifies channels of development in the more autocratic way. This
is so because a language is a system, not just an assemblage of norms. Large systematic outlines
can change to something really new only very slowly, while many other cultural innovations are
made with comparative quickness. Language thus represents the mass mind; it is affected by
inventions and innovations, but affected little and slowly, whereas to inventors and innovators
it legislates with the decree immediate. (p. 156)

Even the most unsophisticated reader would be unlikely to miss the ambivalence of this
passage. In the beginning of the paragraph, "in the main they [language, culture and behavior]
have grown up together." But by the end of the paragraph, language is "affected little and
slowly," whereas language "legislates [to culture and behavior] with the decree immediate."
Which is it? Is it coevolution between two systems of roughly equal complexity, or is it the
adaptation of a relatively simple system to a much more complex one, with relatively little
influence in the opposite direction?

In the end Whorf adopts what I would call a strict Darwinist point of view (see The Evolving
Mind for a great deal more on strict Darwinism). Many evolutionary biologists believe that one
cannot analyze evolution without taking into account the fact that the environment of an
organism -- consisting as it does of other evolving organisms -- evolves along with the organism,
adapting to the organism at the same time as the organism adapts to it. Some, such as James
Lovelock (1988), even believe that the physical environment evolves to match the organisms
which simultaneously evolve to match it. In contrast to these points of view, the strict Darwinists
believe that each organism evolves independently, stringently influenced by the systematic
structure and dynamics of its environment but having very little influence upon its environment.
Whorf looks at cultural and behavioral patterns in the same way that strict Darwinism looks at
organisms: helpless in the face of the awesome power of their environment, their only option is
effective accommodation.

Unlike Whorf, I do not agree that cultural and behavioral systems are "just an assemblage of
norms." Far from it. The whole field of social psychology speaks against this supposition. These
systems are indeed collections of norms, but collections full of subtle interconnections and
interdefinitions.

As to their effect on human existence, compared to the effect of language on human existence,
here again I must differ with Whorf. Language's effect may be subtler and in some ways
deeper, but the influence of cultural and behavioral systems is much more direct.

Spoken language encodes basic background assumptions that subtly guide our analogies. It
thus plays a role throughout the mind -- in the language of Chapter Three, at virtually every level
of the dual network, in virtually every cluster of processes (only the very lowest levels are
exempt). But systems of other kinds guide our analogies as well, perhaps not quite so subtly or
pervasively, but in many cases more powerfully. Belief systems about the nature of social and
physical reality, or particular aspects thereof, guide our analogies very strongly.

And, finally, it is worth noting that even behavior systems can sometimes guide our cognitive
processes. For when we adopt a certain role, put on a certain "performance," we associate things
that we would not associate otherwise; and the mind is very good at recognizing and storing
associations. This is a relationship which deserves much more attention than it has received.

6.2. LANGUAGE, CONSCIOUSNESS, SERIALITY




The dual network model, as outlined in Chapter Three, is a high-level "wiring diagram" for
intelligent systems. But it sidesteps the question: where does consciousness fit in? In The
Structure of Intelligence, consciousness is modeled as a process that moves from level to level of
the multilevel control hierarchy, but only within a certain restricted range. If the zero level is
arbitrarily selected to represent the "average" level of consciousness, then we may say
consciousness resides primarily on levels from -L to U. The levels below -L represent
perceptions that are generally below conscious perception. Consciousness is at a distance from
the lowest levels of the hierarchy, which represent "sense data" -- it deals only with constructions
of at least moderate complexity. And, on the other hand, the levels above U represent perceptions
that are in some sense beyond conscious perception: too abstract or general for consciousness to
encompass.
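
As a minimal sketch of this "banded" picture, consider the following Python fragment; the particular values of L and U are arbitrary parameters chosen for illustration, not claims from the model.

    # Hypothetical parameters: consciousness spans levels -L..U
    # around the "average" conscious level, which is fixed at 0.
    L, U = 2, 3

    def in_conscious_band(level):
        """True if a process at this level of the control hierarchy
        falls within the range consciousness can reach."""
        return -L <= level <= U

    for level in range(-5, 6):
        if in_conscious_band(level):
            tag = "accessible to consciousness"
        elif level < -L:
            tag = "below awareness (raw sense data)"
        else:
            tag = "above awareness (too abstract or general)"
        print(f"level {level:+d}: {tag}")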

This theory of consciousness is similar in some respects to Jackendoff's (1986) "intermediate
level" theory of consciousness, which states that consciousness corresponds to mental
representations that lie midway between the most peripheral, sensory level and the most
"central," thoughtlike level. Jackendoff points out that his idea

goes against the grain of the prevailing approaches to consciousness, which start with the
premise that consciousness is unified and then try to locate a unique source for it. [My theory]
claims that consciousness is fundamentally not unified and that one should seek multiple sources.
[p.52]

Consciousness is not in one place; it is rather associated with a collection of processes that occur
in intermediate levels of the psychological hierarchy.

6.2.1. Dennett's Computationalist "Explanation"

We have located consciousness in the dual network. But we have not said what it is. What
tasks does it accomplish, and what does it depend on? One intriguing hypothesis in this direction
is supplied by Daniel Dennett, in his book Consciousness Explained.

A "meme" is defined as a sociocultural pattern, passed along from generation to generation.
Dennett believes that consciousness is a meme rather than something intrinsic to the structure of
the brain. He proposes that

Human consciousness is itself a huge complex of memes (or, more exactly, meme-effects in
brains) that can best be understood as the operation of a "von Neumannesque" [serial] virtual
machine implemented in the parallel architecture of a brain that was not designed for any such
activities. The powers of this virtual machine vastly enhance the underlying powers of the
organic hardware on which it runs, but at the same time many of its most curious features, and
especially its limitations, can be explained as the byproducts of the kludges that make possible
this curious but effective reuse of an existing organ for novel purposes.

What is the intuition underlying this radical hypothesis? Thinking of the streams of
consciousness that permeate James Joyce's fiction, Dennett gives this "von Neumannesque"
serial machine the alternate label "Joycean machine." And, subjectively, in most states of mind at
any rate, consciousness does seem to flow like a stream rather than an ocean: all in one direction,
one thought after another.

"I am sure you want to object," Dennett writes, that "[a]ll this has little to do with
consciousness! After all, a von Neumann machine is entirely unconscious: why should
implementing it ... be any more conscious?" But this objection does not faze him:

I do have an answer: The von Neumann machine, by being wired up from the outset that way,
with maximally efficient informational links, didn't have to become the object of its own
elaborate perceptual systems. The workings of the Joycean machine, on the other hand, are just
as "visible" and "audible" to it as any of the things in the external world that it is designed to
perceive -- for the simple reason that they have much of the same perceptual machinery focused
on them.

Now this appears to be a trick with mirrors, I know. And it certainly is counterintuitive, hard-
to-swallow, initially outrageous -- just what one would expect of an idea that could break
through centuries of mystery, controversy and confusion.

In response to the question of what good this complex meme called consciousness does us,
Dennett quotes Margolis (1987) to the effect that

a human being ... cannot easily or ordinarily maintain uninterrupted attention on a single problem
for more than a few tens of seconds. Yet we work on problems that require vastly more time. The
way we do that ... requires periods of mulling to be followed by periods of recapitulation,
describing to ourselves what seems to have gone on during the mulling, leading to whatever
intermediate results we have reached.... [B]y rehearsing these interim results ... we commit them
to memory, for the immediate contents of the stream of consciousness are very quickly lost
unless rehearsed.... Given language, we can describe to ourselves what seemed to occur during
the mulling that led to a judgement, produce a rehearsable version of the reaching-a-judgement
process, and commit that to long-term memory by in fact rehearsing it.

This is nothing more than good common sense. It is well known that consciousness cannot
contain more than around seven entities at one time. Therefore, most of the regularities present
in the mind cannot enter directly into consciousness. But by use of language,
complex phenomena can be encapsulated in simple statements, and thus presented to
consciousness. If the unconscious "wishes" to present something to consciousness, it must
translate some approximation of this thing into simple terms, let consciousness work with the
simplified expression, and then afterwards translate back. Language is the number one tool for
this kind of translation.
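
A toy Python sketch of this translate-then-work-then-translate-back cycle follows; the capacity of seven and the dictionary-based "translation" are simplifying assumptions, used only to mirror the argument.

    # Toy model: consciousness as a buffer of ~7 slots, fed compressed
    # linguistic labels rather than the raw regularities themselves.
    CAPACITY = 7

    def compress(regularities, label):
        """Unconscious-to-conscious translation: stand a large set of
        regularities behind a single linguistic label."""
        return label, {label: regularities}

    regularities = [f"pattern-{i}" for i in range(100)]
    label, lookup = compress(regularities, "the economy")

    buffer = [label]                # one slot instead of a hundred
    assert len(buffer) <= CAPACITY  # fits within conscious capacity

    expanded = lookup[label]        # "translate back" after conscious work
    print(label, len(expanded))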

6.2.2. Consciousness, Virtual Seriality, and Language

The dual network is intrinsically parallel, but it is possible for a process or group of processes
within the dual network to repeatedly feed itself its own output as input, thus creating a miniature
virtual serial machine, temporarily ignorant of the massively parallel processing going on all
around it. The dual network may in many cases connect A and B, and have A and B repeatedly
exchange the results of computations without consulting any other processes -- this is virtual
seriality, where one's "serial machine" consists of A and B together.
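
Here is a minimal sketch of such virtual seriality, assuming two placeholder processes A and B; the arithmetic inside them is arbitrary, chosen only to make the feedback loop concrete.

    # Sketch: a virtual serial machine made of two processes, A and B,
    # each repeatedly feeding the other its output and consulting no
    # other process in the (conceptually parallel) network.

    def process_a(x):      # placeholder computation
        return x + 1

    def process_b(x):      # placeholder computation
        return x * 2

    def virtual_serial_machine(x, steps):
        """Run the closed A-B feedback pair for a fixed number of
        steps; the pair behaves as a single serial machine."""
        for _ in range(steps):
            x = process_b(process_a(x))
        return x

    print(virtual_serial_machine(1, 5))  # a deterministic serial trace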

I don't completely buy Dennett's computationalist treatment of consciousness. However, I do
agree with him that there is a very close connection between consciousness and virtual serial
processing.

In Chapter Three we reviewed two important uses for virtual serial processing: making logical
deductions, and predicting complex systems by simulation. A few pages above we discussed
another, related use: general linguistic deduction. Subjectively, these actions are all closely
connected with consciousness.

Margolis, in the quote given above, has eloquently presented the phenomenological case for
the relevance of consciousness to linguistic deduction. In order to compute high-depth elements
of D(I,T) for standard linguistic and logical systems, we need to use a complex combination of
serial conscious thought and analogical/associative-memory thought. Introspectively, neither one
process alone appears to suffice.

And the phenomenological connection between consciousness and prediction is no less direct.
Suppose one wants to determine the likely consequences of a given action. One may intuit, in a
semi-conscious flash, some guess as to the answer. But in order to be sure, one will reason it out
slowly and carefully: what will be the immediate consequences, then the consequences of these
consequences, and so forth. Almost all prediction is purely unconscious: but when situations get
too uncertain, when they deviate too far from past experience, then consciousness has to
intervene to deal with things serially, by approximate simulation. In other words, walking down
the street, one chooses a path unconsciously. But leaping through a stream from one rock to the
next, one chooses one's path consciously, weighing each choice in terms of the array of future
choices that it will lead to.
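
The stream-crossing example amounts to a shallow lookahead search. The sketch below makes this explicit; the rocks, the reachability graph, and the scoring rule are all invented for illustration.

    # Sketch: conscious serial prediction as lookahead. Each candidate
    # rock is weighed by the array of future choices it leads to.
    reachable = {                 # hypothetical stream layout
        "bank": ["r1", "r2"],
        "r1": ["r3"],
        "r2": ["r3", "r4"],
        "r3": ["far bank"],
        "r4": ["far bank"],
        "far bank": [],
    }

    def futures(rock, depth):
        """Count the states reachable within `depth` leaps -- a crude
        stand-in for simulating consequences of consequences."""
        if depth == 0:
            return 1
        return 1 + sum(futures(n, depth - 1) for n in reachable[rock])

    # Weigh each first leap by the futures it keeps open:
    best = max(reachable["bank"], key=lambda r: futures(r, 2))
    print(best)  # "r2", which leads to more onward choices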

In sum, according to Dennett's "computationalist" vision, consciousness is a phenomenon

1) closely related with,

2) on the same levels as, and

3) dealing largely with the output of

serial, linguistic processing. This conception of consciousness is all that is necessary to fit the
Sapir-Whorf hypothesis together with the pattern-theoretic analysis of language and mind. For it
leads to the conclusion that language helps to determine the world we consciously perceive.

6.3. NIETZSCHE ON CONSCIOUSNESS AND LANGUAGE

Dennett's consciousness-as-meme idea is not a new one, nor is his picture of consciousness as
linguistic deduction. His entire theoretical framework is, in fact, very similar to the view of
consciousness articulated by Friedrich Nietzsche in 1882:




... Man, like every living being, thinks continually without knowing it; the thinking that rises
to consciousness is only the smallest part of all this -- the most superficial and worst part -- for
only this conscious thinking takes the form of words, which is to say signs of communication,
and this fact uncovers the origin of consciousness.

In brief, the development of language and the development of consciousness (not of reason
but merely of the way reason enters consciousness) go hand in hand.... The emergence of our
sense impressions into our own consciousness, the ability to fix them and, as it were, exhibit
them externally, increased proportionately with the need to communicate them to others by
means of signs...

... [C]onsequently, given the best will in the world to understand ourselves as individually as
possible, "to know ourselves," each of us will always succeed in becoming conscious only of
what is not individual but "average"...

This is the essence of phenomenalism and perspectivism as I understand them: Owing to the
nature of animal consciousness, the world of which we can become conscious is only a surface-
and sign-world, a world that is made common.... (The Gay Science; 1968b)

Nietzsche interpreted the high degree of consciousness which we humans display as a socio-
cultural phenomenon, an exaggeration of animal consciousness which evolved together with
language -- which evolved, in short, as a meme. But his view of the utility of consciousness was
not quite so rosy as Dennett's. According to Nietzsche, only conscious thinking is forced into the
straitjacket of language, and for this precise reason conscious thinking is much less fertile than
unconscious thinking. Language is for social interaction, therefore that which can be put in the
form of language is precisely that which is common rather than that which is individual,
unusual, unique.

Yet one cannot conclude that Nietzsche felt linguistic, conscious thought to be unimportant
or useless. His attitude was much more complex than that. In a draft of a preface for his never-
written treatise The Will To Power, he wrote "This is a book for thinking, nothing else." But in
the notes for that very book, he wrote of thinking:

Language depends on the most naive prejudices....

We cease to think when we refuse to do so under the constraint of language; we barely
reach the doubt that sees this limitation as a limitation.

Rational thought is interpretation according to a scheme that we cannot throw off.
(p.283)

This is about as Whorfian a statement as one could ever hope to find. Nietzsche valued linguistic,
conscious, rational thought immensely -- for much of his life it was his only solace from physical
suffering. But he did not trust it, he did not see it as objective; he refused to treat it as a religion.

6.3.1. Imaginary Subjects




Whorf's work focused on the differences in world-view implied by differences in linguistic
structure. Nietzsche, on the other hand, saw certain very simple, very essential elements in
common to all languages, and perceived that they played an essential role in the construction of
the concept of an internal and an external world.

For instance, Whorf wrote of the way English, but not Hopi, refers to lightning as an object.
Nietzsche saw this objectification of non-objects -- crucial in the construction of the external
world -- not as a peculiar feature of some languages, but rather as a consequence of the one
central objectification involved in isolating the "self," the inner actor, as distinct from everything
else.

Our bad habit of taking a mnemonic, an abbreviative formula, to be an entity, finally as a
cause, e.g., to say of lightning "it flashes." Or the little word "I."

[H]itherto one believed, as ordinary people do, that in "I think" there was something of
immediate certainty, and that this "I" was the given cause of thought, from which by analogy we
understood all other causal relationships. However habitual and indispensable this fiction may
have become by now -- that in itself proves nothing against its imaginary origin: a belief can be a
condition of life and nonetheless be false. (p.268)

The self, the "I", is understood as the basis of the linguistic concept of subject, of actor. Thus
the construction of a self, and the construction of an external world, are perceived as closely
related, as emanating from the same fundamental principles. The concept of subject, in
Nietzsche's view, is a prime example of the subtle inter-connection of language and thought. Our
language assigns imaginary subjects to actions, and we correspondingly assign imaginary
subjects to actions in our conscious and near-conscious thinking; we construct an external world
based largely on subjects. And we postulate an imaginary entity called I, and attribute to this
subject a host of actions that are actually due to the independent and interactive behavior of a
number of different subsystems.

These "imaginary" subjects may be understood as the result of an overextended analogy. First,
events are correlated with other temporally prior events -- e.g. smoke is correlated with fire.
Then, it is observed that in many cases it is useful, and hence satisfying, to explain a large
number of different events in terms of one temporally prior entity. General concepts
like"weather," "hatred," "patriotism," and so forth arise, each one out of the desire to explain a
certain collection of effects with one entity. These concepts refer to definite collections of
specific phenomena; they are simply tools for thinking and remembering.

But then what happens is that, when something cannot be explained in detail, a general
concept is adduced as an "explanation." This is not always a mistake: given limited resources, a
mind cannot explain everything in detail. It must learn to recognize which things can be
explained in terms of well known ideas, and can be ignored until the pressing need to analyze
them arises, and which things are anomalous, requiring special attention so that trouble will not
occur when the need to analyze them arises. But it is a mistake sometimes: a general concept is
adduced as an explanation for a phenomenon to which it simply does not apply. Thus "it flashes"
for lightning.




"It bit me" is meaningful, it is a general explanation which could easily be backed up by a
detailed explanation. But "it flashes" is not: this is a general explanation which is really unrelated
to any detailed explanation. The only possible related detailed explanation would be of the form
"this and that combination of atomspheric phenomena flashes" -- but that is severely stretching
the concept of it, and in any case it is not the sort of explanation that would come naturally to the
mind of a non-meteorologist. "I did it" is problematic for the same reason "it flashes" is no good.
It is not just a shorthand for some detailed explanation ready at hand, it is an empty abstraction.

P.T. Geach, in Mental Acts, has made this point in a particularly eloquent way:

The word 'I', spoken by P.T.G., serves to draw people's attention to P.T.G.; and if it is not at once
clear who is speaking, there is a genuine question 'Who said that?' or 'Who is "I"?' Now, consider
Descartes brooding ... saying 'I'm getting into an awful muddle -- but then who is this "I" who is
getting into a muddle?' When 'I'm getting into a muddle' is a soliloquy, 'I' certainly does not serve
to direct Descartes' attention to Descartes, or to show that it is Descartes, none other, who is
getting into a muddle. We are not to argue, though, that since 'I' does not refer to the man Rene
Descartes it has some other, more intangible thing to refer to. Rather, in this context the word 'I'
is idle, superfluous, it is used only because Descartes is habituated to the use of 'I' in expressing
his thoughts and feelings to other people.

According to Whorf, this reification of the subject does not happen in Hopi and other non-
Indo-European languages. But on this point I must side with Nietzsche. The grammatical
manifestation of reification may vary from language to language, but I very strongly suspect that
every language postulates some form of imaginary acting entity. This, unlike use of
counterfactuals, emphasis on flux versus stasis, and other linguistically varying phenomena, is
absolutely essential to the concept of language. It is an instinctive application of analogical
reasoning to the act of naming on which all communication is based, and no culture can escape
from it. Humans cannot help but attach a certain amount of concrete reality to the symbols that
they use. We can, as Nietzsche suggested, fight this tendency, but this is a battle which no one
can ever completely win.

An interesting spin-off of this analysis of imaginary subjects is the theory that free will is an
emotion inspired by language. Nietzsche analyzed free will as

the expression for the complex state of delight of the person exercising volition, who commands
and at the same time identifies himself with the executor of the order -- who, as such, enjoys also
the triumph over obstacles, but thinks within himself that it was really his will itself that
overcame them. In this way the person exercising volition adds the feelings of delight of his
successful executive instruments, the useful 'underwills' or undersouls -- indeed, our body is but
a social structure composed of many souls -- to his feelings of delight as commander. L'effet
c'est moi: what happens here is what happens in every well-constructed and happy
commonwealth; namely, the governing class identifies itself with the successes of the
commonwealth. (1968, p.216)

The feeling of free will, according to Nietzsche, involves 1) the feeling that there is indeed an
entity called a "self", and 2) the assignation to this "self" of "responsibility" for one's acts.




In The Structure of Intelligence, "delight" and related emotions are given a pattern-theoretic
treatment. Following Paulhan, happiness is analyzed as the feeling of increasing order, increasing
interemergence and interconnectedness. Here, let us focus instead on the nature of the delight
involved. Free will is the special kind of happiness derived from a process attributing the
successes of its "servant" processes to itself -- in other words, it is an example of the joy of
making the postulate of an imaginary subject. And this postulate is linguistic in nature, so that
the connection between free will and consciousness is precisely as close as the relation between
language and consciousness.

6.4. A NEW THEORY OF CONSCIOUSNESS

So far, I have discussed some of the correlates of consciousness; but I have not explained
consciousness itself. To get at the true nature of consciousness, one must confront the feeling of
"raw existence" or "self-presence" that is the essence of what we call living.

This is a very difficult task, and I will approach it obliquely, by first looking at consciousness
through the medium of biology. The biological approach cannot give us the final answer to
what is fundamentally a psychological problem. But it will be remarkably useful in setting us on
the right path.

6.4.1. Consciousness as Perception

Consciousness is self-perception. And self-perception could, theoretically, be achieved in two
ways. First, by special "perception" routines used only for perceiving high-level mental
activities. Or second, by general "perception" routines that are also used for something else.
Evolutionary thinking makes the second possibility seem far more attractive.

For, suppose the first alternative holds. These special self-perception routines would have to
be quite sophisticated. How would they ever get started, in the natural history of the brain?
Clearly, in their initial stages, they could have no adaptive advantage. They would have to arise
as the side-effect of something else. But what?

The second alternative, on the other hand, requires no mysterious "evolution out of the blue."
Lower animals demonstrate progressively more sophisticated neural routines for perceiving the
outer world. If consciousness uses these routines for self-perception, then its evolution is not so
much of an enigma. All that the evolution of consciousness required was the addition of some
new connections onto a complex, fine-tuned, already existing mechanism.

The most reasonable hypothesis, therefore, is that consciousness is the result of taking neural
maps normally used for perceiving the outside world, and applying them, not to the external
stimuli for which they were intended, but to the inner workings of the mind. Of course, the
lowest levels of perceptual processes cannot possibly be applied outside of the context for which
they evolved. But for slightly higher levels, this is not true. What about the processes that
assemble various pictures together into a scene? What about the processes that distinguish
meaningful sounds or images from background information that is less relevant or interesting?
These are highly developed aspects of the human perceptual mechanism.




What I am suggesting is that consciousness works by mapping higher-level thought
processes into middle-level sensory data. Consciousness consists of "fooling" the perceptual
mechanism into thinking it is working with constructs built up directly from external sense data,
when it is actually working with transformed versions of patterns from levels above it. This
explains what we mean when we say we are "thinking visually" about something, or "thinking in
words." We mean that our self-perception uses the standard perception routines of the brain,
which evolved for perception of data coming in from particular sense organs: eyes, ears, noses,
taste buds, skin. Our ideas are mapped into pictures, sounds, perhaps even smells, and in this
disguise they are grouped into wholes and "perceived." Then the perceptions obtained in this
way give rise to higher-level patterns, which may be fed back down to the perceptual
mechanisms, repeating the process and giving rise to the familiar circularity of consciousness.
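
The following Python sketch makes the proposed loop schematic; every function is a placeholder standing for a whole subsystem, and nothing here is a claim about actual neural code.

    # Schematic of the proposed feedback loop: higher-level patterns
    # are disguised as middle-level sensory constructs, run through
    # ordinary perceptual grouping, and the results fed back upward.

    def disguise_as_percept(pattern):
        """Map an abstract pattern into pseudo-sensory form (an inner
        'image', 'sound', etc.) so that perception will accept it."""
        return {"modality": "visual", "content": pattern}

    def perceptual_grouping(percepts):
        """Middle-level perception: bind the pieces into one cohesive
        'scene', exactly as for genuine external input."""
        return tuple(p["content"] for p in percepts)

    def recognize_patterns(scene):
        """Higher-level cognition: extract new patterns from the scene."""
        return ["pattern(" + " + ".join(scene) + ")"]

    def consciousness_loop(patterns, cycles):
        for _ in range(cycles):
            pseudo = [disguise_as_percept(p) for p in patterns]
            scene = perceptual_grouping(pseudo)   # perception is "fooled"
            patterns = recognize_patterns(scene)  # fed back to the top
        return patterns

    print(consciousness_loop(["idea-A", "idea-B"], 2))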

This view is fairly closely related to Edelman's (1989) theory of consciousness. According to
Edelman, consciousness represents the interaction between

1) the recognition of patterns in "interoceptive input," input from neural maps gauging the
state of the body. This categorization is mediated by the hypothalamic and endocrine systems,
the "reptile brain"

2) the recognition of patterns emergent between "interoceptive input" and "exteroceptive"
input. Exteroceptive input, input from outside the body, is mediated by hippocampus, septum
and cingulate gyri; the recognition of emergent patterns takes place in the thalamus and cortex.

The interaction between these two processes is a kind of "re-entry" between higher-level
cognitive emergent-pattern recognition and lower-level "automatic" interoceptive and
exteroceptive pattern recognition.

However, while Edelman explores many interesting neurological details, he omits any detailed
discussion of the intuitive, psychological role of perceptual mechanisms in consciousness. The
issue of "fooling," and its relationship to the subjective experience of consciousness, is never
drawn into the picture. Thus, on a psychological level, Edelman's theory of consciousness is
somewhat disappointing, particularly in comparison to his Neural Darwinist theory of learning,
which is so suggestive both biologically and psychologically.

Finally, it is worth pointing out that none of this contradicts Dennett's "consciousness-as-
meme" idea. I have said that there are neural connections leading from higher-level processes,
through transformation processes into middle-level perceptual processes. These connections have
evolved; they are there in every human brain. But they may be strengthened through repeated
use, or weakened through disuse. Coming into frequent contact with other conscious persons
would seem to be a prerequisite for the strengthening of these connections. In this sense,
therefore, consciousness may be said to be a "meme." The presence of the connections is
genetic, but their strength is memetic.

6.4.2. Consciousness and the Making of Reality






Now, finally, I am ready to put all the pieces together: consciousness, language, seriality,
thought, and perception. The first step in this unification is to do what neither Dennett,
Jackendoff, nor any other modern cognitive scientist has done: to say what good consciousness is.
I propose, following Nietzsche, that the function of consciousness is to manufacture reality.

Consciousness is a feedback dynamic involving higher-level "cognitive" processes and
middle-level perceptual processes. What I am suggesting is that a pattern only acquires the
presence, the solidity that we call "reality," if it has repeatedly passed through this feedback loop.

Philip K. Dick defined reality as "That which doesn't go away when you stop believing in it."
Reality is a kind of near imperviousness to mental dynamics, a refusal to be altered by the
natural re-organization processes of the dual network. The dual network constantly readjusts
itself, swapping one subnetwork for another in quest of greater associativity and fortuitous
genetic creation. But those subnetworks which are real cannot be broken up; their pieces cannot
be swapped for other pieces.

To put it metaphorically, elements of reality are like islands in the sea of mind. As with real
islands, a sufficiently large storm can maul or bury them: there are degrees of restriction. But
normal weather patterns rearrange the sea and leave the islands intact.

Why would passing through the feedback loop from higher-level to middle-level tend to cause
relative imperviousness? The answer to this lies in the specific middle-level perceptual
processes involved. These are, I suggest, primarily

1) those processes which act to combine a group of different sensations from the same sense
organ together into a single cohesive entity -- a "scene," "image," "sound," "physical location,"
etc.

2) those processes which act to combine entities recognized by different senses (hearing,
vision, touch, etc.) into a single, united form.

Each time something is passed through these processes, it attains a degree of cohesion, a
degree of resistance to being broken up. When something is passed through again and again and
again, it achieves a superlative degree of cohesion and resistance -- it becomes real.

The process of grouping disparate elements together into a whole is a complex one. However,
I suggest that one key part of this process is an increase in the degree of restriction against
rearrangements. A subnetwork which cannot easily be disrupted by rearrangement dynamics is
inherently much more whole than one which can. And once it is protected against rearrangement, its
parts have the leisure to slowly adjust themselves to one another, thus attaining yet more refined
wholeness. Finally, passing some X through the restriction-degree-increase routines over and
over again would obviously result in the construction of extremely solid barriers around that X.
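
To make this barrier-strengthening dynamic concrete, here is a minimal sketch in Python. All
of the names in it -- Subnetwork, solidify, try_rearrange, the integer "rigidity" score -- are my
own illustrative inventions, not part of the formal dual network model; the sketch shows only
how repeated passes through a solidifying loop can render a pattern immune to ordinary
reorganization pressure.

    class Subnetwork:
        """A toy stand-in for a subnetwork of the dual network."""
        def __init__(self, name):
            self.name = name
            self.rigidity = 0   # degree of restriction against rearrangement

    def solidify(net):
        # one pass through the higher-to-middle feedback loop: the
        # grouping processes raise the barrier against rearrangement
        net.rigidity += 1

    def try_rearrange(net, pressure):
        # ordinary reorganization succeeds only where barriers are weak
        return pressure > net.rigidity

    percept = Subnetwork("this chair")
    for _ in range(50):              # passed through the loop again and again
        solidify(percept)

    print(try_rearrange(percept, pressure=3))    # False: normal "weather"
    print(try_rearrange(percept, pressure=100))  # True: a large enough "storm"

Note that the sketch reproduces the islands-in-the-sea picture: solidity is a matter of degree,
and even a very real pattern can be disrupted by a sufficiently large storm.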

In this view, consciousness is a serial process. And it is very similar to the serial processes of
prediction, logical deduction, and syntactic sentence-, percept-, or act-formation. All of these
processes involve a re-entry from higher to lower. Something is built up -- a phrase, say, out of
words; or a future, out of the present. And then it is passed down to the level where its parts
came from: the phrase is plugged into a syntactic operation as if it were a word; the
future scenario is treated conjecturally as a present and the mental routines for "present-world"
manipulation are applied to it.
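
The re-entry being described can be pictured in a few lines of Python (a toy of my own
devising, not a serious model of syntax): a phrase, once built up out of words, is handed back to
the same combining operation as if it were itself a word.

    def combine(head, dependent):
        # the single syntactic operation of this toy grammar
        return "(" + head + " " + dependent + ")"

    phrase = combine("chases", "mice")    # built up out of words...
    sentence = combine("cat", phrase)     # ...then re-entered as a "word"
    print(sentence)                       # (cat (chases mice))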

But mere similarity is not the only relation between consciousness and deductive, serial
processing. Perhaps more crucial is the fact that in the context of the dual network, structured
transformation systems require the interim assumption of reality every step of the way.
How could deduction work if one step were altered before the next were complete? How could
prediction work if the one-week prediction were rearranged before the two-week prediction was
done? How could a complex sentence be formed if, while the sentence was being structured on a
global level, the subservient phrases of the sentence were being replaced with phrases of
completely different types? The re-entrant processes involved in applying structured
transformation systems require reality to be introduced at each step. And reality, I have argued,
requires consciousness.

This, I suggest, is the true nature of the relationship between consciousness, language and
thought. Language structures the memory which guides the structured transformation systems of
deductive and predictive thought. But neither sentence formation nor deduction nor prediction
could function without consciousness.

6.4.2.1. Consciousness as Catch-22

Nietzsche lamented the "coarseness" of the ideas contained in consciousness. But this is
inevitable: it is in the very nature of consciousness to construct ideas that are rigid. Unconscious
ideas are bound to be more fluid, more adept at intuitive shifting. But most of these unconscious
ideas were constructed by structured transformation systems, which require local rigidity for
their effective operation.

Specifically, imaginary subjects, which annoyed Nietzsche so, are precisely the price one
pays for having linguistic systems that talk about subjects. Without reifying things, without
assuming and imposing their reality, there is no way to keep them solid in the midst of the
shifting dynamics of the mind; there is no way to keep them in one place long enough to work
with them. Sometimes the reification turns out to be a little too much -- "I" or "lightning" are
reified for one purpose, and then used for another. But the mind is notoriously error-prone; it is a
strict adherent to Murphy's Law. The cost of avoiding this type of error would be so great as to make
thought impossible. Consciousness, and reification along with it, are necessary components of
the unconscious creativity which Nietzsche so extolled.

On the other hand, it would be just as futile to lament the unconsciousness of most of the
mind. If everything were made conscious, the mind would freeze up, it would grind to a halt.
Structured transformation systems, which are the main reason consciousness is necessary, also
require associative memory, which is maintained only by the fluidity of subnetworks that have
not been made real through consciousness.




Thus consciousness represents a sort of psychological Catch-22. In order to produce fluidity,
the mind must produce rigidity. And in order to produce rigidity, the mind must produce fluidity.
The two exist in a careful balance; neither can be abolished without abolishing the other as well.

6.4.3. Consciousness and Self-Reference

Beginning from considerations loosely biological in nature, I have arrived at a novel
psychological model of consciousness, expressible solely in terms of the dynamics of the dual
network. The feeling of "raw existence," I suggest, is simply the feeling of subnetworks
resisting the natural urge to shift. It is the feeling of solidity resisting fluidity.

And the feeling of "self-presence" is one level up from this; it is the feeling of solidity which
produces solidity. "I am" means "I, this mental process, make myself solid; I maintain my
boundaries against the surrounding flux." This is not merely an egotistical delusion -- one may
formally show that a mental process can make itself solid, by containing a subroutine directing
itself down through the feedback loop of reality-construction. A process can self-referentially
direct itself to the grouping, solidifying centers of the mind.

One way to write such a process is:

X = s, and direct X to the nearest solidifying process, please

Here s is any object of observation; one may omit it, and obtain a process which does nothing but
direct itself.

Or, less formally, one may write

X = s and look at X,

reducing to

X = look at X

in the simplest case, or e.g.

X = I am hungry and look at X

in a more general situation.
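
For readers who prefer to see such formulae executed, here is one toy rendering in Python.
The names -- Process, solidifier, rigidity -- are hypothetical glosses of my own; the formal
treatment of self-reference is deferred to the later chapters. The point is only that a process can,
without paradox, contain an instruction routing the process itself through the solidifying loop.

    class Process:
        def __init__(self, content=None):
            self.content = content   # the observed object s (may be None)
            self.rigidity = 0

        def step(self, solidify):
            # "direct X to the nearest solidifying process": the
            # process hands *itself* to the solidifier
            solidify(self)

    def solidifier(proc):
        proc.rigidity += 1           # each pass through the loop adds solidity

    x = Process("I am hungry")       # X = "I am hungry" and look at X
    for _ in range(10):
        x.step(solidifier)           # self-propelled motion through the loop

    print(x.rigidity)                # 10: the process has made itself solid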

In later chapters I will have much more to say about self-referential formulae of this type, and
their validity in psychological modeling. It will be formally demonstrated that such self-
referential constructions can be elements of mind. For now, however, it is enough to suggest that
there is a fundamental importance attached to the self-propelled movement of such processes
through the feedback loop of consciousness. This motion, I claim, is self-awareness.

6.4.4. Conclusion


This, finally, completes our roundabout excursion into the murky waters of consciousness
theory. The theory presented in this section may be understood on two levels: biological and
psychological. Some of the neurological details have been fairly speculative; all of the biological
statements I have made, however, are testable scientific hypotheses. Once we finish charting the
connections of the brain, we will see exactly what sort of re-entry consciousness involves. If it
involves re-entry into some sort of scene-making or cross-modally connecting perceptual
process, then the biological theory of this section will be proved correct. If not, the theory will
have to be modified, or perhaps discarded.

On the other hand, the dual network model very strongly suggests that, whatever the biological
details, the psychology of consciousness is one of iteratively strengthening barriers against
reorganization. This is the only logical role for consciousness in the context of a continually
fluctuating network of mental processes. So, from the point of view of the dual network model,
the barrier-strengthening would have to be accepted even if it did not have interesting
implications. But in fact it does have at least one very interesting application: it explains, from
first principles, the dependence of language and reason on consciousness.

Whorf, Dennett and Nietzsche, despite their vastly different theoretical perspectives, have one
important thing in common: they essentially equate consciousness with language and deductive
reason. But this is not satisfactory; there is a sense in which consciousness is more basic, less
complex. These other processes make use of the inherent nature of consciousness, but do not
define it. The view of consciousness as iterative barrier-strengthening lets one deduce the close
connection between consciousness, language and reason, rather than assuming it.

Recall that, at the start of the chapter, I decomposed the Sapir-Whorf hypothesis into two
separate hypotheses: 1) that the structure of language strongly influences the structure of
thought; 2) that the differences between existing languages are sufficiently great to cause
significant differences in thought patterns. I have said nothing new about the second claim. What
I have done, however, is to derive the first claim from basic properties of the dual network
model. Whorf liked to use the word "pattern"; it was essential to his thought. So it is not terribly
surprising that, in developing a pattern-theoretic model of mind, I have "rediscovered" an
abstract version of Whorfian linguistics.




Chapter Seven

SELF-GENERATING SYSTEMS




In his recent book Self-Modifying Systems in Biology and Cognitive Science (1991), George
Kampis has outlined a new approach to the dynamics of complex systems. The key idea is that
the Church-Turing thesis applies only to simple systems. Complex biological and psychological
systems, Kampis proposes, must be modeled as nonprogrammable, self-referential systems
called "component-systems."

In this chapter I will approach Kampis's component-systems with an appreciative but critical
eye. And this critique will be followed by the construction of an alternative model of self-
referential dynamics which I call "self-generating systems" theory. Self-generating systems were
devised independently of component-systems, and the two classes of systems have their
differences. But on a philosophical level, both formal notions are getting at the same essential
idea. Both concepts are aimed at describing systems that, in some sense, construct themselves.
As I will show in later chapters, this is an idea of the utmost importance to the study of complex
psychological dynamics.

7.1 COMPONENT-SYSTEMS

A component-system, as defined by Kampis, consists of a collection of components, each of
which can act on other components to produce new components. More precisely,

An abstract component-system can be defined by the following properties:

a) - there is a finite set of non-dividable and permanent building blocks, drawn from a given pool

b) - there is an open-ended variety of the different types of admissible components, built up from
the building blocks according to some composition rule (which may be explicit or implicit)

c) - the components of the system are assembled and disassembled by the processes of the
system such that every admissible component is also realizable. (p.199)

For illustrative purposes, Kampis suggests that the reader visualize the "non-dividable
building blocks" as LEGO blocks, and the "admissible constructions" as different possible
structures buildable out of LEGO blocks. One must merely imagine that each LEGO structure
contains some appropriate means for acting on other LEGO structures to produce new LEGO
structures.
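
A bare-bones rendition of this picture can be given in Python. The pool BLOCKS, the class
Component and its act_on method below are conventions invented for illustration -- Kampis
gives no such code -- and the particular action rule is arbitrary.

    import random

    BLOCKS = ["a", "b", "c"]             # finite pool of non-dividable blocks

    class Component:
        def __init__(self, blocks):
            self.blocks = tuple(blocks)  # built up from the pool

        def act_on(self, other):
            # each component carries some means of acting on others to
            # produce new components; here, an arbitrary recombination
            combined = list(self.blocks + other.blocks)
            random.shuffle(combined)
            return Component(combined[:random.randint(1, len(combined))])

    soup = [Component([random.choice(BLOCKS)]) for _ in range(10)]
    for _ in range(100):                 # components acting on components
        f, x = random.sample(soup, 2)
        soup.append(f.act_on(x))

Whether such a toy universe satisfies clause (c) -- every admissible component realizable --
depends entirely on the action rule chosen; that is exactly the issue taken up below.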

The main biological example of a component system is a "molecular soup" full of organic
molecules acting on one another to form new molecules. Psychologically, on the other hand, one
is supposed to think of ideas acting on each other to produce new ideas. The central thesis of
Self-Modifying Systems is that biological and psychological systems, being component-
systems, are fundamentally uncomputable. This thesis combines two distinct claims:

Claim 1: Formal component-systems display uncomputable behavior.

Claim 2: Formal component-systems are good models for biological and psychological
systems.


The first claim is a mathematical result, which Kampis calls his "Main Theorem." The second
claim, on the other hand, is obviously a scientific hypothesis.

In this section I will explore these two claims in some detail. This exploration will lead us to a
new class of systems called self-generating systems -- a class of systems which is different
from, but overlapping with, Kampis's class of component-systems. The contrast between self-
generating systems and component-systems will shed a great deal of light on the fundamental
issues of system theory.

7.1.1. Quantum and Stochastic Computation

Before pursuing Kampis's main thesis in any more detail, I will first explore the meaning of
the term "computable." My attitude toward computation has been influenced immensely by
David Deutsch's (1985) work on quantum computation. Deutsch has demonstrated
mathematically that any system modelable by the equations of quantum physics
can be simulated to within arbitrary accuracy by a "quantum computer". A quantum
computer is different from an ordinary Turing machine. However, it cannot compute any
functions besides those which an ordinary Turing machine can compute. Deutsch's "Quantum
Church-Turing Thesis" states that every physically realizable algorithm can be represented as a
program for a quantum computer. In fact, this is not really a thesis but a theorem. In this respect
it is far more impressive than the ordinary Church-Turing Thesis.

There is also another Church-Turing Thesis, intermediate between the standard one and the
quantum version. One may define a stochastic computer as a Turing machine which is capable
of doing "random coin tosses." Then the Stochastic Church-Turing Thesis states that every
algorithm can be represented as a program for a stochastic computer. Deutsch has shown that
stochastic computation is a less general model than quantum computation, and I will make use of
this result now and again in what follows.
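
In sketch form -- my own minimal rendering, not Deutsch's formalism -- a stochastic
computer is just an ordinary deterministic machine given access to one extra primitive, a fair
coin:

    import random

    def coin():
        # the single primitive separating a stochastic computer from
        # an ordinary Turing machine
        return random.randint(0, 1)

    def run(steps):
        tape, pos = {}, 0
        for _ in range(steps):
            symbol = tape.get(pos, 0)
            # the transition rule is deterministic except where it
            # consults the coin
            tape[pos] = symbol ^ coin()
            pos += 1
        return [tape[i] for i in range(steps)]

    print(run(10))   # an output no deterministic program can promise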

Kampis's proof of the uncomputability of component-systems -- Claim 1 above -- says nothing
about quantum or stochastic computers. It speaks only of Turing machine computation. In the
following I will argue that this omission is important -- that Kampis's component-systems,
although they are not Turing computable, may sometimes be computable by stochastic
computers. Because stochastic computation is a less general model than quantum computation,
this implies that at least some component-systems are explicable in terms of quantum physics.

One necessary requirement of any theory of complex systems is agreement with microscopic
physics. Those component-systems which are not quantum computable are in contradiction to
the principles of physics. What this means is that, in a physical sense, the class of component-
systems is too broad.

Actually, there is a hole in this argument -- a tiny hole, but one which must be duly noted.
"Agreement with microscopic physics" is not strictly synonymous with "agreement with
quantum physics." In his best-seller The Emperor's New Mind, Roger Penrose briefly discusses
Deutsch's theorem, but he dismisses it on the grounds that quantum mechanics will soon be
replaced by a unified theory of quantum gravity. The unified theory of quantum gravity,
Penrose conjectures, will imply that physical systems are fundamentally uncomputable in the
strong sense of being able to compute non-Turing-computable functions.

The weak point of Penrose's argument, however, is that none of the existing approaches to
quantum gravity show any promise of implying uncomputability. For instance, string theory
(Green et al, 1987) is similar to quantum theory in its general mathematical form -- it depends on
the "quantization" of a classical domain using the Feynman path summation formula. So, if some
form of string theory is correct, then it would seem that there is no hope for Penrose's idea.

7.1.2. Kampis's "Main Theorem"

We have broken down Kampis's central thesis into two claims. The first of these, the "Main
Theorem" of Self-Modifying Systems, is as follows:

Main Theorem. In a component-system it is not possible to know the names and the
encoding (the meaning) of the names before the system produces the respective components....
The behaviour of component-systems is fully uncomputable and unpredictable because the
produced new observables are different from the earlier ones.

The basic idea here is that the temporal sequence of states of a component-system is in general
an uncomputable sequence. Since no Turing machine program can generate an uncomputable
sequence, component-systems must be uncomputable.

It seems to me that the key point is clause (c) of the definition, which says that "every
admissible component is realizable." Suppose one assumes that the set of all admissible
components is uncomputable, and that the dynamics of a component-system are capable of
leading to any admissible component. Then it follows logically that the dynamics of a
component-system cannot be specified by any program. For, if one assumed the opposite, one
would obtain a contradiction -- one would have an uncomputable set of entities obtainable from a
computer program.

Let us go back to the LEGO metaphor. It would be easy to build a computable LEGO
universe following Kampis's instructions. For the set of all LEGO structures is countable, and
may therefore be mapped into the set of binary sequences, in a one-to-one manner. And each
binary sequence may be represented as a Turing machine program, i.e. as a map from binary
sequences to binary sequences. Therefore, using Turing machines, each LEGO structure could be
interpreted as a function acting on other LEGO structures. The only problem with this
arrangement is that it does not satisfy clause (c) of the definition of component-system. Not
every LEGO structure is realizable by our dynamics. Only some computable subset of LEGO
structures is realizable.

But now -- and here is where my thinking differs from Kampis's -- suppose one adds a
random element to one's Turing machine. Suppose each component of the Turing machine is
susceptible to errors! Then, in fact, every possible LEGO structure becomes realizable!
Structures may have negligibly small probability, but never zero probability! This is an example
of a component-system which is computable by a stochastic Turing machine.
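
The following sketch -- again my own construction, not anything in Kampis's book -- makes
the trick explicit. A fixed, computable update rule is applied to structures encoded as bit strings,
but every "component" of the machinery is given a small error probability epsilon; as a result,
every finite structure is assigned a tiny but strictly nonzero probability.

    import random

    EPSILON = 1e-3

    def deterministic_rule(s):
        # some fixed Turing-computable update; the details don't matter
        return s[1:] + s[:1] if len(s) > 1 else s + "0"

    def noisy_step(s):
        s = deterministic_rule(s)
        out = []
        for bit in s:
            flipped = "1" if bit == "0" else "0"
            # each component may fail: the bit flips with prob. epsilon
            out.append(flipped if random.random() < EPSILON else bit)
        # errors may also lengthen or shorten the structure
        if random.random() < EPSILON:
            out.append(random.choice("01"))
        if len(out) > 1 and random.random() < EPSILON:
            out.pop()
        return "".join(out)

    s = "0"
    for _ in range(1000):
        s = noisy_step(s)   # any target string is reachable with nonzero
                            # probability, satisfying clause (c)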


Deutsch has shown that quantum Turing machines are more general than stochastic Turing
machines. So what I have shown is that component-systems are perfectly realizable in terms of
the equations of quantum mechanics. This implies that there is absolutely no problem with the
statement that "molecular soups" or brains are component-systems. But there is a problem with
the statement that these systems are "fundamentally uncomputable." The Turing model of
computation is not in general physically adequate. But quantum computation, something quite
similar to Turing computation, is always physically adequate, at least so far as our present
knowledge of physics goes. And something that is quantum computable is not, in a philosophical
sense, fundamentally uncomputable.

7.1.3. Self-Constructing Robots

To put these ideas in sharper focus, let us now turn to a metaphor which Kampis introduces
around the middle of the book: the self-constructing robot. This idea is a natural extrapolation
of modern industrial technology.

Right now, in Japan, there are robotized factories -- factories in which routine assembly-line
tasks are carried out by robots rather than people. These are not humanoid robots like C3PO in
Star Wars. They look like what they are: sophisticated factory tools. But their capabilities are
astounding -- they combine the spatial common sense of a human worker with the speed and
precision of a calculator. In fact, it is not unlikely that, somewhere in Japan, there are detailed
plans for a factory in which robots are used to build more robots.

And, of course, the industrial use of robots is not restricted to manufacturing. It is well within
contemporary technology to use robots for repair. It is not yet profitable to use robots to repair
robots, but this is because of simple technical problems, not fundamental engineering obstacles.

The point of all this is: if a robot can repair other robots, why not itself? And if a robot can
repair itself, why not reconstruct itself, even when it is not broken? It is not too far beyond
current technology to build a robot that reconstructs itself. There is no reason not to build a
robot whose software (brain) tells it how to reconstruct its hardware (body, including brain).
Such a self-constructing robot would embody an enchanting sort of loop: self constructing new
self, which constructs new self, which.....

But finally, suppose that someone builds a self-constructing robot which is, however,
imperfect. It sometimes makes slight random errors; its arms don't always move quite exactly
the way they are supposed to. Then in classic chaotic form, as time goes by, these slight random
errors can be expected to build up into large errors. One has a fundamentally unpredictable
sequence of machines. There is no telling exactly what the robot will make of itself, given, say,
fifty years' time.
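
A toy simulation conveys the flavor. The "design parameter" x and the logistic update rule
below are stand-ins of my own choosing, not a model of any actual robot: two robots rebuild
themselves by the same rule, but one suffers slips on the order of one part in a billion.

    import random

    def rebuild(x):
        # a chaotic update rule standing in for self-reconstruction
        return 3.9 * x * (1 - x)

    perfect, imperfect = 0.5, 0.5
    for generation in range(50):
        perfect = rebuild(perfect)
        imperfect = rebuild(imperfect) + random.uniform(-1e-9, 1e-9)
        imperfect = min(max(imperfect, 0.0), 1.0)   # keep in valid range

    print(abs(perfect - imperfect))   # order-one divergence from tiny slips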

To me, a self-constructing robot with errors seems like a wonderfully creative thing! But
Kampis's argument is precisely the opposite. In one particularly striking passage, Kampis
characterizes a component-system as "a strange computer in which also the software is identified
with the hardware." Elaborating, he declares that




a component system is a computer which, when executing its operations (software) builds a new
hardware.... [W]e have a computer that re-wires itself in a hardware-software interplay: the
hardware defines the software and the software defines new hardware. Then the circle starts
again. (p. 223)

To me, this sounds exactly like the self-reconstructing robot which I have just finished talking
about. But Kampis has something else up his sleeve. He does not believe in the Church-Turing
thesis. He believes that a "component-system" is nonprogrammable, in the sense that no
algorithm, no set of rules, can completely describe its behavior. He follows up the previous
quotation with a warning to the computationally-inclined:

[A] sceptical reader could say: that's not a big deal. With current-day industrial robot technology
this should be possible. Robots are automata; they are computers. They can assemble other
robots, maybe even themselves. They have a complete behavior algorithm. So, by analogy,
component-systems, too, can have one.

But this is not as easy a matter as it sounds. In a robot the whole software is ready-made and
completely defined from the beginning on, and is stored in an accessible form; in a component-
system, according to the above story, the "algorithm" is nowhere stored completely; software and
hardware define each other without any of them being complete or independent.

The paradigm case of a component-system, according to Kampis, is a "soup" of organic cells.
Each cell acts on each other cell, thus creating other cells, and there is no distinction between
software and hardware.

But let us consider, once again, our imperfect self-constructing robot. This robot is
programmed to modify its own hardware, but it is susceptible to random error. Then it is quite
possibly true that no computer program can predict the behavior of the robot. For the
collection of all possible times and places for random error is very large, and the collection of
sets of times and places for random error is even larger. To predict the behavior of the robot, a
computer program would have to predict what would happen to the robot given each possible
set of random errors. But, for any program of finite length, there is some set of random errors
which cannot be compressed into any program of that length.
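
The underlying argument is a simple counting one, and can be checked directly (the encoding
of programs and error histories as bit strings is, of course, just an illustrative convention):

    def num_programs_up_to(L):
        # nonempty bit strings of length at most L: 2**(L+1) - 2 of them
        return sum(2 ** k for k in range(1, L + 1))

    def num_error_histories(n):
        # one bit ("error here or not") per opportunity for error
        return 2 ** n

    L, n = 20, 21
    assert num_error_histories(n) > num_programs_up_to(L)
    # By pigeonhole, some length-21 error history is not the output of
    # any program of length <= 20: it is incompressible below that bound.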

The moral of the story is that, in the case of the self-reconstructing robot, stochastic
computation does what Turing computation does not. It gives the potential for true flexibility; for
self-referential creation of the fundamentally, indisputably new. While component-systems
cannot be Turing computable, they can be stochastically computable. This observation casts a
revealing light on the distinction between component-systems and Turing machines.

7.1.4. Creativity

To put the same point another way, I respectfully accuse Kampis of having an overly mystical
notion of creativity. He complains that computer programs can never create anything beyond
what has been put into them -- a very old argument. This is true in the same sense that
mathematical theorems are never original creations -- they are all contained in the basic axioms
of mathematics. But, even if one accepts this strict notion of creativity, it still does not follow
that stochastic computers are noncreative.

In general, a stochastic computer has a range of output that is incredibly wide, often
uncomputable. In fact, one may very easily construct a stochastic computer that has the
capability to construct anything whatsoever, just by chance. So stochastic self-reconstructing
computers suffer from no lack of potential creativity. How much of this creativity is actualized
depends on the intricate interaction of the deterministic and stochastic components.

This brings us to a basic principle of systems theory: the essence of creativity is the
interplay between rules and randomness. This ancient concept, which received its modern
form in the work of Ross Ashby (1954), is one of the most humanly meaningful implications of
the computer revolution. It is humbling to realize that even the most marvelous works of the
greatest geniuses -- Einstein's General Theory of Relativity, Goethe's Faust, Beethoven's Fifth
Symphony -- were produced by a complex combination of random chance with strict,
deterministic rules. Kampis does not wish us to accept this. But if one is to accept that modern
physical science applies to neural processes, then one must, I suggest, accept the equation of
creativity with quantum computation.

As an afterthought, it is worth briefly questioning the role that random chance plays here.
From the point of view of any one computer -- be it Turing, quantum or stochastic -- there are
certain deterministic sequences of events that are fundamentally indistinguishable from random
sequences of events. These are sequences whose algorithmic complexity exceeds the algorithmic
complexity of the computer that is doing the distinguishing. Gregory Chaitin (1974, 1987) has
shown that this statement is essentially a form of Gödel's Incompleteness Theorem.

So, from any one subjective point of view, there is no way of telling if some perceived entity
is stochastically computed, or just plain Turing computed. Now, although component-systems
are not Turing computable, we have seen that they can be stochastically computable. It follows
that, from any one subjective point of view, a component-system might as well be Turing
computable! To any particular entity, "random" just means "too complex for me to understand."

7.2 SELF-GENERATING SYSTEMS

Nowhere in Self-Modifying Systems does Kampis give an adequate formal definition of
"component-system." To my mind, this is the only sizeable flaw in an otherwise outstanding
book. This omission is particularly crucial in that it makes it difficult to mount conceptual attacks
against Kampis's nonprogrammability thesis.

Kampis gives a fairly good reason for this significant omission:

[C]oncepts of formal dynamics do not fit well to component-systems.... [W]hen we consider
component-systems as systems which produce components from components from components,
we may, by the same token, think of transformations producing directly other transformations:
f_t: f_t --> f_t'.


