



CHAOTIC LOGIC:
Language, Mind and Reality from the
Perspective of Complex Systems
Science

By Ben Goertzel



Logic ... an imperative, not to know the true, but to posit and arrange a world that shall be
called true by us.

-- Friedrich Nietzsche



PREFACE

This book summarizes a network of interrelated ideas which I have developed, off and on,
over the past eight or ten years. The underlying theme is the psychological interplay of order
and chaos. Or, to put it another way, the interplay of deduction and induction. I will try to
explain the relationship between logical, orderly, conscious, rule-following reason and fluid,
self-organizing, habit-governed, unconscious, chaos-infused intuition.

My previous two books, The Structure of Intelligence and The Evolving Mind, briefly touched
on this relationship. But these books were primarily concerned with other matters: SI with
constructing a formal language for discussing mentality and its mechanization, and EM with
exploring the role of evolution in thought. They danced around the edges of the order/chaos
problem, without ever fully entering into it.

My goal in writing this book was to go directly to the core of mental process, "where angels
fear to tread" -- to tackle all the sticky issues which it is considered prudent to avoid: the nature
of consciousness, the relation between mind and reality, the justification of belief systems, the
connection between creativity and mental illness,.... All of these issues are dealt with here in a
straightforward and unified way, using a combination of concepts from my previous work with
ideas from chaos theory and complex systems science.

My approach to the mind does not fall into any of the standard "schools of thought." But
neither does it stand completely apart from the contemporary scientific and intellectual scene.
Rather, I draw on ideas from a variety of disciplines, and a host of conflicting thinkers. These
ideas are then synthesized with original conceptions, to obtain a model that, while
fundamentally novel, has many points of contact with familiar ideas. Perhaps the most obvious
connections are with Kampis's (1991) component-system theory, Edelman's (1987) theory of
neuronal group selection, Nietzsche's (1968) late philosophy of mind, Chaitin's (1988)
algorithmic information theory, Whorf's (1948) well-known analysis of linguistic thought, and
the dynamical psychology of Ralph and Fred Abraham (1992). But there are many other
important connections as well.

The ideas of this book range wide over the conceptual map; indeed, the selection of topics
may appear to the reader to obey a very chaotic logic. And the intended audience is almost
equally wide. The ideas contained here should be thought-provoking not only to theoretical
psychologists and general systems theorists, but also to anyone with an interest in artificial
intelligence, applied mathematics, social science, biology, philosophy or human personality.
Unfortunately, the nature of the material is such that certain sections of the book will not be easy
going for the general reader. However, I have done my best to minimize the amount of technical
terminology, and I have flagged with (*)'s those few sections containing a significant amount of
formalism. These sections can be skipped without tremendous loss of understanding.

In sum, I am well aware that this book will draw criticism for its ambitious choice of topic. I
also realize that my approach defies the norms of every academic discipline (sometimes quietly,
sometimes ostentatiously). However, I believe that one must follow one's scientific intuition
where it leads. All that I ask of you, as a reader, is that you consider the ideas given here based
on their own intrinsic merits, rather than how "orthodox" or "unorthodox" they may appear.

The symbiosis between logic and intuition is a very tricky thing; perhaps the subtlest
phenomenon we humans have ever tried to comprehend. In order to make progress toward an
understanding of this strange, fundamental symbiosis, we must summon all our powers of
analysis and imagination -- and check our preconceptions at the door.



ACKNOWLEDGEMENTS

The ideas presented here were developed as a solo project. There was very little collaborative
thinking involved, and what little there was involved peripheral issues. Over the years, however,
many people, institutions and organizations have helped my work in less direct ways.

First of all, a few sections of this book overlap significantly with previously published articles.
Thanks are due to the relevant editors and publishers for their permission to duplicate the odd
section, page or paragraph here. The Journal of Social and Evolutionary Systems, Volume 15-1,
edited by Paul Levinson, contains the papers "Psychology and Logic" and "Self-Reference,
Computation and Mind" which overlap considerably with Chapter 4 and Section 7.3
respectively. Paul Levinson is an excellent editor who has been very supportive of my work. The
Proceedings of the First, Second and Third Annual Conferences of the Society for Chaos Theory
in Psychology, edited by Robin Robertson and Allan Combs (to be published shortly by
Erlbaum, perhaps with a more felicitous title), contains the papers "A Cognitive Equation of
Motion" and "Belief Systems as Attractors," which overlap with parts of Chapters Eight and
Nine.

Next there are more personal acknowledgements. My previous two books did not include
"acknowledgements" sections, so the thanks given here apply not only to Chaotic Logic but also
to its prequels: The Structure of Intelligence (Springer-Verlag, 1993), and The Evolving Mind
(Gordon and Breach, 1993).

In no particular order, I would like to acknowledge debts of one kind or another to:






Simon's Rock College, which I attended from 1982-85 and where I was introduced to
Nietzsche, Whorf, Peirce, formal logic, dynamical systems theory and the philosophy of science,
among other things. My unorthodox approach to intellectual work owes a lot to two Simon's
Rock instructors, George Mandeville and Ed Misch, and also to the remarkably intelligent
group of students who were my classmates at the
Rock, especially Dave Goldberg, Bill Meinhardt, John Hancock, Mike Glanzberg, Scott Hughes,
Ed Keller and Mike Duncan.

The mathematics faculty of Temple University -- their friendliness helped to restore my
passion for mathematics, which, after a year-and-a-half at the Courant Institute, had nearly
vanished for good. In particular, Donald Newman supported me at every stage of the arduous
process of obtaining a Ph.D. in Mathematics.

Those few members of the UNLV Mathematics department who have supported me in my
unusual choice of research topics: Harold Bowman, Malwane Ananda, Rohan Dalpatadu, Ashok
Singh and George Miel.

The computer science department of Waikato University, where I am currently lecturing,
particularly Lloyd Smith, the former department head, who made my schedule for this year! This
book was written in Las Vegas but it was proofread in Hamilton; if it has fewer errors than my
previous books this is because of the research-friendly New Zealand work schedule.

Fred Abraham, Sally Goerner, Larry Vandervert, Robin Robertson and Terry Marks, all
affiliated with the Society for Chaos Theory in Psychology, for being so supportive of my work
(and also for helping me to improve my sometimes too-dense exposition). Thanks especially to
Sally and Fred.

My mother Carol Goertzel and my grandfather Leo Zwell for their unflagging warmth and
encouragement; also my father, Ted Goertzel, for his encouragement and for reading and
critiquing my manuscripts despite their distance from his areas of expertise.

George Klir and George Kampis for placing my book in this series.

And finally, my son Zarathustra, my wife Gwen, and my brand new son Zebulon, for
providing a warm, comfortable atmosphere in which to think, write and live.

Ben Goertzel



Hamilton, New Zealand

April 1994






TABLE OF CONTENTS

1. INTRODUCTION

2. PATTERN AND PREDICTION

3. THE STRUCTURE OF THOUGHT

4. PSYCHOLOGY AND LOGIC

5. LINGUISTIC SYSTEMS

6. CRUCIAL CONNECTIONS

7. SELF-GENERATING SYSTEMS

8. THE COGNITIVE EQUATION

9. BELIEF SYSTEMS

10. BIOLOGICAL METAPHORS OF BELIEF

11. MIND AND REALITY

12. DISSOCIATIVE DYNAMICS

AFTERWORD

REFERENCES



Chapter One

INTRODUCTION

"Chaos theory" has, in the space of two decades, emerged from the scientific literature into the
popular spotlight. Most recently, it received a co-starring role in the hit movie Jurassic Park.
Chaos theory is billed as a revolutionary new way of thinking about complex systems -- brains,
immune systems, atmospheres, ecosystems, you name it.






It is always nice to see science work its way into the mass media. But I must admit that, as a
mathematician trained in chaotic dynamics, I find this sudden interest in chaos theory a little odd.
The excitement about chaos theory stems from a perception that it somehow captures the
complex "disorganized order" of the real world. But in fact, chaos theory in the technical sense
has fewer well-developed real world applications than obscure areas of applied math like Lotka-
Volterra equations, Markov chains, Hilbert spaces, and so forth. Where chaos is concerned, there
is a rather large gap between the philosophical, prospective hype and the actual, present-day
science.

To understand this gap in more detail, consider what one studies in a first course on chaos
theory: discrete iterations like the tent map, the Baker map and the logistic iteration (Devaney,
1988); or else elementary nonlinear differential equations such as those leading to the Lorenz
attractor. These systems are all "low-dimensional," in the sense that the state of the system at
each time is specified by a single number, or a short list of numbers. And they are simple, in the
sense that the rule which determines the state of the system at time t+1 from the state of the
system at time t has a brief expression in terms of elementary arithmetic.

All these systems have one novel property in common: whatever state one starts the system in
at "time zero," the odds are that before long the system will converge on a certain region of state
space called the "attractor." The states of the system will then fluctuate around the "attractor"
region forever, apparently at random. This is "chaos," a remarkable, intriguing phenomenon --
and a phenomenon which, on the surface at least, appears to have little to do with complex, self-
organizing systems. It is obvious that complex systems are not pseudo-random in the same sense
that these "toy model" dynamical systems are. Something more is going on.
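
To make the flavor of these toy models concrete, here is a minimal illustrative sketch of the
logistic iteration (in Python; the starting values are arbitrary). Two orbits begun a hair apart
soon disagree completely, yet both fluctuate forever within the same attractor region:

    # The logistic iteration x(t+1) = r * x(t) * (1 - x(t)). At r = 4 the
    # orbit is chaotic on [0,1]: trajectories from nearby starting points
    # diverge rapidly, yet all remain confined to the same attractor.

    def logistic_orbit(x0, r=4.0, steps=30):
        """Return the trajectory of x -> r*x*(1-x) starting from x0."""
        orbit = [x0]
        for _ in range(steps):
            orbit.append(r * orbit[-1] * (1.0 - orbit[-1]))
        return orbit

    a = logistic_orbit(0.3000000)
    b = logistic_orbit(0.3000001)   # perturbed by one part in three million

    for t in (0, 10, 20, 30):       # the orbits agree, then part ways
        print(t, round(a[t], 4), round(b[t], 4))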

One way to sidestep this problem is to posit that complex systems like brains present "high-
dimensional dynamics with underlying low-dimensional chaos." There is, admittedly, some
evidence for this view: mood cycles, nostril cycles and EEG patterns demonstrate low-
dimensional chaotic attractors, as do aspects of animal behavior, and of course numerous
parameters of complex weather systems.

But at bottom, the recourse to dimensionality is an evasive maneuver, not a useful explanation.
The ideas of this book proceed from an alternative point of view: that complex, self-organizing
systems, while unpredictable on the level of detail, are interestingly predictable on the level of
structure. This is what differentiates them from simple dynamical systems, which are almost entirely
unpredictable on the level of structure as well as the level of detail.

In other words, I suggest that the popular hype over chaos theory is actually an enthusiasm
over the study of complex, self-organizing systems -- a study which is much less developed than
technical chaos theory, but also far more pregnant with real-life applications. What most chaos
theorists are currently doing is playing with simple low-dimensional "toy iterations"; but what
most popular expositors of chaos are thinking about is the dynamics of partially predictable
structure. Therefore, I suggest, it is time to shift the focus from simple numerical iterations to
structure dynamics.






To understand what this means, it suffices to think a little about chaos psychology. Even
though the dynamics of the mind/brain may be governed by a strange attractor, the structure of
this strange attractor need not be as coarse as that of the Lorenz attractor, or the attractor of the
logistic map. The structure of the strange attractor of a complex system contains a vast amount
of information regarding the transitions from one patterned system state to another. And this, not
the chaos itself, is the interesting part.

Unfortunately, there is no apparent way to get at the structure of the strange attractor of a
dynamical system like the brain, which presents hundreds of billions of interlinked variables
even in the crudest formal models. Therefore, I propose, it is necessary to shift up from the level
of physical parameters, and take a "process perspective" in which the mind and brain are viewed
as networks of interacting, inter-creating processes.

The process perspective on complex systems has considerable conceptual advantages over a
strictly physically-oriented viewpoint. It has a long and rich philosophical history, tracing back
to Whitehead and Nietzsche and, if one interprets it liberally enough, all the way back to the
early Buddhist philosophers. But what has driven recent complex-systems researchers to a
process view is not this history, but rather the inability of alternative methods to deal with the
computational complexity of self-organizing systems.

George Kampis's (1991) Self-Modifying Systems presents a process perspective on complex
systems in some detail, relating it with various ideas from chemistry, biology, philosophy and
mathematics. Marvin Minsky's (1986) Society of Mind describes a process theory of mind; and
although his theory is severely flawed by an over-reliance on ideas drawn from rule-based AI
programs, it does represent a significant advance over standard "top-down" AI ideas. And,
finally, Gerald Edelman's (1988) Neural Darwinism places the process view of the brain on a
sound neurological basis.

Here, however, I will move far beyond neural Darwinism, societal computer architecture and
component-system theory, and propose a precise cognitive equation, hypothesized to govern the
creative evolution of the network of mental processes. When one views the mind and brain in
terms of creative process dynamics rather than physical dynamics, one finds that fixed points and
strange attractors take on a great deal of psychological meaning. Process dynamics give rise to
highly structured strange attractors. Chaos is seen to be the substrate of a new and hitherto
unsuspected kind of order.

1.1. COMPLEX SYSTEMS SCIENCE

Chaos theory proper is only a small part of the emerging paradigm of complex systems
science. In the popular literature the word "chaos" is often interpreted very loosely, perhaps even
as a synonym for "complex systems science." But the distinction is an important one. Chaos
theory has to do with determinism underlying apparent randomness. Complex systems science is
more broadly concerned with the emergent, synergetic behaviors of systems composed of a large
number of interacting parts.






To explain what complex systems science is all about, let me begin with some concrete
examples. What follows is a highly idiosyncratic "top twelve" list of some of the work in
complex systems science that strikes me as most impressive. The order of the items in the list is
random (or at least chaotic).

1. Alan Perelson, Rob deBoer (1990) and others have developed computer models of the
immune system as a complex self-organizing system. Using these models, they have arrived at
dozens of new predictions regarding immune optimization, immune memory, the connectivity
structure of the immune network, and other important issues.

2. Stuart Kauffman (1993) has, over the last three decades, systematically pursued computer
simulations demonstrating the existence of "antichaos." He has found that random Boolean
networks behave in a surprisingly structured way; and he has used these networks to model
various biological and economic systems.

3. Gregory Bateson (1980) has modeled a variety of social and psychological situations using
ideas from cybernetics. For instance, he has analyzed Balinese society as a "steady-state" system,
and he has given system-theoretic analyses of psychological problems such as schizophrenia and
alcoholism.

4. Gerald Edelman (1988) has devised a theory of brain function called Neural Darwinism,
based on the idea that the brain, like the immune system, is a self-organizing evolving system.
Similar ideas have been proposed by other neuroscientists, like Jean-Pierre Changeux (1985).

5. Starting from the classic work of Jason Brown (1988), a number of researchers have used
the concept of "microgenesis" to explore the mind/brain as a self-organizing system. This point
of view has been particularly fruitful for the study of linguistic disorders such as aphasia.

6. There is a very well-established research programme of using nonlinear differential
equations and thermodynamics to study far-from-equilibrium self-organizing systems. The name
most commonly associated with this programme is that of Ilya Prigogine (Prigogine and Stengers,
1984).

7. A diverse community of researchers (Anderson et al, 1987) have used ideas from stochastic
fractal geometry and nonlinear differential equations to model the self-organization inherent in
economic processes (such as the stock market).

8. G. Spencer Brown's classic book Laws of Form (1972) gives a simple mathematical
formalism for dealing with self-referential processes. Louis Kauffman (1986), Francisco Varela
(1978) and others have developed these ideas and applied them to analyze complex systems such
as immune systems, bodies and minds.

9. For the past few years the Santa Fe Institute has sponsored an annual workshop on
"Artificial Life" (Langton, 1992) -- computer programs that simulate whole living environments.
These programs provide valuable information as to the necessary and sufficient conditions for
generating and maintaining complex, stable structures.




10. John Holland (1975) and his colleagues such as David Goldberg (1988) have constructed a
research programme of "genetic optimization," in which computer simulations of evolving
populations are used to solve mathematical problems.

11. Over the past decade, a loose-knit group of researchers from different fields have been
exploring the applications of "cellular automata" to model various self-organizing phenomena,
from fluid dynamics to immunodynamics. Cellular automata (Wolfram, 1986) are simple self-
organizing systems that display many elegant emergent properties of an apparently "generic"
character.

12. Vilmos Csanyi (1990), George Kampis (1991) and Robert Rosen (1992), among others,
have kept alive the grand European tradition of General Systems Theory, using sophisticated
ideas from mathematics and physical science to demonstrate that complex self-organizing
systems must be understood to be creating themselves.

Complex systems science is not as yet an official academic discipline; there are no university
departments of complex systems science. However, there are a few research institutes and
professional organizations. For instance, the Santa Fe Institute has supported a wide variety of
research in complex systems science, including the work on immunology, antichaos, artificial
life and genetic optimization mentioned above. In recognition of these efforts, the Institute
recently received a MacArthur Foundation "genius grant."

The Center for Complex Systems in Illinois has also, as one would expect from the name,
been the location of a great deal of complex systems research, mainly dealing with applications
of cellular automata. And, finally, the Society for Chaos Theory in Psychology, now in its third
year, has served to bring together an impressive number of social, behavioral and physical
scientists interested in studying the mind as a complex self-organizing system.

1.1.1. Chaos and "Chaos"

Parenthetically, it is worth noting that the battle for the word "chaos" is not yet over. A few
weeks after I wrote the preceding paragraphs, I ran across an interesting discussion on the
Internet computer network, which really drove this point home. Someone posted a news item on
several computer bulletin boards, declaring the imminent creation of a new bulletin board
focusing on chaos theory. The only problem remaining, the news item said, was the selection of
a name. Many variations were suggested, from "sci.math.nonlinear" to "sci.emergence.chaos" to
"sci.nonlinear" to "sci.chaos" to "sci.math.chaos" to "sci.complexity."

Most discussants rejected the names "sci.chaos" and "sci.math.chaos" as encouraging a
mistakenly wide interpretation of the word "chaos." But the fact is that there are already several
unofficial newsgroups dealing with the subject of complex systems science. And these are all
named -- "sci.chaos"! No amount of rational argumentation can counteract a habit. This is
nothing else but chaotic logic at work, in a wonderfully self-referential way. It is chaos
regarding "chaos," but only if one accepts the result of this chaos, and calls complex systems
science "chaos."






Perhaps one should not shed too many tears over the fact that the name "chaos theory" is at
variance with standard mathematical usage. After all, mathematicians did not invent the word
"chaos"! In its original theological meaning, "Chaos" simply referred to the void existing
between Heaven and Earth. In other words, it had virtually nothing to do with any of its current
meanings.

But anyhow, I am amused to report that the newsgroup finally took on the name
"sci.nonlinear." This is also a misnomer, since many nonlinear systems of equations are neither
chaotic nor self-organizing. Also, many complex systems have nothing to do with linear spaces
and are hence not nonlinear but alinear. But, be that as it may, one may chalk up one for the anti-
"chaos" forces!

1.1.2. General Systems Theory

All kidding aside, however, I do think that using the name "chaos theory" for complex systems
science has one significant disadvantage. It perpetuates an historical falsehood, by obscuring the
very deep connections between the modern theory of self-organizing systems and the "General
Systems Theory" of the forties and fifties.

Today, it seems, the average scientist's opinion of General Systems Theory is not very good.
One often hears comments to the effect that "There is no general systems theory. What
theoretical statements could possibly be true of every system?" In actual fact, however, the
General Systems Theory research programme was far from being a failure. Its many successes
include Bateson's psychological theories, Ashby's work in cybernetics, McCulloch's
groundbreaking work on neural networks, and a variety of ideas in the field of operations
research.

The truth is simply that after a decade or two, General Systems Theory collapsed under the
weight of its own ambitions. It was not proved "wrong" -- it said what it had to say, and then
slowly disappeared. True, it did not turn out to be nearly as productive as its creators had
envisioned; but this doesn't contradict the fact that it was very productive anyway.

What does modern complex systems science have that General Systems Theory did not? The
answer, I suspect, is remarkably simple: computing power. Of the twelve contributions to
complex systems science listed above, seven -- immune system modeling, "antichaos" modeling,
far-from-equilibrium thermodynamics, artificial life, genetic optimization, cellular automata and
fractal economics -- rely almost entirely on computer simulations of one sort or another. An
eighth, Edelman's theory of Neural Darwinism, relies largely on computer simulations; and a
ninth, Spencer-Brown's self-referential mathematics, was developed in the context of circuit
design.

Computing power has not been the only important factor in the development of complex
systems science. For example, the revolutionary neurobiological ideas of Edelman, Changeux,
Brown and others would not have been possible without recent advances in experimental brain
science. And my own work depends significantly not only on ideas derived from computer
simulations, but also on the theory of algorithmic information (Chaitin, 1987), a branch of
computer science that did not exist until the late 1960's. But still, it is fair to say that greater
computing power was the main agent responsible for turning relatively sterile General Systems
Theory into remarkably fertile complex systems science.

The systems theorists of the forties, fifties and sixties recognized, on an intuitive level, the
riches to be found in the study of complex self-organizing systems. But, as they gradually
realized, they lacked the tools with which to systematically compare their intuitions to real-world
data. We now know quite specifically what it was they lacked: the ability to simulate complex
processes numerically, and to represent the results of complex simulations pictorially. In a very
concrete sense, today's "chaos theory" picks up where yesterday's General Systems Theory left
off.

In the following pages, as I discuss various aspects of language, mind and reality, I will not
often be directly concerned with computer simulations or technical mathematics. However, the
underlying spirit of the book is inextricable from recent advances in mathematical chaos theory,
and more generally in complex systems science. And these advances would not have been
possible without 1) the philosophy of General Systems Theory, and 2) the frame of mind induced
by modern computing power. Science, philosophy and technology are not easily separable.

1.1.3. Feedback Structures

Rather than letting historical reflection get the upper hand, I will end this section with a
concrete example. The basic article of faith underlying complex systems science is that there are
certain large-scale patterns common to the behavior of different self-organizing systems. And
perhaps the simplest such pattern is the feedback structure -- the physical structure or
dynamical process that not only maintains itself but is the agent for its own increase. Some
specific examples of feedback structures are as follows:

1. Autocatalytic reactions in chemistry, such as the Belousov-Zhabotinsky reaction. Once
these chemical reactions get started, they grow by feeding off themselves. Often the rate of
growth fluctuates chaotically.

2. Increasing returns in economics. This refers to a situation in which the more something is
sold, the easier it becomes to sell. Such situations are apt to be unpredictable -- an historical
example is the competition between VHS and Beta format videotapes (simulated in the sketch
following this list).

3. Double binds in psychology. Gregory Bateson's groundbreaking theory of schizophrenia
postulates feedback reactions between family members, according to which miscommunication
leads to more miscommunication.

4. Chaos in immune systems. Mathematical models trace the dynamics of antibody types, as
they stimulate one another to reproduce and then attack each other. In some cases this may result
in concentrations of two antibody types escalating each other by positive feedback. In other
cases it may result in low-level chaotic fluctuations.
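
Since the feedback structure is the prototype for everything that follows, a small simulation
may help fix ideas. The following sketch (a Polya-urn-style model of increasing returns, in the
spirit of example 2 above; it is an illustration, not code from any of the works cited) shows how
positive feedback amplifies early random luck into a locked-in outcome:

    import random

    # Each new buyer picks a format with probability proportional to its
    # current market share, so every sale makes the next sale easier.

    def compete(buyers=10000, seed=None):
        rng = random.Random(seed)
        vhs, beta = 1, 1                  # one early adopter each
        for _ in range(buyers):
            if rng.random() < vhs / (vhs + beta):
                vhs += 1                  # success breeds further success
            else:
                beta += 1
        return vhs / (vhs + beta)         # final VHS market share

    # Identical rules, different random seeds: the market locks in to
    # wildly different outcomes, magnified from tiny initial accidents.
    print([round(compete(seed=s), 2) for s in range(5)])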






Of course, feedback structures of a simple sort are present in simple systems as well as
complex systems (every guitar player knows this). But the important observation is that feedback
structures appear to be a crucial part of self-organization, regardless of the type of system
involved. Parallels like this are what the complex-systems-science researcher is always looking
for: they hint at general laws of behavior.

And indeed, the cognitive equation of Chapter Eight came about as an attempt to refine the
notion of "complex feedback structure" into a precise, scientifically meaningful concept -- to
rigorously distinguish between the intricate feedback structures present in economies and mind
and the relatively simple feedback involved in a guitar solo.

1.2 LANGUAGE, THOUGHT AND REALITY

In this book I will be concerned with four types of psychological systems: linguistic systems,
belief systems, minds and realities. All of these systems, I suggest, are strange attractors of the
dynamical system which I call the "cognitive equation." And they are, furthermore, related by the
following system of "intuitive equations":

Linguistic system = syntactic system + semantic system

Belief system = linguistic system + self-generating system

Mind = dual network + belief systems

Reality = minds + shared belief system

The meanings of the terms in these four "equations" will be explained a little later. But the
basic idea should be, if not "clear," at least not completely blurry. The only important caveat is as
follows: the use of the "+" sign should not be taken as a statement that the two entities on the
right side of each equation have significant independent functionality. For instance, syntactic
systems and semantic systems may be analyzed separately in many respects, but neither can truly
function without the other.

A slightly more detailed explanation of the terms in these "equations" is as follows:

1) A linguistic system consists of a deductive, transformational system called a syntactic
system, and an interdefined collection of patterns
called a semantic system, related according to a principle called continuous compositionality.
This view explains the role of logic in reasoning, and the plausibility of the Sapir-Whorf
hypothesis.

2) A self-generating system consists of a collection of stochastically computable processes
which act on one another to create new processes of the same basic nature. The dynamics of
mind may be understood in terms of the two processes of self-generation and pattern
recognition; this idea yields the "cognitive equation."




3) A belief system is a linguistic system which is also a self-generating system. Belief systems
may be thought of as the "immune system" of the mind; and, just like immune systems, they may
function usefully or pathologically. They are a necessary complement to the fundamental dual
network structure of mind (as outlined in The Evolving Mind).

4) Reality and the self may be viewed as two particularly powerful belief systems -- these are
the "master belief systems," by analogy to which all other belief systems are formed.

Each of the "equations," as these explanations should make clear, represents a novel twist on a
reasonably well-known idea. For instance, the idea of linguistics as semantics plus syntax is
commonplace. But what is new here is 1) a pragmatic definition of "semantics," and 2) the
concept of "continuous compositionality," by which syntactic and semantic systems are proposed
to be connected.

Similarly, the idea that beliefs are linguistic is not a new one, nor is the idea that beliefs
collectively act to create other beliefs. But the specific formulation of these ideas given here is
quite novel, and leads to unprecedentedly clear conclusions regarding the validity of belief
systems.

The idea that mind consists of a data structure populated by belief systems is fairly common in
theAI/cognitive science community. But the relation between the belief system and the data
structure has never been thoroughly examined from a system-theoretic point of view. Neither the
role of feedback in belief maintenance, nor the analogy between immune systems and belief
systems, has previously been adequately explored.

And finally, the view of reality as a collective construction has become more and more
common over the past few decades, not only in the increasingly popular "New Age" literature
but also in the intellectual community. However, up to this point it has been nothing more than a
vague intuition. Never before has it been expressed in a logically rigorous way.

The cognitive equation underlies and guides all of these complex systemic dynamics.
Elements of mind, language, belief and reality exist in a condition of constant chaotic
fluctuation. The cognitive equation gives the overarching structure within which this creative
chaos occurs; it gives the basic shape of the "strange attractor" that is the world.

More specifically, the assertion that each of these systems is an attractor for the cognitive
equation has many interesting consequences. It implies that, as Whorf and Saussure claimed,
languages are semantically closed, or very nearly so. It implies that belief systems are self-
supporting -- although the nature of this self-support may vary depending on the rationality of
the belief systems. It implies that perception, thought, action and emotion form an unbroken
unity, each one contributing to the creation of the others. And it tells us that the relation between
mind and reality is one of intersubjectivity: minds create a reality by sharing an appropriate
type of belief system, and then they live in the reality which they create.

All this is obviously only a beginning: despite numerous examples, it is fairly abstract and
general, and many details remain to be filled in. However, my goal in this book is not to provide
a canon of unassailable facts, but rather to suggest a new framework for studying the remarkable
phenomena of language, reason and belief. Three hundred years ago, Leibniz speculated about
the possibility of giving an equation of mind. It seems to me that, with complex systems
science, we have finally reached the point where we can take Leibniz seriously -- and transform
his dream into a productive research programme.

1.3 SYNOPSIS

In this section I will give an extremely compressed summary of the main ideas to be given in
the following chapters. These ideas may be somewhat opaque without the explanations and
examples given in the text; however, the reader deserves at least a vague idea of the structure of
the arguments to come. For a more concrete idea of where all this is leading, the reader is invited
to skip ahead to Chapter Eleven, where all the ideas of the previous chapters are integrated and
applied to issues of practical human and machine psychology.

Chapters Two and Three: a review of the concepts of pattern, algorithmic information,
associative memory and multilevel control. These ideas, discussed more thoroughly in SI and
EM, provide a rigorous basis for the analysis of psychological phenomena on an abstract
structural level. A special emphasis is placed here on the issue of parallel versus serial
processing. The mind/brain, it is argued, is essentially a parallel processor ... but some processes,
such as deductive logic, linguistic thought, and simulation of chaotic systems, involve virtual
serial processing -- networks of processes that simulate serial computation by parallel
operations.

Chapter Four: the first part of a multi-chapter analysis of the relationship between language
and thought. Using the concept of a structured transformation system, I consider a very
special kind of linguistic system, Boolean logic, with a focus on the well-known "paradoxes"
which arise when Boolean logic is applied to everyday reasoning. I argue that these "paradoxes"
disappear when Boolean reasoning is considered in the context of associative memory and
multilevel control. This implies that there is nothing problematic about the mind using Boolean
logic in appropriate circumstances -- a point which might seem to be obvious, if not for the fact
that it has never been demonstrated before. The standard approach in formal logic is simply to
ignore the paradoxes!

Chapter Five: this analysis of Boolean logic is extended to more general linguistic systems.
It is argued that, as a matter of principle, a linguistic system cannot be understood except in the
context of a particular mind. In this spirit, I give a new analysis of meaning, very different from
the standard Tarski/Montague possible worlds approach. According to the new approach, the
meaning of a phrase is the set of all patterns associated with it. This implies that meaning is
fundamentally systematic, because many of the patterns associated with a given phrase have to
do with other phrases. In this view, it is not very insightful to think about the meaning of a
linguistic entity in isolation. The concept of meaning is only truly meaningful in the context of a
whole linguistic system -- which is in turn only meaningful in the context of some particular
mind.






Chapter Six: the connections between language, logic, reality, thought and consciousness are
explored in detail. First, the pattern-theoretic analysis of language is applied to one of the more
controversial ideas in twentieth-century thought: the Sapir-Whorf hypothesis, which states that
patterns of thought are controlled by patterns of language. Then I discuss the role of
consciousness in integrating language with other thought processes. A new theory of
consciousness is proposed, which clarifies both the biological bases of awareness and the
fundamental relation between mind and the external world.

Chapter Seven: a brief excursion into the most impressive modern incarnation of General
Systems Theory, George Kampis's theory of component-systems, which states that complex self-
organizing systems construct themselves in a very basic sense. After reviewing and critiquing
Kampis's ideas, I introduce the novel concept of a self-generating system.

Chapter Eight: formulates a "dynamical law for the mind," the cognitive equation. This is a
dynamical iteration on the level of processes and structures rather than numerical variables. It is
argued that complex systems such as minds and languages are attractors for this equation: they
supply the structure overlying the chaos of mental dynamics. Learning, and in particular
language acquisition, are explained in terms of the iteration of the cognitive equation.

Chapter Nine: having discussed linguistic systems and self-generating systems, I introduce a
concept which synthesizes them both. This is the belief system. I argue that belief is, in its very
essence, systematic -- that, just as it makes little sense to talk about the meaning of an isolated
word or phrase, it makes little sense to talk about a single belief, in and of itself. Using examples
from psychology and the history of science, I develop the idea that a belief system is a
structured transformation system, fairly similar in construction to a language.

And in this context, I consider also the question of the quality of a belief system. If one takes
the system-theoretic point of view, then it makes little sense to talk about the "correctness" or
"incorrectness" of a single belief. However, it is possible to talk about a productive or
unproductive belief system. Complex systems thinking does not prohibit normative judgements
of beliefs, it just displaces these judgements from the individual-belief level to the level of belief
systems.

Chapter Ten: continuing the analysis of belief, I put forth the argument that belief systems
are functionally and structurally analogous to immune systems. Just as immune systems protect
bodies from infections, belief systems protect expensive, high-level psychological procedures
from input. A belief permits the mind to deal with something "automatically," thus protecting
sophisticated, deliberative mental processes from having to deal with it. In this context, I discuss
the Whorfian/Nietzschean hypothesis that self and external reality itself must be considered as
belief systems.

Next, I propose that beliefs within belief systems can survive for two different reasons:

a) because they are useful for linguistic systems such as logic, or






b) because they are involved in a group of beliefs that mutually support each other regardless
of external utility: i.e., because they are in themselves attractors of the cognitive equation

Good reasoning, I argue, is done by logical systems coupled with belief systems that support
themselves mainly by process (a). On the other hand, faulty reasoning is done by logical systems
coupled with belief systems that support themselves mainly by process (b).

Chapter Eleven: the relation between mind and reality is discussed from several different
perspectives. First it is argued that self and reality are belief systems. Then hyperset theory and
situation semantics are used to give a mathematical model of the universe in which mind and
reality reciprocally contain one another. Finally, I present a series of philosophically suggestive
speculations regarding the relation between psychology and quantum physics.

Chapter Twelve: the phenomenon of dissociation is used to integrate the ideas of the
previous chapters into a cohesive model of mental dynamics. It is argued that minds naturally
tend to separate into partially disconnected subnetworks, with significantly independent
functionality. This sort of dissociation has traditionally been associated with mental disorders
such as multiple personality disorder and post-traumatic stress syndrome. However, I argue that
it is in fact necessary for normal, effective logical thought. For the competition of dissociated
personality networks provides a natural incentive for the creation of self-sustaining belief
systems -- which are the only type of belief systems capable of supporting creative deduction.

As well as supplying a new understanding of human personality, this idea also gives rise to a
design for a new type of computer program: the A-IS, or "artificial intersubjectivity," consisting
of a community of artificial intelligences collectively living in and creating their own "virtual"
world. It is suggested that A-IS represents the next level of computational self-organization, after
artificial intelligence and artificial life.



Chapter Two

PATTERN AND PREDICTION

Language, thought and reality form an inseparable triad. Each one is defined by the others;
you can't understand any one of them until you have understood the other two. But in order to
speak about this triad, I must nonetheless begin somewhere. I will begin with thought, mainly
because this is the topic with which most of my previous writings have been concerned. In this
chapter I will review the model of mind presented in my earlier works, embellishing where
necessary and placing an emphasis on those aspects that are most relevant to the task ahead.

The key phrases for understanding this model of mind are pattern, process, and global
structure. The mind is analyzed as a network of regularities, habits, patterns. Each pattern takes
the form of a process for acting on other mental processes. And the avenues of access joining
these processes adhere roughly to a specific global structure called a dual network. This is an
abstract, computational way of looking at the mind. But it fits in well with the qualitative nature
of current neurological data. And, as we shall see, it gives a great deal of insight into many
concrete issues regarding the human mind.



2.1. THE LOGIC OF PATTERN

Pattern-symbolic expressions are exact, as mathematics is, but are not quantitative. They do
not refer ultimately to number and dimension, as mathematics does, but to pattern and structure.
Nor are they to be confused with the theory of groups or symbolic logic, though they may be in
some ways akin.

-- Benjamin Lee Whorf

Before I can talk about the structure of the mind, I must first develop an appropriate
vocabulary. In this section and the next, following The Structure of Intelligence, I will present a
general mathematical vocabulary for discussing structure. These ideas, while abstract and
perhaps rather unexciting in themselves, are essential to the psychological ideas of the following
sections.

Before getting formal, let us first take a quick intuitive tour through the main concepts to be
discussed. The natural place to begin is with the concept of pattern. I define a pattern, very
simply, as a representation of something as something simpler.

A good example is computer image generation. Suppose one wants to tell one's PC to put a
certain picture up on the screen. There are many ways to do this. One is to write a program
telling the computer exactly what color to make each little pixel (each dot) on the screen. But this
makes for a very long program -- there are around twenty thousand pixels on the average screen.

A better way to do it is to come up with some algorithm that exploits the internal structure of
the picture. For instance, if one is dealing with a figure composed of four horizontal stripes,
alternatingly black and white, it is easy to tell the computer "fill in the top quarter white, the next
quarter down black, the next quarter down white, and the bottom quarter black." This program
will be much, much shorter than the program giving a pixel-by-pixel description of the picture. It
is a pattern in the picture.
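
To illustrate the contrast (the code is mine, and the dimensions are arbitrary), here are the two
kinds of program side by side; the second exploits the stripe structure and is drastically shorter:

    # Two programs generating the same four-stripe picture.

    WIDTH, HEIGHT = 100, 100
    WHITE, BLACK = 0, 1

    # Program 1 would list all WIDTH * HEIGHT pixel values explicitly --
    # thousands of literal entries, roughly as long as the picture itself.

    # Program 2 exploits the internal structure: row r lies in quarter
    # (4 * r) // HEIGHT, and the even-numbered quarters are white.
    image = [[WHITE if (4 * r) // HEIGHT % 2 == 0 else BLACK
              for _ in range(WIDTH)]
             for r in range(HEIGHT)]

    # One sample pixel per quarter: white, black, white, black.
    print([image[r][0] for r in (0, 30, 60, 90)])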

The same approach works with more complicated pictures, even photographs of human faces.
Michael Barnsley (1989), using fractal image compression techniques, has given a very short
program which generates realistic portraits and landscapes. In general, computer graphics experts
know how to write short programs to generate very close approximations to all ordinary pictures
-- houses, people, dogs, clouds, molecules,.... All of these things have a certain internal structure,
which the clever and knowledgeable programmer can exploit.

A screen filled with static, on the other hand, has no internal structure, and there is no short-
cut to generating it. One can rapidly generate "pseudo-random" static that will look to the human




Get any book for free on: www.Abika.com
18
CHAOTIC LOGIC


eye like random static, but one will not be getting a close approximation to the particular screen
of static in question.

In general, a pattern is a short-cut -- a way of getting some entity that is in some sense simpler
than the entity itself. A little more formally, suppose the process y leads to the entity x. Then y
is a pattern in x if the complexity of x exceeds the complexity of y.

2.1.1. Structure

From pattern, it is but one small step to structure. The structure of an entity may be defined as
the set of all patterns in that entity. For instance, in a figure consisting of a circle next to a
square, there are at least two patterns -- the program for generating the circle and the program for
generating the square.

Next, consider a picture of 100 concentric circles. Cut the picture in half to form two parts, A
and B. Neither of the two parts A or B contains a pattern involving concentric circles. But the
combination of the two does! A pattern has emerged from the combination of two entities. In
general, the emergence between two entities A and B may be defined as the set of all processes
that are patterns in the combination of A and B but not in either A or B individually.
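
These ideas can be played with computationally. The sketch below uses the compressed length
produced by zlib as a crude, computable stand-in for complexity (an illustration only; the formal
measure developed in Section 2.2, algorithmic information, is not computable in general):

    import zlib

    def c(s):
        """Approximate complexity: length of the compressed form of s."""
        return len(zlib.compress(s, 9))

    A = bytes(range(256)) * 2    # one half of an entity
    B = bytes(range(256)) * 2    # the other half, with the same regularity
    AB = A + B                   # the combined entity

    # The pattern "the second half repeats the first" exists only in the
    # combination, so c(AB) falls far below c(A) + c(B) -- a crude
    # analogue of emergence between A and B.
    print(c(A), c(B), c(A) + c(B), c(AB))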

In all this talk about pattern, one technical point repeatedly arises. Two processes that are both
patterns in the same entity may provide different degrees of simplification. The intensity of a
process relative to a given entity, defined formally in the following section, is a measure of how
much the process simplifies the entity -- how strongly the process is a pattern in the entity. It has
to do with the ratio of the complexity of the process to the complexity of the entity.

If one considers each pattern to have an intensity, then the structure of an entity becomes what
is known as a "fuzzy set." It contains all the patterns in the entity, but it contains some more
"intensely" than others. And, similarly, the emergence between two entities becomes a fuzzy set.

The structural distance between two entities A and B may then be defined quite naturally as
the total intensity of all the patterns that are either in A but not B, or in B but not A. This measures
how much structure differentiates A from B. Thus, for instance, the structural distance between
two random entities would be zero, since there would be no structure in either entity -- the
amount of structure differentiating two structureless entities is zero.

These concepts may be used to measure the total amount of structure in an entity -- a
quantity which I call the structural complexity. The definition of this quantity is somewhat
technical, but it is not hard to describe the basic idea. If all the patterns in an entity were totally
unrelated to one another (as, perhaps, with the picture of the circle next to the square discussed
above), then one could define the structural complexity of an entity as the sum of the
complexities of all its patterns. But the problem is, often all the patterns will not be totally
unrelated to each other -- there can be "overlap." Basically, in order to compute the structural
complexity of an entity, one begins by lining up all the patterns in the entity: pattern one, pattern
two, pattern three, and so on. Then one starts with the complexity of one of the patterns in the
entity, adds on the complexity of whatever part of the second pattern was not already part of the
first pattern, then adds on the complexity of whatever part of the third pattern was not already
part of the first or second patterns, and so on.

These concepts, as described here, are extremely general. Very shortly I will outline a very
specific way of developing these concepts in the context of binary sequences and computing
machines. A few chapters later, this analysis of the complexity of sequences and machines will
be extended to deal with mathematical entities called hypersets. However, these technical
specifications should not cause one to lose sight of the extreme generality of the concepts of
"pattern," "structure" and "emergence." These concepts, in and of themselves, have nothing to do
with sequences, machines, or hypersets -- they are completely general and philosophical in
nature. It is essential to have concrete models to work with, but one must always keep in mind
that these models are only secondary tools.

Finally, one comment is in order regarding complexity. I have been speaking of the
complexity of an entity as though it were an "objectively" defined quantity, which an entity
possessed in itself independently of any observer. But the story does not end here. One may
define a process to be a pattern in A relative to a given observer if the result of the process is A,
and if the process appears simpler than A to that observer.

2.2. PATTERN AND INFORMATION (*)

Now it is time to make the concept of pattern more precise -- to give a specific, "objective"
measure of complexity. The best way to do this is with the obscure but powerful branch of
mathematics known as algorithmic information theory.

The concept of algorithmic information was conceived in the 1960's, by Kolmogorov (1965),
Chaitin (1974) and Solomonoff (1964). Where U is a universal Turing machine understood to
input and output potentially infinite binary sequences, and x is a finite binary sequence, it may be
defined as follows:

Definition: The algorithmic information I(x) contained in x is the length of the shortest self-
delimiting program for computing x on U given the (infinite) input string ...000...

A self-delimiting program is, roughly speaking, a program which explicitly specifies its own
length; this restriction to self-delimiting programs is desirable for technical reasons which we
need not go into here (Chaitin, 1974). It is not hard to show, using simulation arguments, that as
the length of x approaches infinity, the quantity I(x) becomes machine independent.

Bennett (1982) has criticized this definition, on the grounds that what it really measures is
"degree of randomness" and not "degree of structure." It assigns a random sequence maximum
complexity, and a completely repetitive sequence like ...000... minimum complexity. He defines
the logical depth of a binary sequence x, relative to a universal Turing machine U, to be the
running time on U of the shortest self-delimiting program which computes x on U from the
(infinite) input ...000... .






The sequence consisting of the first billion digits of pi has low algorithmic information, but,
apparently, high logical depth. It can be proved that, as n goes to infinity, the vast majority of
binary sequences of length n have near-maximal algorithmic information and logical depth.

Moshe Koppel (1987) has formulated a third measure of complexity, which he calls
"sophistication" or "meaningful complexity." He has shown that for large n its behavior is similar
to that of logical depth. An approximate opposite of the sophistication of a sequence is given by
the crudity defined as follows. Instead of simply considering a program y for computing the
sequence x, let us consider a program y that computes the sequence x from the input sequence z.
Then the crudity of a pair (y,z) may be defined as |z|/|y|, where |z| denotes the length of the
sequence z and |y| denotes the length of the sequence y.

SI discusses in detail the qualitative properties of sophistication, algorithmic information,
logical depth, crudity and a number of hybrid complexity measures. It also introduces a
completely new measure of complexity, called the structural complexity. Structural complexity
differs significantly from all of the algorithmic information based complexity measures
discussed above. It does not refer to one distinguished way of computing a sequence -- the
shortest, the most "sophisticated," etc. Rather, it considers all possible economical strategies for
computing a sequence, where an economical strategy for computing x -- or more succinctly a
pattern in x -- may be defined as follows, given a fixed universal Turing machine U.

Definition: A pattern in x is a self-delimiting program y which computes x on U from the
input ...000z000... (it is understood that z extends to the right of the tape head of U), so that the
length of y plus the length of z is less than the length of x. Where | | denotes length, this may be
written

|y| + |z| < |x|

The intensity of (y,z) in x is the quantity

1 - (|y| + |z|)/ |x|

(note that intensity is always positive if (y,z) is actually a pattern in x, and it never exceeds one).

Note that no generality would be lost if z were set equal to 0, or some other constant value.
However, in many applications the (y,z) notation is useful.
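
A brief illustrative computation (the strings below merely stand in for the program y, the
auxiliary input z and the target sequence x) may make the arithmetic concrete:

    def is_pattern(y, z, x):
        """(y, z) is a pattern in x iff |y| + |z| < |x|."""
        return len(y) + len(z) < len(x)

    def intensity(y, z, x):
        """1 - (|y| + |z|)/|x|: positive for genuine patterns, at most one."""
        return 1.0 - (len(y) + len(z)) / len(x)

    x = "01" * 500               # a 1000-symbol target sequence
    y = "print('01' * 500)"      # a short "program" that would yield x
    z = ""                       # no auxiliary input needed

    print(is_pattern(y, z, x))            # True
    print(round(intensity(y, z, x), 3))   # close to 1: a strong pattern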

We have introduced algorithmic information as an "objective" complexity measure, which
makes the theory of pattern concrete. But this "objective" measure may be used to generate other,
"subjective" complexity measures. To see how this can be done, assume some standard
"programming language" L, which assigns to each program y a certain binary sequence L(y).
The specifics of L are irrelevant, so long as it is computable on a Turing machine, and it does not
assign the same sequence to any two different programs. Where U is a universal Turing machine
and v and w are binary sequences, one may then propose:






Definition: The relative information I(v|w), relative to U and L, is the length of the shortest
self-delimiting program which computes v, on U, from input L(y), where y is a minimal-length
self-delimiting program for computing w on U.

Obviously, if v and w have nothing in common, I(v|w)=I(v). And, on the other hand, if v and
w have a large common component, then both I(v|w) and I(w|v) are very small. If one sets |y| =
I(y|x), one has a measure of complexity relative to x.

2.2.1. Fuzzy Sets and Infons

Intuitively, a "fuzzy set" is a set in which membership is not "either/or" but gradual. A good
example is the set of tall people. Being nearly six foot, I belong to the set of tall people to a
somewhat higher degree than my friend Mike who is five nine, and to a much higher degree than
Danny DeVito, but to a much lower degree than Magic Johnson.

Formally, a fuzzy subset E of some universal set is specified by a function dE from that
universal set into an interval [0,a]. Where x is in the universal set, I will write dE(x) for the
degree of membership of x in E. dE(x)=0 means that x is not an element of E. Unless otherwise
specified, the reader should assume that a=1, in which case dE(x)=1 means that x is completely
an element of E. The usual algebra placed on fuzzy sets is

d(E union F)(x) = max{ dE(x), dF(x) },

d(E intersect F)(x) = min{ dE(x), dF(x) }

but I shall not require these operations (Kandel, 1986). The only operation I will require is the
fuzzy set distance |E - F|, defined for finite sets as the sum over all x of the difference |dE(x)-
dF(x)|.
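
In Python, with fuzzy sets represented as dictionaries from elements to membership degrees
(an illustrative encoding), the distance looks like this:

def fuzzy_distance(dE: dict, dF: dict) -> float:
    # Sum over all x of |dE(x) - dF(x)|; absent elements have degree 0.
    support = set(dE) | set(dF)
    return sum(abs(dE.get(x, 0.0) - dF.get(x, 0.0)) for x in support)

tall_A = {"Ben": 0.8, "Mike": 0.6, "Danny": 0.1}
tall_B = {"Ben": 0.8, "Mike": 0.5, "Magic": 1.0}
print(fuzzy_distance(tall_A, tall_B))   # 0.1 + 0.1 + 1.0 = 1.2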

In Chapter Three I will introduce a few ideas from situation semantics (Barwise and Perry,
1981; Barwise, 1989), which speaks about infons and situations. I will define an infon as a
fuzzy set of patterns, and will make sporadic use of the following quasi-situation-theoretic
notations:

s |-- i means that i is a fuzzy set of patterns in s

s |-- i //x means that i is a fuzzy set of patterns in s, where complexity is measured
relative to x, i.e. by I(.|x)

s |-- i //x to degree a, (s,i,x,a), and d(s,i,x) = a all mean that the intensity of i in s,
according to the complexity measure I(.|x), is a. Here the intensity of i in s may be defined as
the average over all w in i of the product [intensity of w in s] * [degree of membership of w in i].
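
In the same dictionary encoding used above, the averaged intensity might be sketched as
follows (the intensities of the individual patterns in s are assumed to be given in advance):

def infon_intensity(i: dict, intensity_in_s: dict) -> float:
    # Average over w in i of [intensity of w in s] * [membership of w in i].
    if not i:
        return 0.0
    return sum(intensity_in_s.get(w, 0.0) * deg
               for w, deg in i.items()) / len(i)

i = {"w1": 1.0, "w2": 0.5}                 # an infon: a fuzzy set of patterns
intensity_in_s = {"w1": 0.9, "w2": 0.4}    # stipulated intensities in s
print(infon_intensity(i, intensity_in_s))  # (0.9*1.0 + 0.4*0.5)/2 = 0.55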

In later chapters, I will call a quadruple such as (s,i,x,a) a belief. x is the belief-holder, i is the
entity believed, s is what i is believed about, and a is the degree to which it is believed.

2.2.2. Structure and Related Ideas

As in Section 2.1, having formulated the concept of pattern, the next logical step is to define
the structure of an entity to be the set of all patterns in that entity. This may be considered as a
fuzzy set -- the degree of membership of w in the structure of x is simply the intensity of w as a
pattern in x. But for now I shall ignore this fuzziness, and consider structure as a plain old set.

The structural complexity of an entity, then, measures the size of the structure of an entity.
This is a very simple concept, but certain difficulties arise when one attempts to formulate it
precisely. An entity may manifest a number of closely related patterns, and one does not wish to
count them all separately. In words: when adding up the sizes of all the patterns in x, one must
adhere to the following process: 0) put all the patterns in a certain order, 1) compute the size of
the first pattern, 2) compute the size of that part of the second pattern which is not also part of
the first pattern, 3) compute the size of that part of the third pattern which is not also part of the
first or second patterns, and so on. One may then define the size |S| of a set S as the average,
over all orderings of the elements of S, of the number obtained by this procedure.
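
The ordering-and-averaging procedure can be sketched directly in Python if one models each
pattern as a set of primitive features, so that overlap is well defined -- an illustrative
assumption; the formal treatment is in SI:

from itertools import permutations

def set_size(patterns) -> float:
    # Average, over all orderings, of the sum of each pattern's "new" size.
    orderings = list(permutations(patterns))
    total = 0.0
    for order in orderings:
        seen = set()
        for p in order:
            total += len(p - seen)   # count only the part not yet counted
            seen |= p
    return total / len(orderings)

a = frozenset({"f1", "f2", "f3"})
b = frozenset({"f2", "f3", "f4"})
print(set_size([a, b]))   # 4.0: the two shared features are counted only once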

Where St(x) is the set of all patterns in x, one may now define the structural complexity of x
to be the quantity |St(x)|. This is the size of the set of all patterns in x -- or, more intuitively, the
total amount of regularity observable in x. It is minimal for a random sequence, and very small
for a repetitive sequence like 000...0. It deems 0101010...10 slightly more complex than 000...0,
because there are more different economical ways of computing the former (for instance, one
may repeat 10's, or one may repeat 01's and then append a 0 at the end). It considers all the
different ways of "looking at" a sequence.

For future reference, let us define the structure St(D;r,s) of a discrete dynamical system D on
the interval (r,s) as the set of all approximate patterns in the ordered tuple [D(r),...,D(s)], where
D(t) denotes the state of D at time t.

And, finally, let us define the emergence Em(x,y) of two sequences x and y as the set St(xy) -
St(x) - St(y), where xy refers to the sequence obtained by juxtaposing x and y. This measures
what might be called the gestalt of x and y -- it consists of those patterns that appear when x and
y are considered together, but not in either x or y separately. This is an old idea in psychology
and it is now popping up in anthropology as well. For instance, Lakoff (1987, pp. 486-87) has
found it useful to describe cultures in terms of "experiential gestalts" -- sets of experiences that
occur so regularly that the whole collection becomes somehow simpler than the sum of its parts.
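
As a toy Python illustration, with a deliberately simple stand-in for St (not the full pattern
formalism):

def emergence(St, x: str, y: str) -> set:
    # Em(x,y) = St(xy) - St(x) - St(y)
    return St(x + y) - St(x) - St(y)

def toy_St(s: str) -> set:
    # Stand-in "structure": the length-2 substrings occurring at least twice.
    return {s[i:i+2] for i in range(len(s) - 1) if s.count(s[i:i+2]) >= 2}

print(emergence(toy_St, "abc", "abc"))   # {'ab', 'bc'}: repetition is a pattern
                                         # of the whole, not of either part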

2.3. STRUCTURE AND CHAOS






Before diving into computational psychology, let us briefly return to a topic raised in the
Introduction: the meaning of "chaos." In Chapter Three it will be shown that the concept of chaos
is related quite closely to certain psychological matters, such as the nature of intelligence and
induction.

In mathematics, "chaos" is typically defined in terms of certain technical properties of
dynamical systems. For instance, Devaney (1988) defines a time-discrete dynamical system to be
chaotic if it possesses three properties: 1) sensitivity to initial conditions, 2) topological
transitivity, and 3) density of periodic points. On the other hand, the intuitive concept of chaos --
apparent randomness emergent from underlying determinism -- seems to have a meaning that
goes beyond formal conditions of this sort. The mathematical definitions approximate the idea of
chaos, but do not capture it.

In physical and mathematical applications of chaos theory, this is only a minor problem. One
identifies chaos intuitively, then uses the formal definitions for detailed analysis. But when one
seeks to apply chaos theory to psychological or social systems, the situation becomes more acute.
Chaos appears intuitively to be present, but it is difficult to see the relevance of conditions such
as topological transitivity and density of periodic points. Perhaps these conditions are met by
certain low-dimensional subsystems of the system in question, but if so, this fact would seem to
have nothing to do with the method by which we make the educated guess that chaos is present.
"Chaos" has a pragmatic meaning that has transcends the details of point-set topology.

2.3.1. Structural Predictability

In this section I will outline an alternative point of view. For starters, I define a temporal
sequence to be structurally predictable if knowing patterns in the sequence's past allows one to
roughly predict patterns in the sequence's future. And I define a static entity to be structurally
predictable if knowing patterns in one part of the entity allows one to predict patterns in other
parts of the entity. These two notions, finally, allow us to define an environment to be
structurally predictable if it is somewhat structurally predictable at each time, as well as
somewhat structurally predictable over time.

One may give this definition a mathematical form, modeled on the standard epsilon-delta
definition of continuity, but I will omit that here. The key point is that, if an environment is
structurally predictable, then patterns of higher degree have, in a certain sense, a higher "chance"
of being found repeatedly. This shows that the assumption of a structurally predictable
environment implies Charles S. Peirce's declaration that the world possesses a "tendency to take
habits." The more prominent and rigid a habit is, the more likely it is to be continued.

It is interesting to think about the relationship between structural predictability and chaos. For
example, one key element of chaotic behavior is sensitive dependence on initial conditions (or,
in physicists' language, positive Liapunov exponent). Sensitive dependence means, informally,
that slightly vague knowledge of the past leads to extremely vague knowledge of the future. In
practical terms, if a system displays sensitive dependence, this means that it is hopeless to try to
predict the exact value of its future state. Structural predictability is compatible with sensitive
dependence. It is quite possible for a system to possess sensitive dependence on initial
conditions, so that one can never accurately predict its future state, but still display enough
regularity of overall structure that one can roughly predict future patterns. Intuitively, this
appears to be the case with complex systems in the real world: brains, ecosystems, atmospheres.
Exact prediction of these systems' behavior is impossible, but rough prediction of the regularities
in their behavior is what we do every day.
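
A standard numerical illustration (not drawn from the text) uses the chaotic logistic map: two
nearly identical initial conditions diverge completely, even though the dynamic is strictly
deterministic:

def logistic(x: float) -> float:
    return 4.0 * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-10        # two nearly indistinguishable initial states
for _ in range(60):
    a, b = logistic(a), logistic(b)
print(abs(a - b))              # of order 1: detailed prediction is hopeless

Yet the coarse statistics of such a trajectory -- its long-run distribution of values over [0,1] --
remain stable and predictable, which is exactly the coexistence of sensitive dependence with
structural predictability described above.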

But sensitive dependence does not, in itself, make chaos -- it is only one element of chaotic
behavior. There are many different definitions of chaos, but they all center around the idea that a
chaotic dynamical system is one whose behavior is deterministic but appears random.

A pattern-theoretic definition of chaos is as follows: an entity x is structurally chaotic if there
are patterns in x, but if the component parts of x have few patterns besides those which are also
patterns in the whole. For instance, consider the numerical sequence consisting of the first
million digits of pi: 3.1415926535... There are patterns in this sequence -- every
mathematical scheme for generating the expansion of pi is such a pattern. But if one takes a
subsequence -- say digits 100000 through 110000 -- one is unlikely to find any additional
patterns there. There may be some extra patterns here and there -- say, perhaps, some strings of
repeated digits -- but these won't amount to much.

Structural chaos is a weak kind of chaos. All the commonly studied examples of chaotic
dynamical systems have the property that, if one records their behavior over time, one obtains a
structurally chaotic series (the easiest way to see this is to use symbolic dynamics). But on the
other hand, the interesting structurally predictable series are not structurally chaotic.

2.3.2. Attractors, Strange and Otherwise

To probe more deeply into the relation between chaos and prediction, one must consider the
notion of an "attractor." Let us begin with the landmark work of Walter Freeman (1991) on the
sense of smell. Freeman has written down a differential equations model of the olfactory cortex
of a reptile (very similar to that of a human), and studied these equations via
computer simulations. The result is that the olfactory cortex is a dynamical system which has an
"attractor with wings." Recall that an attractor for a dynamical system is a region of the space
of possible system states with the property that:

1) states "sufficiently close" to those in the attractor lead eventually to states within the
attractor

2) states within the attractor lead immediately to other states within the attractor.

An attractor which consists of only one state is called a "fixed point." It is a "steady state" for
the system -- once the system is close to that state, it enters that state; and once the system is in
that state, it doesn't leave. On the other hand, an attractor which is, say, a circle or an ellipse is
called a "limit cycle." A limit cycle represents oscillatory behavior: the system leaves from one
state, passes through a series of other states, then returns to the first state again, and so goes
around the cycle again and again.
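
All three kinds of behavior can be observed within a single family of systems, the logistic map
x -> rx(1-x), at different parameter values -- a standard textbook example, not Freeman's model:

def iterate(r: float, x: float = 0.3, n: int = 1000) -> float:
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

print(iterate(2.8))                         # fixed point: settles near 0.643
print(iterate(3.2), iterate(3.2, n=1001))   # limit cycle: alternates between two values
print(iterate(4.0), iterate(4.0, n=1001))   # chaotic: keeps fluctuating irregularly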






And a "strange attractor," finally, is a kind of attractor which is neither a fixed point nor a limit
cycle but rather a more complex region. Behavior of the system within the set of states
delineated by the "strange attractor" is neither steady nor oscillatory, but continually fluctuating
in a chaotic manner. More specific definitions of "strange attractor" can be found in the technical
literature -- for instance, "a topologically transitive attractor" or "a topologically transitive
attractor with a transversal homoclinic orbit." But, like the formal definitions of "chaos," these
characterizations seem to skirt around the essence of the matter.

Freeman found that the olfactory cortex has a strange attractor -- a fixed set of states, or region
of state space, within which it varies. But this strange attractor is not a formless blob -- it has a
large number of "wings," protuberances jutting out from it. Each "wing" corresponds to a certain
recognized smell. When the system is presented with something new to smell, it wanders
"randomly" around the strange attractor, until it settles down and restricts its fluctuations to one
wing of the attractor, representing the smell which it has decided it is perceiving.

This is an excellent intuitive model for the behavior of complex self-organizing systems. Each
wing of Freeman's attractor represents a certain pattern recognized -- smell is chemical, it is
just a matter of recognizing certain molecular patterns. In general, the states of a complex self-
organizing system fluctuate within a strange attractor that has many wings, sub-wings, sub-sub-
wings, and so on, each one corresponding to the presence of a certain pattern or collection of
patterns within the system. There is chaotic, pseudo-random movement within the attractor, but
the structure of the attractor itself imposes a rough global predictability. From each part of the
attractor the system can only rapidly get to certain other parts of the attractor, thus imposing a
complex structural predictability that precludes structural chaos.

In other words, the structure of the dynamics of a complex system consists of the patterns in
its strange attractor. The strange attractors which one usually sees in chaos texts, such as the
Lorenz attractor, have very little structure to them; they are not structurally complex. But that is
because these systems are fundamentally quite simple despite their chaos. A truly complex
system has a highly patterned strange attractor, reflecting the fact that, in many cases, states
giving rise to pattern X are more likely to lead to states giving rise to pattern Y than they are to
states giving rise to pattern Z. The states within the attractor represent patterned states; the
patterns of the attractor represent patterns of transition. And these two sets of patterns are not
unrelated.



Chapter Three

THE STRUCTURE OF THOUGHT

Hundreds of thousands of pages have been written on the question: what is mind? Here I will
dispense with the question immediately. In good mathematical form, I will define it away. A
mind is the structure of an intelligent system.






This definition has its plusses and minuses. One may endlessly debate whether it captures
every nuance of the intuitive concept of mind. But it does situate mind in the right place: neither
within the physical world, nor totally disconnected from the physical world. If a mind is the
structure of a certain physical system, then mind is made of relations between physical entities.

The question, then, is whether this system of relations that is mind has any characteristic
structure. Are all minds somehow alike? Locomotion can be achieved by mechanisms as
different as legs and the wheel -- is this kind of variety possible for the mechanisms of
intelligence? I suggest that it is not. There is much room for variety, but the logic of intelligence
dictates a certain uniformity of overall structure. The goal of this chapter is to outline what this
uniform global structure is.

Of course, one cannot reasonably define mind in terms of intelligence unless one has a
definition of intelligence at hand. So, let us say that intelligence is the ability to optimize
complex functions of complex environments. By a "complex environment," I mean an
environment which is unpredictable on the level of details, but somewhat structurally
predictable. And by a "complex function," I mean a function whose graph is unpredictable on
the level of details, but somewhat structurally predictable.

The "complex function" involved in the definition of intelligence may be anything from
finding a mate to getting something to eat to building a transistor or browsing through a library.
When executing any of these tasks, a person has a certain goal, and wants to know what set of
actions to take in order to achieve it. There are many different possible sets of actions -- each
one, call it X, has a certain effectiveness at achieving the goal.

This effectiveness depends on the environment E, thus yielding an "effectiveness function"
f(X,E). Given an environment E, the person wants to find the X that maximizes f -- the X maximally
effective at achieving the goal. But in reality, one is never given complete information about the
environment E, either at present or in the future (or in the past, for that matter). So there are two
interrelated problems: one must estimate E, and then find the optimal X based on this estimate.
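
The two coupled problems can be caricatured in a few lines of Python; the effectiveness
function and the estimation scheme here are both illustrative assumptions, chosen only to
display the estimate-then-optimize structure:

import random

def f(X: float, E: float) -> float:
    # Hypothetical effectiveness: the best action tracks the environment.
    return -(X - E) ** 2

# Step 1: estimate E from noisy observations of the environment.
observations = [1.0 + random.gauss(0, 0.1) for _ in range(20)]
E_hat = sum(observations) / len(observations)

# Step 2: search for the X that maximizes f under the estimated E.
candidates = [i / 100 for i in range(-200, 201)]
X_best = max(candidates, key=lambda X: f(X, E_hat))
print(E_hat, X_best)   # X_best lands near the estimated environment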

If you have to optimize a function that depends on a changing environment, you'd better be
able to predict at least roughly what that environment is going to do in the future. But on the
other hand, if the environment is too predictable, it doesn't take much to optimize functions that
depend on it. The interesting kind of environment is the kind that couples unpredictability on the
level of state with rough predictability on the level of structure. That is: one cannot predict the
future state well even from a good approximation to the present and recent past states, but one
can predict the future structure well from a good approximation to the present and recent past
structure.

This is the type of partial unpredictability meant in the formulation "Intelligence is the ability
to optimize complex functions of partially unpredictable environments." In environments
displaying this kind of unpredictability, prediction must proceed according to pattern
recognition. An intelligent system must recognize patterns in the past, store them in memory,
and construct a model of the future based on the assumption that some of these patterns will
approximately continue into the future.




Is there only one type of structure capable of doing this? I claim the answer is yes.

3.1. THE PERCEPTUAL-MOTOR HIERARCHY

My hypothesis is a simple one: every mind is a superposition of two structures: a
structurally associative memory (also called "heterarchical network") and a multilevel control
hierarchy ("perceptual-motor hierarchy" or "hierarchical network"). Both of these structures are
defined in terms of their action on certain patterns. By superposing these two distinct structures,
the mind combines memory, perception and control in a creative and effective way.

Let us begin with multilevel control. To solve a problem by the multilevel methodology, one
divides one's resources into a number of levels -- say, levels ...,3,2,1,0. Level 0 is the "bottom
level", which contains a number of problem-solving algorithms. Each process on level N
contains a number of subsidiary processes on levels k = 0, 1, ..., N-1 -- it tells them what to do,
and in return they give it feedback as to the efficacy of its instructions.
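
A skeletal Python rendering of the methodology may be helpful; every name and the feedback
rule here are illustrative stand-ins, and the corporate example below gives the same structure
in human terms:

class Process:
    # A process above level 0 instructs its subsidiaries and
    # aggregates the feedback they return.
    def __init__(self, level, children=()):
        self.level = level
        self.children = list(children)

    def run(self, instruction):
        if self.level == 0:
            return solve(instruction)          # a bottom-level algorithm
        feedback = [c.run(refine(instruction)) for c in self.children]
        return sum(feedback) / len(feedback)   # efficacy reported upward

def solve(instruction):
    return float(len(instruction) % 3)         # stand-in problem solver

def refine(instruction):
    return instruction + "!"                   # stand-in specialization

leaves = [Process(0) for _ in range(4)]
middle = [Process(1, leaves[:2]), Process(1, leaves[2:])]
top = Process(2, middle)
print(top.run("find food"))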

This is a simple idea of very broad applicability. One clear-cut example is the hierarchical
power structure of the large corporation. Level 0 consists of those employees who actually
produce goods or provide services for individuals outside the company. Level 1 consists of
foremen and other low-level supervisors. And so on. The highest level comprises the corporate
president and the board of directors.

3.1.1. Perception

A vivid example is the problem of perception. One has a visual image P, and one has a large
memory consisting of various images z1, z2,..., zM. One wants to represent the perceived image in
terms of the stored images. This is a pattern recognition problem: one wants to find a pair of the
form (y,z), where y*z=P and z is a subset of {z1,...,zM}. In this case, the multilevel methodology
takes the form of a hierarchy of subroutines. Subroutines on the bottom level -- level 0 -- output
simple patterns recognized in the input image P. And, for i>0, subroutines on level i output
patterns recognized in the output of level i-1 subroutines. In some instances a subroutine may
also instruct the subroutines on the level below it as to what sort of patterns to look for.
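
A toy version of this hierarchy of subroutines, with trivial placeholder "detectors" that do not
model real visual processing:

def level0(image: str) -> list:
    # Report the simplest patterns: adjacent repeated "pixels".
    return [image[i] * 2 for i in range(len(image) - 1)
            if image[i] == image[i + 1]]

def level1(features: list) -> list:
    # Look for patterns in the output of level 0: recurring features.
    return sorted(f for f in set(features) if features.count(f) > 1)

image = "0011100"
print(level0(image))           # ['00', '11', '11', '00']
print(level1(level0(image)))   # ['00', '11']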

It appears that this is one of the key strategies of the human visual system. Two decades ago,
Hubel and Wiesel (Hubel, 1988) demonstrated that the brain possesses specific neural clusters
which behave as subroutines for judging the orientation of line segments. Since that time, many
other neural clusters executing equally specific visual "subroutines" have been found. As well as
perhaps being organized in other ways, these clusters appear to be organized in levels.

At the lowest level, in the retina, gradients are enhanced and spots are extracted -- simple
mechanical processes. Next come simple moving edge detectors. The next level up, the second
level up from the retina, extracts more sophisticated information from the first level up from the
retina -- and so on. Admittedly, little is known about the processes two or more levels above the
retina. It is clear, however, that there is a very prominent hierarchical structure, although it may
be supplemented by more complex forms of parallel information processing (Ruse and Dubose,
1985).
