Moral Sentiments and
Material Interests
Economic Learning and Social Evolution
General Editor
Ken Binmore, Director of the Economic Learning and Social
Evolution Centre, University College London

1. Evolutionary Games and Equilibrium Selection, Larry Samuelson, 1997
2. The Theory of Learning in Games, Drew Fudenberg and David K.
Levine, 1998
3. Game Theory and the Social Contract, Volume 2: Just Playing, Ken
Binmore, 1998

4. Social Dynamics, Steven N. Durlauf and H. Peyton Young, editors,
2001

5. Evolutionary Dynamics and Extensive Form Games, Ross Cressman,
2003

6. Moral Sentiments and Material Interests: The Foundations of Cooperation
in Economic Life, Herbert Gintis, Samuel Bowles, Robert Boyd, and
Ernst Fehr, editors, 2005
Moral Sentiments and
Material Interests
The Foundations of
Cooperation in Economic
Life




edited by
Herbert Gintis, Samuel
Bowles, Robert Boyd, and
Ernst Fehr




The MIT Press
Cambridge, Massachusetts
London, England
© 2005 Massachusetts Institute of Technology

All rights reserved. No part of this book may be reproduced in any form by any elec-
tronic or mechanical means (including photocopying, recording, or information storage
and retrieval) without permission in writing from the publisher.

MIT Press books may be purchased at special quantity discounts for business or sales
promotional use. For information, please e-mail special_sales@mitpress.mit.edu or write
to Special Sales Department, The MIT Press, 55 Hayward Street, Cambridge, MA 02142.

This book was set in Palatino on 3B2 by Asco Typesetters, Hong Kong, and was printed
and bound in the United States of America.

Library of Congress Cataloging-in-Publication Data

Moral sentiments and material interests : the foundations of cooperation in economic life
/ edited by Herbert Gintis . . . [et al.].
p. cm. -- (Economic learning and social evolution ; 6)
Includes bibliographical references and index.
ISBN 0-262-07252-1 (alk. paper)
1. Cooperation. 2. Game theory. 3. Economics--Sociological aspects. I. Gintis, Herbert.
II. MIT Press series on economic learning and social evolution ; v. 6.
HD2961.M657 2004
330.01/5193--dc22 2004055175

10 9 8 7 6 5 4 3 2 1
To Adele Simmons who, as President of the John D. and Catherine
T. MacArthur Foundation, had the vision and courage to support
unconventional transdisciplinary research in the behavioral sciences.
Contents




Series Foreword ix
Preface xi

I Introduction 1

1 Moral Sentiments and Material Interests: Origins, Evidence, and Consequences 3
Herbert Gintis, Samuel Bowles, Robert Boyd, and Ernst Fehr

II The Behavioral Ecology of Cooperation 41

2 The Evolution of Cooperation in Primate Groups 43
Joan B. Silk

3 The Natural History of Human Food Sharing and Cooperation: A Review and a New Multi-Individual Approach to the Negotiation of Norms 75
Hillard Kaplan and Michael Gurven

4 Costly Signaling and Cooperative Behavior 115
Eric A. Smith and Rebecca Bliege Bird

III Modeling and Testing Strong Reciprocity 149

5 The Economics of Strong Reciprocity 151
Ernst Fehr and Urs Fischbacher

6 Modeling Strong Reciprocity 193
Armin Falk and Urs Fischbacher

7 The Evolution of Altruistic Punishment 215
Robert Boyd, Herbert Gintis, Samuel Bowles, and Peter J. Richerson

8 Norm Compliance and Strong Reciprocity 229
Rajiv Sethi and E. Somanathan

IV Reciprocity and Social Policy 251

9 Policies That Crowd Out Reciprocity and Collective Action 253
Elinor Ostrom

10 Reciprocity and the Welfare State 277
Christina M. Fong, Samuel Bowles, and Herbert Gintis

11 Fairness, Reciprocity, and Wage Rigidity 303
Truman Bewley

12 The Logic of Reciprocity: Trust, Collective Action, and Law 339
Dan M. Kahan

13 Social Capital, Moral Sentiments, and Community Governance 379
Samuel Bowles and Herbert Gintis

Contributors 399
Index 401
Series Foreword




The MIT Press series on Economic Learning and Social Evolution
reflects the continuing interest in the dynamics of human interaction.
This issue has provided a broad community of economists, psycholo-
gists, biologists, anthropologists, mathematicians, philosophers, and
others with such a strong sense of common purpose that traditional in-
terdisciplinary boundaries have melted away. We reject the outmoded
notion that what happens away from equilibrium can safely be
ignored, but think it no longer adequate to speak in vague terms of
bounded rationality and spontaneous order. We believe the time has
come to put some beef on the table.
The books in the series so far are:

- Evolutionary Games and Equilibrium Selection, by Larry Samuelson (1997). Traditional economic models have only one equilibrium and therefore fail to come to grips with social norms whose function is to select an equilibrium when there are multiple alternatives. This book studies how such norms may evolve.

- The Theory of Learning in Games, by Drew Fudenberg and David Levine (1998). John Von Neumann introduced "fictitious play" as a way of finding equilibria in zero-sum games. In this book, the idea is reinterpreted as a learning procedure and developed for use in general games.

- Just Playing, by Ken Binmore (1998). This book applies evolutionary game theory to moral philosophy. How and why do we make fairness judgments?

- Social Dynamics, edited by Steve Durlauf and Peyton Young (2001). The essays in this collection provide an overview of the field of social dynamics, in which some of the creators of the field discuss a variety of approaches, including theoretical model-building, empirical studies, statistical analyses, and philosophical reflections.

- Evolutionary Dynamics and Extensive Form Games, by Ross Cressman (2003). How is evolution affected by the timing structure of games? Does it generate backward induction? The answers show that orthodox thinking needs much revision in some contexts.

Authors who share the ethos represented by these books, or who
wish to extend it in empirical, experimental, or other directions, are
cordially invited to submit outlines of their proposed books for con-
sideration. Within our terms of reference, we hope that a thousand
flowers will bloom.
Preface




The behavioral sciences have traditionally offered two contrasting ex-
planations of cooperation. One, favored by sociologists and anthro-
pologists, considers the willingness to subordinate self-interest to the
needs of the social group to be part of human nature. Another, favored
by economists and biologists, treats cooperation as the result of the
interaction of selfish agents maximizing their long-term individual material interests. Moral Sentiments and Material Interests argues that a significant fraction of people fit neither of these stereotypes. Rather, they are conditional cooperators and altruistic punishers. We show that a high level of cooperation can be attained when social groups have a sufficient fraction of such types, which we call strong reciprocators, and we
draw implications of this phenomenon for political philosophy and so-
cial policy.
The research presented in this book was conceived in 1997, inspired
by early empirical results of Ernst Fehr and his coworkers at the University of Zürich and the analytical models of cultural evolution pio-
neered by Robert Boyd and Peter Richerson. Behavioral scientists from
several disciplines met at the University of Massachusetts in October
1998 to explore preliminary hypotheses. We then commissioned a se-
ries of papers from a number of authors and met again at the Santa Fe
Institute in March 2001 to review and coordinate our results, which,
suitably revised and updated, together with some newly commissioned
papers, are presented in the chapters below.
This research is distinctive not only in its conclusions but in its meth-
odology as well. First, we rely on data gathered in controlled labora-
tory and field environments to make assertions concerning human
motivation. Second, we ignore the disciplinary boundaries that have
thwarted attempts to develop generally valid analytical models of hu-
man behavior and combine insights from economics, anthropology,
evolutionary and human biology, social psychology, and sociology.
We bind these disciplines analytically by relying on a common lexicon
of game theory and a consistent behavioral methodology.
We would like to thank those who participated in our research
conferences but are not represented in this book. These include Leda
Cosmides, Joshua Epstein, Steve Frank, Joel Guttman, Kevin McCabe,
Arthur Robson, Robert Solow, Vernon Smith, and John Tooby. We
benefitted from the generous financial support and moral encourage-
ment of the John D. and Catherine T. MacArthur Foundation, which
allowed us to form the Network on the Nature and Origins of Norms
and Preferences, to run experiments, and to collect and analyze data
from several countries across five continents. We extend special thanks
to Ken Binmore, who contributed to our first meeting and encouraged
us to place this volume in his MIT Press series, Economic Learning and
Social Evolution, and to Elizabeth Murry, senior editor at The MIT
Press, who brought this publication to its fruition. We extend a special
expression of gratitude to Adele Simmons who, as president of the
MacArthur Foundation, championed the idea of an interdisciplinary
research project on human behavior and worked indefatigably to turn
it into a reality.
I Introduction
1 Moral Sentiments and Material Interests: Origins, Evidence, and Consequences

Herbert Gintis, Samuel Bowles, Robert Boyd, and Ernst Fehr



1.1 Introduction

Adam Smith's The Wealth of Nations advocates market competition as the key to prosperity. Among its virtues, he pointed out, is that competition works its wonders even if buyers and sellers are entirely self-interested, and indeed sometimes works better if they are. "It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner," wrote Smith, "but from their regard to their own interest" (19). Smith is accordingly often portrayed as a proponent of Homo economicus, that selfish, materialistic creature that has traditionally inhabited the economic textbooks. This view overlooks Smith's second, and equally important, contribution, The Theory of Moral Sentiments, in which Smith promotes a far more complex picture of the human character.
"How selfish soever man may be supposed," Smith writes in The Theory of Moral Sentiments, "there are evidently some principles in his nature, which interest him in the fortunes of others, and render their happiness necessary to him, though he derives nothing from it, except the pleasure of seeing it." His book is a thorough scrutiny of human behavior with the goal of establishing that "sympathy" is a central emotion motivating our behavior towards others.
The ideas presented in this book are part of a continuous line of
intellectual inheritance from Adam Smith and his friend and mentor
David Hume, through Thomas Malthus, Charles Darwin, and Emile
Durkheim, and more recently the biologists William Hamilton and
Robert Trivers. But Smith's legacy also led in another direction, through David Ricardo, Francis Edgeworth, and Leon Walras, to contemporary neoclassical economics, which recognizes only self-interested behavior.
The twentieth century was an era in which economists and policy
makers in the market economies paid heed only to the second Adam
Smith, seeing social policy as the goal of improving social welfare
by devising material incentives that induce agents who care only for
their own personal welfare to contribute to the public good. In this
paradigm, ethics plays no role in motivating human behavior. Albert
Hirschman (1985, 10) underscores the weakness of this approach in
dealing with crime and corruption:

Economists often propose to deal with unethical or antisocial behavior by rais-
ing the cost of that behavior rather than proclaiming standards and imposing
prohibitions and sanctions. . . . [Yet, a] principal purpose of publicly proclaimed
laws and regulations is to stigmatize antisocial behavior and thereby to in¬‚u-
ence citizens™ values and behavior codes.

Hirschman argues against a venerable tradition in political philosophy. In 1754, five years before the appearance of Smith's Theory of Moral Sentiments, David Hume advised "that, in contriving any system of government . . . every man ought to be supposed to be a knave and to have no other end, in all his actions, than his private interest" (1898 [1754]). However, if individuals are sometimes given to the honorable
sentiments about which Smith wrote, prudence recommends an alter-
native dictum: Effective policies are those that support socially valued out-
comes not only by harnessing selfish motives to socially valued ends, but also
by evoking, cultivating, and empowering public-spirited motives. The re-
search in this book supports this alternative dictum.
We have learned several things in carrying out the research de-
scribed in this book. First, interdisciplinary research currently yields
results that advance traditional intradisciplinary research goals. While
the twentieth century was an era of increased disciplinary specializa-
tion, the twenty-first may well turn out to be an era of transdisciplin-
ary synthesis. Its motto might be: When different disciplines focus on the
same object of knowledge, their models must be mutually reinforcing and
consistent where they overlap. Second, by combining economic theory
(game theory in particular) with the experimental techniques of social
psychology, economics, and other behavioral sciences, we can em-
pirically test sophisticated models of human behavior in novel ways.
The data derived from this unification of disciplinary methods allow
us to deduce explicit principles of human behavior that cannot be
unambiguously derived using more traditional sources of empirical
data.
Moral Sentiments and Material Interests 5



The power of this experimental approach is obvious: It allows delib-
erate experimental variation of parameters thought to affect behavior
while holding other parameters constant. Using such techniques, ex-
perimental economists have been able to estimate the effects of prices
and costs on altruistic behaviors, giving precise empirical content to a
common intuition that the greater the cost of generosity to the giver
and the less the benefit to the recipient, the less generous is the typi-
cal experimental subject (Andreoni and Miller 2002).1 The resulting
"supply function of generosity," and other estimates made possible
by experiments, are important in underlining the point that other-
regarding behaviors do not contradict the fundamental ideas of ratio-
nality. They also are valuable in providing interdisciplinary bridges
allowing the analytical power of economic and biological models,
where other-regarding behavior is a commonly used method, to be
enriched by the empirical knowledge of the other social sciences,
where it is not.
Because we make such extensive use of laboratory experiments in
this book, a few caveats about the experimental method are in order.
The most obvious shortcoming is that subjects may behave differently
in laboratory and in "real world" settings (Loewenstein 1999). Well-designed experiments in physics, chemistry, or agronomy can exploit the fact that the entities under study (atoms, agents, soils, and the like) behave similarly whether inside or outside of a laboratory setting. (Murray Gell-Mann once quipped that physics would be a lot harder if particles could think.) When subjects can think, so-called "experimenter effects" are common. The experimental situation, whether in the laboratory or in the field, is a highly unusual setting
that is likely to affect behavioral responses. There is some evidence
that experimental behaviors are indeed matched by behaviors in non-
experimental settings (Henrich et al. 2001) and are far better predictors
of behaviors such as trust than are widely used survey instruments
(Glaeser et al. 2000). However, we do not yet have enough data on
the behavioral validity of experiments to allay these concerns about
experimenter effects with confidence. Thus, while extraordinarily valu-
able, the experimental approach is not a substitute for more conven-
tional empirical methods, whether statistical, historical, ethnographic,
or other. Rather, well-designed experiments may complement these
methods. An example, combining behavioral experiments in the field, ethnographic accounts, and cross-cultural statistical hypothesis testing, is Henrich et al. 2003.
This volume is part of a general movement toward transdisciplinary
research based on the analysis of controlled experimental studies of
human behavior, undertaken both in the laboratory and in the field: factories, schools, retirement homes, urban and rural communities, in
advanced and in simple societies. Anthropologists have begun to use
experimental games as a powerful data instrument in conceptualizing
the specificity of various cultures and understanding social variability
across cultures (Henrich et al. 2003). Social psychologists are increas-
ingly implementing game-theoretic methods to frame and test hypoth-
eses concerning social interaction, which has improved the quality and
interpretability of their experimental data (Hertwig and Ortmann
2001). Political scientists have found similar techniques useful in mod-
eling voter behavior (Frohlich and Oppenheimer 1990; Monroe 1991).
Sociologists are finding that analytically modeling the social interac-
tions they describe facilitates their acceptance by scholars in other be-
havioral sciences (Coleman 1990; Hechter and Kanazawa 1997).
But the disciplines that stand to gain the most from the type of re-
search presented in this volume are economics and human biology. As
we have seen, economic theory has traditionally posited that the basic
structure of a market economy can be derived from principles that
are obvious from casual examination. An example of one of these as-
sumptions is that individuals are self-regarding.2 Two implications of
the standard model of self-regarding preferences are in strong conflict with both daily observed preferences and the laboratory and field experiments discussed later in this chapter. The first is the implication
that agents care only about the outcome of an economic interaction and
not about the process through which this outcome is attained (e.g., bar-
gaining, coercion, chance, voluntary transfer). The second is the impli-
cation that agents care only about what they personally gain and lose
through an interaction and not what other agents gain or lose (or the
nature of these other agents' intentions). Until recently, with these
assumptions in place, economic theory proceeded like mathematics
rather than natural science; theorem after theorem concerning individ-
ual human behavior was proven, while empirical validation of such
behavior was rarely deemed relevant and infrequently provided. In-
deed, generations of economists learned that the accuracy of its predic-
tions, not the plausibility of its axioms, justifies the neoclassical model
of Homo economicus (Friedman 1953). Friedman's general position is
doubtless defensible, since all tractable models simplify reality. How-
ever, we now know that predictions based on the model of the self-
regarding actor often do not hold up under empirical scrutiny, render-
ing the model inapplicable in many contexts.
A similar situation has existed in human biology. Biologists have
been lulled into complacency by the simplicity and apparent explanatory power of two theories: inclusive fitness and reciprocal altruism
(Hamilton 1964; Williams 1966; Trivers 1971). Hamilton showed that
we do not need amorphous notions of species-level altruism to explain
cooperation between related individuals. If a behavior that costs an individual c produces a benefit b for another individual with degree of biological relatedness r (e.g., r = 0.5 for parent-child or sibling, and r = 0.25 for grandparent-grandchild), then the behavior will spread if r > c/b. Hamilton's notion of inclusive fitness has been central to the modern, and highly successful, approach to explaining animal behavior (Alcock 1993). Trivers followed Hamilton in showing that even a
selfish individual will come to the aid of an unrelated other, provided there is a sufficiently high probability the aid will be repaid in the future. He also was prescient in stressing the fitness-enhancing effects of such seemingly "irrational" emotions and behaviors as guilt, gratitude, moralistic aggression, and reparative altruism. Trivers' reciprocal altruism, which mirrors the economic analysis of exchange between self-interested agents in the absence of costless third-party enforcement (Axelrod and Hamilton 1981), has enjoyed only limited application to nonhuman species (Stephens, McLinn, and Stevens 2002), but became the basis for biological models of human behavior (Dawkins 1976; Wilson 1975).
These theories convinced a generation of researchers that, except for
sacrifice on behalf of kin, what appears to be altruism (personal sacrifice on behalf of others) is really just long-run material self-interest.
Ironically, human biology has settled in the same place as economic
theory, although the disciplines began from very different starting
points, and used contrasting logic. Richard Dawkins, for instance,
struck a responsive chord among economists when, in The Selfish Gene (1989 [1976], v), he confidently asserted, "We are survival machines--robot vehicles blindly programmed to preserve the selfish molecules known as genes. . . . This gene selfishness will usually give rise to selfishness in individual behavior." Reflecting the intellectual mood of the times, in his The Biology of Moral Systems, R. D. Alexander asserted, "Ethics, morality, human conduct, and the human psyche are to be understood only if societies are seen as collections of individuals seeking their own self-interest. . . ." (1987, 3).
The experimental evidence supporting the ubiquity of non-self-regarding motives, however, casts doubt on both the economist's and the biologist's model of the self-regarding human actor. Many of these experiments examine a nexus of behaviors that we term strong reciprocity. Strong reciprocity is a predisposition to cooperate with others, and to punish (at personal cost, if necessary) those who violate the norms of cooperation, even when it is implausible to expect that these costs will be recovered at a later date.3 Standard behavioral models of altruism in biology, political science, and economics (Trivers 1971; Taylor 1976; Axelrod and Hamilton 1981; Fudenberg and Maskin 1986) rely on repeated interactions that allow for the establishment of individual reputations and the punishment of norm violators. Strong reciprocity, on the other hand, remains effective even in non-repeated and anonymous situations.4
Strong reciprocity contributes not only to the analytical modeling of
human behavior but also to the larger task of creating a cogent political
philosophy for the twenty-first century. While the writings of the great
political philosophers of the past are usually both penetrating and
nuanced on the subject of human behavior, they have come to be inter-
preted simply as having either assumed that human beings are essen-
tially self-regarding (e.g., Thomas Hobbes and John Locke) or, at least
under the right social order, entirely altruistic (e.g., Jean Jacques Rous-
seau, Karl Marx). In fact, people are often neither self-regarding nor al-
truistic. Strong reciprocators are conditional cooperators (who behave
altruistically as long as others are doing so as well) and altruistic pun-
ishers (who apply sanctions to those who behave unfairly according to
the prevalent norms of cooperation).
Evolutionary theory suggests that if a mutant gene promotes self-sacrifice on behalf of others, when those helped are unrelated and therefore do not carry the mutant gene and when selection operates only on genes or individuals but not on higher-order groups, then the mutant should die out. Moreover, in a population of individuals who sacrifice for others, if a mutant arises that does not so sacrifice, that mutant will spread to fixation at the expense of its altruistic counter-
parts. Any model that suggests otherwise must involve selection on a
level above that of the individual. Working with such models is natu-
ral in several social science disciplines but has been generally avoided
by a generation of biologists weaned on the classic critiques of group
selection by Williams (1966), Dawkins (1976), Maynard Smith (1976),
Crow and Kimura (1970), and others, together with the plausible alter-
natives offered by Hamilton (1964) and Trivers (1971).
But the evidence supporting strong reciprocity calls into question the
ubiquity of these alternatives. Moreover, criticisms of group selection
are much less compelling when applied to humans than to other ani-
mals. The criticisms are considerably weakened when (a) altruistic punishment is the trait involved and the cost of punishment is relatively low, as is the case for Homo sapiens; and/or (b) either pure cultural selection or gene-culture coevolution is at issue. Gene-culture coevolution (Lumsden and Wilson 1981; Durham 1991; Feldman and Zhivotovsky 1992; Gintis 2003a) occurs when cultural changes render certain genetic adaptations fitness-enhancing. For instance, increased communication in hominid groups increased the fitness value of controlled sound production, which favored the emergence of the modern human larynx and epiglottis. These physiological attributes permitted the flexible control of air flow and sound production, which in turn increased the value of language development. Similarly, culturally evolved norms can affect fitness if norm violators are punished by strong reciprocators. For instance, antisocial men are ostracized in small-scale societies, and women who violate social norms are unlikely to find or keep husbands.
In the case of cultural evolution, the cost of altruistic punishment is considerably less than the cost of unconditional altruism, as depicted in the classical critiques (see chapter 7). In the case of gene-culture coevolution, there may be either no within-group fitness cost to the altruistic trait (although there is a cost to each individual who displays this trait), or cultural uniformity may so dramatically reduce within-group behavioral variance that the classical group selection mechanism, exemplified, for instance, by Price's equation (Price 1970, 1972), works strongly in favor of selecting the altruistic trait.5
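For reference, Price's equation, invoked here as the classical group selection mechanism, is usually written in the following standard form (the standard textbook statement, not reproduced from this chapter), partitioning the change in the population mean of a trait z into a between-group selection term and a within-group transmission term:

```latex
\bar{w}\,\Delta\bar{z}
  = \underbrace{\operatorname{Cov}(w_i, z_i)}_{\text{selection between groups}}
  + \underbrace{\operatorname{E}(w_i\,\Delta z_i)}_{\text{transmission within groups}}
```

where w_i is the fitness of group i, z_i its mean trait value, and w-bar the population mean fitness. When cultural uniformity shrinks within-group behavioral variance, the between-group covariance term dominates, which is the effect described in the text.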
Among these models of multilevel selection for altruism is pure genetic group selection (Sober and Wilson 1998), according to which the fitness costs of reciprocators are offset by the tendency for groups with a high fraction of reciprocators to outgrow groups with few reciprocators.6 Other models involve cultural group selection (Gintis 2000; Henrich and Boyd 2001), according to which groups that transmit a culture of reciprocity outcompete societies that do not. Such a process is modeled by Boyd, Gintis, Bowles, and Richerson in chapter 7 of this volume, as well as in Boyd et al. 2003. As the literature on the coevolu-
tion of genes and culture shows (Feldman, Cavalli-Sforza, and Peck
1985; Bowles, Choi, and Hopfensitz 2003; Gintis 2003a, 2003b), these
two alternatives can both be present and mutually reinforcing. These
explanations have in common the idea that altruism increases the fitness of members of groups that practice it by enhancing the degree of cooperation among members, allowing these groups to outcompete other groups that lack this behavioral trait. They differ in that some require strong group-level selection (in which the within-group fitness disadvantage of altruists is offset by the augmented average fitness of members of groups with a large fraction of altruists) whereas others require only weak group-level selection (in which the within-group fitness disadvantage of altruists is offset by some social mechanism that generates a high rate of production of altruists within the group itself). Weak group selection models, such as Gintis (2003a, 2003b) and chapter 4, where supra-individual selection operates only as an equilibrium selection device, avoid the classic problems often associated with strong group selection models (Maynard Smith 1976; Williams 1966; Boorman and Levitt 1980).
This chapter presents an overview of Moral Sentiments and Material
Interests. While the various chapters of this volume are addressed
to readers independent of their particular disciplinary expertise, this
chapter makes a special effort to be broadly accessible. We first sum-
marize several types of empirical evidence supporting strong reciproc-
ity as a schema for explaining important cases of altruism in humans.
This material is presented in more detail by Ernst Fehr and Urs Fisch-
bacher in chapter 5. In chapter 6, Armin Falk and Urs Fischbacher
show explicitly how strong reciprocity can explain behavior in a vari-
ety of experimental settings. Although most of the evidence we report
is based on behavioral experiments, the same behaviors are regularly
observed in everyday life, for example in cooperation in the protection
of local environmental public goods (as described by Elinor Ostrom
in chapter 9), in wage setting by firms (as described by Truman Bewley
in chapter 11), in political attitudes and voter behavior (as described
by Fong, Bowles, and Gintis in chapter 10), and in tax compliance
(Andreoni, Erard, and Feinstein 1998).
"The Origins of Reciprocity," later in this chapter, reviews a variety of
models that suggest why, under conditions plausibly characteristic of
the early stages of human evolution, a small fraction of strong recipro-
cators could invade a population of self-regarding types, and a stable
equilibrium with a positive fraction of strong reciprocators and a high
level of cooperation could result.
While many chapters of this book are based on some variant of
the notion of strong reciprocity, Joan Silk's overview of cooperation in
primate species (chapter 2) makes it clear that there are important
behavioral forms of cooperation that do not require this level of sophis-
tication. Primates form alliances, share food, care for one another's
infants, and give alarm calls, all of which most likely can be explained
in terms of long-term self-interest and kin altruism. Such forms of co-
operation are no less important in human society, of course, and strong
reciprocity can be seen as a generalization of the mechanisms of kin
altruism to nonrelatives. In chapter 3, Hillard Kaplan and Michael
Gurven argue that human cooperation is an extension of the complex
intrafamilial and interfamilial food sharing that is widespread in con-
temporary hunter-gatherer societies. Such sharing remains important
even in modern market societies.
Moreover, in chapter 4, Eric Alden Smith and Rebecca Bliege Bird
propose that many of the phenomena attributed to strong reciprocity
can be explained in a costly signaling framework. Within this framework,
individuals vary in some socially important quality, and higher-quality
individuals pay lower marginal signaling costs and thus have a
higher optimal level of signaling intensity, given that other members of
their social group respond to such signals in mutually beneficial ways.
Smith and Bliege Bird summarize an n-player game-theoretical signaling
model developed by Gintis, Smith, and Bowles (2001) and discuss
how it might be applied to phenomena such as provisioning feasts,
collective military action, or punishing norm violators. There are several
reasons why such signals might sometimes take the form of
group-beneficial actions. Providing group benefits might be a more
efficient form of broadcasting the signal than collectively neutral or
harmful actions. Signal receivers might receive more private benefits
from allying with those who signal in group-beneficial ways.
Furthermore, once groups in a population vary in the degree to which
signaling games produce group-beneficial outcomes, cultural (or even
genetic) group selection might favor those signaling equilibria that
make higher contributions to mean fitness.
We close this chapter by describing some applications of this material
to social policy.

1.2 The Ultimatum Game

In the ultimatum game, under conditions of anonymity, two players
are shown a sum of money (say $10). One of the players, called the
proposer, is instructed to offer any number of dollars, from $1 to $10, to the
12 Gintis, Bowles, Boyd, and Fehr



second player, who is called the responder. The proposer can make only
one offer. The responder, again under conditions of anonymity, can
either accept or reject this offer. If the responder accepts the offer, the
money is shared accordingly. If the responder rejects the offer, both
players receive nothing.
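The payoff rules just described can be written out in a few lines of Python; this is an illustrative encoding of the game, not part of the original experiments.

```python
def ultimatum_payoffs(pie, offer, accept):
    """Payoffs (proposer, responder) in a one-shot ultimatum game.

    The proposer offers `offer` dollars out of `pie`; the responder
    either accepts (the money is split accordingly) or rejects
    (both players receive nothing).
    """
    if accept:
        return pie - offer, offer
    return 0, 0

# A self-regarding responder accepts any positive amount...
assert ultimatum_payoffs(10, 1, accept=True) == (9, 1)
# ...while rejecting a low offer costs both players everything,
# which is why rejection is a form of altruistic punishment.
assert ultimatum_payoffs(10, 1, accept=False) == (0, 0)
```

The second assertion makes the cost of punishment visible: the responder gives up $1 in order to deprive the proposer of $9.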
Since the game is played only once and the players do not know
each other's identity, a self-regarding responder will accept any
positive amount of money. Knowing this, a self-regarding proposer will
offer the minimum possible amount ($1), which will be accepted. However,
when the ultimatum game is actually played, only a minority of
agents behave in a self-regarding manner. In fact, as many replications of
this experiment have documented, under varying conditions and with
varying amounts of money, proposers routinely offer respondents very
substantial amounts (fifty percent of the total generally being the
modal offer), and respondents frequently reject offers below thirty
percent (Camerer and Thaler 1995; Güth and Tietz 1990; Roth et al. 1991).
The ultimatum game has been played around the world, but mostly
with university students. We find a great deal of individual variability.
For instance, in all of the studies cited in the previous paragraph, a
significant fraction of subjects (about a quarter, typically) behave in a
self-regarding manner. Among student subjects, however, average
performance is strikingly uniform from country to country.
Behavior in the ultimatum game thus conforms to the strong reciprocity
model: "fair" behavior in the ultimatum game for college
students is a fifty-fifty split. Responders reject offers less than forty
percent as a form of altruistic punishment of the norm-violating proposer.
Proposers offer fifty percent because they are altruistic cooperators, or
forty percent because they fear rejection. To support this interpretation,
we note that if the offer in an ultimatum game is generated by a
computer rather than a human proposer (and if respondents know this),
low offers are very rarely rejected (Blount 1995). This suggests that
players are motivated by reciprocity, reacting to a violation of
behavioral norms (Greenberg and Frisch 1972).
Moreover, in a variant of the game in which a responder's rejection
leads to the responder receiving nothing while allowing the proposer
to keep the share he suggested for himself, respondents never reject
offers, and proposers make considerably smaller (but still positive)
offers. As a final indication that strong reciprocity motives are operative
in this game, after the game is over, when asked why they offer
more than the lowest possible amount, proposers commonly say that
they are afraid that respondents will consider low offers unfair and
reject them. When respondents reject offers, they usually claim they want
to punish unfair behavior.

1.3 Strong Reciprocity in the Labor Market

In Fehr, Gächter, and Kirchsteiger (1997), the experimenters divided a
group of 141 subjects (college students who had agreed to participate
in order to earn money) into a set of "employers" and a larger set of
"employees." The rules of the game are as follows: If an employer hires
an employee who provides effort e and receives wage w, his profit is
100e − w. The wage must be between 1 and 100, and the effort between
0.1 and 1. The payoff to the employee is then u = w − c(e), where c(e)
is the "cost of effort" function, which is increasing and convex (the
marginal cost of effort rises with effort). All payoffs involve real money
that the subjects are paid at the end of the experimental session.
The sequence of actions is as follows. The employer first offers a
"contract" specifying a wage w and a desired amount of effort e*. A
contract is made with the first employee who agrees to these terms.
An employer can make a contract (w, e*) with at most one employee.
The employee who agrees to these terms receives the wage w and
supplies an effort level e, which need not equal the contracted effort e*. In
effect, there is no penalty if the employee does not keep his or her
promise, so the employee can choose any effort level e between 0.1
and 1 with impunity. Although subjects may play this game several
times with different partners, each employer-employee interaction is a
one-shot (nonrepeated) event. Moreover, the identity of the interacting
partners is never revealed.
If employees are self-regarding, they will choose the zero-cost effort
level, e = 0.1, no matter what wage is offered them. Knowing this,
employers will never pay more than the minimum necessary to get the
employee to accept a contract, which is 1. The employee will accept
this offer and set e = 0.1. Since c(0.1) = 0, the employee's payoff
is u = 1. The employer's payoff is (0.1 × 100) − 1 = 9.
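This self-regarding benchmark can be checked with a short sketch. The cost function below is a hypothetical convex example satisfying c(0.1) = 0, standing in for the experimental cost schedule, whose exact form the chapter does not give.

```python
def profit(e, w):
    """Employer's payoff: 100e - w."""
    return 100 * e - w

def utility(e, w, cost):
    """Employee's payoff: w - c(e)."""
    return w - cost(e)

# Hypothetical increasing, convex effort-cost function with c(0.1) = 0,
# standing in for the (unspecified) experimental cost schedule.
def cost(e):
    return 20 * (e - 0.1) ** 2

# Self-regarding play: minimum wage w = 1, zero-cost effort e = 0.1.
assert utility(0.1, 1, cost) == 1   # u = w - c(0.1) = 1
assert profit(0.1, 1) == 9          # (0.1 x 100) - 1 = 9
```

Any increasing convex function with c(0.1) = 0 yields the same benchmark payoffs, since the self-regarding employee exerts only the zero-cost effort level.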
In fact, however, a majority of agents failed to behave in a self-regarding
manner in this experiment.7 The average net payoff to
employees was u = 35, and the more generous the employer's wage
offer to the employee, the higher the effort the employee provided.
Figure 1.1
Relation of contracted and delivered effort to worker payoff (141 subjects). Vertical axis: average effort (0 to 1.0), with separate curves for contracted and delivered effort; horizontal axis: payoff offer to employee (in brackets from 0-5 through 46-50). From Fehr, Gächter, and Kirchsteiger (1997).


In effect, employers presumed the strong reciprocity predispositions
of the employees, making quite generous wage offers and receiving
higher effort, as a means of increasing both their own and the
employee's payoff, as depicted in figure 1.1. Similar results have been
observed in Fehr, Kirchsteiger, and Riedl (1993, 1998).
Figure 1.1 also shows that although there is a considerable level of
cooperation, there is still a significant gap between the amount of effort
agreed upon and the amount actually delivered. This is because, first,
only fifty to sixty percent of the subjects are reciprocators, and second,
only twenty-six percent of the reciprocators delivered the level of effort
they promised! We conclude that strong reciprocators are inclined to
compromise their morality to some extent.
This evidence is compatible with the notion that the employers are
purely self-regarding, since their beneficent behavior vis-à-vis their
employees was effective in increasing employer profits. To see if
employers are also strong reciprocators, the authors extended the game
following the first round of experiments by allowing the employers to
respond reciprocally to the actual effort choices of their workers. At a
cost of 1, an employer could increase or decrease his employee's payoff
by 2.5. If employers were self-regarding, they would of course do
neither, since they would not interact with the same worker a second
time. However, sixty-eight percent of the time employers punished
employees who did not fulfill their contracts, and seventy percent of
the time employers rewarded employees who overfulfilled their
contracts. Indeed, employers rewarded forty-one percent of employees
who exactly fulfilled their contracts. Moreover, employees expected this
behavior on the part of their employers, as shown by the fact that their
effort levels increased significantly when their bosses gained the power
to punish and reward them. Underfulfilled contracts dropped from
eighty-three to twenty-six percent of the exchanges, and overfulfilled
contracts rose from three to thirty-eight percent of the total. Finally,
allowing employers to reward and punish led to a forty-percent
increase in the net payoffs to all subjects, even when the payoff
reductions resulting from employer punishment of employees are taken into
account.
We conclude from this study that the subjects who assume the role
of employee conform to internalized standards of reciprocity, even
when they are certain there are no material repercussions from behaving
in a self-regarding manner. Moreover, subjects who assume the
role of employer expect this behavior and are rewarded for acting
accordingly. Finally, employers draw upon the internalized norm of
rewarding good and punishing bad behavior when they are permitted
to punish, and employees expect this behavior and adjust their own
effort levels accordingly.

1.4 The Public Goods Game

The public goods game has been analyzed in a series of papers by the
social psychologist Toshio Yamagishi (1986, 1988a, 1988b), by the
political scientist Elinor Ostrom and her coworkers (Ostrom, Walker, and
Gardner 1992), and by economists Ernst Fehr and his coworkers
(Gächter and Fehr 1999; Fehr and Gächter 2000a, 2002). These researchers
uniformly found that groups exhibit a much higher rate of cooperation
than can be expected assuming the standard model of the self-regarding actor,
and this is especially the case when subjects are given the option of
incurring a cost to themselves in order to punish free-riders.
A typical public goods game has several rounds, say ten. The subjects
are told the total number of rounds and all other aspects of the
game and are paid their winnings in real money at the end of the
session. In each round, each subject is grouped with several other
subjects (say three others) under conditions of strict anonymity.
Each subject is then given a certain number of "points," say twenty,
redeemable at the end of the experimental session for real money. Each
subject then places some fraction of his points in a "common account"
and the remainder in the subject's own "private account."
The experimenter then tells the subjects how many points were
contributed to the common account and adds to the private account of
each subject some fraction of the total amount in the common account,
say forty percent. So if a subject contributes his or her whole twenty
points to the common account, each of the four group members will
receive eight points at the end of the round. In effect, by putting her or
his whole endowment into the common account, a player loses twelve
points but the other three group members gain a total of twenty-four
(= 8 × 3) points. The players keep whatever is in their private accounts
at the end of each round.
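The accounting for a single round can be sketched as follows, assuming the forty-percent multiplier and twenty-point endowment described above; this is an illustration of the payoff structure, not the experimental software.

```python
def round_payoffs(contributions, endowment=20, mpcr=0.4):
    """Payoffs for one round of a linear public goods game.

    Each player keeps (endowment - contribution) in a private account,
    and every group member receives `mpcr` times the common-account
    total, regardless of his or her own contribution.
    """
    pot = sum(contributions)
    return [endowment - c + mpcr * pot for c in contributions]

# One player contributes the full 20 points; the other three free-ride.
payoffs = round_payoffs([20, 0, 0, 0])
assert payoffs[0] == 8    # the contributor loses 12 relative to keeping all 20
assert payoffs[1] == 28   # each free-rider gains 8 at no cost
```

Because the marginal per-capita return (0.4) is below 1, contributing is always individually costly even though each contributed point yields 1.6 points to the group as a whole.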
A self-regarding player will contribute nothing to the common account.
However, only a fraction of subjects in fact conform to the self-interest
model. Subjects begin by contributing on average about half of
their endowments to the public account. The level of contributions
decays over the course of the ten rounds, until in the final rounds most
players are behaving in a self-regarding manner (Dawes and Thaler
1988; Ledyard 1995). In a metastudy of twelve public goods experiments,
Fehr and Schmidt (1999) found that in the early rounds, average
and median contribution levels ranged from forty to sixty percent
of the endowment, but in the final period seventy-three percent of all
individuals (N = 1042) contributed nothing, and many of the other
players contributed close to zero. These results are not compatible
with the selfish-actor model (which predicts zero contribution in all
rounds), although they might be predicted by a reciprocal altruism
model, since the chance to reciprocate declines as the end of the
experiment approaches.
However, this is not in fact the explanation of the moderate but
deteriorating levels of cooperation in the public goods game. The subjects'
own explanation of the decay of cooperation, offered after the
experiment, is that cooperative subjects became angry with others who
contributed less than themselves and retaliated against free-riding low
contributors in the only way available to them: by lowering their own
contributions (Andreoni 1995).
Experimental evidence supports this interpretation. When subjects
are allowed to punish noncontributors, they do so at a cost to themselves
(Orbell, Dawes, and Van de Kragt 1986; Sato 1987; Yamagishi
1988a, 1988b, 1992). For instance, in Ostrom, Walker, and Gardner
(1992), subjects interacted for twenty-five periods in a public goods
game. By paying a "fee," subjects could impose costs on other subjects
by "fining" them. Since fining costs the individual who uses it, and the
benefits of increased compliance accrue to the group as a whole,
assuming agents are self-regarding, no player ever pays the fee, no
player is ever punished for defecting, and all players defect by
contributing nothing to the common pool. However, the authors found
a significant level of punishing behavior in this version of the public
goods game.
These experiments allowed individuals to engage in strategic behavior,
since costly punishment of defectors could increase cooperation
in future periods, yielding a positive net return for the punisher. Fehr
and Gächter (2000a) set up an experimental situation in which the
possibility of strategic punishment was removed. They employed three
different methods of assigning study subjects to groups of four
individuals each. The groups played six- and ten-round public goods
games with costly punishment allowed at the end of each round. There
were sufficient subjects to run between ten and eighteen groups
simultaneously. Under the partner treatment, the four subjects remained in
the same group for all ten rounds. Under the stranger treatment, the
subjects were randomly reassigned after each round. Finally, under
the perfect stranger treatment, the subjects were randomly reassigned
and assured that they would never meet the same subject more than
once.
Fehr and Gächter (2000a) performed their experiment over ten
rounds with punishment and then over ten rounds without punishment.8
Their results are illustrated in figure 1.2. We see that when
costly punishment is permitted, cooperation does not deteriorate, and
in the partner game, despite strict anonymity, cooperation increases to
almost full cooperation, even in the final round. When punishment is
not permitted, however, the same subjects experience the deterioration
of cooperation found in previous public goods games. The contrast in
cooperation rates between the partner and the two stranger treatments
is worth noting, because the strength of punishment is roughly the
same across all treatments. This suggests that the punishment threat is
more credible in the partner treatment, because punished subjects know
that the subjects who punished them in previous rounds remain in
their group. The impact of strong reciprocity on cooperation is thus
more strongly manifested the more coherent and permanent the group.
Figure 1.2
Average contributions over time in the partner, stranger, and perfect stranger treatments when the punishment condition is played first. Vertical axis: average contribution (0 to 20); horizontal axis: period (1 to 20), with punishment permitted in the first ten periods and the punishment option removed thereafter. Adapted from Fehr and Gächter 2000a.


1.5 Intentions or Outcomes?

One key fact missing from the discussion of public goods games is a
specification of the relationship between contributing and punishing.
The strong reciprocity interpretation suggests that high contributors
will be high punishers and punishees will be below-average contributors.
This prediction is borne out in Fehr and Gächter (2002), where
seventy-five percent of the punishment acts carried out by the 240
subjects were executed by above-average contributors, and the most
important variable in predicting how much one player punished another
was the difference between the punisher's contribution and the
punishee's contribution.
Another key question in interpreting public goods games is: do
reciprocators respond to fair or unfair intentions, or do they respond
to fair or unfair outcomes? The model of strong reciprocity unambiguously
favors intentions over outcomes. To answer this question, Falk,
Fehr, and Fischbacher (2002) ran two versions of the "moonlighting
game": an intention treatment (I-treatment), where a player's intentions
could be deduced from his action, and a no-intention treatment
(NI-treatment), where a player's intentions could not be deduced.
They provide clear and unambiguous evidence for the behavioral
relevance of intentions in the domain of both negatively and positively
reciprocal behavior.
The moonlighting game consists of two stages. At the beginning of
the game, both players are endowed with twelve points. At the first
stage, player A chooses an action a in {−6, −5, ..., 5, 6}. If A chooses
a > 0, he gives player B a tokens, while if he chooses a < 0, he takes
away |a| tokens from B. In case a ≥ 0, the experimenter triples a so that
B receives 3a. After B observes a, he can choose an action b in
{−6, −5, ..., 17, 18}. If b ≥ 0, B gives the amount b to A. If b < 0, B loses
|b|, and A loses |3b|. Since A can give and take while B can reward or
sanction, this game allows for both positively and negatively reciprocal
behavior. Each subject plays the game only once.
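The stage payoffs can be written out explicitly; the sketch below is an illustrative Python encoding of the rules just described (taking is assumed to transfer the tokens to A untripled, consistent with taking being profitable for A).

```python
def moonlighting_payoffs(a, b, endowment=12):
    """Payoffs (A, B) in the two-stage moonlighting game.

    Stage 1: A chooses a in {-6,...,6}. Giving (a >= 0) is tripled
    for B; taking (a < 0) transfers |a| from B to A untripled.
    Stage 2: B chooses b. Rewarding (b >= 0) transfers b to A;
    sanctioning (b < 0) costs B |b| and costs A |3b|.
    """
    if a >= 0:
        payoff_a, payoff_b = endowment - a, endowment + 3 * a
    else:
        payoff_a, payoff_b = endowment - a, endowment + a
    if b >= 0:
        payoff_a, payoff_b = payoff_a + b, payoff_b - b
    else:
        payoff_a, payoff_b = payoff_a + 3 * b, payoff_b + b
    return payoff_a, payoff_b

# Self-regarding benchmark: A takes the maximum (a = -6), B does nothing.
assert moonlighting_payoffs(-6, 0) == (18, 6)
# A generous A (a = 6, tripled to 18 for B) rewarded by B with b = 4.
assert moonlighting_payoffs(6, 4) == (10, 26)
```

Note the asymmetry in stage 2: a sanction of |b| costs A three times what it costs B, so even small sanctions are a potent form of altruistic punishment.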
If the Bs are self-regarding, they will all choose b = 0, neither
rewarding nor punishing their A partners, since the game is played
only once. Knowing this, if the As are self-regarding, they will all
choose a = −6, which maximizes their payoff. In the I-treatment, A
players are allowed to choose a, whereas in the NI-treatment, A's
choice is determined by a roll of a pair of dice. If the players are not
self-regarding and care only about the fairness of the outcomes and
not intentions, there will be no difference in the behavior of the B
players across the I- and the NI-treatments. Moreover, if the A players
believe their B partners care only about outcomes, their behavior will
not differ across the two treatments. If the B players care only about
the intentions of their A partners, they will never reward or punish in
the NI-treatment, but they will reward partners who choose a > 0
and punish partners who choose a < 0.
The experimenters' main result was that the behavior of player B
in the I-treatment is substantially different from the behavior in the
NI-treatment, indicating that the attribution of fairness intentions is
behaviorally important. Indeed, As who gave to Bs were generally
rewarded by Bs in the I-treatment much more than in the NI-treatment
(significant at the one percent level), and As who took from Bs were
generally punished by Bs in the I-treatment much more than in the
NI-treatment (significant at the one percent level).
Turning to individual patterns of behavior, in the I-treatment no
agent behaved purely selfishly (i.e., no agent set b = 0 independent of
a), whereas in the NI-treatment thirty percent behaved purely selfishly.
Conversely, in the I-treatment seventy-six percent of subjects rewarded or
sanctioned their partner, whereas in the NI-treatment only thirty-nine
percent of subjects rewarded or sanctioned. We conclude that most
agents are motivated by the intentionality of their partners, but a
significant fraction care about the outcome, either exclusively or in
addition to the intention of the partner.

1.6 Crowding Out

There are many circumstances in which people voluntarily engage in
an activity, yet when monetary incentives are added in an attempt to
increase the level of the activity, the level actually decreases. The reason
for this phenomenon, which is called crowding out, is that the number
of contributors responding to the monetary incentives is more than
offset by the number of discouraged voluntary contributors. This
phenomenon was first stressed by Titmuss (1970), who noted that voluntary
blood donation in Britain declined sharply when a policy of paying
donors was instituted alongside the voluntary sector. More recently,
Frey (1997a, 1997b, 1997c) has applied this idea to a variety of situations.
In chapter 9 of this volume, Elinor Ostrom provides an extremely
important example of crowding out. Ostrom reviews the extensive
evidence that when the state regulates common property resources (such
as scarce water and depletable fish stocks) by using fines and subsidies
to encourage conservation, the overuse of these resources may actually
increase. This occurs because the voluntary, community-regulated
system of restraints breaks down in the face of relatively ineffective formal
government sanctions.
In many cases, such crowding out can be explained in a parsimonious
manner by strong reciprocity. Voluntary behavior is the result of
what we have called the predisposition to contribute to a cooperative
endeavor, contingent upon the cooperation of others. The monetary
incentive to contribute destroys the cooperative nature of the task, and the
threat of fining defectors may be perceived as an unkind or hostile
action (especially if the fine is imposed by agents who have an
antagonistic relationship with group members). The crowding out of
voluntary cooperation and altruistic punishment occurs because the
preconditions for the operation of strong reciprocity are removed when
explicit material incentives are applied to the task.
This interpretation is supported by the laboratory experiment of
Fehr and Gächter (2000b), who show that in an employer-employee
setting (see "Strong Reciprocity in the Labor Market" earlier in this
chapter) if an employer explicitly threatens to fine a worker for
malfeasance, the worker's willingness to cooperate voluntarily is
significantly reduced. Similarly,
Fehr and List (2002) report that chief executive officers respond in a less
trustworthy manner if they face a fine compared to situations where
they do not face a fine.
As a concrete example, consider Fehr and Rockenbach's (2002)
experiment involving 238 subjects. Mutually anonymous subjects are
paired, one subject having the role of investor, the other that of responder.
They then play a trust game in which both subjects receive ten money units
(MUs). The investor can transfer any portion of his endowment to the
responder and must specify a desired return from the responder, which
can be any amount less than or equal to what the responder receives
as a result of the tripling of the investor's transfer. The responder, knowing
both the amount sent and the amount the investor wants back, chooses
an amount to send back to the investor (not necessarily the amount the
investor requested). The investor receives this amount (which is not
tripled), and the game is over.
There were two experimental conditions: a trust condition with no
additional rules, and an incentive condition adding one more rule,
namely that the investor has the option of precommitting to impose a
fine of four MUs on the responder should the latter return less than the
investor's desired return. At the time the investor chooses the transfer
and the desired return, he must also specify whether to impose the fine
condition. The responder then knows the transfer, the desired return, and
whether the fine condition was imposed by the investor.
Since all the interactions in this game are anonymous and there is
only one round, self-regarding responders will return nothing in the
trust condition and at most four MUs in the incentive condition. Thus,
self-regarding investors who expect their partners to be self-regarding
will send nothing to responders in the trust condition and will not ask
for more than four MUs back in the incentive condition. Assuming a
responder will only avoid the fine if he can gain from doing so, the
investor will transfer two MUs and ask for three MUs back; the
responder will get six MUs and return three MUs to the investor. It follows
that if all agents are self-regarding and all know that this is the
case, investors will always choose to impose the fine condition and
end up with eleven MUs, while the responders end up with thirteen
MUs.
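The arithmetic of this self-regarding prediction can be checked with a short sketch. This is an illustrative encoding, not the experimental software; for simplicity it assumes the fine is simply deducted from the responder (the text does not say whether the investor receives it).

```python
def trust_payoffs(transfer, returned, fine_imposed=False,
                  desired=0, fine=4, endowment=10):
    """Payoffs (investor, responder) in the trust game with a fine option.

    The transfer is tripled on its way to the responder; the returned
    amount is not. If the fine condition was imposed and the responder
    returns less than the desired amount, the responder pays the fine
    (assumed here to be simply deducted, not paid to the investor).
    """
    investor = endowment - transfer + returned
    responder = endowment + 3 * transfer - returned
    if fine_imposed and returned < desired:
        responder -= fine
    return investor, responder

# The self-regarding prediction stated in the text: transfer 2,
# desired return 3, fine condition imposed, responder complies.
assert trust_payoffs(2, 3, fine_imposed=True, desired=3) == (11, 13)
# If the responder instead returned nothing, the fine would apply.
assert trust_payoffs(2, 0, fine_imposed=True, desired=3) == (8, 12)
```

The second assertion shows why a self-regarding responder complies here: returning the three requested MUs (ending with 13) beats keeping everything and paying the fine (ending with 12).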
In contrast to this hypothesis, responders actually paid back substantial
amounts of money under all conditions. In addition, responders'
returns to investors were highest when the investor refrained
from imposing the fine in the incentive condition and were lowest
when the investor imposed the fine condition in the incentive condition.
Returns were intermediate under the trust condition, where fines
could not be imposed.
The experimenters ascertained that the greater return when the fine
was not imposed could not be explained either by investors in that
situation transferring more to the responders or by investors requesting
more modest returns from the respondents. But if we assume that
imposing the fine condition is interpreted as a hostile act by the responder,
and hence not imposing this condition is interpreted as an act
of kindness and trust, then strong reciprocity supplies a plausible reason
why responders increase their compliance with investors' requests
when the investors refrain from fining them.

1.7 The Origins of Strong Reciprocity

Some behavioral scientists, including many sociologists and anthropologists,
are quite comfortable with the notion that altruistic motivations
are an important part of the human repertoire, and explain their
prevalence by cultural transmission. Support for a strong cultural element in
the expression of both altruistic cooperation and punishment can be
drawn from the wide variation in strength of both cooperation and
punishment exhibited in our small-scale societies study (Henrich et al.
[2001] and this chapter's discussion of the ultimatum game), and from our
ability to explain a significant fraction of the variation in behavior in
terms of social variables (cooperation in production and degree of
market integration). Even though altruists must bear a fitness cost for their
behavior not shared by self-regarding types, in most cases this cost is
not high (shunning, gossip, and ostracism, for instance; Bowles and
Gintis 2004). Indeed, as long as the cultural system transmits altruistic
values strongly enough to offset the fitness costs of altruism, society
can support motivations that are not fitness-maximizing indefinitely
(Boyd and Richerson 1985; Gintis 2003b). Moreover, societies with
cultural systems that promote cooperation will outcompete those that
do not, and individuals tend to copy the behaviors characteristic of
successful groups. Together, these forces can explain the diffusion of
group-beneficial cultural practices (Soltis, Boyd, and Richerson 1995;
Boyd and Richerson 2002).
While culture is part of the explanation, it is possible that strong
reciprocity, like kin altruism and reciprocal altruism, has a significant
genetic component. Altruistic punishment, for instance, is not culturally
transmitted in many societies where people regularly engage in it
(Brown 1991). In the Judeo-Christian tradition, for example, charity
and forgiveness ("turn the other cheek") are valued, while seeking
revenge is denigrated. Indeed, willingness to punish transgressors is not
seen as an admirable personal trait and, except in special circumstances,
people are not subject to social opprobrium for failing to punish
those who hurt them.
If this is the case, the altruistic behaviors documented and modeled
in this book indicate that gene-culture coevolution has been operative
for human beings. This is indeed what we believe to be the case, and
in this section we describe some plausible coevolutionary models that
could sustain strong reciprocity. It is thus likely that strong reciprocity
is the product of gene-culture coevolution. It follows that group-level
characteristics that enhance group selection pressures, such as relatively
small group size, limited migration, or frequent intergroup
conflicts, coevolved with cooperative behaviors. This being the case,
we conclude that cooperation is based in part on the distinctive
capacities of humans to construct institutional environments that limit
within-group competition and reduce phenotypic variation within
groups, thus heightening the relative importance of between-group
competition and allowing individually costly but ingroup-beneficial
behaviors to coevolve within these supporting environments through
a process of interdemic group selection.
The idea that the suppression of within-group competition may be a
strong influence on evolutionary dynamics has been widely recognized
in eusocial insects and other species. Boehm (1982) and Eibl-Eibesfeldt
(1982) first applied this reasoning to human evolution, exploring the
role of culturally transmitted practices that reduce phenotypic variation
within groups. Examples of such practices are leveling institutions,
such as monogamy and food sharing among nonkin (namely,
those practices which reduce within-group differences in reproductive
fitness or material well-being). By reducing within-group differences
in individual success, such structures may have attenuated within-group
genetic or cultural selection operating against individually
costly but group-beneficial practices, thus giving the groups adopting
them advantages in intergroup contests. Group-level institutions are
thus constructed environments capable of imparting distinctive direction
and pace to the process of biological evolution and cultural
change. Hence, the evolutionary success of social institutions that
reduce phenotypic variation within groups may be explained by the
fact that they retard selection pressures working against ingroup-beneficial
individual traits and that high frequencies of bearers of these
traits reduce the likelihood of group extinctions (Bowles, Choi, and
Hopfensitz 2003).
In chapter 8, Rajiv Sethi and E. Somanathan provide an overview of
evolutionary models of reciprocity conforming to the logic described in
the previous paragraph and also present their own model of common
property resource use. In their model, there are two types of individu-
als: reciprocators who choose extraction levels that are consistent with
ef¬cient and fair resource use, monitor other users, and punish those
who over-extract relative to the norm; and opportunists who choose
their extraction levels optimally in response to the presence or absence
of reciprocators and do not punish. Since monitoring is costly, and
opportunists comply with the norm only when it is in their interest
to do so, reciprocators obtain lower payoffs than opportunists within
all groups, regardless of composition. However, since the presence of
reciprocators alters the behavior of opportunists in a manner that bene-
¬ts all group members, a population of opportunists can be unstable
under random (non-assortative) matching. More strikingly, Sethi and
Somanathan show that even when a population of opportunists is
stable, stable states containing a mix of reciprocators and opportunists
can also exist.
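The within-group logic of Sethi and Somanathan's model can be sketched with a stylized payoff calculation. All parameter values and functional forms here are our illustrative assumptions, not those of the model in chapter 8:

```python
def group_payoffs(n, k, v=10.0, g=4.0, m=1.0, s=1.0, h=0.5):
    """Stylized per-member payoffs in a group of n resource users,
    k of whom are reciprocators. Hypothetical parameters:

      v: payoff from norm-compliant extraction
      g: private gain from over-extracting
      m: monitoring cost borne by each reciprocator
      s: sanction each reciprocator imposes on an over-extractor
      h: harm each over-extractor inflicts on every group member
    """
    comply = k * s >= g            # opportunists comply only if sanctions bite
    over = 0 if comply else n - k  # number of over-extractors
    harm = h * over                # externality suffered by each member
    reciprocator = v - m - harm    # always complies, always monitors
    opportunist = (v if comply else v + g - k * s) - harm
    return reciprocator, opportunist
```

Within any group, reciprocators earn less than the opportunists beside them because they alone bear the monitoring cost m; yet a group with enough reciprocators to induce compliance leaves everyone, reciprocators included, better off than an all-opportunist group.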
In chapter 7, Robert Boyd, Herbert Gintis, Samuel Bowles, and Peter
J. Richerson explore a deep asymmetry between altruistic coopera-
tion and altruistic punishment. They show that altruistic punishment
allows cooperation in quite large groups because the payoff disadvan-
tage of altruistic cooperators relative to defectors is independent of the
frequency of defectors in the population, while the cost disadvantage
of those engaged in altruistic punishment declines as defectors become
rare. Thus, when altruistic punishers are common, selection pressures
operating against them are weak. The fact that punishers experience
only a small disadvantage when defectors are rare means that weak
within-group evolutionary forces, such as conformist transmission,
can stabilize punishment and allow cooperation to persist. Computer
simulations show that selection among groups leads to the evolution
of altruistic punishment even under conditions in which group selection
could not maintain altruistic cooperation in the absence of punishment.
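The asymmetry can be made concrete with a back-of-the-envelope calculation; the parameter values are illustrative assumptions, not figures from chapter 7's simulations:

```python
def payoff_disadvantages(f, c=0.10, k=0.05):
    """Per-interaction payoff disadvantages as a function of the
    defector frequency f. Hypothetical parameters:

      c: cost of cooperating (paid no matter how many defect)
      k: cost of punishing one encountered defector
    """
    cooperator_gap = c      # flat in f: cooperation is costly regardless
    punisher_extra = k * f  # punishment cost vanishes as defectors become rare
    return cooperator_gap, punisher_extra
```

As f falls, the punisher's extra burden k*f shrinks toward zero, so weak within-group forces such as conformist transmission can offset it, whereas the cooperator's gap c never shrinks at all.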
The interested reader will find a number of related cultural and
gene-culture coevolution models exhibiting the evolutionary stability
of altruism in general, and strong reciprocity in particular, in recent
papers (Gintis 2000; Bowles 2001; Henrich and Boyd 2001; Gintis
2003a).

1.8 Strong Reciprocity: Altruistic Adaptation or Self-Interested
Error?

There is an alternative to our treatment of altruistic cooperation and
punishment that is widely offered in reaction to the evidence upon
which our model of strong reciprocity is based. The following is our
understanding of this argument, presented in its most defensible light.
Until about 10,000 years ago, before the advent of sedentary
agriculture, markets, and urban living, humans were generally sur-
rounded by kin and long-term community consociates. Humans were
thus rarely called upon to deal with strangers or interact in one-shot
situations. During the formative period in our evolutionary history,
therefore, humans developed a cognitive and emotional system that
reinforces cooperation among extended kin and others with whom
one lives in close and frequent contact, but developed little facility for
behaving differently when facing strangers in non-repeatable and/or
anonymous settings. Experimental games therefore confront sub-
jects with settings to which they have not evolved optimal responses.
It follows that strong reciprocity is simply irrational and mistaken
behavior. This accounts for the fact that the same behavior patterns
and their emotional correlates govern subject behavior in both anony-
mous, one-shot encounters and when subjects™ encounters with kin
and long-term neighbors. In sum, strong reciprocity is an historically
evolved form of enlightened self- and kin-interest that falsely appears
altruistic when deployed in social situations for which it was not an
adaptation.
From an operational standpoint, it matters little which of these views
is correct, since human behavior is the same in either case. However, if
altruism is actually misapplied self-interest, we might expect altruistic
behavior to be driven out of existence by consistently self-regarding
individuals in the long run. If these arguments are correct, the long
run would likely bring the collapse of the sophisticated forms of coopera-
tion that have arisen in civilized societies. Moreover, the alternative sug-
gests that agents can use their intellect to "learn" to behave selfishly when
confronted with the results of their suboptimal behavior. The evidence,
however, suggests that cooperation based on strong reciprocity can
unravel when there is no means of punishing free-riders but that it
does not unravel simply through repetition.
What is wrong with the alternative theory? First, it is probably not
true that prehistoric humans lived in groups composed solely of close
kin and long-term neighbors. Periodic social crises in human prehis-
tory, occurring at roughly thirty-year intervals on average, are proba-
ble: population contractions were common (Boone and Kessler
1999), and population crashes occurred in foraging groups at a mean
rate of perhaps once every thirty years (Keckler 1997). These and re-
lated archaeological facts suggest that foraging groups had relatively
short lifespans.
If the conditions under which humans emerged are similar to the
conditions of modern primates and/or contemporary hunter-gatherer
societies, we can reinforce our argument by noting that there is a con-
stant flow of individuals into and out of groups in such societies. Exog-
amy alone, under which young males or females relocate to
other groups to seek a mate, gives rise to considerable intergroup
mixing and frequent encounters with strangers and other agents with
whom one will not likely interact in the future. Contemporary foraging
groups, who are probably not that different in migratory patterns from
their prehistoric ancestors, are remarkably outbred compared to even
the simplest farming societies, from which we can infer that dealing
with strangers in short-term relationships was a common feature of
our evolutionary history. Henry Harpending (email communication)
has found in his studies of the Bushmen in the Kalahari that there
were essentially random patterns of mating over hundreds of kilo-
meters. See Fix (1999) for an overview and analysis of the relevant
data on this issue.
Second, if prehistoric humans rarely interacted with strangers,
then our emotional systems should not be finely tuned to degrees of
familiarity; we should treat all individuals as neighbors. But we in
fact are quite attuned to varying degrees of relatedness and propin-
quity. Most individuals care most about their children, next about their
close relatives, next about their close neighbors, next about their cona-
tionals, and so on, with decreasing levels of altruistic sentiment as
the bonds of association grow weaker. Even in experimental games,
repetition and absence of anonymity dramatically increase the level of
cooperation and punishment. There is thus considerable evidence that
altruistic cooperation and punishment in one-shot and anonymous set-
tings are the product of evolution and not simply errant behavior.
1.9 Strong Reciprocity and Cultural Evolution

Strong reciprocity is a behavioral schema that is compatible with a wide
variety of cultural norms. Strong reciprocators are predisposed to co-
operate in social dilemmas, but the particular social situations that will
be recognized as appropriate for cooperation are culturally variable.
Strong reciprocators punish group members who behave selfishly,
but the norms of fairness and the nature of punishment are culturally
variable.
In this section, we first present evidence that a wide variety of cul-
tural forms are compatible with strong reciprocity. We then argue that
the strong reciprocity schema is capable of stabilizing a set of cultural
norms, whether or not these norms promote the fitness of group mem-
bers. Finally, we suggest that the tendency for strong reciprocity to be
attached to prosocial norms can be accounted for by intergroup com-
petition, through which societies prevail over their competitors to the
extent that their cultural systems are fitness enhancing.

1.9.1 Cultural Diversity
What are the limits of cultural variability, and how does strong reci-
procity operate in distinct cultural settings? To expand the diversity
of cultural and economic circumstances of experimental subjects, we
undertook a large cross-cultural study of behavior in various games
including the ultimatum game (Henrich et al. 2001; Henrich et al.
2003). Twelve experienced field researchers, working in twelve coun-
tries on four continents, recruited subjects from fifteen small-scale soci-
eties exhibiting a wide variety of economic and cultural conditions.
These societies consisted of three foraging groups (the Hadza of East
Africa, the Au and Gnau of Papua New Guinea, and the Lamalera of
Indonesia), six slash-and-burn horticulturists and agropasturalists (the
Aché, Machiguenga, Quichua, Tsimané, and Achuar of South America,
and the Orma of East Africa), four nomadic herding groups (the Tur-
guud, Mongols, and Kazakhs of Central Asia, and the Sangu of East Af-
rica), and two sedentary, small-scale agricultural societies (the Mapuche
of South America and Zimbabwean farmers in Africa).
