Love is a virtue unknown to beings of the Vulcan race, according to the annals of Star Trek. The precision of the Vulcans, exemplified by Mr. Spock, and their stern powers of judgement, conform rigidly to the tenets of the mathematical theory of games. One writer has described the theory of games as the Theory of How to Get Your Own Way. And that raises the question, How can we be relied upon to do what is truly right, to love our neighbour, or anyone else, if all we have to guide us is self-interest?
The short and surprising answer is that selfishness serves as an excellent guide to moral behaviour. This paradoxical result is explained and explored at length in Matt Ridley's The Origins of Virtue, a book of popular morality by a zoologist-turned-economist.
Ridley rests much of his argument on a careful assessment of The Prisoner's Dilemma. This is a decision problem of the vexatious type known as a "social trap". A social trap is a game in which pure reason dictates only one possible decision to the players; but if they all make that "rational" decision, everybody loses. In other words, rational self-interest often conflicts with the good of everyone.
The game of Prisoner's Dilemma is set up so that both players are strongly tempted to cheat on one another, but if they both try to do so, they both lose. If they co-operate, they both can win. But in a single encounter, the only rational move is to rat on your co-player. "What we are seeking," says Ridley, "is the logically 'best' action in a moral vacuum, not the 'right' thing to do." So rational players cheat.
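For readers who like to see the trap in miniature, here is a sketch in Python using the payoff numbers that have become conventional in the literature (temptation 5, reward 3, punishment 1, sucker's payoff 0). The numbers are illustrative; only their ordering matters.

```python
# One-shot Prisoner's Dilemma with the conventional payoffs:
# T=5 (temptation), R=3 (reward), P=1 (punishment), S=0 (sucker).
# 'C' means co-operate, 'D' means defect.
PAYOFF = {
    ('C', 'C'): (3, 3),  # both co-operate: both rewarded
    ('C', 'D'): (0, 5),  # the lone co-operator is played for a sucker
    ('D', 'C'): (5, 0),
    ('D', 'D'): (1, 1),  # both cheat: both punished
}

# Whatever the other player does, defecting pays more. So the "rational"
# player defects, and two rational players end up with 1 each instead of
# the 3 each that mutual co-operation would have paid.
for other in ('C', 'D'):
    if_i_cooperate = PAYOFF[('C', other)][0]
    if_i_defect = PAYOFF[('D', other)][0]
    print(f"other plays {other}: co-operating earns {if_i_cooperate}, "
          f"defecting earns {if_i_defect}")
```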
The Prisoner's Dilemma sprays its rational curse over many situations where economic choices can lead to destruction: in fisheries, in marriages, in industrial waste management, in the rain forest, and elsewhere. Under the best strategy, choice and mercy are denied the players, and logic drips suicide into their veins.
The best brains in game theory chewed on this dreadful lesson for many years. Then, stories of the truces that repeatedly broke out among combatants during the First World War inspired Robert Axelrod, a young political scientist. They led him to ask whether it was possible for co-operation to evolve, even in a population of ruthless egoists, and even without a central authority to keep the peace. In 1979 Axelrod programmed a computer to explore the logic of co-operation, and thereby opened up a realm of moral studies unknown to all earlier ages.
Axelrod set up a computer tournament, inviting experts to submit entries that would play the Prisoner's Dilemma two hundred times against every other strategy submitted. At the end of this vast contest the scores of each strategy would be compared. You might think that the nasty strategy "Always-defect", unanimously favoured by the rational game theorists, would come out on top.
To everyone's amazement, out of the fourteen strategies submitted, the "nice" ones did best. The nicest and simplest strategy of all, "Tit-for-tat", was submitted by Anatol Rapoport, the Toronto mathematical psychologist and game theorist. His strategy simply began by co-operating with its co-player and then did whatever the other guy did last time. It won the grand prize. So Axelrod hosted another tournament, and this time sixty-two strategies set out to beat Tit-for-tat. Once again Tit-for-tat came out on top. The tournaments drove home that the Prisoner's Dilemma, played repeatedly, is not a zero-sum game. Both sides can win.
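The flavour of Axelrod's round robin is easy to reconstruct in toy form. The sketch below is not Axelrod's program, and the four entries are illustrative stand-ins, not the actual submissions; but the mechanics are the same: every strategy meets every other (and a twin of itself) for two hundred rounds, and the totals are compared.

```python
import itertools

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(mine, theirs):
    # Co-operate first, then simply echo the other player's last move.
    return theirs[-1] if theirs else 'C'

def always_defect(mine, theirs):
    return 'D'

def always_cooperate(mine, theirs):
    return 'C'

def grudger(mine, theirs):
    # Co-operate until crossed once, then defect forever.
    return 'D' if 'D' in theirs else 'C'

def match(a, b, rounds=200):
    ha, hb, sa, sb = [], [], 0, 0
    for _ in range(rounds):
        ma, mb = a(ha, hb), b(hb, ha)
        pa, pb = PAYOFF[(ma, mb)]
        ha.append(ma)
        hb.append(mb)
        sa, sb = sa + pa, sb + pb
    return sa, sb

entries = {'Tit-for-tat': tit_for_tat, 'Always-defect': always_defect,
           'Always-co-operate': always_cooperate, 'Grudger': grudger}
totals = dict.fromkeys(entries, 0)
# Every entry meets every other entry, and a twin of itself, once.
for (na, fa), (nb, fb) in itertools.combinations_with_replacement(
        entries.items(), 2):
    sa, sb = match(fa, fb)
    totals[na] += sa
    if na != nb:
        totals[nb] += sb

for name, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f'{name:18} {score}')
```

Even in this tiny field the nice strategies finish on top (Tit-for-tat shares the lead with Grudger); Tit-for-tat's outright victories emerged from the richer ecologies of fourteen and then sixty-two real entries.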
Sounds great. Biologists noticed that Tit-for-tat displays reciprocity: you scratch my back and I'll scratch yours, and nothing more than self-interest is needed to power it. The first real-life example of reciprocal altruism was found in (ugh!) vampire bats. These grisly beasts lead a lean life, for they often fail to reap their harvest of blood every night. Sixty hours without blood, and a vampire will approach death by starvation.
Reciprocal altruism comes to the rescue. When the bats do get a meal they tend to drink more blood than they need, so that they can offer the surplus to another bat by regurgitating some. This produces your classical Prisoner's Dilemma payoff: bats who feed each other are better off than bats that don't; bats that take but never give are best off of all; and bats that always give but never receive fare worst.
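In the jargon, that ordering of outcomes, temptation above reward above punishment above sucker's payoff, is exactly what defines a Prisoner's Dilemma. A minimal check, with purely hypothetical numbers standing in for units of blood:

```python
# Illustrative payoffs for the vampire-bat game; only the ordering matters.
T, R, P, S = 5, 3, 1, 0   # take-only, mutual sharing, no sharing, give-only
assert T > R > P > S      # the Prisoner's Dilemma ordering from the text
assert 2 * R > T + S      # and mutual sharing beats taking turns exploiting
```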
In the bat's world, everything's in place to maintain favourable vampire payoffs over many years. The bats tend to roost in the same places; they live for up to eighteen years; they get to know each other as individuals; and they get the opportunity to play the game repeatedly. Since the bats are not closely related, nepotism doesn't explain their generosity. So they play Tit-for-tat pure and simple. Cheats are quickly detected and reciprocity rules the roost.
As the art of computer morality advanced, critics set fiendish tests for Tit-for-tat and its competitors. They set up tournaments in which randomness varied the action: players made occasional mistakes, or switched between tactics at random. It all produced a dizzying succession of results.
In some of these games Tit-for-tat failed to come out on top, but a near relation called Generous-Tit-for-tat won out. Unfortunately Generous itself is vulnerable, because it allows more naive strategies to spread. The pacifist strategy, "Always-co-operate", can flourish among Generous players. But Always-co-operate falls easy victim to Always-defect, the Ivan Boesky Wall Street thug among strategies, which then flourishes. So Generous beats Tit-for-tat but encourages Always-co-operate, which lets in the Ivan Boeskys and Gordon Gekkos, and we're back to Thomas Hobbes's war of all against all. It's getting hard to follow the form sheet. The final, best, and most stable strategy remains elusive.
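Generous-Tit-for-tat is usually described as Tit-for-tat with a measure of mercy: after the other player defects, it retaliates only some of the time. A sketch, assuming a forgiveness rate of one third (the figure often quoted for the standard payoffs) and a five per cent chance of a mistaken move standing in for the critics' randomness:

```python
import random

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(theirs):
    return theirs[-1] if theirs else 'C'

def generous_tft(theirs, forgiveness=1/3):
    # Tit-for-tat with mercy: after a defection, retaliate only some
    # of the time.
    if not theirs or theirs[-1] == 'C':
        return 'C'
    return 'C' if random.random() < forgiveness else 'D'

def noisy_self_play(strategy, rounds=5000, error=0.05):
    # Two copies of one strategy play each other, but every intended
    # move is flipped with a small probability. Returns the average
    # payoff per player per round.
    ha, hb, total = [], [], 0
    for _ in range(rounds):
        ma, mb = strategy(hb), strategy(ha)
        if random.random() < error:
            ma = 'D' if ma == 'C' else 'C'
        if random.random() < error:
            mb = 'D' if mb == 'C' else 'C'
        ha.append(ma)
        hb.append(mb)
        total += sum(PAYOFF[(ma, mb)])
    return total / (2 * rounds)

# A single slip sends two Tit-for-tats into a long vendetta of echoed
# defections; Generous-Tit-for-tat forgives its way back to co-operation.
print('noisy Tit-for-tat pair: ', round(noisy_self_play(tit_for_tat), 2))
print('noisy Generous-TFT pair:', round(noisy_self_play(generous_tft), 2))
```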
Hoping for a winner, researchers tried a strategy known as "Pavlov", which learns by experience in a rough way: win-stay, lose-shift. Pavlov switches its bets like a simple-minded roulette player: if it keeps winning on red, it stays there; once it loses, it switches to black. In the same way it switches from being nice to being nasty according to the behaviour of its co-players. Pavlov is "nice" because it starts by co-operating, like Tit-for-tat. But it has a vindictive streak which lets it exploit weak pacifists like Always-co-operate. So it wins a bundle off the suckers. But Pavlov collapses completely in the face of Always-defect. We seem to be back where we began, in Hobbesland.
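In strategy terms the rule is simple: open nicely, repeat your last move after a good payoff (the reward or the temptation), switch after a poor one (the punishment or the sucker's payoff). A sketch, with a small error rate assumed so that Pavlov's vindictive streak gets a chance to show itself against the pacifist:

```python
import random

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def pavlov(mine, theirs):
    # Win-stay, lose-shift: open nicely; repeat your last move after a
    # good payoff (R=3 or T=5), switch after a poor one (P=1 or S=0).
    if not mine:
        return 'C'
    if PAYOFF[(mine[-1], theirs[-1])][0] >= 3:
        return mine[-1]
    return 'D' if mine[-1] == 'C' else 'C'

def run(opponent, rounds=5000, error=0.02):
    # Pavlov against a fixed opponent, with occasional slips so that an
    # accidental defection can occur. Returns Pavlov's average payoff.
    mine, theirs, score = [], [], 0
    for _ in range(rounds):
        m = pavlov(mine, theirs)
        if random.random() < error:
            m = 'D' if m == 'C' else 'C'
        o = opponent(theirs, mine)
        mine.append(m)
        theirs.append(o)
        score += PAYOFF[(m, o)][0]
    return score / rounds

# After one slip against the pacifist, defecting keeps paying 5, so
# Pavlov stays nasty and milks the sucker; against Always-defect it
# dithers between 0 and 1 and collapses.
print('Pavlov vs Always-co-operate:', round(run(lambda m, t: 'C'), 2))
print('Pavlov vs Always-defect:    ', round(run(lambda m, t: 'D'), 2))
```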
Not quite. Played in the more realistic world of probabilities, Pavlov adjusts its ratio of nice-to-nasty according to the behaviour of its environment. In the end Pavlov becomes nice enough for everyone to live comfortably, yet nasty enough to stamp out the thugs and ganefs. We have found morality in the computer. Or have we? Again, not quite. The problem is that in all the computer games studied so far, the players move simultaneously. But in real life the vampire bats do not do each other favours at the same moment. They take turns.
That suggested a tournament of "asynchronous" Prisoner's Dilemma, and sure enough a strategy evolved that beat Pavlov. Call this one "Firm-but-fair". It is just a wee bit nicer than Pavlov: Firm-but-fair continues to co-operate even after being taken for a sucker in the previous round. The point to notice, says Ridley, "is that making the game asynchronous makes guarded generosity even more rewarding." It pays to elicit co-operation by being nice. It's better to meet new customers with a smile.
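The turn-taking version is easy to caricature, though the caricature below is mine, not Ridley's: one simple way to model asynchrony is to let the players move one at a time, scoring each new move against the other's standing move. Firm-but-fair follows the essay's description, a Pavlov that keeps co-operating even after being played for a sucker:

```python
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def firm_but_fair(mine, theirs):
    # As described in the text: like Pavlov (win-stay, lose-shift),
    # except that being suckered does not provoke a defection.
    if not mine:
        return 'C'
    last = (mine[-1], theirs[-1])
    if last == ('C', 'D'):           # taken for a sucker: co-operate anyway
        return 'C'
    if PAYOFF[last][0] >= 3:         # a good payoff: stay
        return mine[-1]
    return 'D' if mine[-1] == 'C' else 'C'   # a poor payoff: shift

def alternating_match(strat_a, strat_b, rounds=400):
    # A simple model of asynchrony (an assumption of this sketch):
    # players alternate moves, and the mover is scored against the
    # other player's most recent move, taken as 'C' before either moves.
    strats = {'A': strat_a, 'B': strat_b}
    hist = {'A': [], 'B': []}
    last = {'A': 'C', 'B': 'C'}
    score = {'A': 0, 'B': 0}
    for t in range(rounds):
        me, you = ('A', 'B') if t % 2 == 0 else ('B', 'A')
        move = strats[me](hist[me], hist[you])
        score[me] += PAYOFF[(move, last[you])][0]
        hist[me].append(move)
        last[me] = move
    return score

print(alternating_match(firm_but_fair, firm_but_fair))     # steady co-operation
print(alternating_match(firm_but_fair, lambda m, t: 'D'))  # patience exploited
```

Head to head against a pure defector, patience costs Firm-but-fair dearly, as the second line shows; its strength showed up in evolving populations of turn-takers, where it displaced Pavlov.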
So! We begin to find that public reputation counts for something. Other people's feelings have somehow sneaked into this purely selfish world of reason. But we're not out of the jungle yet. The Prisoner's Dilemma strategy has to be very finely tuned to make reciprocity pay, and this is only a two-handed game. In real life we find many-handed games, played by groups who have limited resources, and theorists have argued that no reciprocal strategy can tolerate even a rare defection in a large group.
Experiments in the wild seem to show that the power of social ostracism is enough to preserve reciprocity, even among fish. Sticklebacks tend to pick the same partners, and reject others, when they go out to inspect predators. If fish can keep score on who's reliable and who isn't, the higher mammals ought to find it easy. Certainly human beings have enough mental capacity to pick those few reliable friends encountered in hundreds and even thousands of unique meetings. The process is called "discriminating altruism." We generally know who can be trusted, even after a thirty-minute encounter, and we desire to be trusted ourselves.
Summing all this up, Ridley argues that natural selection has engraved reciprocity into our genes over the millions of years of evolution in which we developed into civilized beings, and that we can't get away from those inborn, nice instincts.
These games seem so cold-blooded it's hard to know how they can lead to morality as we actually live and suffer by it. We are entitled to wonder if there can be any connection at all between the gene-centred cynicism of game theorists, and the traditional passions and sentiments: our intuitions of sympathy, trust, duty, love, responsibility, and fairness. It looks as though the only lesson the computer can teach us is that morality is nothing but the highest form of expediency.
That's certainly an advance on the heartless view of the influential American philosopher Richard Rorty, who states that for the "liberal ironist" like himself, "There is no answer to the question 'Why not be cruel?'" But we who are not liberal ironists know that we must not be cruel, and we would like to fit that invincible fact into the rest of science. If we can't, then so much the worse for science.
We saw a hint of a connection between logic and sentiment in the two-handed, asynchronous strategy of Firm-but-fair, where it began to look as though public reputation could play a part in calculated game-playing. The honest player gets more out of each game if honest players in the future will want to play with him. The shadow of the future, as Axelrod calls it, casts an aura of concern into Mr. Spock the Vulcan's dark, logical selfishness.
Players must solve the problem of how to communicate commitment, that is, trustworthiness, in a world where everybody knows that everybody else has his own selfish interests at heart. The argument goes that players can do their best by showing they will go to irrational lengths to fulfill their promises. One quick way is to display deep emotions, which most people find difficult and exhausting to falsify. With this strategy, it turns out to be rational to behave irrationally.
"Emotions alter the rewards of commitment problems," Ridley writes, "bringing forward to the present distant costs that would not have arisen in the rational calculation. Rage deters transgressors; guilt makes cheating painful for the cheat, envy unmasks self-interest; contempt earns respect; shame punishes; compassion elicits reciprocal passion. And," he adds, introducing what many consider to be the most important of the passions, "love commits us to a relationship."
Love, some people think, is the most irrational of the passions. But for the gene-team of humanity, it pays to fall in love. Intense romantic love may not last, but its power to cement lasting habits quickly keeps families together, on the average at least. And when it comes to surviving into the future, genes act as though they know favourable averages are good enough. So we have this pretty paradox: love is selfless and even altruistic, but it is a selfish altruism, a gamble rooted in rational calculation. The ultimate challenge for Prisoner's Dilemma players is to attract the right partner. And that's how the wider society of not-so-indifferent bystanders exerts its influence on the cramped and selfish strategies of the two-handed game.
With these convincing arguments Ridley shows us how we may at last solve what the Germans have called "Das Adam Smith Problem". Smith's first book, The Theory of Moral Sentiments, published in 1759, assumes that people are driven, apparently irrationally, by moral sentiments. He appears to contradict himself in An Inquiry into the Nature & Causes of the Wealth of Nations, published in 1776, in which he derives the wellsprings of a successful economy from the notion of rational self-interest.
Ridley builds on this structure of passionate rationality, and applies it to the much larger problems we face in society today. He applies his explanatory principles to the most dangerous questions: those of war, tribal hatreds, markets, property, trade, and the environment. He shows how, in each of these often intractable areas of conflict, the sturdy armature of our evolved, innate, and quirky human nature can lead us to solutions.
Isaiah Berlin's rendering of Immanuel Kant's dictum comes to mind: "Out of the crooked timber of humanity no straight thing was ever made." Kant was right; but it is only by facing up to our own crookedness, and that of the entire natural world, that we will succeed in cobbling together social structures that are, at best, chaotically stable. We should not try to do better than that. Stability of the chaotic and incurably uncomfortable kind is all our odd, inherited twists will ever permit us.
Ridley's train of reasoning tells us that our moral human nature arises from the many layers of our primitive instincts, millions of years old and stratified, deep and hard. These survival instincts do not always agree among themselves even in the simplest cases. Even less do they harmonize with the topmost layer of our minds, the layer of civilization only 11,000 years old, where novel, emergent demands barely cover the more primitive understrata. The civilized art of life warns me I will wreck myself if I overindulge in food, drink, and sex, whereas the primitive ice-age man within me grasps for feasts and comforts at every available opportunity. In our sudden predicament of plenty, each one of us is doomed to argue among our many evolved selves and to debate every decision without end, long after we have acted on it.
It is an interesting question whether the type of naturalistic explanation so ably set out in Ridley's book is sufficient to keep our more primitive instincts in line. Perhaps we need to invoke some higher form of order that transcends anything computers can do. If so, then whatever gives that order to our world, if anything does, exhibits a curiously convoluted character, and an ironical one at that.
Richard Lubbock is a writer on natural philosophy.