I am fed up with hearing how irrational taxi drivers are.
That probably needs a little bit of explanation. Psychologists and economists talk about a curious phenomenon whereby taxi drivers seem not to maximise income.
On sunny days, few people want to ride in a taxi. On rainy days, lots of people do. Thus, you can earn more money in an hour on a rainy day than on a sunny day. Therefore, reason the economists, if one works long days when it's rainy and short days when it's sunny, one maximises income for the same total leisure time. But they observe that in reality, taxi drivers typically do the opposite: they work long days when it is sunny and short days when it's rainy. It seems that taxi drivers prefer to work to a daily target, keeping going until they reach that target.
This, say the psychologists and economists, is irrational and a mistake.
This observation is the basis for theories about how we make decisions, theories that feed into "behavioural economics". This week, the BBC science documentary series "Horizon" discussed those theories and how they came about, and it started with the taxi driver puzzle.
I’m going to show why the taxi drivers are rational and the economists are wrong.
Here's a game. I toss a coin, and you score 2.5 if it's heads and zero if it's tails. The aim is to score at least seven in seven tosses. Sounds easy, right? You only need three heads out of seven and you're well ahead (a score of 7.5, in fact). But what if the first three tosses all come up tails (a 1 in 8 chance) and your home, medical care (the taxi drivers who sparked the puzzle were USAian and therefore dependent on private healthcare at the time) and food depend on reaching that target of 7? Now you need at least 3 heads in your remaining 4 tosses, which has only a 5 in 16 chance of occurring, and your life is on the line. Now, wouldn't you prefer to have worked rather longer hours on the sunny days?
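A quick sketch (Python is my choice here, purely for illustration) makes the arithmetic explicit: the chance of hitting the target from the start, the chance of opening with three straight tails, and the slim chance of recovering afterwards.

```python
from math import comb

# The game from the text: each of 7 fair coin tosses scores 2.5 for
# heads and 0 for tails; the target is a total of at least 7, which
# means at least 3 heads.

def p_at_least(k_heads, n_tosses):
    """Exact probability of at least k_heads heads in n_tosses fair tosses."""
    return sum(comb(n_tosses, k) for k in range(k_heads, n_tosses + 1)) / 2 ** n_tosses

print(p_at_least(3, 7))  # chance of hitting the target over all 7 tosses: 0.7734375
print((1 / 2) ** 3)      # chance of opening with three straight tails: 0.125
print(p_at_least(3, 4))  # chance of recovering after three tails: 0.3125
```

Roughly a three-in-four chance of comfort at the outset, but after one unlucky opening you are down to less than one in three.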
In essence, this is the concept of the random walk and catastrophe point. The most basic random walk is to toss a coin and move one step left if it’s heads and one step right if it’s tails. To your right there is a steep cliff and you will fall and die if you travel too far to your right. If the cliff is right next to you, then you have a 50% chance of dying on your first toss. If it’s one step away, then you can’t die on the first toss, but if you toss the coin twice you have a 25% chance of dying. This is why rich people can afford to start businesses and poorer people can’t: the rich people can absorb much more in terms of early losses, whereas if things go wrong early on for the poorer people then they can’t afford to keep going. A concept one would have thought economists would understand.
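The cliff scenario can be checked the same way. This sketch (mine, not from any economics text) pushes a probability distribution through the walk step by step, where `distance` is the number of rightward steps needed to reach the cliff edge:

```python
# Random walk toward a cliff: each fair toss moves you one step left
# or one step right; the cliff lies `distance` steps to your right,
# and you fall the moment you reach it.

def p_ruin(distance, n_tosses):
    """Probability of reaching the cliff within n_tosses fair steps."""
    probs = {0: 1.0}  # position relative to start; cliff at +distance
    fallen = 0.0
    for _ in range(n_tosses):
        nxt = {}
        for pos, p in probs.items():
            for step in (-1, +1):
                new = pos + step
                if new == distance:
                    fallen += p / 2          # stepped off the cliff
                else:
                    nxt[new] = nxt.get(new, 0.0) + p / 2
        probs = nxt
    return fallen

print(p_ruin(1, 1))  # cliff right next to you: 0.5 on the first toss
print(p_ruin(2, 1))  # one step of slack: cannot die on the first toss: 0.0
print(p_ruin(2, 2))  # ...but a 0.25 chance of dying within two tosses
```

The further you start from the edge, the more early bad luck you can absorb, which is exactly the point about starting capital.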
The taxi driver might be able to make more profits by using a rainy day working system, but he cannot afford to absorb early losses from adopting it if there happens to be a heatwave. Therefore, he adopts a business model that minimises risk and gives the best chance that he will still have a roof over his head and food in his belly next week.
There is a fundamental bias in the work that the psychologists do, one rooted in class privilege (as exemplified by the random walk logic outlined above). The bias is the assumption that money represents winning and losing. Middle class people get to view finances in this way because they typically have stable financial circumstances: a nice home, a well-paid job with benefits attached, a nice car, and so on. It's no accident that several movies have used the threat of losing that stable basis as the instigating trauma that leads to a struggle for survival (Trading Places and Enemy Of The State give two different versions). With that stable footing, loss or gain of income is above all a matter of prestige and relative wellbeing.
For a lot of people, though, money represents things you need versus things you would like. If someone offers me £100, I think immediately in terms of the weekly grocery shop, replacing some worn-out clothes, etc. If someone offers me £1,000 then I think in terms of saving it for a rainy day (or, if I’m a taxi driver, for a heatwave) or for if some disaster requires urgent payment. In that sense, £1,000 actually seems less significant than £100, and I am more likely to gamble with it (if it started out as theirs, not mine) than with £100. Similarly, £10 represents a sizeable chunk of my weekly groceries but to other people it sounds like a bottle of wine to share with friends.
The point being, you can’t draw conclusions about irrational biases based on these experiments that use money as a scoring system. You have to understand what the money represents before you can detect the underlying logic and reasons for the behaviour.
But it doesn't stop there. Suppose I offer you £800 now, or £100 a week for the next nine weeks, or £1,000 twelve weeks from now. What conclusions could I draw from each answer? Well, I already explained how £100 could represent something real and tangible, and nine weeks of not worrying about how much I spend on groceries sounds very attractive. I might pick that one. But I might not trust the person offering the money to keep their word, so I could grab the £800 and run. Or I could reason that twelve weeks of interest on £800 isn't likely to equal £200, so £1,000 in three months' time seems like a good investment. But someone else might reason differently about all three situations because their experiences and circumstances are different from mine.
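To put that third option in perspective, here is a rough calculation (my own, assuming simple compounding; the article gives no figures): the annual interest rate you would need for £800 to grow to £1,000 in twelve weeks.

```python
# What annual rate would £800 need to earn to become £1,000 in 12 weeks?
# A back-of-the-envelope sketch assuming compound growth.
principal, target, weeks = 800, 1000, 12
growth = target / principal               # 1.25 over the 12 weeks
annual_rate = growth ** (52 / weeks) - 1  # compounded up to a full year
print(f"{annual_rate:.0%}")               # prints 163%
```

No savings account pays anything like that, so on pure arithmetic the deferred £1,000 dominates the £800 now, and yet real circumstances (trust, urgency, hunger) can still make the other choices rational.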
Economics is not merely about winning and losing: it’s also about social interactions. The Horizon programme showed at the end how monkeys trained to use tokens as “currency” also show some of the “biases” that researchers claim humans have. In particular, they claimed to have demonstrated “loss aversion”. This was done by having two researchers offer the monkeys food. One offered three bits of food but then took one bit away before making the exchange. The other offered only one bit of food, but then added one before making the exchange. The result is the same in each case: the monkey gets two bits of food. The monkeys invariably favoured the researcher who offered one piece and then added a second, over the one who offered three and took away one. This, claimed the researchers, proved loss-aversion.
I would make a different claim: that it demonstrates "deception aversion". Specifically, I would say it demonstrates that monkeys (and humans, when we behave similarly) tend to punish those whom we feel have acted dishonestly or deceptively to take advantage of us (for example, by adding hidden costs to the price of an item). Such a person is considered a social threat to be excised from the group. We suspect them of planning to take other things away as well, or of charging even more the next time. We judge their character (or at least, their social status and the threat to our own wellbeing) by their behaviour towards us. The person whose initial offer seems a worse deal than what they actually give us, on the other hand, is someone worth keeping sweet so that we get the same deal next time. Even if we know everyone is getting the same deal, at the very least we feel that we have not been cheated and that this person is less likely to steal from us in future.
Obviously, that's still a form of bias in one sense: con artists use exactly such thinking to get people to believe they're going to be better off when actually they're being robbed blind. But it's a much more rational thought process, because it is based on building social and emotional connections. Con artists set out to game or hack those social urges in order to win people over; psychopaths, similarly, are typically described as being good at faking those social connections. The fact that such people can act deceptively within that framework does not change the point: there are rational strategies behind rewarding "trustworthy" behaviour and punishing "deceptive" behaviour.
This is the problem with so much research. Richard Feynman argues in "Surely You're Joking, Mr Feynman!" that a good scientist should look at all the possible explanations for a result, not just the one that confirms his theory. Much social science (psychology and economics included) seems to ignore this fundamental principle. As a result, confirmation bias creeps in, shaped by the social-status privilege that researchers enjoy.