In a famous experiment, some people are asked to choose between $100 today or $120 tomorrow. Many choose the former. Meanwhile, other people are asked to choose between $100 sixty days from now or $120 sixty-one days from now. Almost everyone chooses the latter. The puzzle is this: why are people willing to sacrifice $20 to avoid waiting a day right now, but not in the future?
The standard explanation is hyperbolic discounting: humans tend to weigh immediate effects much more strongly than distant ones. But I think the actual psychological effect at work here is just the percentage fallacy. If I ask for the money now, I may have to wait 60 seconds. But if I get it tomorrow, I have to wait 143,900% more. By contrast, waiting 61 days is only about 1.7% worse than waiting 60 days. Why not wait an extra 2% when you get 20% more money for it?
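For concreteness, the percentages above come out of a few lines of arithmetic (a sketch; the 60-second wait is the post's own illustrative assumption):

```python
# Percentage framing of the same one-day delay, near and far.
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

wait_now = 60                    # seconds of waiting if I take the money today
wait_tomorrow = SECONDS_PER_DAY  # seconds of waiting if I come back tomorrow

# Extra wait, expressed as a percentage of the shorter wait
extra_today = (wait_tomorrow - wait_now) / wait_now * 100
print(f"{extra_today:,.0f}% more waiting")  # 143,900% more

# The same one-day delay, starting 60 days out
extra_later = (61 - 60) / 60 * 100
print(f"{extra_later:.1f}% more waiting")   # about 1.7% more

# The money side is identical in both cases
extra_money = (120 - 100) / 100 * 100
print(f"{extra_money:.0f}% more money")     # 20% more
```

The point of the framing: the delay is one day in both cases, but as a percentage of the wait you already face, it shrinks from enormous to negligible.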
Has anyone done a test confirming the percentage fallacy? A good test would be to show that people treat the $100 vs. $120 tradeoff as equivalent to the $1000 vs. $1200 tradeoff.
Doesn’t either explanation lead to the same empirical predictions in all cases? I can’t think of any experiment that would separate the two ideas. If so, they’re really equivalent.
posted by Evan Harper
on October 7, 2010 #
This type of research is often used to try to infer a “natural” discount rate for net present value calculations, but I think it misses an element of human nature that sort of mimics the probability of default. For example, if the choice is $100 right now or $120 tomorrow, I have to consider that you may not show up tomorrow or you might say you were only kidding about offering me money. If I elect to take the $100 right now, I will know immediately if the offer is serious.
With sixty and sixty-one days in the future, the probability of default is effectively the same (both give you plenty of time to leave town and not pay up). And, as you say, once the recipient has accepted the concept of deferred gratification, there is little reason not to wait an extra day for 20% more.
posted by Ian
on October 7, 2010 #
This is the time value of money, an important reason why interest rates exist. In macro terms, $100 today is $105 a year from now. Of course, between just you and me, $100 today is $120 tomorrow. The same applies to a car or home loan: paying a discounted chunk now, which adds up to a big interest markup over time, can be more attractive than shelling out the full value now with no interest markup.
Experiments are happening all around us, right now.
posted by Daniel
on October 7, 2010 #
My initial reaction (not having read the actual papers, and so knowing nothing about the experimental setup) was the same as Ian’s. This seems more about the risk of default if deferring the payoff, and/or the hassle of having to show up again, stay in contact, etc.
How about this setup: I will give you either a check dated today for $100, or one dated today + deltaT for $100 + $X; which would you prefer?
This seems to make the two cases more equal on all dimensions except the amount of money.
posted by Emile
on October 7, 2010 #
Percentage fallacy seems like a distinct and interesting thing. Intuitively I’m not convinced it’s all of what’s going on here. It’d be interesting to see data for many combinations of delay and reward choices. But sort of like Evan says, it wouldn’t be obvious how to nail down which bias (or blend of biases) is behind the pattern; it probably wouldn’t exactly match any theory. I think there must be entirely other experiments that need to be done here, but I don’t know where to start designing them.
I’m sympathetic with Ian’s guess that hyperbolic discounting might have to do with uncertainty, but I wonder if the relationship’s a little different. It’s not necessarily that we’re rationally modeling the risk of default in particular; rather, maybe the reason we’re wired not to worry much about the future is that our world used to be so volatile that planning two months ahead was mostly a waste of time — the food you stored would rot or get stolen, etc. So that uncertainty is more or less hardwired in as disregard for the distant future.
You can also make that kind of “second-best” argument about overoptimism — “second-best” here meaning “it’s not rational, but it produced about the right behaviors back when our brains were evolving.” Overoptimism might give us an incorrect picture of the facts, but lead us to make good investments we wouldn’t otherwise make because we’d undervalue the returns.
There are other explanations for discounting, too — e.g., we don’t have the capacity to “taste” far-off rewards the way we do close ones because we can’t viscerally tie the reward to what we did to get it in the same way. It might be that long-range vision is a trait we could have evolved but didn’t since it wasn’t much use long ago, or that difficulty deferring gratification is just a fundamental problem with how anything like the human brain learns.
I think all I really know is that we’re far from done empirically describing people’s problems making decisions, and further from understanding their sources and implications. Some “fixes” might have bad side effects. It’s like trying to debug our crusty, buggy, but basically working firmware — hard and full of surprises.
Anyone recommend particular books, etc. on this stuff? Liked Dan Ariely’s books, liked the research in Nudge but hated the writing, and liked Robert Cialdini’s Influence even though a lot of it’s written like a marketing primer.
posted by Randall
on October 7, 2010 #
It seems like a straightforward trust issue, rather than either of those things. I guess I’m agreeing with Ian: the difference between option one, where you can stand between the dude and the door until he gives you the money, and the other three options is that you have to go on trust for the other three.
If people’s mothers made the offer, I imagine there’s a good chance you’d see a different result.
Well, depending on the mother.
posted by quinn
on October 7, 2010 #
Assuming the following: implicit trust by the person being questioned in the interviewer and the interviewer’s associated institution; that collecting tomorrow would mean any time after today; no inconvenience in collecting the money later; and that “many” means a substantially large portion across samples of varied size, it would seem that people’s tendency to anthropomorphize (give human characteristics to) everything is at play. Trust issues, and people’s experience with their own and others’ “fits of generosity,” would be factors that form in the subconscious and manifest as decisions in spite of every bit of trust a person would purport to have.
What lies behind the loci of focus in recognizing our own species is as primitive as it ever was.
posted by James
on October 7, 2010 #
A bird in the hand is worth >1.2 in the bush.
posted by Will
on October 10, 2010 #
I linked to this and your previous post on the percentage fallacy in a thread on Less Wrong, and there were some useful responses. Worth reading the thread, I think.
In particular, it turned out that something similar or identical had been discussed there before, and it cited an experiment demonstrating this effect by Tversky and Kahneman themselves. (Here’s the original paper. See page 457.) However, they didn’t describe it as a distinct fallacy as you did, but rather, explained it in terms of something they called “psychological accounts” (page 456). There’s probably some follow-up research on that.
(As for hyperbolic discounting, though, I think it’s been confirmed pretty strongly, in that it’s been shown not just that “humans tend to weigh immediate effects much more strongly than distant ones” but that this actually tends to follow a hyperbolic function. If you want to hypothesize that this effect is caused by the percentage fallacy, then you’d have to show that it predicts this particular pattern.)
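(A minimal numeric sketch of that difference, with made-up parameter values k and δ, assuming the usual one-parameter hyperbolic curve V = A / (1 + kt) against a constant exponential discount V = A·δ^t:)

```python
def hyperbolic(amount, days, k=0.25):
    # One-parameter hyperbolic discount: value falls off as 1 / (1 + k*t)
    return amount / (1 + k * days)

def exponential(amount, days, delta=0.8):
    # Exponential discount: a constant per-day factor delta
    return amount * delta ** days

# Near term: with these parameters, both models prefer $100 now
# over $120 tomorrow.
assert hyperbolic(100, 0) > hyperbolic(120, 1)    # 100.0 vs 96.0
assert exponential(100, 0) > exponential(120, 1)  # 100.0 vs 96.0

# Far term: the hyperbolic chooser reverses and waits the extra day,
# matching the experiment; the exponential chooser never reverses,
# since both values are just the day-0 values scaled by delta**60.
assert hyperbolic(100, 60) < hyperbolic(120, 61)    # 6.25 vs ~7.38
assert exponential(100, 60) > exponential(120, 61)
```

That preference reversal at a distance is the signature the experiments check for: an exponential discounter can never exhibit it, so observing it is evidence for a hyperbolic-like curve.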
posted by Adam
on October 24, 2010 #
You can also send comments by email.