The research below is a great reality check for anyone serious about human behavior.
Just Wrong: Behavioral Econ/Finance
We have been consistent and persistent antagonists of behavioral theories, and especially of behavioral economics and finance (BE). The field has always been far more marketing and salesmanship than science and sense, and very successful marketing at that. It just makes no sense: the most intractable problems of humans, society, and government are going to succumb to “nudges!?” It’s simplistic thinking — but very popular.
Specifically, the notion that humans, or any animals, act “irrationally” is patently false, and a deeply cynical ideological and rhetorical campaign to bolster failed economic ideas and so-called “theories.” The universal and uncritical use of this misleading rhetoric suggests that it is not that the economic theories are wrong and have chronically failed, but that we mere mortals are just darn “irrational” — apparently the worst sin there is to academics. It’s sinister.
By definition “irrational” life forms died off long ago.
Kahneman’s latest cartoonish notion of just two kinds of thinking is so purposefully ignorant of recent brain science that we have to assume his goal is to sell books, not to share anything useful. Suppose, charitably, that he was writing as an educator and not a scientist; even so, the data we see quickly debunk his theories.
Below is part of a great paper. It’s just one piece of research, but let’s hope it’s not smothered and ignored, and that replications follow. It’s hard (impossible, really) to combat pop pseudo-science as cleverly and programmatically sold as BE — but the facts will out. Read it and decide for yourself. This is excerpted from the original paper here. The paper has a lot of equations, but ignore them; the point is clear and strong, as outlined below. It’s one study against an endless stream of airport books, conferences, university courses, magazine articles, etc. The PR and marketing machine for anything with the word “behavioral” is a media darling. It will pass.
Disclosure: The lead author of this paper appears to strongly disagree with our characterization of BE as globally unsupported by experimental evidence. He asked us not to publish his article. Strong disagreement, based on facts, we always welcome. Emotional disagreements are irrelevant. We hope we are wrong on pretty much everything we share. We want only the best ideas and evidence and make no pretense of ever having that. We just share what we know. The full article is here.
Please disagree and send in better info and data.
“The conventional view in current psychology is that…
“In making predictions and judgments under uncertainty, people do not appear to follow the calculus of chance or the statistical theory of prediction. Instead they rely on a limited number of heuristics which sometimes yield reasonable judgments and sometimes lead to severe and systematic errors (Tversky and Kahneman, 1973, p. 237)”
This conclusion is based on a series of systematic and reliable biases in people’s judgments of probability, many identified in the 1970s and 1980s by Tversky, Kahneman, and colleagues. The heuristics and biases approach has reached a level of popularity rarely seen in psychology, with Kahneman receiving a Nobel Prize for his work in this area, and with numerous popular psychology books and newspaper articles presenting this view to the general public. Indeed, the idea that people cannot reason with probabilities has become a widespread truism…
In this paper we describe a very simple alternative to the heuristics and biases view. In this account people reason about probability according to the rules of probability theory, but are subject to random variation or noise in this reasoning process. While it may seem that noise in the workings of a rational system will result in nothing more than “error variance centered around a normative response” (Shafir and LeBoeuf, 2002), in fact there are a number of ways in which random variation can produce systematic and reliable deviations from the normatively correct responses. These systematic deviations often correspond to the observed patterns of bias seen in people’s probability judgments.
This “probability theory plus noise” account of biases in human probabilistic reasoning has a critical characteristic that differentiates it from the heuristic…account: the random variation account predicts that there should, on average, be no systematic bias or error in people’s judgments for some specific probabilities.
The above result is based on a specific expression XE(A, B) that cancels out the effect of noise in people’s probability judgments. When noise is cancelled in this way we get a mean value for XE(A, B) that is almost exactly equal to that predicted by probability theory. This close agreement with probability theory occurs alongside reliable and systematic biases towards conjunction and disjunction errors in the same probability judgments.
These results provide strong evidence that, when making judgments under uncertainty, people do in fact closely follow the calculus of chance provided by probability theory, and that the observed patterns of bias are due to the systematic influence of noise on those judgments.
Note that we are not suggesting that people are consciously aware of the equations of probability theory when estimating probabilities. That is evidently not the case, given the high rates of conjunction and disjunction errors in people’s probability judgments. Indeed we doubt whether any of the participants in our experiment were aware of the invariant required by probability theory’s addition law, or would be able to consciously apply that law to their probability estimations. Instead we propose that people’s probability judgments are derived from a “black box” module of cognition that estimates the probability of an event A by retrieving (some analogue of) a count of instances of A from memory. Such a mechanism is necessarily subject to the requirements of set theory and therefore implicitly embodies the equations of probability theory (which derive from set theory).
We expect this probability estimation module to be unconscious, automatic, rapid, parallel, relatively undemanding of cognitive capacity, and evolutionarily “old” and so shared between humans and animals. In the terms used by Stanovich and West (2000), this is a System 1 process, albeit one that is not based on heuristics but instead on the normatively correct rules of probability theory. Support for this modular view comes from the fact that people can make probability judgments rapidly and easily, and typically do not have access to the reasons behind their estimations (using our experiment as an example, if we asked a participant why they judged that rain had 70% probability of occurring, they would not be able to give any answer beyond saying that that was their estimate based on their experience). Further support comes from experimental results showing that animals effectively judge probabilities (for instance, the probability of obtaining food from a given source) and that their judged probabilities are typically close to optimal (for example, see Kheifets and Gallistel, 2012).
We also expect this automatic probability estimation process to be subject to occasional intervention by other cognitive systems, and particularly by conscious, symbolic, and slow “System 2” processes that can check the logical validity of estimates produced by the probability module. We expect this type of intervention to be both relatively rare and effortful. To quote one participant in an earlier experiment where participants had to choose between betting on a single event or on a conjunction containing that event: “I know that the right answer is always to bet on the single one, but sometimes I’m really drawn to the double one, and it’s hard to resist.”
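To see the core mechanism for yourself, here is a minimal simulation sketch. It is ours, not the authors’, and it makes two assumptions: that each probability estimate comes from counting remembered instances of an event, with each instance misread with some small probability d (so estimates regress toward 0.5), and that XE(A, B) is the addition-law combination P(A) + P(B) − P(A and B) − P(A or B), whose true value is exactly zero (the paper defines its own exact expression; this is our stand-in). Under those assumptions, unbiased random noise alone produces systematic conjunction “errors” while the addition-law combination averages out to zero:

```python
import random

random.seed(0)

def noisy_estimate(p, d=0.1, n=100):
    """Estimate an event's probability by counting n remembered instances.
    Each instance truly occurs with probability p, but is misread (flipped)
    with probability d, so E[estimate] = (1 - 2d) * p + d."""
    hits = sum((random.random() < p) != (random.random() < d) for _ in range(n))
    return hits / n

# True probabilities satisfying probability theory, including the
# addition law: P(A) + P(B) = P(A and B) + P(A or B).
pA, pB, pAB, pAorB = 0.4, 0.5, 0.3, 0.6

trials = 10_000
est_A, est_AB, xe = [], [], []
for _ in range(trials):
    eA, eB = noisy_estimate(pA), noisy_estimate(pB)
    eAB, eOr = noisy_estimate(pAB), noisy_estimate(pAorB)
    est_A.append(eA)
    est_AB.append(eAB)
    # Addition-law combination: true value is 0, and the noise bias d
    # cancels term by term (+d +d -d -d), so its mean stays near 0.
    xe.append(eA + eB - eAB - eOr)

mean_A = sum(est_A) / trials    # ~ 0.8 * 0.4 + 0.1 = 0.42 (inflated by noise)
mean_AB = sum(est_AB) / trials  # ~ 0.8 * 0.3 + 0.1 = 0.34 (inflated by noise)
mean_xe = sum(xe) / trials      # ~ 0.0 (noise cancels out)

# Conjunction "errors": the noisy estimate of P(A and B) exceeds the
# noisy estimate of P(A), despite a perfectly probabilistic reasoner.
conj_rate = sum(ab > a for a, ab in zip(est_A, est_AB)) / trials
print(mean_A, mean_AB, mean_xe, conj_rate)
```

With these toy numbers the individual estimates are systematically biased toward 0.5 and a nontrivial fraction of trials show a conjunction error, yet the mean of the addition-law combination sits almost exactly at the value probability theory predicts — the qualitative pattern the excerpt describes, from nothing but noise on top of correct probabilistic reasoning.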