The main reason I left teaching psychology is that I couldn’t go on teaching lies.
When I analyzed the source of my discontent, I arrived at a dim view of my chosen field: a set of widely held pre-theoretic assumptions that could not be challenged or changed. (I documented these in my book, Scientific Introspection; see bit.ly/scientific-introspection.)
Since turning my attention to a more honest profession, writing fiction, I have left my frustration with psychology behind, but it still flares up when I see articles in the popular press featuring “scientific” psychology and its findings.
One such flare-up was provoked by an article in the November 7, 2015 edition of The Economist, in the ironically named “Science and Technology” section (“Religion and Altruism: Matthew 22:39”).
This article reported a study that claimed to prove that people brought up in a religious household are less generous than those who are not. Interesting finding, if true. But, even acknowledging that this is secondary reporting and I haven’t read the original study, it looks pretty bad.
The psychologists recruited 1170 families to study, in several countries, including Canada, China, Jordan, South Africa, and Turkey. They picked out one child in each family to study.
Okay, right away I have multiple problems. First, the sample size is so large that any statistical test will have enormous “power”: enough to flag extremely small, practically meaningless differences as statistically significant. A properly done report would include effect sizes or a power analysis to help a reader interpret the results, but such subtlety is far beyond the depth of popular reporting, so on the information given we simply cannot interpret any result, and we could stop reading right here. But let’s go on.
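The point about sample size can be made concrete. For a fixed gap between two group means, the test statistic grows with the square root of the group size, so any nonzero gap, however trivial, eventually crosses the conventional 1.96 significance threshold. A minimal sketch, using the 0.1-sticker gap reported later in the article and an assumed, purely illustrative standard deviation of 1.5 (the article gives none):

```python
import math

def z_stat(mean_diff, sd, n_per_group):
    """Two-sample z statistic for a difference in means, equal group sizes."""
    standard_error = sd * math.sqrt(2.0 / n_per_group)
    return mean_diff / standard_error

# The same fixed 0.1 difference, tested at ever-larger sample sizes.
# (SD of 1.5 is an assumption for illustration, not a figure from the study.)
for n in (50, 500, 5000, 50000):
    print(n, round(z_stat(0.1, 1.5, n), 2))
```

A gap that is nowhere near significance at 50 children per group sails past 1.96 once the groups are large enough, which is exactly why a bare significance claim, with no effect size attached, tells a reader nothing about whether a finding matters.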
It gets even more fun. They selected one child per family? How? Randomly? A coin toss? I doubt that very much. Imagine how the selection would actually work once parental permission enters the picture: we have sampling bias built in before we start. Were the selected children matched for age, gender, health, and education? I doubt that too. At this point we should toss the article away, because no matter what it finds, the results will be uninterpretable.
But wait, it gets better. The samples were selected from several different countries with wildly differing cultures, histories, literatures, and religions. Does “altruism” even have the same meaning across such cultures? Not a hint of caution is breathed. Again, any result will be meaningless.
Undeterred, the psychologists assessed by questionnaire how religious each family was. Imagine if a team of psychologists administered a questionnaire to your family to determine how religious it was. Do you think the results would be accurate? Me neither.
Moving on though, the psychologists offered each child a collection of 30 prizes (“attractive stickers”), and invited the child to select 10. I won’t spend any time wondering how the psychologists knew that a “sticker” is an equally attractive prize for all these children in such diverse cultures.
The children were then asked if they’d like to give away some of their 10 stickers to classmates who were excluded from the study. That was the measure of altruism. Yes, you read that right: “altruism” is operationally defined as the number of “stickers” a child is willing to give away. That is such a poor definition that it immediately casts doubt on the “external validity” of the whole study. Low external validity means the results cannot be generalized beyond the samples studied.
After some highly questionable manipulation of the data, including throwing out responses from Jews, Buddhists, and Hindus because of “small sample size” (!), the results were: Muslim children gave away 3.2 stickers on average, while Christian children gave away 3.3. Draw your own conclusion from that stunning finding.
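The 3.2-versus-3.3 gap can also be put in standardized terms. The article reports no standard deviation, so the 2.5 used below is a purely illustrative assumption, but under any plausible spread the effect is minuscule:

```python
def cohens_d(mean_a, mean_b, pooled_sd):
    """Standardized mean difference (Cohen's d)."""
    return abs(mean_a - mean_b) / pooled_sd

# 3.2 vs. 3.3 stickers; the pooled SD of 2.5 is assumed, not reported.
d = cohens_d(3.2, 3.3, 2.5)
print(round(d, 2))  # 0.04 -- well below the 0.2 conventionally labeled "small"
```

In other words, even taking the numbers at face value, the difference amounts to a few hundredths of a standard deviation.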
Moreover, the study noted, generosity (formerly called altruism) was inversely related to the degree of the family’s (self-reported) religiosity. In other words, the more religious the family believes it is, the less generous its children are (or not – depending on how much of this execrable “study” you take seriously!).
I should blame The Economist for failing to properly report this study’s methodology, flaws, and findings, but I understand that scientific literacy in this country (and in the world) is extremely low, and the ability to evaluate research is virtually nil. So wouldn’t it be better to publish an article that helps people evaluate research, instead of blithely reporting, without comment, meaningless studies like this? But they’re a magazine, and their job is to sell advertising, not to educate.
It’s no wonder that so many people distrust scientific research findings. I once heard a politician dismiss an objection to his propositions by saying, “You can prove anything with facts.” Considering what passes for facts these days, he was not entirely wrong.
If I cared enough, I would search out the original research report and see if it was as useless as this article suggests it was, but doing that would just make me more upset, and anyway, I have already done such detailed research analyses in my book, Scientific Introspection, and that didn’t change the world by much, did it?
At least nowadays, I write fiction that is clearly labeled as such.