Showing posts with label Psychology.

Wednesday, November 20, 2024

Storytelling Beats Facts in Social Media Mental Health Battle

The idea that social media causes mental health distress in children is plausible, but unfortunately it isn’t true. Trouble is, if you read what the press has written about it, you wouldn’t know that. Scientists have described it as a “moral panic” that isn’t backed by data, one promoted most prominently by one man: Jonathan Haidt.

Tuesday, February 06, 2024

No Evidence that Social Media Affects Mental Health, Zuckerberg Says

Last week, the US Senate held a hearing on the dangers of social media in preparation for legislation to improve child safety online. In this hearing, Meta CEO Mark Zuckerberg claimed that it has not been scientifically proven that social media causes mental health problems in adolescents.

This upset a lot of people who think that the link is obvious. But I am afraid Zuckerberg is right. Let’s have a look.

Saturday, April 29, 2023

Is being trans a social fad among teenagers?

Should transgender teens transition? This rather personal question occupies a prominent place in the American culture war. On the one side you have people claiming that it’s a socially contagious fad among the brainwashed woke who want to mutilate your innocent children. On the other side there are those saying that it’s saving the lives of minorities who’ve been forced to stay in the closet for too long. And then there are normal people, like you and me, who think both sides are crazy and could someone please summarise the facts in simple words, which is what I’m here for.



Transcript, references, and discussion on Patreon.

Saturday, April 01, 2023

Addicted to social media -- Is this even possible?

I recently learned there’s a new trend on social media: monk mode. It means cutting out distractions and going into self-isolation to become more productive. It’s supposedly based on science and particularly concerned with avoiding social media, because that’s addictive. But can social media really be addictive? Does monk mode work? And what’s the science behind it? In this video, we'll sort it out.



Transcript, references, and discussion on Patreon.

Saturday, June 05, 2021

Why do we see things that aren't there?

[This is a transcript of the video embedded below.]

A few weeks ago, we talked about false discoveries, scientists who claimed they’d found evidence for alien life. But why is it that we get so easily fooled and jump to conclusions? How bad is it and what can we do about it? That’s what we will talk about today.


My younger daughter had spaghetti for the first time when she was about two years old. When I put the plate in front of her she said “hair”.

The remarkable thing about this is not so much that she said this, but that all of you immediately understood why she said it. Spaghetti are kind of like hair. And as we get older and learn more about the world we find other things that also look kind of like hair. Willows, for example. Or mops. Even my hair sometimes looks like hair.

Our brains are pattern detectors. If you’ve seen one thing, they’ll tell you if you come across similar things. Psychologists call this apophenia: we see connections between unrelated things. These connections are not wrong, but they’re not particularly meaningful. That we see these connections, therefore, tells us more about the brain than about the things that our brain connects.

The famous Rorschach inkblot test, for example, uses apophenia in an attempt to find out what connections the patient readily draws. Of course these tests are difficult to interpret because once you start thinking about it, you’ll come up with all kinds of things for all kinds of reasons. Seeing patterns in clouds is also an example of apophenia.

And there are some patterns which we are particularly good at spotting, the ones that are most important for our survival, first among them: faces. We see faces everywhere and in anything. Psychologists call this pareidolia.

The most famous example may be Jesus on a toast. But there’s also a Jesus on the butt of that dog. There’s a face on Mars, a face in this box, a face in this pepper, and this washing machine has had enough.

The face on Mars is worth a closer look to see what’s going on. In 1976, the Viking mission sent back images from its orbit around Mars. When one of them looked like a face, a guy by the name of Richard C. Hoagland went on TV to declare it was evidence of a lost Martian civilization. But higher resolution images of the same spot from later missions don’t look like faces to us anymore. What’s going on?

What’s going on is that, when we lack information, our brain fills in details with whatever it thinks is the best matching pattern. That’s also what happened, if you remember my earlier video, with the canals on Mars. There never were any canals on Mars. They were imaging artifacts, supported by vivid imagination.

Michael Shermer, the American science writer and founder of The Skeptics Society, explains this phenomenon in his book “The believing brain”. He writes: “It is in this intersection of non-existing theory and nebulous data that the power of belief is at its zenith and the mind fills in the blanks”.

He uses as an example what happened when Galileo first observed Saturn, in 1610. Galileo’s telescope at the time had poor resolution, so Galileo couldn’t actually see the rings. But he could see there was something strange about Saturn; it didn’t seem to be round. Here is a photo that Jordi Busque took a few months ago with a resolution similar to what Galileo must have seen. What does it look like to you? Galileo claimed that Saturn was a triple planet.

Again, what’s happening is that the human brain isn’t just a passive data analysis machine. The brain doesn’t just look at an image and say: I don’t have enough data, maybe it’s noise or maybe it isn’t. No, it’ll come up with something that matches the noise, whether or not it has enough data to actually draw that conclusion reliably.

This makes sense from an evolutionary perspective. It’s better to see a mountain lion when there isn’t one than to not see a mountain lion when there is one. Can you spot the mountain lion? Pause the video before I spoil your fun. It’s here.

A remarkable experiment showing how we find patterns in noise was done in 2003 by researchers from Quebec and Scotland. They showed images of random white noise to their study participants. But the participants were told that half of those images contained the letter “S” hidden under the noise. And sure enough, people saw letters where there weren’t any.

Here’s the fun part. The researchers then took the images which the participants had identified as containing the letter “S” and overlaid them. And this overlay clearly showed an “S”.

What is going on? Well, if you randomly scatter points on a screen, then every once in a while they will coincidentally look somewhat like an “S”. If you then selectively pick random distributions that look a particular way, and discard the others, you indeed find what you were looking for. This experiment shows that the brain is really good at finding patterns. But it’s extremely bad at calculating the probability that this pattern could have come about coincidentally.
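If you want to see this selection effect for yourself, here is a minimal numerical sketch. It is my own illustration, not the original study’s procedure: the letter template, image size, and selection threshold are all invented. The idea is simply to generate pure-noise images, keep only those that happen to line up slightly with an “S”-shaped template, and average them.

    import numpy as np

    rng = np.random.default_rng(0)
    size = 20

    # Crude hand-drawn "S" mask (1 inside the letter, 0 outside) -- purely illustrative.
    template = np.zeros((size, size))
    template[2, 5:15] = 1     # top bar
    template[2:10, 5] = 1     # upper left stroke
    template[9, 5:15] = 1     # middle bar
    template[9:17, 14] = 1    # lower right stroke
    template[16, 5:15] = 1    # bottom bar

    # 20,000 images of pure Gaussian noise -- none of them contains a letter.
    noise = rng.normal(size=(20000, size, size))

    # A simulated observer "sees" an S whenever the noise happens to line up
    # with what they expect: score each image by its overlap with the template
    # and label the top quarter as "contains an S".
    scores = (noise * (template - template.mean())).sum(axis=(1, 2))
    selected = noise[scores > np.quantile(scores, 0.75)]

    # Averaging only the selected images makes an S emerge from pure noise.
    overlay = selected.mean(axis=0)
    print(overlay[template == 1].mean())   # noticeably positive
    print(overlay[template == 0].mean())   # essentially zero (slightly negative)

The selected images are still pure noise; the “S” in the overlay appears only because the selection criterion put it there.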

A final cognitive bias that I want to mention, which is also built into our brains, is anthropomorphism: we assign agency to inanimate objects. That’s why we, for example, get angry at our phones or cars even though that makes absolutely no sense.

Anthropomorphism was first studied in 1944 by Fritz Heider and Marianne Simmel. They showed people a video in which squares and triangles were moving around. And they found the participants described the video as if the squares and triangles had intentions. We naturally make up such stories. This is also why we have absolutely no problem with animation movies whose “main characters” are cars, sponges, or potatoes.

What does this mean? It means that our brains have a built-in tendency to jump to conclusions and to see meaningful connections where there aren’t any. That’s why we have astrophysicists who yell “aliens” each time they have unexplained data, and why we have particle physicists who get excited about each little “anomaly” even though they should know full well that they are almost certainly wasting their time. And it’s why, if I hear Beatles songs playing on two different radio stations at the same time, I’m afraid Paul McCartney died.

Kidding aside, it’s also why so many people fall for conspiracy theories. If someone they know gets ill, they can’t put it down to an unfortunate coincidence. They will look for an explanation, and if they look, they will find one. Maybe it’s some kind of radiation, or chemicals, or the evil government. Doesn’t really matter; the brain wants an explanation.

So, this is something to keep in mind: Our brains come up with a lot of false positives. We see patterns that aren’t there, we see intention where there isn’t any, and sometimes we see Jesus on the butt of a dog.

Saturday, December 14, 2019

How Scientists Can Avoid Cognitive Bias

Today I want to talk about a topic that is much, much more important than anything I have previously talked about. And that’s how cognitive biases prevent science from working properly.


Cognitive biases have received some attention in recent years, thanks to books like “Thinking Fast and Slow,” “You Are Not So Smart,” or “Blind Spot.” Unfortunately, this knowledge has not been put into action in scientific research. Scientists do correct for biases in statistical analysis of data and they do correct for biases in their measurement devices, but they still do not correct for biases in the most important apparatus that they use: Their own brain.

Before I tell you what problems this creates, a brief reminder of what a cognitive bias is. A cognitive bias is a thinking shortcut which the human brain uses to make faster decisions.

Cognitive biases work much like optical illusions. Take this example of an optical illusion. If your brain works normally, then the square labelled A looks much darker than the square labelled B.

[Example of optical illusion. Image: Wikipedia]

But if you compare the actual color of the pixels, you see that these squares have exactly the same color.

[Example of optical illusion. Image: Wikipedia]

The reason that we intuitively misjudge the color of these squares is that the image suggests it is really showing a three-dimensional scene where part of the floor is covered by a shadow. Your brain factors in the shadow and calculates back to the original color, correctly telling you that the actual color of square B must have been lighter than that of square A.

So, if someone asked you to judge the color in a natural scene, your answer would be correct. But if your task was to evaluate the color of pixels on the screen, you would give a wrong answer – unless you know of your bias and therefore do not rely on your intuition.

Cognitive biases work the same way and can be prevented the same way: by not relying on intuition. Cognitive biases are corrections that your brain applies to input to make your life easier. We all have them, and in every-day life, they are usually beneficial.

Maybe the best-known cognitive bias is attentional bias. It means that the more often you hear about something, the more important you think it is. This normally makes a lot of sense. Say, if many people you meet are talking about the flu, chances are the flu’s making the rounds and you are well advised to pay attention to what they’re saying and get a flu shot.

But attentional bias can draw your attention to false or irrelevant information, for example if the prevalence of a message is artificially amplified by social media, causing you to misjudge its relevance for your own life. A case where this frequently happens is terrorism. It receives a lot of media coverage and has people hugely worried, but if you look at the numbers, for most of us terrorism is very unlikely to directly affect our lives.

And this attentional bias also affects scientific judgement. If a research topic receives a lot of media coverage, or scientists hear a lot about it from their colleagues, those researchers who do not correct for attentional bias are likely to overrate the scientific relevance of the topic.

There are many other biases that affect scientific research. Take for example loss aversion and the closely related sunk-cost fallacy, more commonly known as “throwing good money after bad”. It means that if we have invested time or money into something, we are reluctant to let go of it and continue to invest in it even if it no longer makes sense, because getting out would mean admitting to ourselves that we made a mistake. Loss aversion is one of the reasons scientists continue to work on research agendas that have long stopped being promising.

But the most problematic cognitive bias in science is social reinforcement, also known as groupthink. This is what happens in almost closed, like-minded communities where people reassure each other that they are doing the right thing. They will develop a common narrative that is overly optimistic about their own research, and they will dismiss opinions from people outside their own community. Groupthink makes it basically impossible for researchers to identify their own mistakes and therefore stands in the way of the self-correction that is so essential for science.

A bias closely linked to social reinforcement is the shared information bias. This bias has the consequence that we are more likely to pay attention to information that is shared by many people we know than to information held by only a few people. You can see right away how this is problematic for science: how many people know of a certain fact tells you nothing about whether that fact is correct, and whether some information is widely shared should not be a factor in evaluating its correctness.

Now, there are lots of studies showing that we all have these cognitive biases and that intelligence does not make us any less likely to have them. It should be obvious, then, that we should organize scientific research so that scientists can avoid, or at least alleviate, their biases. Unfortunately, the way that research is currently organized has exactly the opposite effect: it makes cognitive biases worse.

For example, it is presently very difficult for a scientist to change their research topic, because getting a research grant requires that you document expertise. Likewise, no one will hire you to work on a topic you do not already have experience with.

Superficially this seems like a good strategy for investing money in science, because you reward people for bringing expertise. But if you think about the long-term consequences, it is a bad investment strategy. Because now, not only do researchers face a psychological hurdle to leaving behind a topic they have invested time in, they would also cause themselves financial trouble. As a consequence, researchers are basically forced to continue to claim that their research direction is promising and to continue working on topics that lead nowhere.

Another problem with the current organization of research is that it rewards scientists for exaggerating how exciting their research is and for working on popular topics, which makes social reinforcement worse and adds to the shared information bias.

I know this all sounds very negative, but there is good news too: Once you are aware that these cognitive biases exist and you know the problems that they can cause, it is easy to think of ways to work against them.

For example, researchers should be encouraged to change topics rather than basically being forced to continue what they’re already doing. Also, researchers should always list the shortcomings of their research topics, in lectures and papers, so that the shortcomings stay in the collective consciousness. Similarly, conferences should always have speakers from competing programs, and scientists should be encouraged to offer criticism of their community and not be shunned for it. These are all little improvements that every scientist can make individually, and once you start thinking about it, it’s not hard to come up with further ideas.

And always keep in mind: cognitive biases, like optical illusions, are a sign of a normally functioning brain. We all have them; it’s nothing to be ashamed of, but it is something that affects our objective evaluation of reality.

The reason this is so, so important to me, is that science drives innovation and if science does not work properly, progress in our societies will slow down. But cognitive bias in science is a problem we can solve, and that we should solve. Now you know how.

Tuesday, March 04, 2014

10 Misconceptions about Creativity

Lara, painting. She says
it's a snake and a trash can.

The American psyche is deeply traumatized by the finding that creativity scores of children and adults have been constantly declining since 1990. The consequence is a flood of advice on how to be more creative, books and seminars and websites. There’s no escaping the message: Get creative, now!

Science needs a creative element, and so every once in a while I read these pieces that come by my newsfeed. But they’re like one of these mildly pleasant songs that stop making sense when you listen to the lyrics. Clap your hands if you’re feeling like a room without a ceiling.

It’s not like I know a terrible lot about research on creativity. I’m sure there must be some research on it, right? But most of what I read isn’t even logically coherent.
  1. Creativity means solving problems.

    The NYT recently wrote in an article titled “Creativity Becomes an Academic Discipline”:
    “Once considered the product of genius or divine inspiration, creativity — the ability to spot problems and devise smart solutions — is being recast as a prized and teachable skill.”
    Yes, creativity is an essential ingredient to solving problems, but equating creativity with problem solving is like saying curiosity is a device to kill cats. It’s one possible use, but it’s not the only use and there are other ways to kill cats.

    Creativity is in the first place about creation, the creation of something new and interesting. The human brain has two different thought processes to solve problems. One is to make use of learned knowledge and proceed systematically, step by step. This is often referred to as ‘convergent thinking’ and dominantly makes use of the left side of the brain. The other is pattern-finding, free association, often referred to as ‘divergent thinking’, which employs more brain regions on the right side. It normally kicks in only if the straightforward left-brain attempt has failed, because it’s energetically more costly. Exactly what constitutes creative thinking is not well understood, but most agree it is a combination of both of these thought processes.

    Creative thinking is a way to arrive at solutions to problems, yes. Or you might create a solution looking for a problem. Creativity is also an essential ingredient to art and knowledge discovery, which might or might not solve any problem.

  2. Creativity means solving problems better.

    It takes my daughter about half an hour to get dressed. First she doesn’t know how to open the buttons, then she doesn’t know how to close them. She’ll try to wear her pants as a cap and pull her socks over the jeans just to then notice the boots won’t fit.

    It takes me 3 minutes to dress her – if she lets me – not because I’m not creative but because it’s not a problem which calls for a creative solution. Problems that can be solved with little effort by a known algorithm are in most cases best solved by convergent thinking.

    Xkcd nails it:

    But Newsweek bemoans:
    “Preschool children, on average, ask their parents about 100 questions a day. Why, why, why—sometimes parents just wish it’d stop. Tragically, it does stop. By middle school they’ve pretty much stopped asking.”
    There’s much to be said about schools not teaching children creative thinking – I agree it’s a real problem. But the main reason children stop asking questions is that they learn. And somewhere down the line they learn how to find answers themselves. The more we learn, the more problems we can address with known procedures.

    There’s a priori nothing wrong with solving problems non-creatively. In most cases creative thinking just wastes time and brain-power. You don’t have to reinvent the wheel every day. It’s only when problems do not give in to standard solutions that a creative approach becomes useful.

  3. Happiness makes you creative.

    For many people the problem with creative thought is the lack of divergent thinking. If you look at the advice pages you find online, they’re almost all guides to divergent thinking, not to creativity: “Don’t think. Let your thoughts unconsciously bubble away.” “Surround yourself with inspiration.” “Be open and aware. Play and pretend. List unusual uses for common household objects.” And so on. Happiness then plays a role for creativity because there is some evidence that happiness makes divergent thinking easier:
    “Recent studies have shown […] that everyday creativity is more closely linked with happiness than depression. In 2006, researchers at the University of Toronto found that sadness creates a kind of tunnel vision that closes people off from the world, but happiness makes people more open to information of all kinds.”
    Writes Bambi Turner who has a business degree and writes stuff. Note the vague term “closely linked” and look at the research.

    It is a study showing that people who listened to Bach’s (“happy”) Brandenburg Concerto No. 3 were better at solving a word puzzle that required divergent thinking. In science speak, the result reads “positive affect enhanced access to remote associates, suggesting an increase in the scope of semantic access.” Let us not even ask about the statistical significance of a study with 24 students of the University of Toronto on their lunch break, or its relevance for real life. The happy people participating in this study were basically forced to think divergently. In real life, happiness might instead divert you from hacking away at a problem.

    In summary, the alleged “close link” should read: There is tentative evidence that happiness increases your chances of being creative in a laboratory setting, if you are among those who lack divergent thinking and are student at the University of Toronto.

  4. Creativity makes you happy.

    There’s very little evidence that creativity for the sake of creativity improves happiness. Typically you find plausibility arguments like this one, that solving a problem might improve your life generally:
    “creativity allows [people] to come up with new ways to solve problems or simply achieve their goals.”
    That is plausible indeed, but it doesn’t take into account that being creative has downsides that counteract the benefits.

    This blog is testimony to my divergent thinking. You might find this interesting in your news feed, but ask my husband what fun it is to have a conversation with somebody who changes topic every 30 seconds because it’s all connected! I’m the nightmare of your organizing committee, of your faculty meeting, and of your carefully assembled administration workflow. Because I know just how to do everything better and have ten solutions to every problem, none of which anybody wants to hear. It also has the downside that I can only focus on reading when I’m tired, because otherwise I’ll never get through a page. Good thing all my physics lectures were early in the morning.

    Thus, I am very skeptical of the plausibility argument that creativity makes you happy. If you look at the literature, there is in fact very little that has been shown to lastingly increase people’s happiness at all. Two procedures that have shown some effect in studies are expressing gratitude and getting to know one’s individual strengths.

    For more evidence that speaks against the idea that creativity increases happiness, see 7 and 8. There is some evidence that happiness and creativity are correlated, because both tend to be correlated with other character traits, like openness and cognitive flexibility. However, there is also evidence to the contrary, that creative people have a tendency to depression: “Although little evidence exists to link artistic creativity and happiness, the myth of the depressed artist has some scientific basis.” I’d call this inconclusive. Either way, correlations are only of so much use if you want to actively change something.

  5. Creativity will solve all our problems.

    “All around us are matters of national and international importance that are crying out for creative solutions, from saving the Gulf of Mexico to bringing peace to Afghanistan to delivering health care. Such solutions emerge from a healthy marketplace of ideas, sustained by a populace constantly contributing original ideas and receptive to the ideas of others.”
    [From Newsweek again.] I don’t buy this at all. It’s not that we lack creative solutions, just look around, look at TED if you must. We’re basically drowning in creativity, my inbox certainly is. But they’re solutions to the wrong problems.

    (One of the reasons is that we simply do not know what a “healthy marketplace of ideas” is even supposed to mean, but that’s a different story and shall be told another time.)

  6. You can learn to be creative if you follow these simple rules.

    You don’t have to learn creative thinking, it comes with your brain. You can however train it if you want to improve, and that’s what most of the books and seminars want to sell. It’s much like running. You don’t have to learn to run. Everybody who is reasonably healthy can run. How far and how fast you can run depends on your genes and on your training. There is some evidence that creativity has a genetic component and you can’t do much about this. However, you can work on the non-genetic part of it.

  7. “To live creatively is a choice.”

    This is a quote from the WSJ essay “Think Inside the Box.” I don’t know if anybody ever looked into this in a scientific way, it seems a thorny question. But anecdotally it’s easier to increase creativity than to decrease it and thus it seems highly questionable that this is correct, especially if you take into account the evidence that it’s partially genetic. Many biographies of great writers and artists speak against this, let me just quote one:
    “We do not write because we want to; we write because we have to.”
    W. Somerset Maugham, English dramatist and novelist (1874 - 1965).

  8. Creativity will make you more popular.

    People welcome novelty only in small doses and incremental steps. The wilder your divergent leaps of imagination, the more likely you are to just leave people behind. Creativity might be a potential source of popularity in that at least you have something interesting to offer, but too much of it won’t do any good. You’ll end up being the misunderstood, unappreciated genius whose obituary says “ahead of his time”.

  9. Creativity will make you more successful.

    Last week, the Washington Post published this opinion piece, which informs the reader that:
    “Not for centuries has physics been so open to metaphysics, or more amenable to an ancient attitude: a sense of wonder about things above and within.”
    This comes from a person named Michael Gerson, who recently opened Max Tegmark’s book and whose occupation seems to be, well, to write opinion pieces. I’ll refrain from commenting on the amenability of professions I know nothing about, so let me just say that he has clearly never written a grant proposal. I warmly recommend you put the word “metaphysics” into your next proposal to see what I mean. I think you should all do that, because I clearly won’t, so maybe I stand a chance in the next round.

    Most funding agencies have used the 2008 financial crisis as an excuse to focus on conservative and applied research to the disadvantage of high risk and basic research. They really don’t want you to be creative – the “expected impact” is far too remote, the uncertainty too high. They want to hear you’ll use this hammer on that nail and when you’ve been hitting at it for 25 months and two weeks, out will pop 3 papers and two plenary talks. Open to metaphysics? Maybe Gerson should have a chat with Tegmark.

    There is indeed evidence showing that people are biased against creativity in favor of practicality, even if they state they welcome creativity. This study relied on 140 American undergraduate students. (Physics envy, anybody?) The punchline is that creative solutions by their very nature have a higher risk of failure than those relying on known methods, and this uncertainty is unappealing. It is particularly unappealing when you are coming up with solutions to problems that nobody wanted you to solve.

    So maybe being creative will make you successful. Or maybe your ideas will just make everybody roll their eyes.

  10. The internet kills creativity.

    The internet has made life difficult for many artists, writers, and self-employed entrepreneurs, and I see a real risk that this degrades the value of creativity. However, it isn’t true that the mere availability of information kills creativity. It just moves it elsewhere. The internet has turned many tasks that previously required creative approaches into step-by-step procedures. Need an idea for a birthday cake? Don’t know how to fold a fitted sheet? Want to know how to be more creative? Google will tell you. This frees your mind to get creative on tasks that Google will not do for you. In my eyes, that’s a good thing.

So should you be more creative?

My summary of reading all these articles is that if you feel like your life lacks something, you should take stock of your strengths and weaknesses and note what most contributes to your well-being. If you think that you are missing creative outlets, by all means, try some of these advice pages and get going. But do it for yourself and not for others, because creativity is not remotely as welcome as they want you to believe.

On that note, here’s the most recent of my awesomely popular musical experiments:

Thursday, August 09, 2012

Book review: “Thinking, fast and slow” by Daniel Kahneman

Thinking, Fast and Slow
By Daniel Kahneman
Farrar, Straus and Giroux (October 25, 2011)

I am always on the lookout for ways to improve my scientific thinking. That’s why I have an interest in the areas of sociology concerned with decision making in groups and how the individual is influenced by this. And this is also why I have an interest in cognitive biases - intuitive judgments that we make without even noticing; judgments which are just fine most of the time but can be scientifically fallacious. Daniel Kahneman’s book “Thinking, fast and slow” is an excellent introduction to the topic.

Kahneman, winner of the Nobel Prize for Economics in 2002, focuses mostly on his own work, but that covers a lot of ground. He starts by distinguishing between two different modes in which we make decisions, a fast and intuitive one, and a slow, more deliberate one. Then he explains how fast intuitions lead us astray in certain circumstances.

The human brain does not make very accurate statistical computations without deliberate effort. But often we don’t make such an effort. Instead, we use shortcuts. We substitute questions, extrapolate from available memories, and try to construct plausible and coherent stories. We tend to underestimate uncertainty, are influenced by the way questions are framed, and our intuition is skewed by irrelevant details.

Kahneman quotes and summarizes a large number of studies, in most cases with sample questions. He offers explanations for the results where available, and also points out where the limits of present understanding are. In the later parts of the book he elaborates on the relevance of these findings about the way humans make decisions for economics. While I had previously come across a big part of the studies that he summarizes in the early chapters, the relation to economics had not been very clear to me, and I found this part enlightening. I now understand my problems trying to tell economists that humans do have inconsistent preferences.

The book introduces a lot of terminology, and at the end of each chapter the reader finds a few examples of how to use it in everyday situations. “He likes the project, so he thinks its costs are low and its benefits are high. Nice example of the affect heuristic.” “We are making an additional investment because we do not want to admit failure. This is an instance of the sunk-cost fallacy.” Initially, I found these examples somewhat awkward. But awkward or not, they serve very well for the purpose of putting the terminology in context.

The book is well written, reads smoothly, is well organized, and thoroughly referenced. As a bonus, the appendix contains reprints of Kahneman’s two most influential papers, which contain somewhat more detail than the summaries in the text. He narrates along the story of his own research projects and how they came into being, which I found a little tiresome after he elaborated on the third dramatic insight he had about his own cognitive biases. Or maybe I'm just jealous because a Nobel Prize winning insight in theoretical physics isn't going to come by that way.

I have found this book very useful in my effort to understand myself and the world around me. I have only two complaints. One is that, despite all the talk about the relevance of proper statistics, Kahneman does not mention the statistical significance of any of the results he discusses. Now, this is all research that started two or three decades ago, so I have little doubt that the effects he talks about are by now well established, and, hey, he got a Nobel Prize after all. Yet, if it weren’t for that, I’d have to consider the possibility that some of these effects will vanish as statistical artifacts. Second, he does not at any point actually explain to the reader the basics of probability theory and Bayesian inference, though he uses them repeatedly. This, unfortunately, limits the usefulness of the book dramatically if you don’t already know how to compute probabilities. It is particularly bad when he gives a terribly vague explanation of correlation. Really, the book would have been so much better if it had at least an appendix with some of the relevant definitions and equations.
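To give a flavor of the kind of computation I mean, here is Bayes’ rule applied to the well-known taxicab question of the sort Kahneman discusses, with the commonly quoted numbers (85% of cabs are Green, 15% are Blue, and a witness who is right 80% of the time says the cab was Blue); treat this as my illustration rather than a passage from the book:

    # P(cab is Blue | witness says "Blue"), by Bayes' rule
    p_blue, p_green = 0.15, 0.85
    p_say_blue_given_blue = 0.80
    p_say_blue_given_green = 0.20

    posterior = (p_blue * p_say_blue_given_blue) / (
        p_blue * p_say_blue_given_blue + p_green * p_say_blue_given_green
    )
    print(round(posterior, 2))  # 0.41 -- far below the intuitive "80%"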

That having been said, if you know a little about statistics you will probably find, as I did, that you’ve learned to avoid at least some of the cognitive biases that deal with explicit ratios and percentages, and the different ways of framing these questions. I’ve also found that when it comes to risks and losses my tolerance apparently does not agree with that of the majority of participants in the studies he quotes. Not sure why that is. Either way, whether or not you are subject to any specific bias that Kahneman writes about, the frequency with which they appear makes them relevant for understanding the way human society works, and they also offer a way to improve our decision making.

In summary, it’s a well-written and thoroughly useful book that is interesting for everybody with an interest in human decision-making and its shortcomings. I'd give this book four out of five stars.

Below are some passages that I marked because they gave me something to think about. This will give you a flavor of what the book is about.

“A reliable way of making people believe in falsehoods is frequent repetition because familiarity is not easily distinguished from truth.”

“[T]he confidence that people experience is determined by the coherence of the story they manage to construct from available information. It is the consistency of the information that matters for a good story, not its completeness.”

“The world in our heads is not a precise replica of reality; our expectations about the frequency of events are distorted by the prevalence and emotional intensity of the messages to which we are exposed.”

“It is useful to remember […] that neglecting valid stereotypes inevitably results in suboptimal judgments. Resistance to stereotyping is a laudable moral position, but the simplistic idea that the resistance is cost-less is wrong.”

“A general limitation of the human mind is its imperfect ability to reconstruct past states of knowledge, or beliefs that have changed. Once you adopt a new view of the world (or any part of it), you immediately lose much of your ability to recall what you used to believe before your mind changed.”

“I have always believed that scientific research is another domain where a form of optimism is essential to success: I have yet to meet a successful scientist who lacks the ability to exaggerate the importance of what he or she is doing, and I believe that someone who lacks a delusional sense of significance will wilt in the face of repeated experiences of multiple small failures and rare successes, the fate of most researchers.”

“The brains of humans and other animals contain a mechanism that is designed to give priority to bad news.”

“Loss aversion is a powerful conservative force that favors minimal changes from the status quo in the lives of both institutions and individuals.”

“When it comes to rare probabilities, our mind is not designed to get things quite right. For the residents of a planet that may be exposed to events no one has yet experienced, this is not good news.”

“We tend to make decisions as problems arise, even when we are specifically instructed to consider them jointly. We have neither the inclination nor the mental resources to enforce consistency on our preferences, and our preferences are not magically set to be coherent, as they are in the rational-agent model.”

“The sunk-cost fallacy keeps people for too long in poor jobs, unhappy marriages, and unpromising research projects. I have often observed young scientists struggling to salvage a doomed project when they would be better advised to drop it and start a new one.”

“Although Humans are not irrational, they often need help to make more accurate judgments and better decisions, and in some cases policies and institutions can provide that help.”

Tuesday, August 07, 2012

Why does the baby cry? Fact sheet.

Gloria at 2 months, crying.
Two weeks after delivery, when the husband went back to work and my hemoglobin level had recovered enough to let me think about anything besides breathing, I seemed to be spending a lot of time on The One Question: Why does the baby cry? We had been drowned in baby books that all had something helpful to say. Or so I believe, not having read them. But what really is the evolutionary origin of all that crying to begin with? That’s what I was wondering. Is there a reason to begin with?

You don’t need a degree to know that a baby cries if she’s unhappy. After a few weeks I had developed a troubleshooting procedure roughly like this: Does she have a visible reason to be unhappy? Does she stop crying if I pick her up? New diaper? Clothes comfortable? Too warm? Too cold? Is she bored? Is it possible to distract her? Hungry? When I had reached the end of my list I’d start singing. The singing almost always helped. After that, there’s the stroller and white noise and earplugs.

Yes, the baby cries when she’s unhappy, no doubt about that. But both Lara and Gloria would sometimes cry for no apparent reason, or at least no reason that Stefan and I were able to figure out. The crying is distressing for the parents and costs the baby energy. So why, if it’s such an inefficient communication channel, does the baby cry so much? If the baby is trying to tell us something, why haven't hundreds of thousands of years of evolution been sufficient to teach caregivers what it is that she wants? I came up with the following hypotheses:
    A) She doesn’t cry for any reason, it’s just what babies do. I wasn’t very convinced of this because it doesn’t actually explain anything.

    B) She cries so I don’t misplace or forget about her. I wasn’t very convinced of this either because after two months or so, my brain had classified the crying as normal background noise. Also, babies seem to cry so much it overshoots the target: It doesn’t only remind the caregivers, it frustrates them.

    C) It’s a stress-test. If the family can’t cope well, it’s of advantage for future reproductive success of the child if the family breaks up sooner rather than later.

    D) It’s an adaptation delay. The baby is evolutionarily trained to expect something other than what it gets in modern western societies. If I’d just treat the baby like my ancestors did, she wouldn’t cry so much.
So I went and looked at what the scientific literature has to say. I found a good review by Joseph Soltis from the year 2004, which you can download here. The below is my summary of these 48 pages.

First, let us clarify what we’re talking about. The crying of human infants changes after about 3 months because the baby learns to make more complex sounds and also becomes more interactive. In the following we’ll only consider the first three months that are most likely to be nature rather than nurture.

Here are some facts about the first three months of baby’s crying that seem to be established pretty well. All references can be found in Soltis’ paper.
  • Crying increases until about 6 weeks after birth, followed by a gradual decrease in crying until 3 or 4 months, after which it remains relatively stable. Crying is more frequent in the late afternoon and early evening hours. These crying patterns have been found in studies of very different cultures, from the Netherlands, from South African hunter-gatherers, from the UK, Manila, Denmark, and North America.
  • Chimpanzees too have a peak in crying frequency at approximately 6 weeks of life, and a substantial decline in crying frequency by 12 weeks.
  • The cries of healthy, non-stressed infants last on the average 0.5-1.5 seconds with a fundamental pitch in the range of 200-600 Hz. The melody is either falling or rising/falling (as opposed to rising, falling/rising or flat).
  • Serious illness, both genetic and acquired, is often accompanied by abnormal crying. The most common cry characteristic indicating serious pathology is an unusually high-pitched cry, in one case study above 2000 Hz, and in many other studies exceeding 1500 Hz. (That’s higher than most sopranos can sing.) Examples are: bacterial meningitis 750-1000 Hz, Krabbe’s disease up to 1120 Hz, hypoglycemia up to 1600 Hz. Other abnormal cry patterns that have been found in illness are biphonation (the simultaneous production of two fundamental frequencies), too low pitch, and deviations from the normal cry melodies.
  • Various studies have been conducted to find out how well adults are able to tell the reason for a baby’s cry by playing them previously recorded cries. These studies show mothers are a little bit better than random chance when given a predefined selection of choices (eg pain, anger, other, in one study), but by and large mothers as well as other adults are pretty bad at figuring out the reason for a baby’s cry. Without being given categories, participants tend to attribute all cries to hunger.
  • It has been reported in several papers that parents described a baby’s crying as the most proximate cause triggering abuse and infanticide. It has also been shown that especially the high-pitched baby cries produce a response of the autonomic nervous system, measurable for example by the heart rate or skin conductance (the response is higher than for smiling babies). It has also been shown that abusers exhibit higher autonomic responses to high-pitched cries than non-abusers.
  • Excessive infant crying is the most common clinical complaint of mothers with infants under three months of age.
  • Excessive infant crying that begins and ends without warning is called “colic.” It is often attributed to organic disorders, but if the baby has no other symptoms it is estimated that only 5-10% of “colic” go back to an organic disorder, the most common one being lactose intolerance. If the baby has other symptoms (flexed legs, spasm, bloating, diarrhea), the ratio of organic disorder goes up to 45%. The rest cries for unknown reasons. Colic usually improves by 4 months, or so they tell you. (Lara’s didn’t improve until she was 6 months. Gloria never had any.)
  • Colic is correlated with postpartum depression which is in turn robustly associated with reduced maternal care.
  • Records and media reports kept by the National Center on Shaken Baby Syndrome implicate crying as the most common trigger.
  • In a survey among US mothers, more infant crying was associated with lower levels of perceived infant health, more worry about baby’s health, and less positive emotion towards the infant.
  • Some crying bouts are demonstrably unsoothable to typical caregiving responses in the first three months. Well, somebody has to do these studies.
  • In studies of nurses judging infant pain, the audible cry was mostly redundant to facial activity in the judgment of pain.
Now let us look at the hypotheses researchers have put forward and how well they are supported by the facts. Again, let me mention that everybody agrees the baby cries when in distress, the question is if that’s the entire reason.
  1. Honest signal of need. The baby cries if and only if she needs or wants something, and she cries to alert the caregivers of that need. This hypothesis is not well supported by the facts. Babies’ cries are demonstrably inefficient at bringing the baby the care it allegedly needs, because caregivers don’t know what she wants and in many cases there doesn’t seem to be anything they can do about it. This is the scientific equivalent of my hypothesis D, which I found not so convincing.
  2. Signal of vigor. This hypothesis says that the baby cries to show she’s healthy. The more the baby cries (in the “healthy” pitch and melody range), the stronger she is and the more the mother should care, because it’s a good investment of her attention to raise offspring that’s likely to reproduce successfully. Unfortunately, there’s no evidence linking a high amount of crying to good health of the child. On the contrary, as mentioned above, parents perceive children as more sickly if they cry more, which is exactly the opposite of what the baby allegedly “wants” to signal. Also, lots of crying is apparently maladaptive according to the evidence listed above, because it can cause violence against the child. It’s also unclear why a child who isn’t seriously sick, just not very vigorous, should alert the caregivers to his lack of vigor and thereby risk neglect. It doesn’t seem to make much sense. This is the scientific equivalent of my hypothesis B, which I didn’t find very convincing either.
  3. Graded signal of distress. The baby cries if she’s in distress, and the more distress, the more she cries. This hypothesis is, at least as far as pain is concerned, supported by evidence. Pretty much everybody seems to agree on that. As mentioned above, however, while distress leads to crying, this leaves open the question of why the baby is in distress to begin with and why she cries if caregivers can’t do anything about it. Thus, while this hypothesis is the least controversial one, it’s also the one with the smallest explanatory value.
  4. Manipulation: The baby cries so mommy feeds her as often as possible. Breastfeeding stimulates the production of the hormone prolactin; prolactin inhibits estrogen production, which often (though not always) keeps the estrogen level below the threshold necessary for the menstrual cycle to set in. This is called lactational amenorrhea. In other words, the more the baby gets mommy to feed her, the smaller the probability that a younger sibling will compete for resources, thus improving the baby’s own well-being. The problem with this hypothesis is that it would predict the crying to increase when the mother’s body has recovered, some months after birth, and is in shape to carry another child. Instead, however, at this time babies cry less rather than more. (It also seems to say that having siblings is a disadvantage for one’s own reproductive success, which is quite a bold statement in my opinion.)
  5. Thermoregulatory assistance. An infant’s thermoregulation is not very well developed, which is why you have to be so careful to wrap them warm when it’s cold and to keep them in the shade when it’s hot. According to this hypothesis the baby cries to make herself warm and also to alert the mother that it needs assistance with thermoregulation. It’s an interesting hypothesis that I hadn’t heard of before and it doesn’t seem to have been much studied. I would expect however that in this case the amount of crying depends on the external temperature, and I haven’t come across any evidence for that.
  6. Inadequacy of central arousal. The infant’s brain needs a certain level of arousal for proper development. The baby starts crying if not enough is going on, to upset herself and her parents. If there’s any factual evidence speaking for this, I don’t know of it. It seems to be a very young hypothesis. I’m not sure how it is compatible with my observation that Lara, after excessive crying, would usually fall asleep, frequently in the middle of a cry, and that excitement (people, travel, noise) was a cause for crying too.
  7. Underdeveloped circadian rhythm. The infant’s sleep-wake cycle is very different from an adult’s. Young babies basically don’t differentiate night from day. It’s only at around two to three months that they start sleeping through the night and develop a daily rhythm. According to this hypothesis it’s the underdeveloped circadian rhythm that causes the baby distress, probably because certain brain areas are not well synched with other daily variations. This makes a certain sense because it offers a possible explanation for the daily return of crying bouts in the late afternoon, and also for why they fade when the babies sleep through the night. This too is a very young hypothesis that is waiting for good evidence.
  8. Behavioral state. The baby’s mind knows three states: Sleep, awake, and crying. It’s a very minimalistic hypothesis, but I’m not sure it explains anything. This is the scientific equivalent of my hypothesis A, the baby just cries.
Apparently nobody ever considered my hypothesis C, that the baby cries to move herself into an optimally stable social environment, which would have developmental payoffs. It’s probably a very difficult case to make. The theoretical physicist in me is admittedly most attracted to one of the neat and tidy explanations in which the crying is a side effect of a physical development.

So if your baby is crying and you don’t know why, don’t worry. Even scientists who have spent their whole career on this question don’t actually know why the baby cries.

Wednesday, August 01, 2012

Letter of recommendation 2.0

I am currently reading Daniel Kahneman’s book “Thinking, fast and slow,” which summarizes a truly amazing amount of studies. Among many other cognitive biases, Kahneman explains that it is difficult for people to accept that often algorithms based on statistical data produce better predictions than experts. This is difficult to accept even when one is shown evidence that the algorithm is better. He cites many examples for that, among them forecasting the future success of military personnel, quality of wine, or treatment of patients.

The reason, Kahneman explains, is that humans are not as efficient at screening and aggregating data as software. Humans are prone to missing details, especially if the data is noisy; they get tired or fall for various cognitive biases in their interpretation of the data. Generally, the human brain does not effortlessly engage in Bayesian inference. Combined with the brain’s tendency to save energy and effort, this leads to mistakes. Humans are especially bad at making summary judgements of complex information, Kahneman writes, while at the same time being overly confident about the accuracy of their judgement. One of his examples is: “Experienced radiologists who evaluate chest X-rays as “normal” or “abnormal” contradict themselves 20% of the time when they see the same picture on separate occasions.”

Interestingly, however, Kahneman also cites evidence that expert intuition can be very valuable, provided the expert’s judgement is about a situation where learning from experience is possible. (Expert judgement is an illusion when a data series is entirely uncorrelated.) He thus suggests that judgements should be based on an analysis of statistical data from past performance, combined with expert intuition. We should overcome our dislike of statistical measures, he writes: “to maximize predictive accuracy, final decisions should be left to formulas, especially in low-validity environments” (when prediction is difficult due to a large number of relevant factors).

This made me question my own objections to using measures for scientific success, as scientific success is the type of prediction that is very difficult to make because luck plays a big role. Part of my dislike arguably stems from a general unease about leaving decisions on people’s future to a computer. While that is the case, and probably part of the reason I don’t like the idea, it’s not the actual problem I have belabored in my earlier blogposts. For me the main problem with using measures for scientific success is that I’d like to see evidence that they actually work and do not adversely affect research. I am particularly worried that a widely used measure for scientific success would literally redefine what we mean by success in the first place. A small mistake, implemented and streamlined globally, could in this way dramatically slow down progress.

But I am now wondering whether, based on what Kahneman writes, I have to conclude that in addition to asking for letters of recommendation (the “expert intuition”) it would be valuable to judge researchers’ past performance on a point scale. Consider that you’d be asked to fill out a questionnaire for each of your students and postdocs, ranking him or her from 0 to 5 for those characteristics typically named in letters: technical skills, independence, creativity, and so on, and also adding your confidence in these judgements. You could update your scores if your opinion changes. What a hiring committee would do with these scores is a different question entirely.
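To make the idea concrete, here is a minimal sketch of what such a questionnaire record and a confidence-weighted aggregate could look like; the field names, the weighting scheme, and the example numbers are my own invention, not anything proposed in the text:

    from dataclasses import dataclass

    @dataclass
    class ReferenceScore:
        candidate: str
        referee: str
        technical_skills: int  # 0-5
        independence: int      # 0-5
        creativity: int        # 0-5
        confidence: float      # referee's own confidence in the judgement, 0-1

    def aggregate(scores):
        """Confidence-weighted average per criterion over all referees."""
        total = sum(s.confidence for s in scores)
        return {
            crit: sum(getattr(s, crit) * s.confidence for s in scores) / total
            for crit in ("technical_skills", "independence", "creativity")
        }

    scores = [
        ReferenceScore("A. Candidate", "Prof. X", 4, 3, 5, confidence=0.8),
        ReferenceScore("A. Candidate", "Prof. Y", 5, 4, 3, confidence=0.5),
    ]
    print(aggregate(scores))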

The benefit of this would be the assembly of the database needed to discover predictors of future performance, if they exist. The difficulty is that the experts in question rarely offer a neutral judgement; many have a personal interest in seeing their students succeed, so there needs to be some incentive for accuracy. The risk would be that such a predictor might become a self-fulfilling prophecy. At least until a reality check documents that actually, despite all the honors, prizes, and awards, very little has happened in terms of actual progress.

Either way, now that I think about it, such a ranking would be temptingly useful for hiring committees that need to sort through large numbers of applicants quickly. I wouldn’t be surprised if somebody tries this sooner or later. Would you welcome it?

Tuesday, June 12, 2012

The loneliness of making sense

Connect the dots.
Jonah Lehrer in his book “Imagine” introduced me to an interesting study by Beeman and Kounios. They showed, in brief, by using fMRI imaging and EEG measurements of brain activity, that we possess two different modes of problem solving: One dominated by the left half of the brain, and one dominated by the right half of the brain.

The left-brain dominated problem solving is an analytical step-by-step procedure. It goes through the existing knowledge and applies known methods to a problem. Candidates solving a problem by this method usually have a feeling of making progress, of getting closer to a solution.

The right-brain dominated problem solving relies on pattern recognition and associations. It often kicks in after the left-brain method has failed. Candidates solving a problem by this method have no feeling of making progress till they suddenly come up with a solution, often accompanied by an “aha moment.” (Another study showed that brain activity indicates a solution has been found before people become consciously aware of it. The “aha” is produced just above your right ear.)

The problems used in this study, verbal puzzles and trick questions and so on, are highly artificial. In real life, most problems require a mixture of both approaches, though some rely more heavily on one or the other. If you are, for example, adding prices while shopping, that’s a very straightforward left-brain problem. Figuring out how to fit a twin stroller and two baby seats plus two adults into a Renault Twingo (clearly a misnomer) would take forever if you indeed went through all possible options. Now visualize the space, or lack thereof, take off the stroller’s wheels and, aha, the trunk will close. I was very proud of my right brain.

But I am usually addressing problems exactly the way described in Lehrer’s book: First, search through existing knowledge and see if a method is known to solve the problem. If that doesn’t work, I have an intermediate step in which I try to come up with knowledge about where the solution can be found. If that fails too, I’ll try to match the problem to other problems that I know, simplify it, look at limiting cases, rewrite it, iterate, take off the wheels, and so on and so forth.

By and large my pattern-searching mechanism seems to be somewhat overactive. It frequently spits out more associations than I’d want, resulting in what psychologists call divergent thinking. That, I’m afraid, is very noticeable if you talk to me, as I have the habit of changing topic in the middle of a sentence, making several loops and detours before coming back, if I come back. Needless to say, this makes perfect sense to me. In my experience (watch out, anecdotal evidence) most women have no problem following me. Most men get glassy eyes and either interrupt me, or patiently wait till I’ve made my loops and detours. I know at least one exception to this and, yes, I’m talking about you. So you should have no problem following the connections I’m about to draw from Lehrer’s book to some other books I’ve read recently.

Michael Nielsen in his book “Reinventing Discovery” preaches that scientists should not only share their knowledge, but also share the ideas that are still under construction. In essence, his point is that our present knowledge is badly connected and has lots of unused potential. You might have exactly that piece of knowledge that I am missing, but how will you know if I don’t tell you what it is I’m looking for? There are some prominent examples where this crowd-sourcing for knowledge matches has been successful; the Polymath project is often named.

Reading this, introspection reveals that I rarely, if ever, blog about research I am working on. It’s not so much that I don’t want to, but that I can’t. I talk of course to my colleagues about what I am working on, people I have known and who have known me for a while. But they usually can’t make much sense of what I’m telling them. Heck, even my husband usually has no clue what I’m trying to say, till he has a finished paper in his hand, that is. I mostly talk to them just for the merit of talking, and they know pretty well that their role is primarily to listen. I know this procedure from both sides; it’s quite common and clearly serves a purpose. But that purpose isn’t sharing, it’s improving the pattern seeking by bouncing loose connections off other people’s furrowed foreheads.

Most often the problem I’m plaguing myself with is not finding the answer to a specific question, but finding a useful way to ask a question to begin with. And the way it feels to me, that’s mostly a right-brain task, a pattern-seeking, sense-making effort; a searching through the bits and pieces from papers and seminars, a matching and mixing, a blending and crossing. Once you have a concrete question, you can get out the toolbox and nail it to the wall, left-brain dominated.

Science needs both finding a question and finding an answer to that question; to what extent it needs each, one can debate. But these two types of problems don’t communicate the same way. In fact, Sunstein in his book “Infotopia” points out that for crowd-sourcing to work well, it matters a lot that one has a well-posed question, the solution to which, once found, everybody will be able to agree on.

So I am thinking, there are problems we are plaguing ourselves with that we just can’t talk about. They are lonely problems.

Another connection I want to draw is to Michael Chorost’s book “World Wide Mind,” because that piece of information from Lehrer’s book made me realize just why I was so skeptical of the brain-to-brain communication method that is Chorost’s vision for the future.

Chorost suggests in his book recording each person’s pattern of neuronal activity for certain impressions, sights, smells, views, words, emotions and so on, which he calls “cliques.” An implant in your brain would pick up and decode your neural activity into cliques and transmit them, so that somebody else’s implant can trigger the same cliques in that person’s brain, which might have a different neuronal representation of the clique. The cliques, that is, are essentially the basic units of brain-to-brain communication.

But what you cannot communicate this way is your brain’s attempt to find patterns in all the cliques. Neither can you, by this method, ever try to find patterns in other people’s cliques. Or, in the words I used in my earlier post on “collective intelligence,” these are not examples of type-2 collective intelligence, the type in which the intelligence of the collective is not due to a shared and well-connected pool of knowledge, but to shared processes acting on that knowledge.

Finally, let us revisit an argument from Mark Pagel that we discussed recently. Pagel believes, in a nutshell, that we need fewer and fewer inventors because we are constantly improving the way we share ideas. The better we share the ideas we have, the fewer people we need to produce them. But what do we do with the part of the idea seeking that’s unshareable, at least for now? The better we share ideas, the fewer similar ideas we need, but that leaves open the question of how many people are needed to produce one shareable idea. And, distinguishing the two types of problem solving that we use, sharing doesn’t cut down the amount of necessary pattern seeking per idea. Sharing can improve the repository you search through, but taking into account that the problems are getting more involved too, it is far from clear that we need fewer people per idea.

If you’ve been following along all the way till here, thank you for your patience. If not, good to see you again in the last paragraph; either way, let me come to the conclusion now. As I argued above, improvements in sharing and connecting ideas don’t work equally well for all types of thought processes. This bears the risk of smothering the lonely and unshareable sense-making, the right-brain efforts. Much like in a forest with two types of trees that receives a fertilizer benefiting only one of them, the shade of the larger trees can cut off sunlight to the smaller ones. So I hope your lonely thoughts receive sufficient sunlight, and may you have many aha-moments right above your ear.

Tuesday, April 10, 2012

Be careful what you wish for

Michael Nielsen in his book “Reinventing Discovery” relates the following anecdote from the history of science.

In the year 1610, Galileo discovered that the planet Saturn, the most distant planet then known, had a peculiar shape. Galileo’s telescope was not good enough to resolve Saturn’s rings, but he saw two bumps on either side of the main disk. To make sure this discovery would be credited to him, while still leaving him time to do more observations, Galileo followed a procedure common at the time: He sent the announcement of the discovery to his colleagues in the form of an anagram
    smaismrmilmepoetaleumibunenugttauiras

This way, Galileo could avoid revealing his discovery, but would still be able to later claim credit by solving the anagram, which meant “Altissimum planetam tergeminum observavi,” Latin for “I observed the highest of the planets to be three-formed.”

Among Galileo’s colleagues who received the anagram was Johannes Kepler. Kepler had at this time developed a “theory” according to which the number of moons per planet must follow a certain pattern. Since Earth has one moon and four of Jupiter’s moons were known, Kepler concluded that Mars, the planet between Earth and Jupiter, must have two moons. He worked hard to decipher Galileo’s anagram and came up with “Salve umbistineum geminatum Martia proles,” Latin for “Be greeted, double knob, children of Mars,” though one letter remained unused. Kepler interpreted this as meaning Galileo had seen the two moons of Mars, thereby confirming Kepler’s theory.

Psychologists call this effort that the human mind makes to brighten the facts “motivated cognition,” more commonly known as “wishful thinking.” Strictly speaking, the literature distinguishes the two in that wishful thinking is about the outcome of a future event, while motivated cognition is concerned with partly unknown facts. Wishful thinking is an overestimate of the probability that a future event has a desirable outcome, for example that the dice will all show six. Motivated cognition is an overly optimistic judgment of a situation with unknowns, for example that you’ll find a free spot in a garage whose automatic counter says “occupied,” or that you’ll find the keys under the streetlight.

There have been many small-scale psychology experiments showing that most people are prone to overestimating a lucky outcome (see eg here for a summary), even if they know the odds, which is why motivated cognition is known as a “cognitive bias.” It’s an evolutionarily developed way of looking at the world that, however, doesn’t lead to an accurate picture of reality.

Another well-established cognitive bias is the overconfidence bias, which comes in various expressions, the most striking one being “illusory superiority.” To see just how common it is for people to overestimate their own performance, consider the 1981 study by Svenson which found that 93% of US American drivers rate themselves as better than average.

The best known bias is maybe confirmation bias, which leads one to unconsciously pay more attention to information confirming already held beliefs than to information contradicting them. And a bias that got a lot of attention after the 2008 financial crisis is “loss aversion,” characterized by the perception of a loss as being more relevant than a comparable gain, which is why people are willing to tolerate high risks just to avoid a loss.

It is important to keep in mind that these cognitive biases serve a psychologically beneficial purpose. They allow us to maintain hope in difficult situations and a positive self-image. That we have these cognitive biases doesn’t mean there’s something wrong with our brain. On the contrary, they’re helpful to its normal operation.

However, scientific research seeks to unravel the truth, which isn’t the brain’s normal mode of operation. Therefore scientists learn elaborate techniques to triple-check each and every conclusion. This is why we have measures for statistical significance, control experiments and double-blind trials.

Despite that, I suspect that cognitive biases still influence scientific research and hinder our truth-seeking efforts, because we can’t peer review scientists’ motivations, and we’re all alone inside our heads.

And so the researcher who tries to save his model by continuously adding new features might misjudge the odds of being successful due to loss aversion. The researcher who meticulously keeps track of the advances of the theory he himself works on, but focuses only on the problems of rival approaches, might be subject to confirmation bias, skewing his own and other people’s evaluation of progress and promise. The researcher who believes that his prediction is always just on the edge of being observed is a candidate for motivated cognition.

And above all that, there’s the cognitive meta-bias, the bias blind spot: I can’t possibly be biased.

Scott Lilienfeld in his SciAm article “Fudge Factor” argued that scientists are particularly prone to confirmation bias because
“[D]ata show that eminent scientists tend to be more arrogant and confident than other scientists. As a consequence, they may be especially vulnerable to confirmation bias and to wrong-headed conclusions, unless they are perpetually vigilant”

As a scientist, I regard my brain as the toolbox for my daily work, and so I am trying to learn what can be done about its shortcomings. It is to some extent possible to work on a known bias by rationalizing it: by consciously seeking out information that might challenge one’s beliefs, asking a colleague for a second opinion on whether a model is worth investing more time in, daring to admit to being wrong.

And despite all that, let’s not forget the hopes and dreams.

Mars, by the way, does indeed have two moons, to the best of our current knowledge.

Wednesday, March 07, 2012

Book review: "Quiet" by Susan Cain

Quiet: The Power of Introverts in a World That Can't Stop Talking
Susan Cain
Crown (2012)

People who got to know me from my blog are usually surprised when they meet me in person.

I like to write, but I am not very talkative. I try to avoid group activities. I don't like to draw attention to myself, and I don't like crowds. I'm noise-sensitive. I prefer reading over parties, and if you find me at a party, I'm the one in the corner watching the others. My school memories contain a long series of teachers telling me to speak up more often. My Myers-Briggs type is INTJ, with 100% on the introvert scale.

I am, in short, the sort of person that Susan Cain's book is about. So how could I not read it?

Susan Cain, a self-confessed introvert herself, has collected results of scientific studies on introversion and extroversion, from neurology, psychology and sociology. As has recently been the case with many personality traits, evidence is building up that they are to some extent genetic, but that the strength of expression also depends on environmental influences. This also means that while we can't change our genetic predisposition, we have some flexibility in how we deal with it.

Cain writes that studies show one third to one half (depending on the study) of all adults in North America are introverts, yet American culture has glorified the extrovert ideal. That is, so Cain argues, a disadvantage not only for the introverts, many of whom end up pretending to be something they're not, but also for society as a whole, because we're not making good use of many skilled people. Cain discusses studies showing that in the right circumstances, thinking alone brings better results than thinking in groups, and that some leadership roles call for extroverts and some for introverts. Extroverts do better, it turns out, when motivating others is relevant. Introverts do better when listening is important.
"[E]xtrovert leaders enhance group performance when employees are passive, but... introvert leaders are more effective with proactive employees."

She covers a lot of ground in her book, and draws upon many examples, Rosa Parks, Gandhi, Eleanor Roosevelt and Moses, just to mention a few.
"We don't ask why God chose as his prophet [Moses,] a stutterer with a public speaking phobia. But we should. The book of Exodus is short on explication, but its stories suggest that introversion plays the yin to the yang of extroversion."

In her book, Cain discusses for example evidence from Jerome Kagan's research that shows introversion is linked to a physiological trait called "high reactivity." I used to say I have an input filter problem. Amazingly enough, it turns out that's pretty much exactly what Kagan's research, and the research of those who have followed up on his original intuition, has shown. "High reactivity" is a higher activity in a brain region called the amygdala when confronted with something new. Infants who show high reactivity are more likely to grow up to be introverted adults; they need less stimulation than extroverts.

Later in the book, Cain also discusses another trait called "reward sensitivity," basically how active the brain's reward circuit is, and how much attention we thus pay to prospects of rewards:
"[E]xtroverts seem to be more susceptible than introverts to the reward-seeking cravings of the [limbic system of the brain]. In fact, some scientists are starting to explore the idea that reward-sensitivity is not only an interesting feature of extroversion; it is what makes an extrovert an extrovert. Extroverts, in other words, are characterized by their tendency to seek rewards."

Cain's book is very carefully written. She points out repeatedly that no two people are alike and that the reader might not feel well classified by all the terms she discusses. Introversion is, for example, correlated with agreeableness and conflict aversion. As you might have guessed, I'm not a very agreeable person ;o) Cain also explains that it's not uncommon for people to "act out of character" if the situation calls for it, but too much of it can lead to burnout. Professor Brian Little calls this the "Free Trait Theory":
"Introverts are capable of acting like extroverts for the sake of work they consider important, people they love, or anything they value highly."

That would be me organizing a conference.

Cain's argument is also very balanced. Her perspective is that there's not one best way of doing things, but that introverts and extroverts bring different strengths, which we presently do not support and use very well. She seems to have taken particular offense at teachers using group tables, something I too recall very well from my schooldays. She argues for seeking a better way to do things, based on recent insights into how differently people's brains work:
"We should actively seek out symbiotic introvert-extrovert relationships, in which leadership and other tasks are divided according to people's natural strengths and temperaments. The most effective teams are composed of a healthy mix of introverts and extroverts, studies show, and so are many leadership structures."

Cain is also careful to point out that extroversion and introversion are partly cultural, and she discusses to some extent the tensions experienced by Asian-Americans. She doesn't go into the cultural aspects very much though. I guess they're not very well understood.

The book is well referenced, and she does mention if a research result is still under discussion or maybe even controversial. She doesn't merely report, but also brings in her own opinion. The book is flawlessly written. For the first two of its three parts I found it to be the best non-fiction book I've read lately. It taught me a lot of things I hadn't previously known, without drowning me in irrelevant details. Then I came to the last part of the book.

The last part of the book gives the reader advice on how to manage their personal lives and relationships. It's about the couples Greg and Emily, and John and Jennifer. It's about Joyce and her daughter Isabel, and about Sarah and her daughter Ava. I would have much preferred Susan Cain's book without the self-help part. Not only because I'm happily married to another introvert and wasn't looking for advice, but because for 200 pages I was thinking of sending a copy of her book to some of my extrovert friends, just so they understand. Now I'd risk them thinking I'm suggesting they need help with their marriage or parenting.

In summary: If you are, as I am, following research in neurology and psychology only peripherally, Susan Cain's book is likely to teach you something about yourself, and about your friends and relatives. It is a well written, well researched, and well argued book that studies both the powers and weaknesses of introversion and extroversion, and addresses the question of how much of these personality traits is nature and how much nurture. I would recommend this book to everybody who has ever felt they have trouble understanding others or themselves.

You can read an excerpt of Susan Cain's book here, and you can watch her TED talk here.

Saturday, April 19, 2008

Ninetynine-Ninetynine

"Just ninetynine-ninetynine!" is what they tell me every time I fail to switch the radio station fast enough, is what they print in the ads, is what the shout in the commercials.

When I was about six years old or so, I recall asking my mom why all prices end with a ninetynine. Because they want you to believe it's cheaper than it is, I was told. If they print 1.99 it's actually 2, but they hope you'll be fooled and think it's "only" one-something.

I found that a good explanation when I was six, but twentyfive years later I wonder: if even six-year-olds know that, can it be a plausible reason? Why do stores keep doing it? Do they really think customers are that stupid? Or has it just become a convention?

Now coincidentally, I recently came across this paper

via Only Human. The study presented in this paper examines the influence of a given 'anchor' price on the 'adjusted' price that people believe to be the actual worth of an object, if the only thing they know is that the adjusted price is lower than the retail price. A typical question they used in experiments with graduate students sounds like this:

"Imagine that you have just earned your first paycheck as a highly paid executive. As a result, you want to reward yourself by buying a large-screen, high-definition plasma TV [...] If you were to guess the plasma TV’s actual cost to the retailer (i.e., how much the store bought it for), what would it be? Because this is your first purchase of a plasma TV, you have very little information with which to base your estimate. All you know is that it should cost less than the retail price of $5,000/$4,988/$5,012. Guess the product’s actual cost. This electronics store is known to offer a fair price [...]"

The question had one of three anchor prices for different sample groups: a rounded anchor (here $5,000), a precise 'under anchor' slightly below the rounded anchor, and a precise 'over anchor' slightly above the rounded anchor. Now the interesting outcome of their experiment is that people's guess for the adjusted price consistently stayed closer to the anchor the higher the perceived precision of that price, i.e. the fewer zeros at the end. Here is a typical result for a beach house, the anchors in $, followed by the participants' mean estimates:

    Anchor type             Anchor ($)    Mean estimate ($)
    Rounded                 800,000       751,867
    Precise under anchor    799,800       784,671
    Precise over anchor     800,200       778,264

What you see is that the rounded anchor results in an adjustment that is larger than the average adjustment observed with the precise anchors. Now you might wonder how many graduate students have much experience with buying beach houses, or plasma TVs for $5,000. But the authors used a whole set of similar questions, in which the quantity to be estimated wasn't always a price but possibly some other value, like the protein content of a beverage. There even was a completely context-free question: "There is a number saved in a file on this computer. It is just slightly less than 10,000/9,989/10,011. Can you guess the number?" The results remain consistent: the more significant digits the anchor has, the smaller the adjustment. For the context-free question the mean estimates were 9,316 (rounded), 9,967 (precise under), and 9,918 (precise over).

The paper further contains some other, slightly different experiments with students to check other aspects, and it also contains an analysis of behavior in real estate sales. The authors looked at five years of real estate sales somewhere in Florida, and compared list prices with the actual sales prices of homes. They found that sellers who listed their homes more precisely (say $494,500 as opposed to $500,000) consistently got closer to their asking price. The buyers were less likely to negotiate the price down as far when they encountered a precise asking price.

I find this study kind of interesting, as it would indicate that the point of ninetynineing is to fake a precision that isn't there.

Bottom line: The more details are provided, the less likely people are to doubt the larger context.

