Why Is The Placebo Effect Getting Stronger?

Robert Waldmann

An article with that title by Steve Silberman in Wired is getting a lot of attention.

I actually know a little bit about the placebo effect and depression. Lo and behold, Silberman’s key example is the effect of placebos on depression. He briefly mentions two issues which I would like to discuss at length after the jump.

On the third page of a three-page article, he finally discusses how depression is quantified.

Big Pharma faces additional problems in beating placebo when it comes to psychiatric drugs. One is to accurately define the nature of mental illness. The litmus test of drug efficacy in antidepressant trials is a questionnaire called the Hamilton Depression Rating Scale. The HAM-D was created nearly 50 years ago based on a study of major depressive disorder in patients confined to asylums. Few trial volunteers now suffer from that level of illness. In fact, many experts are starting to wonder if what drug companies now call depression is even the same disease that the HAM-D was designed to diagnose.

This paragraph vastly understates the problem. The HAM-D scale is not a cardinal measurement of depression; the logarithm of the HAM-D score would work just as well. This means that any comparison of the magnitudes of changes in depression starting from different initial levels of depression is meaningless.

This is very important, because the difference between the change in HAM-D with modern antidepressants and the change with placebo varies with the initial level of depression.
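To make the ordinality point concrete, here is a toy calculation (the numbers are invented for illustration, not taken from any trial): the same two hypothetical patients can be ranked in opposite order depending on whether "improvement" is measured in raw HAM-D points or in log(HAM-D).

import math

# Invented numbers: (baseline, follow-up) HAM-D scores for two hypothetical patients.
severe = (30, 24)   # severely depressed at entry
mild = (12, 8)      # mildly depressed at entry

for name, (before, after) in [("severe", severe), ("mild", mild)]:
    raw_change = before - after                      # change in HAM-D points
    log_change = math.log(before) - math.log(after)  # change in log(HAM-D)
    print(f"{name}: raw change = {raw_change}, log change = {log_change:.2f}")

# The severe patient improves more on the raw scale (6 vs 4 points), but the
# mild patient improves more on the log scale (0.41 vs 0.22), so "which change
# is bigger" depends entirely on an arbitrary choice of transformation.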

There is no evidence that antidepressants do anything for you if you aren’t depressed. There also happens to be no evidence that Prozac does anything for you if you are mildly depressed. This is the result of a single experiment, which taught pharmaceutical companies not to run trials on mildly depressed patients. It also matches the result of extrapolation from trials with more depressed patients.

Silberman notes that, not satisfied with sales, pharmaceutical companies are trying to see if modern antidepressants are good for syndromes different from classical depression. They are being disappointed. I think there is no meaningful way to decide if the placebo effect is stronger or if antidepressants just work for, you know, depression.

There is another issue (also briefly noted by Silberman earlier in the article): the HAM-D scale is not like a temperature. It is based on a form filled out by a caregiver in which the score repeatedly depends on the word “severe.” A co-author of mine, Jamie Horder, notes that the so-called placebo effect might be the result of (perhaps unconscious) bias in the measurement of initial depression. People on the clinical trial team are looking for subjects/patients who must be moderately to severely depressed. If they give a prospective patient a high score on the initial HAM-D, they have found a patient. A low score, and they have to keep looking.

Similarly, prospective patients are there because they are willing to participate in the trial. They show up saying they are depressed. It is natural to stress one’s depressive symptoms if one has just gone out of one’s way to claim to be depressed.

This basically must create an upward bias in the initial scores. That means that the improvement of patients given the placebo might be the disappearance of this bias.
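Jamie’s point can be illustrated with a small simulation (entirely my own sketch; the cutoff, the noise level, and the “true” severity distribution are made-up numbers). Patients are enrolled only if their measured baseline HAM-D clears a severity threshold, so the enrolled baselines are inflated by measurement noise and rater generosity, and the follow-up scores drift back down even though nothing in the simulation actually improves.

import random
from statistics import mean

random.seed(0)
CUTOFF = 20   # hypothetical inclusion threshold on the measured baseline HAM-D
NOISE = 4     # std. dev. of measurement error plus rater generosity

baselines, followups = [], []
for _ in range(100_000):
    true_score = random.gauss(18, 5)                # the patient's "real" severity
    baseline = true_score + random.gauss(0, NOISE)  # noisy initial rating
    if baseline < CUTOFF:
        continue                                    # not enrolled; keep looking
    followup = true_score + random.gauss(0, NOISE)  # placebo arm: no true change
    baselines.append(baseline)
    followups.append(followup)

print("mean measured baseline :", round(mean(baselines), 1))
print("mean measured follow-up:", round(mean(followups), 1))
# The follow-up mean is several points lower than the baseline mean purely
# because inflated baselines were selected for: an apparent "placebo effect"
# with no real improvement anywhere in the model.

This is just regression to the mean, except that here the selection is done by the rater hunting for eligible patients rather than by the scale itself.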

Increased pressure to find severely depressed patients (due to the fact that antidepressants work better than placebo for such patients) might have caused increased bias and an increased measured placebo effect.

Now I will turn to the article’s brief discussion of the quantitative research.

Potter and DeBrota’s data-mining also revealed that even superbly managed trials were subject to runaway placebo effects. But exactly why any of this was happening remained elusive. “We were able to identify many of the core issues in play,” Potter says. “But there was no clear answer to the problem.”

I assume that Potter and DeBrota checked whether the HAM-D placebo effect was a systematic function of initial depression (it doesn’t seem to be).

That would be the simplest explanation of what only a silly person would consider an increase in the placebo effect: patients in newer trials are initially more depressed, and the absolute change in HAM-D due to the placebo effect is greater the more depressed patients are. This tells us nothing (we would get the opposite result if we used log(HAM-D) instead of HAM-D). I don’t think that’s happening.

I can tell a fancy story (sketched in a toy simulation after the list):
1) there is an ever greater shortage of severely depressed people who are willing to participate in a trial, which means they must be willing to take antidepressants and either not be taking one already or not be satisfied with the one they are taking
2) the average initial HAM-D score is holding up because there is more and more pressure to find severely depressed trial participants.
3) the bias in the initial assessment is getting stronger and stronger
4) measured improvement with placebo (and with antidepressant treatment) is getting larger and larger.
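The story can be sketched as a small extension of the simulation above (again, every number is invented): as the pool of genuinely severe volunteers shrinks, keeping average baselines up requires more rater generosity, and the measured improvement in the placebo arm grows even though nothing about what placebos do has changed.

import random

random.seed(1)
CUTOFF, NOISE = 20, 4   # same hypothetical threshold and noise as before

def apparent_placebo_drop(true_mean, rater_bias, n=100_000):
    # Mean (baseline - follow-up) in a placebo arm with no true change.
    drops = []
    for _ in range(n):
        true_score = random.gauss(true_mean, 5)
        baseline = true_score + rater_bias + random.gauss(0, NOISE)
        if baseline < CUTOFF:
            continue                                # step 2: keep hunting for patients
        followup = true_score + random.gauss(0, NOISE)
        drops.append(baseline - followup)
    return sum(drops) / len(drops)

# Earlier trials: plenty of severe volunteers, little pressure on raters.
# Recent trials: milder pool (step 1) and more upward bias at baseline (steps 2-3).
print("early trials :", round(apparent_placebo_drop(true_mean=19, rater_bias=1), 1))
print("recent trials:", round(apparent_placebo_drop(true_mean=15, rater_bias=5), 1))
# The measured placebo "improvement" is larger in the recent trials (step 4),
# with no change in any real effect of the placebo.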