Assessing the ‘true’ effect of active antidepressant therapy v. placebo

I've just read this article in The British Journal of Psychiatry (2011) 199: 501-507.  It proposes an explanation for why so many reviews fail to find a clinically relevant difference between antidepressants and placebo: the way the data are analyzed.  The vast majority of studies use a statistical model called ANCOVA (analysis of covariance), which assumes a linear relationship between two variables.  Let's say on one axis we put a score for how much a patient's depression improved.  On the other axis we might put how severe the person's depression was to begin with.  That would allow us to evaluate whether there is an interdependent relationship between those two factors: they are "covariant".

However, the problem highlighted in this paper is that depression and response to antidepressants may not be well represented by a smooth line.  People may not exist along a continuous curve of depression, but rather fall into distinct groupings, and drawing a line that connects those groups undermines the validity of the ANCOVA analysis.  Instead, we might analyze the data by category: people who respond to the drug, people who don't respond, and people who respond a little, then within each category measure how many people have serious, major, or minor depression, as measured on a depressive symptoms scale.  The authors did this using something called a "mixture model", which I was unfamiliar with.
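
To make the mixture-model idea concrete, here's a minimal sketch in Python.  This is my own toy illustration, not the authors' actual analysis: it fits one Gaussian and then a two-component Gaussian mixture (via scikit-learn's GaussianMixture) to simulated change scores and compares them with BIC.  The simulated numbers are invented.

```python
# A minimal sketch (not the authors' actual analysis): compare a single-group
# description of "change in symptom score" against a two-component Gaussian
# mixture, to show how a mixture model can describe two distinct response
# groups that a single smooth fit smears together.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Simulated change scores: a "responder" group and a "non-responder" group.
# These numbers are made up purely for illustration.
responders = rng.normal(loc=-12.0, scale=3.0, size=60)      # big drop in score
non_responders = rng.normal(loc=-2.0, scale=3.0, size=140)  # little change
change = np.concatenate([responders, non_responders]).reshape(-1, 1)

# Single-group view (what a linear/ANCOVA-style model implicitly assumes
# about the residuals): one mean, one spread.
print("single-group mean change:", change.mean().round(2))

# Two-component mixture: lets the data tell us whether two groups fit better.
gmm = GaussianMixture(n_components=2, random_state=0).fit(change)
print("mixture means:", gmm.means_.ravel().round(2))
print("mixture weights:", gmm.weights_.round(2))

# Lower BIC = better trade-off of fit vs. complexity.
gmm1 = GaussianMixture(n_components=1, random_state=0).fit(change)
print("BIC, 1 component:", round(gmm1.bic(change), 1))
print("BIC, 2 components:", round(gmm.bic(change), 1))
```

The point is just that when two groups really are hiding in the data, the two-component model wins, and a single smooth fit averages them into something that describes neither group well.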

The authors point to another paper I don't have access to that suggests that the clinically significant effects are buried in the very minor improvements seen in some categories of patients.  The best analogy I can come up with is evaluating a special fertilizer used to enhance the growth of garden crops.  We apply that fertilizer to our garden, but also to the lawn, the patio and the driveway.  Let's say a total of 600 square meters is fertilized, of which the garden is only 100 square meters.  We produce 30 kg of crops versus 15 kg the previous, unfertilized year.  You could report this as a 15 kg increase per 100 sq m (Effectiveness Index = 0.15 kg/m2), or as a 15 kg increase per 600 sq m (Effectiveness Index = 0.025 kg/m2).  If you are trying to figure out whether this fertilizer is cost effective, the difference between analyzing only the garden and the entire fertilized area is significant.
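
Here is the same arithmetic as a tiny Python sketch, using the made-up numbers from my analogy (nothing here comes from the paper):

```python
# A quick back-of-the-envelope version of the fertilizer analogy above:
# the same 15 kg yield increase looks very different depending on how much
# of the treated area could actually respond. The numbers are the invented
# ones from the analogy, not from the paper.
def effectiveness_index(yield_increase_kg, area_m2):
    """Yield increase per square meter of whatever area you choose to count."""
    return yield_increase_kg / area_m2

increase = 30 - 15  # kg of crops gained vs. the unfertilized year

print(effectiveness_index(increase, 100))  # garden only        -> 0.15 kg/m2
print(effectiveness_index(increase, 600))  # everything treated -> 0.025 kg/m2

# Same logic for a drug trial: if only a subset of enrolled patients can
# respond (the "garden"), averaging over everyone dilutes the apparent effect.
```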

Likewise, depression is often over-treated as a matter of course.  That sounds deplorable, but antidepressant therapy is a very blunt instrument, and it's sometimes necessary to treat more people than will truly respond.  NNT is the "number needed to treat": a way of measuring how many people have to be treated for one of them to benefit who would not have benefited on placebo.  The authors of this paper produce values of 6 for response and 8 for remission.  That means you need to treat 6 people to get one additional clinically significant response beyond placebo, and 8 people to get one additional remission of depression.  Not inspiring, I know, but actually much better than most previous studies.  One out of 8 people on antidepressants is getting enough of a benefit out of the drugs, over and above placebo, to live a more-or-less normal life.  One in 6 is at least getting measurable relief.  The rest, the ones who are suffering side effects with no drug-specific relief, are the ones that concern me most on this topic.
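
For anyone who wants the mechanics: NNT is just the reciprocal of the absolute difference in response rates between drug and placebo.  A minimal sketch, with illustrative rates that I picked so the NNTs land near 6 and 8; they are not the rates reported in the paper:

```python
# NNT = 1 / absolute risk difference between drug and placebo.
# The rates below are illustrative only, chosen so the NNTs come out near
# the paper's reported values of ~6 (response) and ~8 (remission); they are
# NOT the actual rates from the study.
def number_needed_to_treat(rate_drug, rate_placebo):
    """Patients treated for one additional good outcome vs. placebo."""
    absolute_risk_difference = rate_drug - rate_placebo
    return 1.0 / absolute_risk_difference

nnt_response = number_needed_to_treat(0.50, 0.33)    # ~17-point difference
nnt_remission = number_needed_to_treat(0.35, 0.225)  # ~12.5-point difference

print(round(nnt_response))   # about 6
print(round(nnt_remission))  # about 8
```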

One final point, because I didn't really cover it above: the authors found a "bimodal response profile" in the data in most cases (there were a few exceptions).  That means that if we separate the patient set into two groups, those two groups are much more distinct in their response to the drug than the population is as a whole.

On the left here you should see Figure 1 from the paper.  The total population looks relatively flat, but when subdivided, two populations clearly emerge (bimodal distribution).

I’m very interested in this because it was my initial assumption/bias that this would be the case.  I assume that we’re broadcasting this drug to everyone during clinical trials, regardless of whether we know they will respond, and I would fully expect that a drug that changes serotonin biology (SSRIs) would only be effective in patients with an underlying serotonin problem.  I’m always very careful when I find data to support my preconceptions, so I’m trying to scrutinize this analysis as carefully as possible to avoid getting carried away with confirmation bias.

A big part of my own research has been into how to randomize cancer patients on a responder/non-responder profile of "biomarkers", so this type of bimodal distribution is very significant to me.

–C.

Some Foods Contain Nicotine… But Not Much

One argument frequently used by proponents of e-cigarettes is that if we regulate their favorite product because it contains nicotine, we should also regulate foods that contain nicotine.  Yes, many foods we eat contain measurable levels of nicotine.  Tomatoes, potatoes, cauliflower, and green peppers are known to contain small amounts, and black tea is sometimes suggested to contain some, although recent tests don't confirm this.

Does this nicotine affect us?  Are we, like Homer Simpson, creating addictive tomato-tobacco hybrids (Tomacco)?  The short answer is no.  The dose makes the poison, and in this case the dose of nicotine you receive from a serving of these vegetables barely registers when plotted on the same graph as the nicotine content of a single cigarette.  The figure below uses values reported in a letter to the editor of the NEJM.  Cooked vegetables, especially boiled ones, will probably have lower values still, and the amounts only really matter for the specificity of clinical tests designed to detect nicotine (and cotinine, its metabolite) levels in the blood.
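
To get a feel for the scale difference, here's a rough back-of-the-envelope comparison in Python.  The numbers are ballpark assumptions on my part (vegetables carry nicotine on the order of tens of nanograms per gram; a smoked cigarette delivers something on the order of a milligram), not the exact values from the NEJM letter:

```python
# Rough orders of magnitude only: vegetables carry nicotine in the tens of
# nanograms per gram, while a smoked cigarette delivers on the order of a
# milligram. These are ballpark, illustrative values, not exact figures
# from the NEJM letter referenced above.
NICOTINE_PER_GRAM_VEGETABLE_NG = 10.0      # ~tens of ng/g, assumed
NICOTINE_DELIVERED_PER_CIGARETTE_MG = 1.0  # commonly cited approximation

serving_g = 150.0  # a generous serving of tomatoes, say

nicotine_from_serving_mg = serving_g * NICOTINE_PER_GRAM_VEGETABLE_NG * 1e-6
ratio = NICOTINE_DELIVERED_PER_CIGARETTE_MG / nicotine_from_serving_mg

print(f"nicotine from one serving: {nicotine_from_serving_mg:.4f} mg")
print(f"one cigarette delivers roughly {ratio:,.0f}x more")
```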

What this figure also highlights is the potential harm from the abundant nicotine found in e-cigarette juice.  A cartridge is intended to be used over many, many sessions, so that the dose delivered in any one session is not much different from a single smoking session with a real cigarette, but it would be a simple thing to lose track and over-consume from the cartridge, resulting in a much higher total nicotine intake.

I've covered it before, and frankly I have more important topics to deal with, but I personally think there's not enough evidence to conclude that e-cigarettes are as safe as the FDA-regulated nicotine replacement therapies designed to help people quit.  As always, I recommend anyone with questions about their health consult an actual physician.  I'm just some guy on the Internet.

March of Progress in Cancer Treatment

I came across this today, and I thought it might give some people hope for the future.  The source is: Cancer Research UK, Survival Statistics for the most common cancers.

What really strikes me is the number of cancers where 5-year survival is over 50%.  Pancreatic, lung and esophageal cancers are still very deadly, though.

Now take a look at where we’ve come from since 1971:

You can see that many cancers have radically improved in 10-year survival.  There's still much room for improvement, but I take a lot of comfort in the progress we've made and are continuing to make.

My postdoc was at a cancer research center that shared a building with a cancer treatment clinic.  Every day I’d walk past the waiting room and the chemo chairs and see someone’s mother, grandmother, father, brother, sister awaiting treatment.  It was great motivation to take my job seriously.  F–k Cancer, Support Cancer Research!

Humor: The Methodology Section Translator

This is hilariously accurate.  I think people forget that most university research is done by 20- to 25-year-olds... normal grad students who just want to get their degrees.  I always try to remember that when reading a paper with surprising findings.  My own estimate: I'd guess that 60%+ of retracted papers and unreplicated findings aren't some vast conspiracy or collusion with Big Pharma... they're some poor grad student who didn't document things well, or who, after the eight gazillionth time running a gel to find a band of interest, finally broke down and just Photoshopped the fracking thing into the TIFF.


Menin Doesn’t Satisfy the Challenge to the Discovery Institute

“The MEN1 gene, mutations in which are responsible for multiple endocrine neoplasia type 1 (MEN1), encodes a 610-amino acid protein, denoted menin. The amino acid sequence of this putative tumor suppressor offers no clue to the function or subcellular location of the protein.”
–Proc Natl Acad Sci U S A. 1998 February 17; 95(4): 1630–1634.

I got a request today to test the MEN1 gene, which encodes the protein menin, against the criteria of the challenge to the Discovery Institute: locate a gene with no homology to other genes, in other words, a gene that appears to have been created by non-natural processes.  This is the mirror image of their own challenge to produce an observed instance of "macroevolutionary change", which is based on false premises... I'll save that for a later post.

What about menin?  Well, in 1998 we didn't know of any homologous proteins... but a lot has changed since that young person's textbook was printed.  Below I'll put a screenshot from HomoloGene.  For the sake of brevity, I'm only including the first part of the protein sequence.

Sorry for the graphics quality.  Each of those letters represents an amino acid in the protein.  Notice that even the invertebrates (fruit fly, mosquito) have a menin homolog, although the differences accumulate the further we travel from the common ancestor of each pair.  Chimp and human menin sequences are identical across the full length.
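
If you want to play with this kind of comparison yourself, here's a toy Python sketch that computes percent identity over a pairwise alignment.  The two sequence fragments in it are placeholders I made up, not the actual menin sequences from the HomoloGene screenshot:

```python
# Toy percent-identity calculation over a pairwise alignment.
# The two fragments below are placeholders for illustration; they are NOT
# the real menin sequences shown in the HomoloGene screenshot.
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Fraction of aligned positions (ignoring gaps '-') that match, as a %."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be pre-aligned to the same length")
    aligned = [(a, b) for a, b in zip(seq_a, seq_b) if a != "-" and b != "-"]
    matches = sum(1 for a, b in aligned if a == b)
    return 100.0 * matches / len(aligned)

human_fragment = "MGLKAAQKTLFPLRSI"   # placeholder, not real menin
fly_fragment   = "MGLRAAQKSLFPLKSI"   # placeholder, not real menin

print(f"identity: {percent_identity(human_fragment, fly_fragment):.1f}%")
```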

So, no, this doesn’t meet the challenge.  If you want to read more about Menin, a tumor suppressor protein where variations are tied to a variety of diseases, go here.

My Mission Statement (yes, really)

I know, I know… mission statements are for corporate drones, but honestly I wanted to make it clear, both to you and to myself, why I'm continuing to make videos and run this blog.  Consider this a contract, of sorts: I promise to keep evaluating scientific claims if you promise to watch and respond thoughtfully.

John Ioannidis Is Just What Science Needs

For those of you who haven’t seen my video on “Why Most Research Findings Are False” or otherwise aren’t familiar with the excellent work of John PA Ioannidis, I invite you to learn more about his work:

He's done such important work as the conscience of clinical science: exposing where scientists are prone to self-deception.  He's a critical thinker about critical thinking, and I love him for it.  Apparently, so do other scientists, because his article is the most downloaded from the Public Library of Science (PLoS) site.  His paper is also the clearest treatment of "prior probability" and Bayesian statistical methods that I have ever read.
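
For a flavor of that Bayesian argument, here's a small Python sketch of the core relationship from "Why Most Published Research Findings Are False": the post-study probability that a claimed finding is true (the PPV) as a function of the pre-study odds R, the significance threshold alpha, and the statistical power.  I'm leaving out the bias term he also considers, and the example values of R are just ones I picked:

```python
# Post-study probability that a "statistically significant" finding is true,
# following the core relationship in Ioannidis (2005), PLoS Medicine:
#   PPV = (1 - beta) * R / ((1 - beta) * R + alpha)
# where R is the pre-study odds that a tested relationship is real,
# alpha is the significance threshold, and (1 - beta) is the power.
# (The bias term from the paper is omitted; R values below are my own picks.)
def positive_predictive_value(pre_study_odds_R, alpha=0.05, power=0.8):
    true_positives = power * pre_study_odds_R
    false_positives = alpha
    return true_positives / (true_positives + false_positives)

for R in (1.0, 0.25, 0.05):  # 1:1, 1:4, 1:20 odds the hypothesis is real
    ppv = positive_predictive_value(R)
    print(f"pre-study odds {R:>5}: PPV = {ppv:.2f}")
```

The lower your prior odds of the hypothesis being true, the less a "significant" result should move you, which is exactly the point he keeps making.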

So now I come across his work in my background research on antidepressants, and as usual it’s a clear and revealing look at the critical thinking that should underlie the body of research.  Here’s the full-text article:

Effectiveness of antidepressants: an evidence myth constructed from a thousand randomized trials?

His wording is forceful and unapologetic:

“Based on the above considerations, antidepressants are probably indicated only in select patients with major depression, probably preferentially in those who have severe symptoms and have not responded to anything else. For most patients with some depressive symptoms who are currently taking antidepressants, using these drugs would not have been the preferred option, placebo would be practically as good, if not better, and would save toxicities and cost.”

Did you catch that?  Placebo would have been as effective, or more effective, in many cases, and without the side effects.  I have to concur based on the systematic reviews I've screened so far.  I think the number needed to treat says a lot as well, and I will probably include that concept in the future video on the topic.  It's not that the drugs are ineffective; it's that they're badly misused and depression is over-diagnosed, and I think a big part of the blame falls on pharma advertising on the airwaves.  The numbers below say a lot about why we keep generating new antidepressants:

Table 1.  Top-selling antidepressants in the USA, 2006

Drug (brand name)               Rank across all drugs   Sales (billions $)
Venlafaxine XR (Effexor XR)              6                    2.25
Escitalopram (Lexapro)                  10                    2.10
Sertraline (Zoloft)                     15                    1.77
Bupropion XL (Wellbutrin XL)            16                    1.67
Duloxetine (Cymbalta)                   35                    1.08

Do the Latest Antidepressants Work?

I'm continuing my review of the literature on antidepressants.  Today we cover a systematic review in PLoS Medicine, one of the new open-access journals that anyone can read without a subscription.  It's been amazing to watch these open-access journals revolutionize access to the literature for people like me who don't have university library access.

Here’s the link to the full text article.

PLoS Med. 2008 February; 5(2): e45.

The authors come from several countries and institutions, and they consider only data submitted to the FDA, including unpublished trials.  I'll let them summarize their findings:

“Using complete datasets (including unpublished data) and a substantially larger dataset of this type than has been previously reported, we find that the overall effect of new-generation antidepressant medications is below recommended criteria for clinical significance. We also find that efficacy reaches clinical significance only in trials involving the most extremely depressed patients, and that this pattern is due to a decrease in the response to placebo rather than an increase in the response to medication.”

Let me give my own translation: unless someone is VERY, VERY depressed, it is hard to justify using these new antidepressants, and even then the advantage appears mainly because the placebo response falls away in the most severely depressed patients.  In other words, it's not that the drugs worked particularly well in any group; it's that the placebo effect was no longer providing much benefit.  There's a figure that illustrates this pretty well:

Starting at the left, with the people who were less severely depressed to begin with, placebo outperforms the new drugs by a small amount.  Then, as the benefit of placebo decreases with increasing baseline severity, the drug-placebo difference grows.  It's not that the drugs become much more effective in people with stronger symptoms; the gap just widens until we finally enter the "green zone", the region where the drug outperforms placebo by enough to be clinically relevant.  Even at that point, the drug itself is not producing much more improvement than before.
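
For reference, the "clinically relevant" line in that figure comes from applying a threshold to the standardized drug-placebo difference.  Here's a minimal sketch of that check; the d >= 0.5 cutoff is the NICE-style criterion I believe the review applies, and the score values in the example are invented:

```python
# A tiny sketch of the "clinically significant?" check: compute the
# standardized drug-placebo difference (Cohen's d) and compare it against a
# threshold. The d >= 0.5 cutoff is a NICE-style criterion (which I believe
# is the one the review uses); the score values below are invented.
def cohens_d(mean_drug_change, mean_placebo_change, pooled_sd):
    """Standardized difference between drug and placebo improvement."""
    return (mean_drug_change - mean_placebo_change) / pooled_sd

THRESHOLD = 0.5  # NICE-style cutoff for clinical significance

# Invented example: drug improves depression scores by 10 points, placebo by
# 8 points, with a pooled standard deviation of 8 points.
d = cohens_d(10.0, 8.0, 8.0)
print(f"d = {d:.2f}, clinically significant: {d >= THRESHOLD}")
```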

I personally found that pretty surprising, given that the data they are mining for the review was in the FDA’s hands.  I went into this evaluation with a general impression that SSRIs and the like work only in cases where there’s an underlying serotonin problem (brain chemistry issues), but now I see that this isn’t going to be that straightforward.

My one concern is that in grouping all these studies together, there seems to be a lot of diversity in the kind of results produced.  I wonder what caused those outlier triangles with high improvement (high on the Y axis).

The investigation continues!

C.

Evolution Is Not An Arrow!

I think sometimes people get the very wrong impression that evolution moves organisms along a fitness axis slowly, inexorably, and inevitably in one direction: towards increasing fitness.  A number of analogies spring to mind: a stair step; a hill; an arrow.  This is entirely wrong.  In fact, evolution is closer to a random walk on an inclined plane.  Wikimedia has an example of a random walk animation:

Now imagine this random walk taking place on a slightly sloped plane, where the slope represents the slight increase in fitness from a certain trait (camouflage, nutrition, whatever).  There is a good chance that our desired outcome (increased fitness) will be the result of walking randomly along that sloped surface... but sometimes it won't be, because the walk randomly heads in the wrong direction.  We can reliably detect only the cases where the slope of that plane is very steep, so that randomness contributes only a small amount to the outcome.
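
If you want to watch this happen, here's a short Python sketch of a Wright-Fisher-style toy model (my own illustration, not from any particular source): an allele with a small fitness advantage still gets lost in a fair fraction of runs, purely through random sampling.

```python
# Toy Wright-Fisher simulation: an allele with a small selective advantage s
# in a finite population fixes in only some runs and is lost in others.
# This is my own illustrative sketch, not a model from any of the papers above.
import numpy as np

rng = np.random.default_rng(42)

def simulate_allele(pop_size=200, s=0.02, start_freq=0.05, max_gens=10_000):
    """Return True if the beneficial allele fixes, False if it is lost."""
    p = start_freq
    for _ in range(max_gens):
        # Selection nudges the expected frequency uphill...
        expected = p * (1 + s) / (p * (1 + s) + (1 - p))
        # ...but the next generation is a finite random sample (genetic drift).
        p = rng.binomial(pop_size, expected) / pop_size
        if p == 0.0:
            return False
        if p == 1.0:
            return True
    return p > 0.5  # unresolved after max_gens; call it by majority

runs = 1000
fixed = sum(simulate_allele() for _ in range(runs))
print(f"beneficial allele fixed in {fixed} of {runs} runs")
```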

It's a word I've used twice in one day, but evolution is not what you would call deterministic... it's stochastic (in the sloppy sense of that word that a biologist can get away with).  That is, you can't predict where the actual mix of alleles will end up purely from knowing the fitness slope, or which alleles carry a slight advantage.  It's not truly random, but the information about the outcome isn't fully contained in the setup.

I see this mistake all the time among proponents of the scientific view: "The birds have optimized beaks because optimized beaks increase their fitness" is not QUITE the same thing as "The fitness increase from specialized beaks should be favored over time by selection".  The difference is the assumed inevitability of fitness optimization.  You can't assume that every good trait will be preserved.  Some of them just get lost through random effects.

I’ll leave you with a good quote:

Sometimes a 3-1 favorite loses. That’s why they call it gambling, and that’s why they keep flipping over the cards.
Richard Roeper

The same is true for fitness landscapes…  or if you prefer: “The race is not always to the swift, nor the battle to the strong, but that’s the way to bet”  : )  Darwin couldn’t have put it better.
-C