Sunday, 12 September 2010

New Paper on STAR*D - Is all as it seems?

A recent paper commenting on the outcomes of the STAR*D study (full paper available for download: Pigott, H. E., Leventhal, A. M., Alter, G. S., et al (2010) Efficacy and Effectiveness of Antidepressants: Current Status of Research. Psychotherapy and Psychosomatics, 79(5): 267-279) revisits some of the ground covered by Irving Kirsch but, interestingly, looks again at the outcomes from STAR*D.

Most people will recall that STAR*D was a large (over 4,000 patients enrolled) and expensive (about $35 million) trial which attempted to determine the 'real world' outcomes of up to four steps of antidepressant therapy (drugs and psychological therapy). It covered a range of different comparisons, and perhaps the 'quickest' overview of what was a complex trial can be found in this paper: Rush, A. J., Trivedi, M. H., Wisniewski, S. R., et al (2006) Acute and Longer-Term Outcomes in Depressed Outpatients Requiring One or Several Treatment Steps: A STAR*D Report. American Journal of Psychiatry, 163(11): 1905-1917. [Link to Journal Website]

A summary of the steps (and to some extent, outcomes) is shown below:


STAR*D used the 16-item Quick Inventory of Depressive Symptomatology (QIDS-16) as one of its main outcome measures, and the remission rates at each treatment step, measured with the QIDS-16, were as follows:

  • Step 1 = 36.8%
  • Step 2 = 30.6%
  • Step 3 = 13.7%
  • Step 4 = 13.0% 
The overall [cumulative] remission rate was 67%. This means that, overall, two thirds of patients achieved remission; it also means that one third had not remitted even after four treatment steps.
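For readers wondering where the 67% figure comes from, the sketch below shows one way a hypothetical cumulative rate of that size can be derived from the per-step figures, assuming every non-remitter simply proceeds to the next step (i.e. no drop-outs). It is an illustration of the arithmetic rather than the study's exact calculation.

```python
# Hypothetical cumulative remission across the four STAR*D steps,
# assuming no drop-outs: everyone who fails to remit at one step
# moves on to the next.
step_remission = [0.368, 0.306, 0.137, 0.130]  # QIDS-16 remission rates, Steps 1-4

not_remitted = 1.0
for rate in step_remission:
    not_remitted *= (1 - rate)   # fraction still unremitted after this step

print(f"Cumulative remission: {1 - not_remitted:.1%}")  # ~67.1%
```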

It's important to note that this cumulative remission rate was "hypothetical" in that it assumed no drop-outs - that is, it assumed that patients who exited the study would have remitted at the same rates as those who stayed, had they continued through the later steps. Of course, every trial has drop-outs, and this is what Pigott et al have taken into account. They also make some other criticisms of STAR*D:
  1. The use of the QIDS-16 as a primary outcome measure instead of the 30-item scale. The main reason for this was apparently the large number of missing measurements on the latter. However, Pigott et al point out that the QIDS-16 was designated a 'clinical' rather than a 'research' measure throughout the study, and argue that its use as a study outcome might therefore be unjustified.
  2. They also comment on the cut-offs that were used on the Hamilton Rating Scale for Depression (HRSD, or HAM-D). Pigott et al report that 931 patients whose HRSD score was ≤ 14 should have been excluded (an exclusion criterion for the study, intended to rule out those with very mild depression - i.e. non-major depression) but were nonetheless included. They argue that this "inflated" the original remission rate for Step 1 from 32.8% to 36.8%; a toy illustration of this effect is sketched below.
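To see how including milder patients can nudge the headline figure upwards, here is a toy illustration. The counts are invented for the purpose (they are not the study's or Pigott et al's actual figures): pooling a smaller group of milder patients, who remit at a higher rate, with the protocol-eligible group raises the combined remission rate.

```python
# Toy numbers only - chosen to illustrate the direction of the effect,
# not taken from STAR*D or Pigott et al.
eligible_n, eligible_remit = 3000, 0.328   # patients meeting the severity threshold
milder_n, milder_remit = 900, 0.50         # milder patients who arguably should have been excluded

pooled = (eligible_n * eligible_remit + milder_n * milder_remit) / (eligible_n + milder_n)
print(f"Eligible patients only:   {eligible_remit:.1%}")  # 32.8%
print(f"Pooled with milder group: {pooled:.1%}")          # ~36.8%
```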
They go on to present modified remission rates for each step, along with the corresponding drop-out rates:

Pigott et al conclude by discussing the fact that remission from major depression is hard to achieve. Perhaps this is not surprising to those who treat depression. Of course, we know that antidepressants are less effective for those with milder forms of the illness, but they remain one of the treatments for depression with the greatest evidence base. STAR*D also included Cognitive Behavioural Therapy (CBT), which is probably the best-evidenced psychological treatment for this population.

Should it really be a surprise that antidepressants don't work for everyone? Hopefully not. However, it is not clear what the authors might be suggesting as an alternative until you get to the 'Conflicts of Interest' section:

H. Edmund Pigott, PhD, and Gregory S. Alter, PhD, are founders of NeuroAdvantage, LLC, a for-profit neurotherapy company. During the past 3 years, Dr. Pigott has consulted for CNS Response, Midwest Center for Stress and Anxiety, and SmartBrain Technologies.
That's right: the authors own their own company, NeuroAdvantage, which will sell you (for $995) a device that plays sounds and lights in order to synchronise your brain waves. They claim that it "Helps to Decrease Symptoms of Depression & Anxiety". No wonder they were keen to weaken the case for drugs and psychological therapies - they have a product to sell that is targeted at the same market.

As with all such products, the marketing is reassuring: "This patented technology is based on over 70 years of research and takes advantage of our mind’s natural tendency to synchronize with pleasant rhythmic stimulation". Not only is the 'technology' old (and therefore, presumably, well established), it also links into our natural tendencies.

So keen are the authors/salesmen for you to consider their product that they provide visitors to the website with a link to the paper above (it is free to download). This is, arguably, a classic case of 'bait and switch'. There is a very good article on 'bait and switch' available here, but essentially the bait is a treatment that avoids the side effects and apparent ineffectiveness of drugs and therapy; once lured in, the switch is to unproven therapies with much less evidence.

How much less will be the subject of a subsequent post.

Sunday, 5 September 2010

Why are psychiatrists more likely to get into trouble with the NCAS?

The National Clinical Assessment Service (NCAS) was established in 2001 to help the NHS address concerns about the performance of doctors, and was extended to include dentists in 2003. They publish reports on their activity every couple of years or so; the most recent was published in 2009.

Unfortunately, it seems as though it's difficult to access from outside of the NHS. However, there is a copy here: NCAS Casework: The first eight years. There are some interesting findings.

One is that psychiatrists are over-represented among the specialties referred to the NCAS. Between 2001/02 and 2008/09, 541 psychiatrists were assessed out of a total of 4,508 doctors (12%). The majority, 341/541 (63%), were consultant psychiatrists.

This might not be that interesting were it not for the fact that more psychiatrists were assessed than would be expected from their share of the medical workforce. Psychiatrists make up 6-7% of doctors, yet they account for 12% of all assessments. This is shown below:


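A quick back-of-the-envelope check on the size of this over-representation, using the figures quoted above (the 6.5% workforce share is simply an assumed midpoint of the 6-7% range):

```python
# Rough over-representation calculation from the figures quoted in the report.
workforce_share = 0.065          # assumed midpoint of the 6-7% of doctors who are psychiatrists
assessment_share = 541 / 4508    # 541 psychiatrists out of 4,508 doctors assessed

print(f"Share of assessments: {assessment_share:.1%}")                     # 12.0%
print(f"Over-representation:  {assessment_share / workforce_share:.1f}x")  # ~1.8x
```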
Why might this be? Well, the NCAS has tried to determine the factors that might be contributing to this finding (which is consistent year-on-year). They found that, "The specialties are ranked in order of proportions non-white and qualifying outside the UK (column 2). O&G [Obstetrics and Gynaecology] and psychiatry rank highest (36 per cent and 33 per cent compared with 25 per cent for medical specialties together). They also have the lowest proportions white and UK-qualified (37 per cent and 40 per cent in column 5). Their share of UK-qualified non-white practitioners is below average (column 3) and white non-UK qualified practitioners have an above average share (column 4). Chart 2.1 may therefore be showing, for psychiatrists, the effect of ethnicity and place of qualification alongside specialty."

This appears to suggest that being non-white and having qualified outside the UK has a bigger effect in psychiatry than in other specialties, because psychiatry has a relatively high proportion of doctors in these groups. They add that, "There is no evidence that non-white UK-qualified practitioners are being referred or excluded disproportionately." I think it is important not to speculate too much, as these are potentially sensitive areas. NCAS are keen to point out that, "NCAS is not trying to produce a determinist explanation of referral patterns".

What about reasons for referral to the NCAS? Well, individual specialty data are not available from the report, but the most common reasons for doctors to get into trouble are "Clinical Difficulties", followed by "Governance/Safety Issues", followed by "Misconduct". This is shown in the graph below:


Of the clinical difficulties, the most common reasons were: Critical Incident (21%); Diagnosis Skills (20%); Record Keeping (18%); Consultation Skills (18%). Other reasons are broken down by specialty in the table below. Only the figures in bold were found to be statistically significant.



The whole report can be found at the link above.