What clinicians should know!



Re: Ann's comments on Lidcombe Program trial

From: Tom Weidig (thestutteringbrain.blogspot.com)
Date: 21 Oct 2008
Time: 07:52:55 -0500

Comments

Most people are NOT frustrated by attention to scientific principles. I am frustrated by the poor application of scientific principles to therapy outcome research: conflicts of interest (proving your own treatment works), a passive understanding and robot-like application of statistics, leaving out subtleties, and repeating statements and deferring to authority instead of engaging with counterarguments when challenged on the strength of the evidence. I will show that every single one of Ann's sentences (intended, in her words, to "clear up misconceptions") is inaccurate.

(1) "the Lidcombe Program randomised control trial". Let's be clear about which kind of RCT it is. It is not a double-blind RCT, the highest standard, which would allow one to check whether it is the treatment itself that is successful. It is an open-label trial, with the big disadvantage that even if the treatment arm shows a higher success rate, you cannot say whether that is due to the placebo effect (the fact that the kids/parents received treatment), to generic features of ALL early intervention treatments (parent-child interaction, easing parents' stress, adaptation to the treatment setting), or to an actual Lidcombe-specific effect. So you are NOT actually testing Lidcombe specifically, but the whole package (placebo, generic and specific)! Moreover, the randomisation was broken after 9 months and is not present in the long-term data. And note that the 9 months run from the start of the treatment, NOT from the end of the treatment. Finally, the sample size was too low for randomisation to equalise the two groups, as I discuss in my rapid response to Jones 2005 in the BMJ. Roger Ingham's group has confirmed and raised these arguments too, as he told me when I met him. So calling it a Lidcombe RCT looks very scientific, but it is really a misnomer!

(2) "The trial was reported by Jones et al. (2005) in the British Medical Journal." It is irrelevant whether it appeared in the BMJ or anywhere else. This adds nothing to the debate and only fallaciously implies that "the BMJ is a really good journal, so the trial must be really sound". Moreover, you do not mention that I wrote a rapid response in the BMJ criticising the statistics; and if you think that I, as a physics PhD, have no clue, you could at least mention the other critical feedback.

(3) "There was a significant treatment effect after 9 months, compared to the no-treatment control group." As I said, the statistics are flawed. And again: that is 9 months after the start of the treatment, not 9 months after the end of the treatment. I just re-read the article, and you write that the kids were still in treatment! The relevant time period starts at the end of treatment. ANY behavioural intervention will produce short-term gains: diets, drugs, giving up smoking. The important part is relapse.

(4) "The study was conducted according to CONSORT guidelines (see http://www.consort-statement.org/), which specify the appropriate methods and analyses for reporting trials in medical journals." First, these guidelines are written for standard situations, but early intervention is very different: natural recovery distorts the statistics, so you need many more kids to create truly balanced groups via randomisation. You stopped at 47 kids rather than the 100 that would have improved the statistics dramatically (the little simulation below shows why the number matters); in fact, your design specified 100. Why? Second, even if the guidelines are correct, that does not imply they were implemented correctly: kids dropped out, you gave up the control group, you changed the sample size.
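To make that sample-size point concrete, here is a rough simulation of my own (Python). The 75% natural recovery rate and the 15-percentage-point threshold are assumptions chosen purely for illustration, not figures taken from the trial; the point is only that with around 47 kids, chance alone quite often hands one arm many more natural recoverers than the other, whereas with 100 kids this happens far less often.

```python
import random

# Rough sketch (my own, illustrative numbers only): how often does simple
# randomisation leave the two arms imbalanced in the proportion of children
# who would have recovered naturally anyway?

def imbalance_rate(n_children, p_natural=0.75, threshold=0.15, trials=20000):
    """Fraction of simulated randomisations where the arms' natural-recovery
    proportions differ by at least `threshold`."""
    count = 0
    for _ in range(trials):
        # 1 = would recover naturally, 0 = would not (assumed 75% rate)
        kids = [1 if random.random() < p_natural else 0 for _ in range(n_children)]
        random.shuffle(kids)
        half = n_children // 2
        arm_a, arm_b = kids[:half], kids[half:]
        diff = abs(sum(arm_a) / len(arm_a) - sum(arm_b) / len(arm_b))
        if diff >= threshold:
            count += 1
    return count / trials

for n in (47, 100):
    print(f"n = {n:3d}: arms differ by >= 15 points in "
          f"{imbalance_rate(n):.0%} of simulated randomisations")
```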
(5) "They have been in existence for over 15 years." That is symptomatic of bad thinking, i.e. deference to authority. I do not care how many years something has been in existence; I only care about the strength of the arguments. To show how strange this reasoning is, I could just as well argue: well, if it is 15 years old, it is too outdated and should not be trusted! You might convince non-scientists, but you cannot conduct a debate with such pseudo-arguments.

(6) "As for the 5-year follow up study (Jones et al. 2008) of the children in this trial, it is indeed the case that three of the children were found to be stuttering again, after at least two years of fluency." Again, this sounds very respectable, but I have actually read the article (unlike most therapists). It is a disaster. The MAJORITY of the kids could no longer be contacted. Why? Or did someone not contact them because they stuttered? Moreover, 3 kids relapsing amounts to roughly an 86% recovery rate, and given the small sample you cannot even be sure you beat natural recovery (see the rough calculation at the end of this post). OK, you argue that natural recovery is much lower, but please show it to me in the control group, or achieve 90% in a sample of 100 kids! "Without this long-term follow up study, we would not have this important new knowledge about the nature of stuttering and about the need to work to further improve Lidcombe outcomes." Again, this sounds really great, but the study was so poorly implemented that we cannot trust the results. From 134 kids referred to treatment and 47 completing it, you are left with 28 kids! So where is this important new knowledge? How can you have new knowledge from such a poor sample? The need to further improve Lidcombe? That sounds like a spin doctor. THE TRIAL IS NOT SET UP TO PROVE LIDCOMBE IS EFFECTIVE, so how can you say you will improve it?

(7) "This tells us that:" "1. For these children the initial improvement in stuttering was apparently due to the treatment, not natural recovery (2)" NO: AGAIN, THE TRIAL DOES NOT EXCLUDE PLACEBO OR NON-LIDCOMBE EFFECTS. Moreover, you could even argue that those who would have recovered anyway simply recovered faster within the 9 months, because they had the inherent ability to recover regardless. And we know from adult therapy that nearly everything works for some time, not to speak of getting used to the clinic environment.

To summarise, I am simply fed up with sloppy pseudo-scientific replies that 99% of clinicians and the stuttering community swallow happily, because no one actually sits down and looks at the trial carefully. Whoever did would find that it is a can of worms. But let me conclude by saying that at least you are trying to do evidence-based research. The fact that I can criticise your research is progress in itself, because I cannot criticise other approaches at all: they do not do any outcome research.
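P.S. Here is the rough calculation promised under point (6), again my own sketch in Python: a 95% Wilson confidence interval for the follow-up recovery proportion. The counts (about 28 children assessed, roughly 86% of them fluent, i.e. 24 of 28) are assumptions for illustration and may not match the exact figures in Jones et al. (2008); the point is simply how wide the interval is when so few children remain.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(24, 28)  # assumed counts: 24 of 28 still fluent (~86%)
print(f"observed recovery ~{24 / 28:.0%}, 95% CI roughly {lo:.0%} to {hi:.0%}")
# The interval is wide; whether it clears plausible natural recovery rates is
# exactly the question that only a retained control group could have answered.
```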


Last changed: 10/21/08