HOP - the political discussion forum

Getting it wrong

Discussion in 'Science & Technology' started by Dr.Who, Nov 11, 2011.

  1. Dr.Who

    Dr.Who Well-Known Member

    Joined:
    Jul 11, 2007
    Messages:
    6,776
    Likes Received:
    251
    Trophy Points:
    83
    Location:
    Horse Country
    I stumbled on this blog the other day and found it interesting enough to read several of the author's articles, which I have bookmarked and will be sharing with you all. In the course of reading his blog I have determined that he is a liberal-leaning blogger, but I think what he has to say can still be valuable if we understand his biases.

    In this one he analyzes the process of research with a focus on medical research.

    http://www.guardian.co.uk/commentisfree/2011/jul/15/bad-science-studies-show-we-get-things-wrong

    "

    Morons often like to claim that their truth has been suppressed: that they are like Galileo, a noble outsider, fighting the rigid and political domain of the scientific literature, which resists every challenge to orthodoxy.

    Like many claims, this is something where it's possible to gather data.

    Firstly, there are individual anecdotes that demonstrate the routine humdrum of medical fact being overturned.

    We used to think that hormone-replacement therapy reduced the risk of heart attacks by around half, for example, because this was the finding of a small trial, and a large observational study. That research had limitations. The small trial looked only at "surrogate outcomes", blood markers that are associated with heart attack, rather than real-world attacks; the observational study was hampered by the fact that women who got prescriptions for HRT from their doctors were healthier to start with. But at the time, this research represented our best guess, and that's often all you have to work with.

    When a large randomised trial looking at the real-world outcome of heart attacks was conducted, it turned out that HRT increased the risk by 29%. These findings weren't suppressed: they were greeted eagerly, and with some horror.

    Even the supposed stories of outright medical intransigence turn out to be pretty weak on close examination: people claim that doctors were slow to embrace Helicobacter pylori as the cause of gastric ulcers, when in reality, it only took a decade from the first murmur of a research finding to international guidelines recommending antibiotic treatment for all patients with ulcers.

    But individual stories aren't enough. This week Vinay Prasad and colleagues published a fascinating piece of research about research. They took all 212 academic papers published in the New England Journal of Medicine during 2009. Of those, 124 made some kind of claim about whether a treatment worked or not, so then they set about measuring how those findings fitted into what was already known. Two reviewers assessed whether the results were positive or negative in each study, and then, separately, whether these new findings overturned previous research.

    Seventy-three of the studies looked at new treatments, so there was nothing to overturn. But the remaining 51 were very interesting because they were, essentially, evenly split: 16 upheld a current practice as beneficial, 19 were inconclusive, and crucially, 16 found that a practice believed to be effective was, in fact, ineffective, or vice versa.

    Is this unexpected? Not at all. If you like, you can look at the same problem from the opposite end of the telescope. In 2005, John Ioannidis gathered together all the major clinical research papers published in three prominent medical journals between 1990 and 2003: specifically, he took the "citation classics", the 49 studies that were cited more than 1,000 times by subsequent academic papers.

    Then he checked to see whether their findings had stood the test of time, by conducting a systematic search in the literature, to make sure he was consistent in finding subsequent data. From his 49 citation classics, 45 found that an intervention was effective, but in the time that had passed, only half of these findings had been positively replicated. Seven studies, 16%, were flatly contradicted by subsequent research, and for a further seven studies, follow-up research had found that the benefits originally identified were present, but more modest than first thought.

    This looks like a reasonably healthy state of affairs: there probably are true tales of dodgy peer reviewers delaying publication of findings they don't like, but overall, things are routinely proven to be wrong in academic journals. Equally, the other side of this coin is not to be neglected: we often turn out to be wrong, even with giant, classic papers. So it pays to be cautious with dramatic new findings; if you blink you might miss a refutation, and there's never an excuse to stop monitoring outcomes."
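    The arithmetic in the quoted piece is easy to sanity-check. Here is a quick tally (the category counts are taken straight from the article; the variable names are my own):

    ```python
    # Prasad et al.: NEJM papers from 2009, as described in the quoted article.
    total_papers = 212
    claims = 124            # papers claiming a treatment did or did not work
    new_treatments = 73     # new treatments, so nothing prior to overturn
    retested = claims - new_treatments          # 51 papers re-testing existing practice
    upheld, inconclusive, overturned = 16, 19, 16
    assert upheld + inconclusive + overturned == retested

    # Among re-tests that reached a firm conclusion, half reversed prior practice.
    conclusive = upheld + overturned
    print(f"reversals among conclusive re-tests: {overturned / conclusive:.0%}")  # 50%

    # Ioannidis (2005): 49 citation classics, 45 of which found an intervention effective.
    classics_effective = 45
    contradicted = 7
    print(f"flatly contradicted: {contradicted / classics_effective:.0%}")  # 16%
    ```

    So the scary-sounding "50%" only applies to the subset of re-tested practices that produced a clear verdict, not to medical practice as a whole.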

    In other words, of the established practices that were re-tested to a firm conclusion, about 50% turned out to be wrong, but the errors get corrected soon enough.
     
  2. dogtowner

    dogtowner Moderator Staff Member

    Joined:
    Dec 24, 2009
    Messages:
    16,604
    Likes Received:
    1,340
    Trophy Points:
    113
    Location:
    Sec 9 Row J Seat 1 @ VCU home games
    I have noticed, with disturbing frequency, that the basis for a study's conclusion is narrowly qualified and doomed to be found lacking. I wonder if this is not a predictable outcome of the research grant racket, as opposed to the pure research model of our golden age.

    As an example, Bell Labs (a private concern) came up with a shocking number of things when their people were free to just think outside the box. Then the model changed, and when was the last time you heard them mentioned? They were in the news all the time when I was young. CERN seems to have avoided this to some extent, but why not us?

    There are lots of causes for this, but the ones I see are all governmental in nature. Between destructive changes in tax law that halved the time a new idea had to produce ROI, and government dictating research via grants (picking winners and losers), it's just another indictment of overbearing government.
     
  3. Dr.Who

    Dr.Who Well-Known Member

    While I did not intend this thread to be about how regulation of the health research industry makes research difficult and expensive, you are right. It takes years and millions of dollars for even the simplest new drug, or new use of an existing drug, to be approved.

    Case in point: we use a "pen" to inject insulin. The pen can deliver 1 unit, 1.5 units, 2 units, 2.5 units, and so on. I was annoyed that while the pen was marked with a .5-unit measure, the directions said not to give a .5-unit injection because it might not be accurate. Then, just a few months ago, the FDA approved the pen, the exact same pen with no alterations, to give .5-unit injections. How hard was it to determine that the pen could accurately deliver .5 units of insulin? When they proved that it could deliver 1 unit accurately, why did they not at the same time prove that it could deliver .5? Why the years of delay, and how much did the separate study cost?
     
  4. dogtowner

    dogtowner Moderator Staff Member


    You note the problem, so the next step is the "why". It's a sad thing that the finger so often points to DC. Worse, it's seldom a matter of unintended consequences so much as picking winners.
     
  5. Dr.Who

    Dr.Who Well-Known Member

    Sadly, the .5-unit dose very well could have gone unapproved for so long because someone was picking the winner: the pump, or the disposable pen, or a competing pen, or the patch, or was just making a push for the artificial pancreas.
     
  6. dogtowner

    dogtowner Moderator Staff Member


    Got me wondering now: was it only the .5, or all the half units? Kiddo and I both use pens (hers is insulin), so it kind of makes a difference here. Could be that the lone .5 dose is so little used that it doesn't matter much.
     