This is Chapter 9 of my book-in-progress, “Open Wide And Say Moo! – The Good Citizen’s Guide to Right Thoughts And Right Actions Under Obamacare.” Comments are fervently sought; you can leave them here.
You can read my rationale for undertaking this project, and thus opening myself up to the possibility of public failure, humiliation, derision, disapprobation, and unwanted scrutiny, here.
And here is the up-to-date archive for all the chapters that have been posted so far.
Update – September 1, 2012
Open Wide and Say Moo! is now revised and published!
Now available in the audiobook version!
Farmer Jones has 10,000 head of cattle in his beef herd. He prides himself on staying up to date on all the latest methods, so he knows that adding a certain antibiotic to his cattle’s feed will reduce the incidence of intestinal infections, and will increase his annual overall yield, measured in pounds of beef, by 7%. He also knows that, unfortunately, roughly one in 200 of his cattle will experience a likely fatal allergic reaction to the antibiotic. It is possible to do a blood test to determine which specific members of the herd are allergic, but the test itself is quite expensive, and the logistics of separating the allergic cattle at feeding time and providing them with their own antibiotic-free feed would be so costly they would entirely wipe out his potential gains. What should Farmer Jones do?
Obviously, the cost-effective solution is for Farmer Jones to give antibiotic-treated feed to all his cattle, accepting the loss of a few head as the necessary price for an impressive overall gain in productivity. He would be an ineffective and incompetent rancher indeed if he were to pass up this golden opportunity to achieve cost-effectiveness.
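Farmer Jones’s herd-level arithmetic can be sketched in a few lines. The herd size, the 7% yield gain, and the one-in-200 fatality rate come from the example above; the average yield per head is an invented figure, used purely to make the calculation concrete.

```python
# Farmer Jones's herd-level arithmetic, using the figures from the
# example above. The average yield per head is an invented number,
# included only to make the calculation concrete.

herd_size = 10_000
yield_gain = 0.07              # 7% increase in overall beef yield
fatal_reaction_rate = 1 / 200  # roughly one in 200 cattle

avg_yield_lbs = 600            # assumed pounds of beef per head (illustrative)

baseline_yield = herd_size * avg_yield_lbs
gain = baseline_yield * yield_gain        # extra pounds from the antibiotic
deaths = herd_size * fatal_reaction_rate  # head lost to allergic reactions
losses = deaths * avg_yield_lbs           # pounds those deaths cost the herd

print(f"Cattle lost to allergic reactions: {deaths:.0f}")
print(f"Extra yield: {gain:,.0f} lbs; yield lost: {losses:,.0f} lbs")
print(f"Net herd-level change: {gain - losses:+,.0f} lbs")
```

On the herd’s ledger, fifty dead cattle are swamped by the overall gain – which is precisely how the cost-effective answer gets its force.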
If you are a patient or a potential patient (and who is not!), you ought to be especially concerned about two particular hazards that are intrinsic to herd medicine. First, as demonstrated by Farmer Jones, medical decisions that are made on a collective basis rather than on an individual basis may succeed in improving the overall outcome for the herd, but often only at the cost of doing predictable – and avoidable – damage to certain individuals within that herd.
Second, since it is the overall health of the herd which is important, there will always be individuals within the herd whose very existence is seen by Farmer Jones as counterproductive. Individual cattle that are too scrawny, too old, or are otherwise unlikely to prove profitable, are still consuming valuable resources and taking up valuable space. So under any system of herd medicine there will always be a natural temptation to cull instead of cure certain inconvenient individuals.
It is extraordinarily politically incorrect to mention this second point, and so I must apologize right away for having done so. Sorry.
In fact, Obamacare, so far, seems to have taken no overt steps in the direction of actively “culling the herd.” But the history of Progressivism, sadly, is not reassuring in this regard. Early Fathers (and Mothers) of Progressivism enthusiastically embraced eugenics as an attractive, science-based method for reducing the sort of undesirable citizens who so obviously hinder the achievement of a perfect society. Certain Progressive societies – led by doctors – have conducted the “humane termination” of people with various disabilities. And collectivist governments (admittedly usually more out of frustration at the recalcitrance of human nature than out of any scientific zeal) have been responsible for the deaths of millions of people over the last century. So, if only to keep on the safe side, we members of the Obamacare herd ought to remain alert to any tendency toward culling behaviors. If our Progressive friends are as filled with the milk of human kindness as they insist, our vigilance in this matter may waste some of our time, but otherwise should do no harm. And accordingly, to help focus our vigilance (in order to render it more cost-effective), in later chapters I will point out certain aspects of American healthcare that seem particularly likely venues for culling activities.
In this chapter, however, I will concentrate on the less sinister but more universal hazard inherent to herd medicine – causing predictable and avoidable harm to individuals by insisting on making medical decisions collectively.
Let us imagine that a large clinical trial has shown that a new cancer drug increases the mean survival in women with metastatic breast cancer by three months. Unfortunately, the drug also causes some very nasty side effects, including some that can be fatal. And again unfortunately, this is one of those fancy designer drugs that cost over a billion dollars to develop and are very costly to manufacture – so it is quite expensive.
A panel of experts, after carefully studying all the evidence, concludes that, given the relatively short improvement in mean survival, neither the risk/benefit ratio nor the cost/benefit ratio justifies approving the drug. The news media, while expressing sadness and compassion for breast cancer patients, solemnly concurs that the experts, of course, are right – that, while the drug has shown promise, it’s just not effective enough to justify the risk of side effects, or the cost of the drug. So better luck next time, with the next drug.
I think we must agree that it cannot be society’s duty to buy this new drug for all women with breast cancer. Under any publicly funded healthcare system that is run in a fiscally sound manner (at least sound enough to avoid causing a catastrophic financial collapse), some line will need to be drawn, somewhere, regarding what expenses public funds can bear. And very possibly, a cancer drug that extends the mean survival by only three months may not make the cut.
In Chapter 4 we discussed the four possible methods for running a fiscally sound healthcare system. If we were under a Method Three healthcare system, where public spending is strictly limited but where individuals have the option of supplementing the public system with their own private insurance products, or even paying for desired healthcare services themselves, then many individuals would still have access to treatments like this new cancer drug, if they wanted to try it.
But under a Progressive, Method Two healthcare system, public funding is all there is. In this case one centralized coverage decision must fit all, and the result is herd medicine. Under herd medicine the new cancer drug cannot be approved, for anyone, once a panel of experts determines that its herd effect is insufficient to justify approval.
But determining the herd effect of a therapy (i.e., the average response to that therapy across a herd of patients), does not really tell the whole story.
Going back to our hypothetical, if you look at what actually happened in the clinical trial with our imaginary cancer drug, it turns out that very few of the women with breast cancer actually experienced three additional months of survival. Instead, some had a truly remarkable response to the drug, and are still alive a year or more after their predicted demise. In fact, it appears that a few might even have been completely cured. Some women, on the other hand, had very bad experiences with the drug, and side effects hastened their deaths. When you average all of these responses together, you get a mean benefit of three months.
But “three months of additional survival” is not actually what we would expect for most individual women who take this drug; in fact, that is what happened to relatively few of them.
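The point is easy to see with a toy distribution of responses. The fractions and survival figures below are invented for illustration – the trial in this chapter is hypothetical – but they show how dramatic benefit for a few and harm for others can average out to a bland “three months”:

```python
# An invented response distribution (illustrative only) whose average
# works out to three extra months, even though almost no individual
# patient actually experiences three extra months.

# (fraction of patients, additional survival in months)
responses = [
    (0.10, 24.0),   # remarkable responders: two extra years
    (0.02, 60.0),   # apparent cures (capped at five years for the arithmetic)
    (0.58,  0.0),   # little or no change
    (0.30, -2.0),   # harmed by side effects: die sooner
]

# The fractions must account for every patient in the trial.
assert abs(sum(frac for frac, _ in responses) - 1.0) < 1e-9

mean_benefit = sum(frac * months for frac, months in responses)
print(f"Mean additional survival: {mean_benefit:.1f} months")  # → 3.0 months
```

The “mean benefit of three months” is real, but it describes the herd, not any particular member of it.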
In general, the reason people with cancer subject themselves to the ravages of chemotherapy is not to gain a few more weeks of life. The chemo itself often produces several weeks where life is barely worth living, so that would be a bad trade. Rather, they subject themselves to chemo on the hope – often a slim hope – that by doing so they are gaining some realistic chance at surviving for a long, long time.
If you were to give women with metastatic breast cancer – an incurable disease that invariably causes death – the option of taking our hypothetical new cancer drug, some would opt for it and others would not. But in making their decisions, most of these women would not be thinking about the average of three additional months. Rather, most would be considering the fact that this new drug offers them some chance to beat back their cancer for substantially longer than that. They would be hoping to beat the average. They would be making the same calculus that cancer patients always make.
This new cancer drug represents a new chance at long-term survival, and faced with a fatal disease that is difficult to treat, taking that chance would have been a reasonable choice for many women – even though the drug produces only a tepid herd effect.
Herd medicine removes this option. When our hypothetical panel of experts decides not to approve this new drug – for anybody – what they have concluded is that, because the drug does not produce a sufficiently favorable effect across the herd, individual women should not have the option of using it. This is the only thing expert panels under a herd medicine paradigm can do. They cannot deal in nuances. They must determine whether a new therapy merits application to the entire herd, or to nobody.
Furthermore, if the answer is “nobody,” then the message the experts must convey – the only acceptable message they can convey – is that the new therapy simply doesn’t work. Either they will say it is ineffective, or that its modest average effect is completely negated by the risk of side effects. They cannot let on that the actual data suggests that some individuals will have a truly remarkable benefit from the drug, and that on an individual basis, deciding to take the drug despite the risks would not be unreasonable.
It is worth noting that as a general rule, progress in cancer treatment has been a slow, painful and incremental process. Very few therapies have been devised that have single-handedly led to major gains in survival. Rather, progress has come from a long series of small steps – improving the average survival by three months with this drug regimen, then adding another six months with another drug regimen, and so on. Once expert panels begin deciding that adding another three (or six, or nine) month increment to the average survival of the herd does not meet the threshold for approval – that is, once it becomes evident that only “home run drugs” are sure to be approved – then drug companies will become quite reluctant to invest in the development of new cancer drugs. And medical progress will slow drastically.
Herd medicine will remove individual choice, will take away hope, and will stifle the slow, steady progress we have made in treating some of the most deadly diseases we face.
The hallmark of herd medicine is that it systematically and officially devalues the worth of the individual, essentially declaring that patients can be treated all alike, as if they are interchangeable members of a homogeneous group. This devaluation of the individual, however, was not produced out of whole cloth by the Obamacare legislation. Rather, it is something that has been in the works for several decades, the natural, evolutionary result of a philosophy of healthcare that was all the rage until just a few years ago, but which – mysteriously – we seem to hear very little about these days. I refer, of course, to managed care.
Like many of the travesties that have taken place within our healthcare system, managed care began with a pretty reasonable idea; namely, to apply certain management principles to the healthcare system that have been used successfully in other industries, thereby injecting logic, organization, and accountability to what had been a bastion of disorganization and inefficiency.
The unifying idea behind managed care boils down to one word: standardization. Standardization is virtually a synonym for industry. In industry, standardization is the primary means of optimizing the two essential factors in any industrial process: quality and cost.
This proposition can be stated formally as the Axiom of Industry:
The standardization of any industrial process will improve the outcome and reduce the cost of that process.
If you had a widget-making factory, you would break your manufacturing process down into discrete, reproducible, repeatable steps and then optimize the procedures and processes necessary to accomplish each step. To further improve the quality of your finished product (or to reduce the cost of producing it), you would reexamine the steps, one by one, seeking opportunities for improvement. You would need to understand the process thoroughly, and you would need to collect data about how well the process works. But with the right information, you could almost certainly identify a few minor changes to improve the manufacturing process. The beauty in such a system is that you have only to make one change — to the process itself — and every widget that comes off the line after you make that change will be improved.
So standardization is good. It leads to higher quality and lower cost. Conversely, variation is bad. It reduces quality and raises cost.
Proponents of managed care argued that standardization should be just as useful in healthcare as it is in other industries. As medical care has traditionally been individualized, highly variable, and without any semblance of standardization, there must be a huge opportunity to improve the processes of care and to make them both cheaper and more effective. There is obvious merit in such an idea.
Perhaps the most direct, and the most successful, application of managed care practices to modern medicine was the adoption of “critical pathways” in the 1990s.
Critical pathways are blueprints for delivering standardized care to patients with specific medical problems. Consider a critical pathway for hip replacement surgery. The critical pathway is a specific schedule laying out which services are to be provided for the patient and when, from the date of hospital admission until the date of discharge (which is, of course, predetermined). Checklists are created itemizing which laboratory tests to order and when, which medications to administer at which times, and which specific complications to check for. Everyone involved in the patient’s care has their own relevant checklist. From the moment of the patient’s hospital admission, the critical pathway predetermines when to take vital signs, when to get the patient out of bed, when to begin physical therapy, and when to provide standardized instructions to the patient before discharge. Every vital medical service is included, and all extraneous medical services are omitted.
A “case manager” monitors the care each patient receives under the critical pathway. Every deviation from the prescribed procedure is tabulated as a “variance.” Variances are tracked not to decide who to punish, but to identify areas of the process that need improvement. If too many instances of a particular variance are seen in a critical pathway, then either medical personnel need to be retrained on following the pathway appropriately, or the pathway itself should be changed to reflect more realistic expectations.
Critical pathways, in fact, proved to be extremely helpful in managing many medical conditions. But of course there were some drawbacks and limitations.
First, critical pathways are only useful for delivering medical services, like elective surgery, in which the process of care can be broken down into a predictable series of discrete, reproducible tasks that generate reproducible results. In other words, industrial management tools only work when the process of care is similar to the process of making widgets.
Critical pathways are almost worthless when you are dealing with medical illnesses in which neither the diagnostic procedures nor the treatments that may be employed can be predicted or, therefore, standardized. For instance, it has proven impossible to develop workable critical pathways to manage patients with congestive heart failure (CHF). Knowing only that a patient has been admitted to the hospital with CHF tells you nothing about whether that patient will require cardiac catheterization, a stent, bypass surgery, valve replacement, a pacemaker, an implantable defibrillator, a mechanical ventilator, a prolonged and complicated stay in the intensive care unit, or just a couple of diuretic tablets and overnight observation. No two patients with CHF are exactly alike; and there is no such thing as a standard patient. Unfortunately, most non-surgical medical services fall into this category.
Second, it turns out that when you are taking care of patients, the Axiom of Industry simply does not hold true. Standardization does not always improve outcomes and reduce cost. The reason for this is: Patients are not widgets. And while in theory everyone seems to agree that patients are not widgets, the implications of this fact appear to escape many of our public health experts.
If you’re a widget maker, deciding between two manufacturing processes is a matter of economics. Nobody expects you to consider the widget itself. The outcome by which you are judged has nothing to do with how many individual widgets get discarded during the manufacturing process, or even the quality of the widgets that pass final inspection. Instead, it’s the bottom line: how much profit you make in relation to whatever level of quality you put into the widget. So the quality of the widget is not necessarily maximized; instead it’s optimized – tuned to the quality/cost ratio dictated by the market forces of the day. This is why, for a widget maker, the axiom holds: standardization, by rooting out variability, reduces the cost of making the widget (at whatever quality level you choose). This automatically improves the outcome, because the outcome the manufacturer cares about is overall profit.
If instead of running a widget company you’re practicing medicine, the calculus is supposed to be different. You’re supposed to be more interested in how things turn out for individual patients than you are in the bottom line. So an expensive process that yields a better clinical outcome is one most people (patients, at least) would expect you to use, even though it only gets you a healthier patient and doesn’t help your bottom line. A process that increases patients’ mortality rate by five percent is one you should disregard, even if it is substantially cheaper than the alternative. The clinical outcomes experienced by patients — the measure of success you’re supposed to be concerned about — may move in the same direction as costs, or in the opposite direction. But because you’re dealing with patients instead of widgets, the Axiom of Industry doesn’t hold – and outcomes and costs do not always move in the same direction.
So the push to strictly apply managed care techniques to healthcare created a dilemma for doctors. Doctors – the widget-makers in this scheme – tried diligently to apply standardized procedures such as critical pathways to the care of their patients. But the more un-widget-like the medical services they were providing, the more often they were compelled to make variances to the prescribed standardized process, in order to best serve their individual patients.
Such variances are a legitimate and valued aspect of any industrial process. In the widget-making world, variances reveal that the process needs to be tweaked to make it more usable. Variances lead to further iterations and refinements of the process, and a steadily improving result. Exceptions are what allow these industrial processes to become self-correcting.
But in the messy world of patient care, the variances revealed instead that industry-like standardization only works for a minority of medical services. No amount of tweaking can standardize the management of complex patients with complex combinations of illnesses.
It did not take long for doctors to simply stop attempting to use critical pathways for un-widget-like medical services. They did this because they actually cared about what happened to the individual widgets in their charge.
Similarly, it did not take long for our public health experts to recognize the same problem. From their standpoint, however, the problem was not that patients are not widgets. The problem was that the doctors on the scene cared about the widgets. Further analysis revealed that the root of the problem was that classic managed care techniques like critical pathways were administered locally, and therefore the misguided loyalties of the doctors on the scene were allowed to rule the day.
The reason we don’t hear about managed care anymore is that such terminology refers back to those locally-administered, iterative, self-correcting, continuously improving industrial processes. And our public health experts have now realized that this model does not work, and must no longer be encouraged.
The solution to the widget-makers’ dilemma is to remove the dilemma. Since a dilemma requires one to choose between two options, any dilemma can be resolved by simply removing the choice. And this is what has now been accomplished.
There is no dilemma for physicians any more. Clinical decisions are now to be made centrally, by expert panels appointed by the government, through the mechanism of what is euphemistically called “guidelines.” Guidelines are sacrosanct rules that will determine precisely who is to get what, when and how. Doctors are now enjoined, both by law and by their new medical ethics, to follow those “guidelines” to the letter, without exception.
So instead of the locally-controlled, iterative, self-correcting quality improvement processes like critical pathways – the same kind of processes that have so significantly improved American automobiles over the past three decades – under Obamacare we are reverting to a central-directive-style of management, far more reminiscent of the old Soviet collective farms.
Complex systems controlled by expert-generated centralized directives have never worked and never will. The fact that experts always seem to espouse such systems – apparently under the theory that they are so much more clever, or have better information, or better systems, than those other experts who tried before and failed – is just one of the reasons we should always be afraid of experts.
And sure enough, the experts who are going to determine which medical care we in the herd will receive (and not receive) do indeed have a new and infallible system – a magic bullet – upon which to base those decisions. They call it “evidence-based medicine,” which certainly sounds like a useful thing. And further, the “evidence” featured in this new formulation, virtually by definition, must come from a specific kind of rigorous study called the randomized clinical trial, or RCT.
Bias in clinical trials has long been recognized as a problem. All clinical trials are inherently biased; a research study is biased from the moment it is conceived. And those who conceive of, plan, conduct, and analyze a clinical study have every opportunity to steer its results. (This, indeed, is the very reason everyone is so indignant about the studies conducted by the medical industry and its minions in the medical academy.) That advantage of bias now, under law, defaults to the government’s expert panels.
The formulation which our leaders would have us believe is that first, such government panels will be completely objective and unbiased, and second, even if they were biased, the fact that they are basing all their decisions on RCTs will eliminate any possibility of acting on that bias.
The idea that government-controlled expert panels will be unbiased, of course, is absurd. The reason these panels exist in the first place is to control healthcare costs. Since the main mechanism by which these experts will drive a reduction in spending on medical services is through the application of clinical trials – whose results the experts themselves will officially interpret – panelists obviously will be strongly biased toward interpreting those results in a way that will justify withholding expensive medical services.
And while they are busily spinning the results of RCTs, the same experts will be assuring us that RCTs provide a guarantee against bias. For, according to the Gospel of the RCT, the chief advantage of this sort of clinical trial is that it eliminates bias altogether, and produces a completely objective result. So, in order to do the right thing, one merely needs to follow the results of RCTs.
This gospel is incorrect. An RCT, like any clinical trial, is inherently biased from the very beginning.
Many clinical researchers believe in their hearts and souls that bias can be eliminated through the use of RCTs. In such trials, “like” groups of research subjects are divided randomly into two or more groups, and each group receives (for instance) a different therapy, whereupon differences in outcomes among the groups are attributed to the different therapies to which they were randomized. Indeed, the widespread belief that RCTs are the necessary and sufficient means to achieve “clinical truth” has become so deeply ingrained within the medical establishment that when anyone (such as your humble author) suggests otherwise, he immediately reveals himself to be a scientific Neanderthal.
The widespread belief in RCTs has become nearly a Cult in the medical establishment, whose creed can be reduced to three main tenets:
1) Data derived from randomized clinical trials represents Truth.
2) Data derived from non-randomized trials represents Falsity.
3) If you don’t believe this, you are a heathen.
Objective observers should find it at least a little ironic that an attempt to claim the scientific high ground has so obviously resulted in a new religion, replete with its own dogma.
The sad truth is that the results of RCTs invariably depend on the bias built into their design; and even when RCTs are internally statistically legitimate, their interpretation can usually be twisted to suit one’s preconceived notions. For these reasons RCTs, like any clinical trial, can often send us down the wrong path.
Those who design RCTs (the smart ones, at least) know this. Like smart trial attorneys, they know the answer before they ever dare to ask the question. So they tailor their “question” in such a way as to yield the answer they want to get. Indeed, if a lawyer should end up asking a question in court that produces an unexpected answer, he or she is completely incompetent and ought to be sued for legal malpractice. In more cases than one might think, the same is true for those who design RCTs.
For instance, if you are an insurance company and want to limit the use of an expensive therapy, you design your RCT so that patients likely to respond favorably to the therapy are diluted within a broad population of enrolled patients, many of whom are less likely to respond favorably. This tactic will tend to make the average response of the whole population quite unimpressive. (In many instances the clinical characteristics of the likely responders and the likely non-responders will be reasonably apparent to the study designers.)
On the other hand, if you are a drug company that wants to encourage the use of your expensive new product, you design an RCT that preferentially enrolls the relatively small subset of patients who are most likely to respond favorably. Once your product has gained approval through the results of your RCT, you can then trust the marketplace (with a tweak from your direct-to-consumer advertisements) to “extrapolate” the results to broader categories of individuals.
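Both tactics amount to the same arithmetic, worked in opposite directions. The numbers below are invented for illustration: a therapy that helps a well-defined subset of patients a great deal, and everyone else not at all.

```python
# Invented numbers, for illustration: a therapy that gives likely
# responders a large benefit and non-responders essentially none.

RESPONDER_BENEFIT = 12.0      # months of benefit among likely responders
NONRESPONDER_BENEFIT = 0.0    # months of benefit among everyone else

def average_effect(responder_fraction):
    """Mean benefit across whatever population the trial chooses to enroll."""
    return (responder_fraction * RESPONDER_BENEFIT
            + (1 - responder_fraction) * NONRESPONDER_BENEFIT)

# Dilute the responders, and the herd effect looks unimpressive;
# enrich for them, and the same therapy looks like a triumph.
print(f"Broad enrollment (10% responders):    {average_effect(0.10):.1f} months")
print(f"Enriched enrollment (80% responders): {average_effect(0.80):.1f} months")
```

Same therapy, same biology – the only thing that changed between the two results is who was allowed into the trial.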
So it is immediately obvious that RCTs do not eliminate statistical bias, as the dogma suggests. Rather, they simply offer an opportunity to control the statistical bias in your favor.
Sadly, it is often child’s play for interested parties (both government and private) to twist and spin RCTs to create the desired impression. The conceit of Obamacare – that industry-sponsored research is invariably biased, while government-sponsored (or government-interpreted) research is entirely objective, and therefore, that the only thing we need to assure accurate clinical research is to have it all controlled by the government – is dangerously wrong.
Since all clinical research entails bias, the appropriate way to approach any clinical problem would be to acknowledge that neither RCTs nor any other kind of clinical trial will reliably distinguish between Truth and Falsity, and that no (inevitably conflicted) group of experts should be given the exclusive authority to interpret clinical results. Then, given the possibly competing results from various studies – which often will not yield a firm “answer” – the individual doctor and individual patient can weigh the evidence, review the list of risks and benefits most pertinent to that patient, and determine the optimal course of action given that patient’s particular circumstances and proclivities. Driving such a process, in fact, is what doctors are supposed to do.
But herd medicine does not allow for such individualized decisions, nor does it allow that there may be grey areas in clinical medicine, or that what’s right for one patient may not be right for another. Instead, it insists that RCTs must yield the Truth, that panels of very smart experts can discern that Truth, and that these panels can determine the one Right Answer that is applicable to the entire herd.
In the next few chapters I will demonstrate more specifically how expert-driven herd medicine can cause extreme harm to individuals, and to our society. I will finish this chapter by showing a recent example of how an RCT, even a straightforward one, can be twisted quite easily into a pretzel by biased interpreters.
In 2010, the Archives of Internal Medicine published four (four!) articles assaulting the legitimacy and the importance of the JUPITER trial, a landmark clinical study published in 2008, which showed that certain apparently healthy people with normal cholesterol levels had markedly improved cardiovascular outcomes when taking a statin drug.
Superficially, at least, the JUPITER study appears to have been pretty straightforward. Nearly 18,000 men and women from 26 countries who had “normal” cholesterol levels but elevated C-reactive protein (CRP) levels were randomized to receive either the statin drug Crestor, or a placebo. CRP is a non-specific marker of inflammation; an increased CRP blood level is thought to represent inflammation within the blood vessels, and is a known risk factor for heart attack and stroke. The study was stopped after a mean follow-up of a little less than two years, when the study’s independent Data Safety Monitoring Board (DSMB) determined that it would be unethical to continue. For, at that point, individuals taking the statin had a 20% reduction in overall mortality, a dramatic reduction in heart attacks, a 50% reduction in stroke, and a 40% reduction in venous thrombosis and pulmonary embolism. All these findings were highly statistically significant.
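One caution when reading figures like these: the reductions reported from trials are relative, and a relative reduction says nothing by itself about how many people must be treated to prevent one event. With an invented baseline event rate (not taken from the JUPITER data), the translation looks like this:

```python
# Translating a relative risk reduction into absolute terms.
# The baseline event rate here is invented for illustration and is
# NOT taken from the actual JUPITER data.

baseline_event_rate = 0.02        # assumed 2% event rate on placebo
relative_risk_reduction = 0.50    # e.g., a reported 50% reduction

treated_event_rate = baseline_event_rate * (1 - relative_risk_reduction)
absolute_risk_reduction = baseline_event_rate - treated_event_rate
number_needed_to_treat = 1 / absolute_risk_reduction

print(f"Absolute risk reduction: {absolute_risk_reduction:.1%}")
print(f"Number needed to treat:  {number_needed_to_treat:.0f}")
```

A “50% reduction” on a 2% baseline means treating on the order of a hundred people to prevent one event – a fact that matters greatly when weighing risks and costs, whether for the herd or for the individual.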
This study is noteworthy because it was the first large randomized trial to show that taking a statin can markedly reduce the incidence of some very harmful cardiovascular outcomes in people who are considered to have “normal” cholesterol levels.*
* Notably, typical LDL cholesterol levels among primitive hunting/gathering cultures are around 50 mg/dL, instead of the 100 – 120 mg/dL we consider to be normal. These primitive folks have an extremely low incidence of cardiovascular disease, so maybe humans’ optimal cholesterol level is much lower than we now think. On the other hand, the low risk of cardiovascular disease among hunters/gatherers may instead be related to the fact that many more of them than of us are consumed by various species of carnivores before they’re 30.
To be sure, the JUPITER trial was far from perfect. Because of its design, it could not (and did not) tell us whether the beneficial outcome is specific to Crestor, or is a class effect of all statins (which seems very likely). It did not tell us whether reducing CRP levels is itself beneficial, or even whether using CRP as a screening tool is actually helpful. (The people enrolled in this trial tended to have several other risk factors, such as being overweight, having metabolic syndrome, and smoking, and it is not clear how much additional risk elevated CRP levels really added in this population.) And this trial did not tell us the risks of lifelong, or even very long-term, Crestor therapy.
But JUPITER did tell us something that is very useful to know, and with a very high degree of statistical surety: Giving Crestor to patients similar to the ones enrolled in this study can be expected to result in significantly and substantially improved cardiovascular outcomes, and in a relatively short period of time.
If medicine were practiced the way it ought to be – where the doctor takes the available evidence, as imperfect as it always is, and applies it to each of her individual patients – then the incompleteness of answers from the JUPITER trial would present no special problems. After all, doctors never have all the answers when they help patients make decisions. So, in this case the doctor would discuss the pros and cons of statin therapy – the risks, the potential benefits, and all the quite important unknowns – and place the decision in the perspective of what might be gained if the patient instead took pains to control their weight, exercise, diet, smoking, &c. At the end of the day, some patients would insist on avoiding drug therapy at all costs; others would insist on Crestor and nothing else; yet others would choose to try a much cheaper generic statin; and some would even opt (believe it or not) for a trial of lifestyle changes before deciding on statin therapy. In other words, there is an entire range of reasonable options given the limitations of our knowledge, as there often is in clinical medicine. As time goes by, more scientific evidence is often brought to bear and clinical decisions can become more informed. But whatever the state of the evidence, doctors and patients can generally get by without violating too severely any ethical or medical precepts that would cause objective and neutral observers to complain very much.
But this kind of individualized give-and-take between doctor and patient, in which the pros and cons are discussed in light of the patient’s own leanings, is no longer how doctors will practice medicine. Instead, they will practice herd medicine. Expert panels will decide whether people ought to take Crestor, or some other statin, or nothing – and that decision must apply to everybody.
And this makes the stakes very high when it comes to a clinical trial like JUPITER. For herd medicine does not permit a range of actions tailored to fit individual patients (consistent with the uncertainties inherent in the results of any clinical trial). Instead, under herd medicine the results of clinical trials generally cannot be permitted to remain imperfect or nuanced or subject to individual application, but must be resolved by a central panel of government-issue experts into a binary system – yes (do it) or no (don’t do it). In the case of JUPITER, the guidelines which some expert panel is going to have to produce will have to say whether or not to recommend Crestor to patients like the ones enrolled in the study, at a potential cost of several billion dollars a year.
It should be obvious that the answer more congenial to the ends of the Central Authority, and by a large margin, would be: No, don’t adopt the JUPITER results into clinical practice.
However, the expert panels which are called for by Obamacare have not been formulated yet, and we are still operating under the “old” rules. So, still subject to all the duress which is created by unfortunately-resolved clinical trials like this one, the FDA, somewhat reluctantly, approved the use of Crestor for JUPITER-like patients in late 2009. That approval, of course, is subject to review by the new expert panels, once they are actually in operation.
This, I submit for your consideration, is likely what instigated the almost violently anti-JUPITER issue of the Archives. It might even be suggested that the production of this extraordinary Archives indicates that we may be dealing here with a bunch of wannabe federally-sanctioned experts, auditioning for positions on the expert panels. What better way to get the Central Authority’s attention than to let them know that you are of the appropriate frame of mind to assiduously seek out scientific-sounding arguments to discount the straightforward and compelling, but fiscally unfortunate, results of a well-known clinical trial?
Of the four papers appearing in the Archives, three are more-or-less legitimate academic articles that make reasonable points, but do no harm to the main result of JUPITER. The fourth is a straightforward polemic, which has no place in a peer-reviewed medical journal, and whose very presence, I believe, strongly suggests that the editors of the Archives themselves may be auditioning for spots on an expert panel.
We can make short work of the three reasonably legitimate articles. One pointed out that JUPITER did not tease out the real importance of CRP levels, or whether lowering those levels is useful. This is true, but that fact does not touch the main conclusion of JUPITER. The second article was a meta-analysis which incorporated several other primary prevention trials using statins, and concluded that there is no overall benefit to statins in primary prevention patients. Aside from the usual problems inherent in meta-analyses, a) the JUPITER study looked at a specific sub-population of primary prevention patients unlike those addressed by these other studies, so whether these studies can be legitimately pooled is an open question, and b) since JUPITER is the first study to show a benefit in using statins for primary prevention, it is a foregone conclusion that if you assemble enough of the previous, negative studies and lump them together with JUPITER in a meta-analysis, you will be able to dilute the results of JUPITER sufficiently to achieve an overall negative result. Actually doing such a meta-analysis, then, is merely an exercise in math, not in revelation.
The third article criticized the JUPITER DSMB for stopping the trial earlier than originally planned. The DSMB, however, had no real choice in the matter – ethically or legally – given the striking statistical significance of the benefit seen with Crestor. When a patient signs an informed consent agreement to participate in a clinical trial, part of that “contract,” a part required by law, is a statement to the effect that if information comes to light during the course of the study that might impact a patient’s willingness to continue participating, that information must be made available. The fact that the Crestor branch of the study was found to have markedly and significantly improved survival, fewer strokes and heart attacks, &c., than the placebo branch, clearly constitutes such information. Indeed, it is the job of the DSMB to monitor the study for this kind of information, and to stop the study whenever it becomes certain that continuing it would expose study participants to unreasonable risks. This is why independent DSMBs exist in the first place – to protect the rights and welfare of the research subjects under the fiduciary agreement that comprises informed consent. Stopping the study when they did was not “premature;” continuing the study would have been illegitimate.
This same argument – that RCTs should never be stopped prior to the original stopping point – has been raised in the intervening years by several other experts. It is a viewpoint one perhaps ought to expect from purveyors of herd medicine. The DSMB, after all, is an artifact from a time when the patients agreeing to be enrolled in an RCT were considered to be individuals, who of their own free will volunteered to participate in a clinical trial where some aspect of their therapy would be determined by chance, and whose interests, accordingly, ought to be protected. The notion that a trial ought to be driven to its pre-set conclusion, even after it is shown that doing so will cause predictable and measurable harm to individuals in one arm or another of the trial, derives naturally from a herd medicine paradigm. Such a notion ought to give anyone pause before agreeing to participate in an RCT today.
The fourth article is more striking (and more fun) than the other three. Interestingly, it was categorized by the Archives as an “Original Investigation,” despite the fact that it describes no investigation of any kind whatsoever – original or derivative. It merely revisits the data from JUPITER (in a spectacularly biased manner), and offers a spate of ad hominem attacks, alleging bias to the point of corruption, without any supporting evidence, against JUPITER’s sponsor, its investigators, and most astoundingly, the chair of the DSMB (who is a well known and highly respected figure, especially known and revered for his complete objectivity and lack of bias). If such an article has any place at all in a peer-reviewed medical journal – which I doubt – it ought to be clearly labeled as an opinion piece, and not as a piece of original research. Whatever it may be, it’s not that.
But the most delicious aspect of this fourth article is that two of its authors, including its lead author, are members of a fringe medical group known as The International Network of Cholesterol Skeptics (THINCS), whose stated mission is to “oppose” the notion that high cholesterol and animal fat play a role in cardiovascular disease. Members of THINCS also take an extraordinarily strong position opposing statins for any clinical use whatsoever. (One might actually assume that, since JUPITER shows that cardiovascular outcomes can be improved by statins in people with normal cholesterol levels, the THINCS would embrace the study as evidence that perhaps cholesterol is not as important as it’s cracked up to be. But apparently, this argument is completely negated by the fact that statins were the vehicle for making it. Many in the anti-statin crowd would object to statins even if they were proven to cure heart disease, cancer, baldness, and obesity AND produced fine and durable erections upon demand.)
The best part of all this is that the astounding anti-cholesterol, anti-statin bias of the authors was not disclosed in their article – whose main thrust, again, was to criticize the disclosed biases of the JUPITER investigators.
The venerable Pharmalot blog noted this irony, and contacted Dr. Rita Redberg (editor of the Archives) and Michel de Lorgeril (THINCS-master and prime author of the fourth article) to ask them why the association with THINCS was not disclosed.
Redberg: “I’m not clear this is an undisclosed conflict. The policy mentions a personal relationship that could influence one’s work. I think that could be a big stretch. My initial impression is the group has an intellectual message, but doesn’t fit as a personal relationship that could effect the authors’ work.”
de Lorgeril: “[While it is] very important to disclose financial conflicts of interest that can influence our way of working and thinking about cholesterol and statins, there is so far no obligation to provide a CV each time we publish any thing…May I underline the fact that being a member of THINCS – not a group of terrorists, mainly a club of very kind retired scientists with whom I have interesting and open discussion – is not a conflict of interest?”
I may be old fashioned, but I think that being a member of an “out there” group like THINCS, which appears to advance selected and distorted data on its own website aimed at furthering its stated mission of “opposing” (not investigating or questioning) the cholesterol hypothesis and the use of statins, might make one prone to a bit of bias when writing a broadside critiquing a study like JUPITER, and loudly criticizing anyone associated with that study for their bias.
The irony here is amazing. The lack of embarrassment is astounding.
This sort of bias (demonstrably rooted in a willingness to select/ignore/distort data in order to make a preconceived point) is likely to be as strong as any that might accompany, for instance, receiving a stipend from a statin company for participating in clinical research. Membership in THINCS may not preclude one from writing such an article, but I think the association at least ought to be disclosed, just as financial relationships must be disclosed.
I have a hard time explaining how this can happen with a prestigious medical journal like the Archives. But like Sherlock Holmes says, when you have eliminated the impossible (such as, the idea that this article deserved to be published in its current form), whatever remains, however improbable, must be the truth.
And this is why I am forced to suggest that several of the authors appearing in that issue of the Archives of Internal Medicine, along with its editors, may be in the mode of ingratiating themselves to the sundry officials and czars within the government who will be assembling the expert medical panels which will be making the momentous decisions that will determine the flow of hundreds of billions of dollars, and (forgive me) of life and death.
Admittedly the issue of the Archives I have been discussing does not accurately reflect the general tenor of criticism the JUPITER trial has engendered in the academic community, which has been far calmer and less polemical. The fact is that the implications of this trial, when straightforwardly interpreted, are very disturbing to payers, both private insurers and the government. So in the years since this study was published there has been a general effort, on several fronts, to diminish its results – efforts that, taken together, should give future expert panels plenty of legitimate-sounding resources with which to deny its application to the herd.
This larger group of critics of the JUPITER trial all come from the legitimate medical establishment, and are proponents of using RCTs to make medical decisions. They claim to be willing to follow the data from RCTs to wherever they may lead.
For these critics, it seems pretty clear that the chief concern regarding JUPITER is its cost implications. That is, these people feel strongly that it would simply be too expensive to follow the results of the JUPITER trial to their logical conclusion. This, indeed, would be a very reasonable position to take – as long as their argument went something like this: “Yes, the JUPITER trial shows that many lives would be saved if people like those enrolled in the study would take Crestor, but it’s just too expensive to buy Crestor for all these people.”
But this sounds like rationing, and Americans don’t ration. So instead critics, even those pure thinkers in the academy, have tried to attack the results of JUPITER, arguing that the results of the study actually do not support the use of statins in these patients.
Unfortunately, turning aside the results of a statistically definitive RCT can be a challenge. In fact, the need to discount the results of JUPITER leaves critics little choice but to engage in statistical legerdemain. There are several useful techniques they can employ to this end.
Many of the arguments that have been ginned up in this effort have derived not from data published in the JUPITER trial itself, but instead from statements made in an editorial written by Dr. Mark A. Hlatky, and published in the same issue of the New England Journal of Medicine in which the JUPITER study itself appeared.
Most of Dr. Hlatky’s editorial is measured and reasonable. But along the way – either inadvertently or slyly – he threw in a key summary sentence that has been greedily grasped by those who would discount the JUPITER results, to wit: “The proportion of participants with hard cardiac events in JUPITER was reduced from 1.8% (157 of 8901 subjects) in the placebo group to 0.9% (83 of the 8901 subjects) in the rosuvastatin [Crestor] group; thus, 120 participants were treated for 1.9 years to prevent one event.”
This statement, at least taken at its face value as a stand-alone analysis, is statistically naive, and fundamentally wrong.
In a long-term clinical study in which the endpoints are events that can occur at any time (such as heart attack, stroke or death), the probability that an enrolled patient will reach an endpoint during the trial increases the longer he/she has been enrolled. But in virtually all clinical trials, the length of time different people are enrolled varies greatly. This is because it often takes years to enroll people in clinical trials, so that when the trial ends, some will have been in the trial for many years, others for only a little while. This means that the risk exposure of each research subject is different, and is proportional to the total time they were enrolled. Not uncommonly, the enrollment process is not smooth – there are periods of more rapid enrollment, and periods of slower enrollment – so if all you do is average the enrollment time (as was done by Hlatky – 1.9 years) you are likely to get skewed results. So it is simply not statistically legitimate to do so.
There is a legitimate, well-known and universally accepted method for analyzing these kinds of longitudinal outcome statistics, and it’s called the Kaplan-Meier method. And indeed, the authors of the JUPITER trial presented in their paper a complete Kaplan-Meier analysis of their data, and the results look quite a bit different from Hlatky’s summary statement. The Kaplan-Meier analysis reveals that the risk of heart attack, stroke, and death all increase steadily through at least four years, so that at four years after enrollment the risk of reaching one of the “cardiovascular event” endpoints was about 8% (not 1.8%). Further, the Kaplan-Meier analysis shows that the protection imparted by Crestor persists through at least four years, and that indeed the magnitude of protection (i.e., the difference in outcomes between the treated group and the placebo group) increases throughout that entire duration. So, four years after enrollment in the study, the placebo group had roughly an 8% event rate, compared to roughly a 3% event rate for the Crestor group – an absolute difference of about 5% (not 0.9%). This is a far greater benefit than is suggested by Hlatky’s shorthand summary.
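For readers who want to see how this works, here is a bare-bones sketch of the Kaplan-Meier calculation, in Python. The numbers are entirely made up for illustration – they are not the JUPITER data – but they show why cumulative risk keeps climbing with follow-up, and why a crude “events divided by enrollees” rate understates it when follow-up times vary:

```python
# Minimal Kaplan-Meier estimator (simplified: handles one subject at a
# time, ignoring the tie-handling conventions of the full method).
# All data below are hypothetical, for illustration only.

def kaplan_meier(times, events):
    """times: follow-up in years; events: 1 = endpoint reached, 0 = censored.
    Returns (time, survival probability) pairs at each event time."""
    survival = 1.0
    curve = []
    at_risk = len(times)
    # walk through subjects in order of follow-up time
    for t, e in sorted(zip(times, events)):
        if e == 1:
            survival *= (at_risk - 1) / at_risk
            curve.append((t, survival))
        at_risk -= 1  # subject leaves the risk set (event or censoring)
    return curve

# Hypothetical cohort: staggered enrollment means widely varying follow-up.
times  = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.0, 4.0]
events = [0,   1,   0,   0,   1,   0,   1,   0,   0,   0]

for t, s in kaplan_meier(times, events):
    print(f"year {t}: cumulative event risk = {1 - s:.1%}")

# The naive rate ignores how long each subject was actually at risk:
print(f"naive event rate: {sum(events) / len(events):.0%}")
```

In this toy cohort the naive “events over enrollees” figure is 30%, while the Kaplan-Meier cumulative risk by 3.5 years is about 44% – the same direction of distortion, in miniature, as reading JUPITER’s risk off a 1.9-year average follow-up.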
Suffice to say, then, that Hlatky’s summary statement apparently ignores the appropriately analyzed data which is clearly presented in the JUPITER paper itself, and which documents that the clinical benefit of Crestor was substantially more impressive than his widely-quoted summary statement suggests.
But as misleading as this summary statement may be, let us accept it at face value for a moment just for the sake of discussion, since that’s the data the JUPITER critics have chosen to latch on to.
Taking these numbers, the critics make the following argument: While the relative reduction in “hard cardiac events” is 50% (1.8 to 0.9), the absolute reduction is only 0.9%, which, anyone would agree, is a pretty small number. So, they conclude, the benefit imparted by Crestor is actually quite small.
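The arithmetic behind this argument is simple enough to lay out explicitly. Using Hlatky’s own raw numbers (157 of 8901 placebo patients versus 83 of 8901 Crestor patients reaching an endpoint), a few lines of Python show how the very same result yields a big-looking relative number and a small-looking absolute number:

```python
# Hlatky's summary numbers for the placebo and Crestor arms of JUPITER.
placebo_rate = 157 / 8901   # ~1.8% reached a hard cardiac endpoint
crestor_rate = 83 / 8901    # ~0.9%

arr = placebo_rate - crestor_rate   # absolute risk reduction (~0.8%)
rrr = arr / placebo_rate            # relative risk reduction (~47%,
                                    # the "50%" figure after rounding)

print(f"absolute risk reduction: {arr:.2%}")
print(f"relative risk reduction: {rrr:.1%}")
```

Which of these two numbers you choose to emphasize determines whether the benefit looks negligible or dramatic – and that choice, not the data, is what drives the critics’ conclusion.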
That’s a very interesting argument. Let’s look at it in a couple of ways.
So we’ve got a population of patients whose risk of heart attack, stroke, bypass surgery/stenting, or death is about 2% after about two years, and by giving them a pill we can reduce that risk to about 1%, and we’re arguing that the absolute drop of 1% is not very much to crow about. Well, OK. But what if we found a pill that reduced their risk to zero at two years? That is, it completely wiped out the risk of cardiovascular catastrophes altogether. Would that be a good thing? Or would we say, “It’s just a 2% drop, really not much greater than the 1% drop we had with Crestor, so it’s no big deal?” I think not. I suppose we would think that totally eliminating all cardiovascular risk would be a very big deal indeed.
When you’re starting at a 2% risk, then any drop in risk is going to be an “absolutely” small number. And if we’re not going to pursue improvements in outcome of such a small magnitude, then why the heck are we worrying about preventative medicine in the first place? Once you get past the big things (drain the swamps, don’t drink the water downhill from the outhouse, &c.) then all preventative medicine tends to consist of small, incremental improvements in outcome. Popular pronouncements to the contrary notwithstanding, preventative medicine is largely the art of spending a lot of money for this magnitude of incremental improvement. If we Americans decide we shouldn’t do this anymore, then I would find it unfortunate but understandable. But it hardly seems reasonable to arbitrarily focus on this one, particular improvement in preventative cardiology, and (within a healthcare system that insists it is not rationing care) pronounce that this is the one we’re not paying for.
Another way of looking at this “the benefit is too small” argument is by considering that 7.4 million Americans fit the entrance criteria for JUPITER. By giving all these people a statin, we would be preventing about 66,600 major cardiovascular events over a two year period. If you’re going to say that 1% is a small number, I will counter by arguing that 66,600 is a big number. So do statins offer a substantial benefit or not? It depends on whether you choose to focus arbitrarily on the 1% or the 66,600.
(I understand that you may not be focusing at this moment on the 66,600 cardiovascular catastrophes that could be prevented, but on the 7.4 million people who will be taking a drug that costs $120 per month. But we’re not talking about cost yet, we’re only talking about whether the drug does some good. If we decide it does, then we’ll need to link that “good” to a procedure that measures whether the “good” is worth the money we would need to spend to achieve it. The critics of JUPITER try to avoid talking about cost – since that would admit they’re rationing – by insisting that there’s just not enough “good” to bother with. I am simply pointing out that such an argument – that preventing 66,600 very bad outcomes is not enough to bother with – is on its face absurd.)
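For what it’s worth, the 66,600 figure is nothing more than Hlatky’s 0.9% absolute reduction applied to the 7.4 million eligible Americans – the arithmetic is one line:

```python
# Scaling the absolute risk reduction to the eligible US population.
# The 7.4 million figure is the estimate quoted in the text.
eligible = 7_400_000
arr = 0.009   # absolute reduction in events over ~2 years (Hlatky's summary)

prevented = eligible * arr
print(f"{prevented:,.0f} major cardiovascular events prevented over two years")
```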
Another argument invoked by critics is based on the “number needed to treat” (NNT) analysis. Again they rely on Hlatky’s unfortunate summary of the data: “120 participants were treated for 1.9 years to prevent one event.” This number – which the critics insist is just too high – is misleading for the reasons already discussed. The real NNT, based on more legitimate statistical analysis, is plainly laid out in the JUPITER paper itself. It turns out that the longer patients in this trial were treated with Crestor, the lower the NNT became: at two years, the NNT was 95; at four years, it was 31; and at five years, it was projected to be only 25. Whether you think it is reasonable to treat 25 people with a pill for five years to prevent one of them from having a heart attack, stroke, or death is, I suppose, a matter of opinion. But based on NNT analyses for many widely-accepted therapies in medicine today, it looks pretty good.
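Since NNT is simply the reciprocal of the absolute risk reduction, the declining NNT figures reported in the JUPITER paper translate directly into a growing absolute benefit over time. A quick sketch, using the NNT values quoted above:

```python
# NNT = 1 / absolute risk reduction. The NNT figures below are the ones
# reported (or, at 5 years, projected) in the JUPITER paper.
nnt_by_year = {2: 95, 4: 31, 5: 25}

for years, nnt in sorted(nnt_by_year.items()):
    arr = 1 / nnt
    print(f"{years} years: NNT = {nnt:3d} -> absolute reduction = {arr:.1%}")
```

The implied absolute benefit climbs from about 1% at two years to about 4% at five years – which is why quoting only the 1.9-year snapshot makes the therapy look so much less useful than the trial’s own data suggest.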
All these arguments, of course, are merely distractions. The fact is that JUPITER showed a pretty striking reduction in some very nasty cardiovascular events over a pretty brief period of time, and the only real reason there’s any controversy at all is because of the cost of Crestor.
That cost is what makes us want to withhold Crestor, even though it is imparting at least some (and, I am arguing, quite a bit of) clinical benefit. In other words, the high cost makes us want to ration Crestor. The fact that we can only ration covertly, instead of openly, is what makes us want to bastardize the science and do a Kabuki dance with the statistics.
If we worked under a Method Three healthcare system, where the strict limits on public spending were determined openly, then we could do an objective, full-bore cost-benefit analysis on the use of Crestor in JUPITER-like patients, using legitimate and not ginned-up statistical analysis, and taking into account not only the cost of the drug, but also the cost that would be incurred by failing to stop preventable heart attacks, strokes, &c., and then determining where the overall cost-benefit result fell within our coverage criteria. If it met the criteria we would cover it, if not, not. This decision would not be arbitrary. It would be a fully transparent process, so that if the sponsor did not like the results, they would try diligently to find a way to reduce the cost of Crestor (I think they would succeed) to a value that would be compatible with their staying in business. (And for the first time, the price of medical products would be determined by a Laffer-like curve, where a price that was too high – like taxes that are too high – would reduce revenue rather than increase it. Companies, being fairly rational, would ratchet their prices down to the optimal price point.)
But since we insist on doing our rationing covertly, I am sorry to say that we’re destined to keep making spurious arguments, and using dumbed-down statistical analysis to back them up. The JUPITER trial, while it is imperfect and while it does not answer every question, really is pretty straightforward. That we get so wrapped around the axle trying to fold such clinical trials into our covert rationing paradigm is simply another demonstration of the fact that covert rationing corrupts everything it touches.
The fact that so many respected academics are making such spurious statistical arguments is disconcerting and discouraging. Among other things, it means that the Central Authority will have many, many fully-domesticated experts to choose from when they assemble their all-powerful expert panels.
Herd medicine will follow naturally from any centrally-controlled Progressive healthcare system. Unless you are lucky enough to be included in the expert class, or are a part of the government leadership that controls the expert class, this is not a good thing.
Medical services that give substantial benefit to a minority of people will not be offered to any people, since the “herd effect” will likely be below an arbitrary cut-off value. Medical services that do make the cut will be prescribed for everybody, even though (since herd medicine is tuned to the average response across the population) something like half the population will respond less favorably than average. Herd medicine will stifle medical progress. And herd medicine will entice medical experts, who need to curry the favor of Progressive leaders in order to be recognized as legitimate experts, to abuse the science and the statistics of clinical trials.
It is important to note that while those of us who reside within the herd will find these features of herd medicine problematic, for our Progressive leaders herd medicine – which offers the centralized control they find absolutely necessary – is an unalloyed boon.