Without a Randomized Study Your Results May Vary

Photo credit: Wikipedia

A family of osteoporosis medications called bisphosphonates (which include Fosamax, Actonel, and others) is known to cause irritation of the esophagus. These medications have been prescribed millions of times. There are a few reports of patients developing cancer of the esophagus while taking these medicines. Obviously, many more people who develop esophageal cancer have never taken bisphosphonates. So how can we tell if bisphosphonates increase the risk of esophageal cancer?

A fascinating article in yesterday’s Wall Street Journal attempts to answer that question and highlights how the wrong kind of study can mislead us.

Let’s first make sure we understand the two kinds of studies scientists can do to try to answer the question. An observational study would involve following lots of people and comparing those who happen to be taking bisphosphonates to those who are not. No intervention is made to any patient. The fraction of people who happen to be taking bisphosphonates and develop esophageal cancer is calculated and compared to the fraction of people who develop esophageal cancer without taking bisphosphonates.

The other kind of study is a randomized study. That would involve enrolling lots of subjects and (with their permission) flipping a coin for each person. Half the subjects would receive a bisphosphonate and half would receive a placebo. The subjects would be kept ignorant of which pill they are taking. Then the subjects would be followed for many years and the number of esophageal cancer cases in each group would be counted.

Randomized studies are complex and expensive. They’re also fairly reliable and as close to definitive as anything in medicine. Observational studies are prone to countless biases that lead to spurious results. For example, smoking is a significant risk factor for esophageal cancer. If the people taking bisphosphonates in an observational study happened, by coincidence, to include a higher fraction of smokers than those not taking bisphosphonates, the study would misleadingly suggest that bisphosphonates increase the likelihood of cancer. Some of these biases can be accounted for with statistical techniques, but there is no way to identify or account for all of them. Randomization is the only way to try to guarantee that there is no systematic difference between the two groups except for the medicine that they are taking.
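The smoking example is easy to demonstrate with a toy simulation. Everything here is made up for illustration (the smoking rate, the cancer risks, and the tendency of smokers to end up in the drug group are all hypothetical numbers, not data from any real study): the drug does nothing, yet in the observational version it appears to raise cancer risk, and randomization makes the spurious difference vanish.

```python
import random

random.seed(0)

def simulate(randomized, n=100_000):
    """Return the cancer rate in the drug and no-drug groups.

    The drug itself has no effect on cancer. Smoking does. In the
    observational arm, smokers are (hypothetically) more likely to
    end up taking the drug, which confounds the comparison.
    """
    cases = {"drug": 0, "no_drug": 0}
    counts = {"drug": 0, "no_drug": 0}
    for _ in range(n):
        smoker = random.random() < 0.25  # made-up smoking rate
        if randomized:
            group = "drug" if random.random() < 0.5 else "no_drug"
        else:
            # confounding: smokers happen to take the drug more often
            p_drug = 0.6 if smoker else 0.4
            group = "drug" if random.random() < p_drug else "no_drug"
        risk = 0.02 if smoker else 0.005  # smoking drives cancer, not the drug
        counts[group] += 1
        cases[group] += random.random() < risk
    return {g: cases[g] / counts[g] for g in cases}

print("observational:", simulate(randomized=False))  # drug group looks worse
print("randomized:   ", simulate(randomized=True))   # groups look alike
```

The observational run shows a higher cancer rate among drug takers purely because that group contains more smokers; the randomized run balances the smokers between the groups and the difference disappears.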

The fascinating development in the Wall Street Journal article is that two British teams attempted to answer the question about bisphosphonates by doing two different observational studies. Neither team knew about the work of the other. The studies reached conflicting results. One found a link between bisphosphonates and esophageal cancer, and the other did not. Even more disheartening is that they used the same large database of British patients to arrive at their conclusions.

The article even cites the supreme example of the folly of observational studies – estrogen. For decades, based on observational studies, doctors prescribed estrogen to post-menopausal women, convinced that it prevented strokes and heart attacks. A randomized trial definitively showed that there was no benefit, and potentially some increase in risk.

Despite the notorious unreliability of observational studies, more are being done all the time. The article is frank about the reason for this – they’re much less expensive than rigorous randomized trials. But it would be even cheaper to read tea leaves, check your Magic 8-Ball, or ask my chatty neighbor. We should either invest the resources to actually learn the answer to our question, or admit our ignorance and spare ourselves and our patients the confusion that observational trials inevitably cause.

Learn more:

Analytical Trend Troubles Scientists (Wall Street Journal)

More

Learning from Massachusetts

In 2006 Massachusetts passed sweeping health care reform which provided insurance coverage for nearly all of its residents. In 2010 the Affordable Care Act (ACA), which will enact very similar reforms nationally, was passed at the federal level. While the US Senate was debating the ACA, the New England Journal of Medicine (NEJM) published one opinion article after another extolling the ACA’s virtues and making positive comparisons to the benefits that Massachusetts had experienced under its health care reform.

Since then, the data coming from the Massachusetts experiment has not been encouraging, and I was gratified to see this week a NEJM opinion piece which gives a very frank appraisal of the state of health care in Massachusetts.

The one incontrovertible measure by which the Massachusetts plan has met its goals is that nearly everyone, 98% of the state’s population, has insurance. That has come at a cost which even the article’s authors admit is unsustainable. Massachusetts now has among the highest per-capita health care spending in the country, and health care takes up a larger fraction of the state’s budget every year, crowding out other priorities. The growth of health care spending in Massachusetts also consistently outpaces economic growth, another indicator that the current system is unsustainable.

One of the justifications of the Massachusetts plan (and of the ACA nationally) was that it would make insurance more affordable for the middle class, but in Massachusetts insurance premiums have become more expensive, and have done so faster than in the rest of the nation.

Other sources, including the Massachusetts Medical Society, inform us that wait times for a primary care physician have skyrocketed and that the number of doctors accepting new patients and state insurance plans has dropped. That makes sense and was predicted by critics of the plan. If the number of patients who can seek care at little cost to themselves is suddenly increased without a corresponding increase in the number of doctors, longer wait times are bound to result.

So Massachusetts has shown us how to build a system in which everyone has insurance but only a few can get to a doctor. One would think that the authors of the NEJM article would conclude that it is a well-intentioned but unsustainable failure and a sobering warning about what we are about to impose on the nation. Instead, they are so wedded to the mirage of universal insurance coverage that they spend the second half of the article discussing desperate ways to save the plan through various cost-cutting measures. These schemes quickly degenerate into an alphabet soup of bureaucratic names like ACOs and the AQC. If any of these manage to cut costs without worsening care, I’ll eat my stethoscope.

I’ve explained before how the health insurance market broke and why buying routine care through insurance is the problem, not the answer. Universal insurance coverage simply universalizes a terrible way to acquire care. We should give that some thought before the ACA rolls out nationwide.

Learn more:

Controlling Health Care Spending — The Massachusetts Experiment (New England Journal of Medicine perspective article)
Kaiser Family State Health Facts – Massachusetts
Massachusetts Medical Society Releases 2011 Study of Patient Access to Health Care (Massachusetts Medical Society)
Six ways Romneycare changed Massachusetts (Washington Post)
Three Lessons from Massachusetts (National Center for Policy Analysis)

More

Measles Cases in 2011 Highest in Fifteen Years

rash due to measles (CDC / PHIL / Wikipedia)

Three years ago I wrote a post alarmed that measles was on the rise in the US. Little did I know then that this was only going to get worse.

This week the CDC released data in its Morbidity and Mortality Weekly Report and in a telebriefing for the media reviewing the measles statistics for 2011. The numbers are worrisome. (The picture on the right shows the typical rash caused by measles.)

There were 222 cases of measles in the US in 2011, the highest number since 1996, and much higher than the average annual case count in the last decade, 60. This may not sound like a big deal, since most cases of measles are mild, but a third of the patients with measles are hospitalized. Fortunately, there were no deaths in the last year.

Because the US population vaccination rate for measles is very high, most of these cases (200 of the 222) were linked to importations of measles from abroad, either due to a US traveler being infected while outside the country, or a foreigner traveling to the US while contagious. Half the cases from abroad were from Europe, primarily France, Italy and Spain. (This proves that despite its fiscal challenges the European Union can still export something.) Unlike the US, Europe has never eliminated year-round person-to-person transmission of measles, so it continues to act as a reservoir of disease. In fact, last year, over 37,000 cases of measles were reported in Europe.

So the CDC is stressing two points. The first is that the MMR (measles, mumps, rubella) vaccine is effective and safe, and all children should have two doses of it. Some of the measles cases last year were among patients who could have received the vaccine but claimed exemptions due to philosophical or personal beliefs. Unvaccinated people don’t only run the risk of being infected with measles themselves; they also risk infecting those around them, particularly infants too young to have been vaccinated.

The second message promoted by the CDC is that travelers abroad should make sure they’re immune to measles. Those born before 1957 are presumed to be immune because that was before the vaccine was widely used and everyone was exposed. Everyone born since 1957, however, should be sure they’ve had two doses of MMR. For those who are not sure, the CDC simply recommends revaccinating. An additional MMR is safe even if unnecessary and is more reliable than checking a blood test to determine immunity.

So when you go to the London Olympics this summer, keep in mind that the adorable Parisian child in the next seat might be a biohazard. Defend yourself.

Learn more:

In 2011, U.S. logged the most measles cases it’s had in 15 years (LA Times Booster Shots)
CDC Telebriefing on Measles – United States, 2011
Measles — United States, 2011 (Morbidity and Mortality Weekly Report)
WHO issues Europe measles warning (BBC News, December 2011)

My previous posts on measles and vaccinations

More

Healthcare That You Should Avoid

Why wouldn’t you want an EKG every year as part of your checkup? Why would you not want to be screened for prostate cancer at the age of 80 (or maybe at any age)? Why should you decline the annual chest X-ray that your doctor keeps ordering? Is it because you’re eager to save money for your insurance company? Is it because you think going without the test will help others who are more needy get the test in some complex rationing scheme? No. You should forgo these tests because they are much more likely to harm you than help you.

Unfortunately, some of the care physicians deliver is entirely without benefit. I’m not saying merely that some care hasn’t been proven to be effective. That can be excused, since in many fields the scientific evidence is scant and the individual doctor’s judgment is our only guide. I’m saying that much of the care that is delivered has been rigorously proven to be ineffective or harmful.

Why are doctors ordering so many useless tests and treatments? Some blame “defensive medicine” the practice of ordering tests or treatments not for the patient’s benefit but to protect the physician from liability. Some blame unsophisticated or demanding patients. Neither of these explanations is fully persuasive.

Whatever the cause of this pervasive delivery of care that is worthless or worse, a group of American physician specialty societies has partnered with the American Board of Internal Medicine Foundation to do something about it. Their initiative, Choosing Wisely, lists 45 tests and treatments in nine different specialties that physicians should stop ordering and informed patients should decline. These tests and therapies have been definitively found to have no value and yet remain widely used.

Some of the 45 recommendations of Choosing Wisely are:

  • Don’t order sinus CT or prescribe antibiotics for uncomplicated acute rhinosinusitis.
  • Don’t screen for osteoporosis in women younger than 65 or men younger than 70 with no risk factors.
  • Don’t order annual EKGs or any other cardiac screening for low-risk patients without symptoms.
  • Don’t repeat a colonoscopy for colon cancer screening sooner than 10 years after a normal screening colonoscopy.
  • Avoid admission or preoperative chest x-rays for ambulatory patients with unremarkable history and physical exam.

I strongly encourage you to explore the website and read the recommendations yourself.

Of course, physicians who have been trained recently or who keep abreast of the medical literature already know most of these recommendations, and patients going to doctors who practice evidence-based medicine have already been taught many of them.

But if these treatments and tests are known not to help patients, why are they still performed so frequently? The “defensive medicine” excuse rings false. After all, the best legal defense is ordering what’s best for the patient. Some use of ineffective tests and treatments could be attributed to ignorant and demanding patients, but where would patients have learned to ask for an annual EKG or an annual chest X-ray in the first place if their prior doctor had not been ordering these tests?

I think the only convincing explanation for the misuse of most of these tests and treatments is economic. Doctors make much more money by ordering these tests than by educating patients that they’re bad for them. Moreover, the patients don’t suffer the economic consequences of this misuse, since the cost is frequently borne by insurance. Our broken healthcare system insulates patients from the costs of their healthcare decisions and thereby encourages the use of expensive therapies that have little value.

In other marketplaces, in electronics, or transportation, or clothing, or food, expensive goods that have little value are usually called rip-offs. A few unsuspecting customers might fall for them, but word soon spreads and consumers learn to watch their wallets. But in healthcare the patient isn’t paying, so he doesn’t bear the price of the rip-off but redistributes it to the other enrollees of his insurance company (or to taxpayers if he has Medicare or Medi-Cal). The insurance company can then try to limit the utilization of these tests, but the insurance company isn’t in the examination room. The highly “motivated” doctor can simply add a word or two to the patient’s symptoms to have the test approved. The EKG can be billed for chest pain even if the patient doesn’t have any. The chest X-ray is indicated for a cough that the patient doesn’t have.

The doctor gets paid. The patient is fooled into thinking that he got a useful test for free. Someone else gets the bill. Costs keep skyrocketing. Any efforts by the insurers to limit payment are answered with emotional shouting about “rationing”. Rationing is when you don’t use something so someone else can have it. We’re talking about things that simply have no benefit and shouldn’t be given to anyone.

Choosing Wisely is a welcome effort. I hope it succeeds, but I predict it will not. As long as the perverse economic incentives persist, so will the useless but expensive therapies and tests. I’ve written before about how our healthcare marketplace broke and what I think will be needed to fix it. Until then, we are wise to remember that we get what we pay for. And we’re all paying for expensive and ineffective healthcare.

Learn more:

The Choosing Wisely website

Doctor Panels Recommend Fewer Tests for Patients (New York Times)
Doctors seek end to 5 cancer tests, treatments (Chicago Tribune)
Doctors unveil “Choosing Wisely” campaign to cut unnecessary medical tests (CBS News)

WolframAlpha U.S. healthcare expenditures time series (click on “linear scale” by the graph to get a clear picture)

More

Weight-Loss Surgery More Effective for Diabetes than Medication

About 20 million Americans currently have type 2 diabetes, three times more than in 1980. Diabetes is a major risk factor for stroke and heart attack, is the leading cause of new cases of blindness, and is the leading cause of kidney failure requiring dialysis. Diabetes is also usually progressive, meaning that on the same medications and on the same diet and exercise regimen, the blood sugar of a patient with diabetes will slowly increase, necessitating constantly increasing amounts of medications.

So despite new families of medications for diabetes, and despite the fact that most patients require more than one medication, many patients never achieve good control of their blood sugar.

Two studies published this week in the New England Journal of Medicine offer tantalizing hope for overweight patients with diabetes. Both studies attempted to discover whether overweight patients with diabetes would achieve better control of their diabetes through weight loss surgery or through standard medical care.

One study, conducted in Italy, randomized 60 patients to three groups. One group was treated with medication. Another underwent gastric bypass surgery. The third group underwent biliopancreatic diversion surgery. (See the helpful graphic in the NY Times article for an explanation of the different surgeries. I thought biliopancreatic diversion was the name of a gastroenterology theme park.) The endpoint of this study was very ambitious – remission of diabetes, defined as normal sugars without medication for over a year.

None of the patients receiving medical therapy achieved remission, compared with 75% of the patients who underwent gastric bypass, and 95% of the patients undergoing biliopancreatic diversion.

The second study, from the Cleveland Clinic, randomized 150 overweight diabetic patients to gastric bypass, sleeve gastrectomy, or medical therapy. The patients in the surgical groups had much better control of their diabetes than the medical therapy group, and many in the surgical group were able to stop their diabetes medications.

Those are very impressive results, but some questions remain unanswered. Does the remission of diabetes mean that the patient is cured? We don’t know. Since the studies followed patients for at most two years, it is entirely possible that years from now their diabetes will recur. Will the excellent control of diabetes translate to fewer diabetic complications, like strokes, heart attacks, and kidney disease? Do diabetics who are less overweight than those in these studies still benefit from surgery? Larger long-term studies will be needed to find out.

But for now it is clear that for overweight patients with diabetes, surgery should no longer be thought of as a last resort. Surgery is increasingly a proven therapy with much greater effectiveness than other alternatives.

Learn more:

Weight-loss surgery effective against diabetes, studies show (LA Times article)
Surgery for Diabetes May Be Better Than Standard Treatment (NY Times article)
Bariatric Surgery (NY Times, instructional diagrams explaining the anatomy of various weight loss surgeries)
Bariatric Surgery versus Conventional Medical Therapy for Type 2 Diabetes (New England Journal of Medicine article)
Bariatric Surgery versus Intensive Medical Therapy in Obese Patients with Diabetes (New England Journal of Medicine article)
Surgery or Medical Therapy for Obese Patients with Type 2 Diabetes? (New England Journal of Medicine editorial)
Evidence Mounts in favor of Weight Loss Surgery (My last post about weight loss surgery in 2011, with links to my previous posts about this topic)

More

Aspirin for Cancer Prevention not Ready for Prime Time

Aspirin tablets. Photo credit: Ragesoss / Wikipedia

Let’s imagine that we had a hunch that lighting incense at midnight contributes to weight loss, and we wanted to test that hunch. How would we do that? We would recruit lots of overweight adults and (with their permission) randomly assign them into two groups. The first group would receive a wakeup call every night at midnight and would then light some incense. The second group would still receive a wakeup call (so that the sleep deprivation itself is not a difference between the groups) and would do something else, like deep breathing exercises. The people in both groups would have their weights measured periodically and any difference in the weights between the two groups would be calculated.

Let’s also imagine that this beautifully designed experiment fails to show any benefit in weight loss from lighting incense at midnight. The two groups’ weights didn’t change, or changed by the same amount, and despite our hunch we are forced to conclude that lighting incense at midnight has no effect on weight loss.

But we have an abiding subjective sense that incense at midnight is extremely healthy, and we’re sure it has a benefit that we haven’t found yet. (An abiding subjective sense is also called a bias.) So a few years later we decide that maybe lighting incense at midnight prevents tooth decay.

We think about doing another experiment just like the one above but in which the two groups are followed to check for difference in dental cavity rates. But then we realize that we can save all the effort and expense by simply getting tooth decay data from the above experiment which was already done. We can look at the original study and get the dental records of all the participants in both groups, find all the cavities, and count whether the incense-lighters had fewer cavities than the non-incense-lighters. That should answer our question, right?

Wrong.

The reason we can’t get reliable data from the prior experiment about cavity risk or cancer risk or anything else other than weight loss is that the two groups are bound to be different in lots of ways simply due to chance. One group likely has more redheads than another or is shorter or has people who are on average richer or live closer to large bodies of water. That’s simply because everyone is different and no two large groups of people (even randomized) will be identical in all characteristics. So it’s very likely that if we went back to our does-incense-help-weight-loss study and looked for differences other than weight loss we would find some differences simply by chance.

To prevent being fooled by random differences, scientists make a big distinction between studies that look at primary endpoints and studies that look at post-hoc endpoints. A primary endpoint is the effect that a study was designed to measure. Before any study is done, the scientists have to clearly define and publicize their primary endpoint. In the example above the primary endpoint is weight loss. A study that shows a difference in a primary endpoint is reliable because the scientists showed the effect that they said they were looking for. The likelihood of doing that by chance is very low.

A post-hoc endpoint is one that is chosen after the trial has been finished to look back at the same data and see if some other characteristic is different. So in the above example, after the study was completed if we looked at the original experiment for differences between the two groups in tooth decay or cancer these would be post-hoc endpoints. These studies are notoriously unreliable because the likelihood of finding a difference between groups that has nothing to do with the experimental intervention is very high. If you look long enough, you will certainly find a difference between the two groups that was not caused by lighting incense but was just due to random differences between the individuals picked for each group.
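The unreliability of post-hoc endpoints can be demonstrated numerically. In this hypothetical sketch (all the numbers are invented for illustration), two groups are drawn from identical populations, so the true difference on every endpoint is zero; yet at the conventional “p < 0.05” threshold, roughly 5 of every 100 unrelated endpoints will look significantly different by chance alone.

```python
import math
import random

random.seed(1)

def spurious_hits(n_endpoints=100, group_size=500, p=0.3):
    """Count how many of many unrelated endpoints differ 'significantly'.

    Both groups have the same true rate p for every endpoint, so any
    'significant' difference is pure chance (a false positive).
    """
    hits = 0
    for _ in range(n_endpoints):
        # Simulate one yes/no endpoint (e.g. "had a cavity") per group.
        a = sum(random.random() < p for _ in range(group_size))
        b = sum(random.random() < p for _ in range(group_size))
        pa, pb = a / group_size, b / group_size
        pooled = (a + b) / (2 * group_size)
        se = math.sqrt(2 * pooled * (1 - pooled) / group_size)
        if se > 0 and abs(pa - pb) / se > 1.96:  # the usual "p < 0.05" cutoff
            hits += 1
    return hits

print(spurious_hits())  # roughly 5 of 100 endpoints differ "significantly"
```

This is exactly why a study must declare its primary endpoint in advance: sifting through a hundred post-hoc comparisons afterward all but guarantees a few “findings” that reflect nothing but chance.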

This is exactly the problem plaguing the studies released this week in The Lancet attempting to link aspirin to cancer prevention. They received much publicity, but will not affect medical practice. They are mostly re-analyses of studies done initially to discover whether aspirin prevents strokes or heart attacks. It does. But using the same data set to ask whether aspirin prevents cancer leaves us vulnerable to the spurious results that post-hoc endpoints allow.

So most doctors, appropriately, will still not recommend aspirin for cancer prevention. We need a large prospective randomized trial to settle the question. Aspirin is inexpensive, so such a trial is unlikely to be sponsored by pharmaceutical companies, but I would think that this would make it a perfect candidate for a government-sponsored study.

Learn more:

Studies Link Daily Doses of Aspirin to Reduced Risk of Cancer (NY Times)
Studies Find New Evidence Aspirin May Prevent Cancer (Wall Street Journal)
Should you take aspirin to prevent or treat cancer? (LA Times Booster Shots)

Short-term effects of daily aspirin on cancer incidence, mortality, and non-vascular death: analysis of the time course of risks and benefits in 51 randomised controlled trials (Lancet article, abstract available without subscription)
Effect of daily aspirin on risk of cancer metastasis: a study of incident cancers during randomised controlled trials (Lancet article, abstract available without subscription)
Effects of regular aspirin on long-term cancer incidence and metastasis: a systematic comparison of evidence from observational studies versus randomised trials (Lancet Oncology, abstract available without subscription)

Effect of Aspirin on Vascular and Nonvascular Outcomes (Archives of Internal Medicine article, January)

More

Epidemiology is Much Worse For You Than Red Meat

“Red meat is not bad for you. Now blue-green meat, that’s bad for you!”
— Tommy Smothers

Mmmmm...

I generally try to avoid writing about meaningless studies that should be ignored. First, there are a lot of them. Second, I don’t want to attract more attention to them than they already get in the media. But sometimes a meaningless study seems to perfectly confirm what we already wanted to believe. Then a feedback loop of reader gullibility and media misunderstanding leads inevitably to reaching a conclusion entirely unsupported by the science. Then I feel obligated to shine some light on the confusion.

This week’s expedition into folly was occasioned by a study published in the Archives of Internal Medicine which attempted to find a link between eating red meat and mortality. My regular readers know that the only way to test whether some substance has some effect is to do a randomized study. That means if we wanted to know whether eating more red meat caused people to die sooner than eating less red meat we would need to do the following: Recruit a few thousand people with moderate meat intake and get their permission to control their diets. Then randomize them into two groups. One group eats a vegetarian diet, and the second group eats a whole lot of red meat. Follow them all and count deaths. Voila! This would be good science and would teach us a lot about any link between eating red meat and longevity. It would be expensive and logistically difficult, but nature does not yield her secrets easily.

Is this what was done in the study published this week? Not even close. The study looked at data collected in two previous large epidemiologic studies, the Health Professionals Follow-up Study, which started in 1986, and the Nurses’ Health Study, which started in 1980. Neither of these studies was randomized. They simply followed large groups of people and assessed their health periodically. There was absolutely no intervention, just observation. The participants were given questionnaires every few years about their diet, from which their meat consumption was estimated. Then the deaths among the participants were recorded, and calculations were done to see if there was a correlation between meat ingestion and mortality.

And guess what? There is. The people who ate more red meat had a slightly higher mortality than people who ate less red meat. That means that eating red meat is correlated with increased mortality. It does not mean that eating red meat is what kills people; it doesn’t mean that changing your diet changes your risk. The authors of the study, of course, know this and never use words like “cause”, but the media coverage that followed completely missed this distinction and declared hysterically that “all red meat is bad for you”.

Observational studies have almost never steered us towards the truth. Remember that observational studies suggested that estrogen prevents strokes and heart attacks. It took a randomized study to show that it doesn’t. That’s because without randomization you never know if the people that are choosing to eat red meat are different from the people who don’t in some important way that increases their mortality but has nothing to do with the meat. For example, in this study the people who ate more meat were less likely to be physically active, more likely to be current smokers, to drink alcohol, and to be overweight than those who ate less meat. The authors of the study used statistical methods to account for these differences, but there were almost certainly other differences that could not be guessed or accounted for.

Also, an observational study can’t tell us in which direction the causal arrow points. If, for example, sick people craved more meat, then the link between the two would be due to high mortality causing more meat eating, not the other way around.

So this study teaches us absolutely nothing about a putative link between eating meat and death. It should have been completely ignored by the media, and it doesn’t deserve a moment of your attention.

But let’s take the study’s data at face value and see what all the media hullabaloo is about. The study found that an increase of one serving of unprocessed red meat per day was associated with a 13% increase in mortality, and a 20% increase for processed meat. Let’s take the higher number, 20%. That’s terrible, right? That must amount to people dropping dead in droves soon after biting into their hot dogs.

The study followed people for a total of 2,960,000 person-years, during which almost 24,000 deaths were counted, for an average of 0.0080 deaths per person-year. 20% of that is 0.0016 deaths per person-year, which works out (using the study’s unrounded counts) to one additional death for every 619 person-years.

So let’s pretend that the link between red meat and death is real (which is completely unsupported by this study) and let’s imagine two groups of people. The first group is composed of 100 vegetarians. The second group is 100 people who eat one serving of red meat daily, perhaps the delicious burger in the picture above, which happens to be from Jeff’s Gourmet Kosher Sausage Factory. (The folks at Jeff’s don’t know me and didn’t pay me for this post, but if they were to thank me with one of their beef wraps, that would be just fine.) The group of meat eaters would have one additional death after six years and two months. In that time they would have consumed 225,935 burgers. So 225,935 servings of meat correlate to one additional death.
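For readers who want to check the arithmetic, here is a quick sanity check using the rounded figures quoted above. (The post’s 619 and 225,935 come from the study’s unrounded counts, so these rounded inputs land slightly off.)

```python
# Back-of-the-envelope check using the rounded figures quoted in the post.
person_years = 2_960_000
deaths = 24_000                     # the post says "almost 24,000 deaths"
baseline = deaths / person_years    # about 0.0081 deaths per person-year
excess = 0.20 * baseline            # a 20% relative increase (processed meat)

years_per_extra_death = 1 / excess  # about 617 person-years per extra death
group_size = 100                    # 100 daily meat eaters
years_to_one_death = years_per_extra_death / group_size   # about 6.2 years
servings = group_size * years_to_one_death * 365          # about 225,000 servings

print(round(years_per_extra_death), round(years_to_one_death, 1), round(servings))
# → 617 6.2 225083
```

Either way you round, the conclusion is the same: on the order of a quarter-million servings of meat correlate to one additional death.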

That makes a burger a lot less dangerous than, say, having general anesthesia, and about as dangerous as driving 300 miles, but much yummier.

So we’ve learned nothing about whether eating more red meat affects longevity, but we’ve learned a lot about what happens when preconceived opinions seem to be confirmed. People attach a lot of weight to arguments which purport to demonstrate what they already think should be true. We feel that red meat should be bad for us. We feel guilty because cows are so cute and meat is so tasty. There must be a health risk to balance the scales and atone for our guilt.

If there is, it will take a well designed randomized study to prove it. Until then, skepticism, and a slice of brisket, is in order.

“Mmmmm… Burger.”
— Homer Simpson

Learn more:
All red meat is bad for you, new study says (LA Times)
Risks: More Red Meat, More Mortality (NY Times Vital Signs)
Red Meat Consumption and Mortality (Archives of Internal Medicine)
I calculated the risk of driving from the WolframAlpha calculations for “US auto fatalities per year” and “US auto miles driven per year”


Clostridium difficile Infections on the Increase

Photomicrograph credit: CDC/Louis S. Wiggs/Wikipedia

In 2010 I predicted that Clostridium difficile (C. dif.) would become a household name. C. dif. is a bacterium that infects the colon, causing severe, sometimes life-threatening, diarrhea. C. dif. infection is frequently a complication of antibiotic use. Antibiotics can kill the normal bacteria in the colon and establish an opportunity for C. dif. to proliferate. After a course of antibiotics, a person can remain susceptible for a few months, and subsequent exposure to C. dif., usually in a healthcare setting, can lead to infection.

This week the Centers for Disease Control and Prevention (CDC) released a report publishing the latest data on the trends in C. dif. infections. These trends are not encouraging.

The number of annual C. dif. infections, and the number of those which are fatal, are higher than ever, with 14,000 estimated annual deaths. Virtually all of them were transmitted in healthcare settings. About a quarter were acquired in hospitals, and most of the rest in nursing homes. Most of the deaths were in patients 65 years and older.

The increased number of infections and deaths is attributed to a more virulent strain that has emerged in the last few years, and to our continued misuse of antibiotics. The CDC estimates that about half of all antibiotics given are unnecessary.

The CDC has some important advice to help stem the tide of C. dif. infections. This epidemic will require attention from patients, physicians, hospital and nursing home administrators, and regional and national health agencies. The challenges to hospitals seem quite daunting. Many patients develop C. dif. infections in nursing homes and are admitted to hospitals without the information about their infection being sent with them. Moreover, hand sanitizers now in universal use in hospitals don’t kill C. dif. spores, so doctors must use gloves and gowns to prevent spreading the infection to other patients.

The CDC recommends that doctors prescribe antibiotics with greater care, diagnose C. dif. more promptly, and assure that patients with C. dif. are appropriately isolated from other patients. Patients should take antibiotics only as prescribed, inform the doctor if diarrhea develops within a few months of an antibiotic course, wash hands carefully after using the bathroom, and if possible use a separate bathroom if they have diarrhea.

Unfortunately, we will continue to hear much more about this germ in coming years.

Learn more:

The Latest on Clostridium Difficile, From the CDC (Wall Street Journal Health Blog)
CDC: Deadly and preventable C. difficile infections at all-time high (CNN Health)
Making Health Care Safer, Stopping C. difficile Infections (CDC Vital Signs)
Preventing Clostridium difficile Infections (CDC Morbidity and Mortality Weekly Report)
A New Treatment for Clostridium difficile (my post about C. dif. in 2010)


Doctor, Test Me for Everything

“Doctor, I really want to stay healthy and I just got a big promotion/had a baby/had a grandchild, so I really don’t want to end up with some horrible illness. Please test me for everything.”

Primary care doctors hear requests like this all the time. It’s an impossible request to fulfill because it assumes two premises that are usually false. It assumes that we have a test for all illnesses, and that being diagnosed early with a dreaded illness makes a difference.

Monday’s NY Times published a terrific op-ed about the myth of early diagnosis. I highly recommend it. It’s brilliant and short, and the rest of my post will make a lot more sense if you read the op-ed first. Go ahead. I’ll wait.

*****

I hope you found that illuminating, and I assume you also found it counterintuitive. That’s because for over a generation we have seen doctors on TV dramas shake their heads in sorrow and say “If only we had caught it earlier”. We have also been urged to get tested for the very few diseases in which early diagnosis makes a difference. For example high cholesterol and high blood pressure cause no symptoms, but detecting and treating them prevent strokes and heart attacks. So we assume that most other diseases work the same way – catch them early, before they cause symptoms, and you’ll have a better outcome.

But it just isn’t so. We’ve proven that screening for breast cancer and colon cancer saves lives, but for the vast majority of diseases, early diagnosis makes absolutely no difference in outcomes. So if I’m going to get lymphoma or lupus or pernicious anemia or myriad other illnesses, there’s absolutely no reason for me to do a thing about it until I feel sick. Even writing this feels sacrilegious because we are constantly inundated with messages that being proactive is praiseworthy. But in terms of health, being proactive means exercising, getting enough sleep, maintaining a normal weight, and abstaining from unhealthy habits like drinking too much or smoking. Add to that a handful of tests for the diseases in which testing helps, and you just can’t get more proactive.

It doesn’t make sense, does it?

There are actually two reasons that screening for many diseases doesn’t help. (Remember, screening means testing for an illness in someone with no symptoms or signs of the illness.)

The first reason is just that the best treatments we have for many illnesses work the same whether the illness is diagnosed before or after it starts causing symptoms. Why test everyone for a disease that only a few people have if those few people would do as well if they just waited until they got sick? If you’re going to get leukemia, catching it early won’t help. Some leukemias are cured, and some aren’t, but it doesn’t much matter when the diagnosis is made. So it makes sense to diagnose leukemia after it makes people sick.

The second reason has to do with the harms done by testing errors.

To explain this, indulge me in a little thought experiment. Let’s pretend there’s a disease called RBD (Rare Bad Disease) that is curable if caught before symptoms start, but is rapidly fatal otherwise. But it’s rare; only one in 10,000 people has it. That sounds like a perfect opportunity for screening, right? If we just test everybody then we can cure the ones with RBD. Now the treatment must be either expensive or dangerous, because otherwise it would be simpler to just treat everyone. (That’s why we just add folic acid to flour rather than test everyone for folic acid deficiency. It’s easier and safer to treat everyone in that case.) So let’s assume that the treatment of RBD if given to a person without RBD has a one percent fatal complication rate. And let’s also imagine that we have a test for RBD that is 99% accurate.

So in a city of a million people, one hundred of them have RBD and 999,900 don’t. If we test everyone in the city, because the test is inaccurate 1% of the time, one person with RBD will falsely test negative, but almost 10,000 healthy people will test positive. If we give everyone who tests positive the treatment for RBD, we’ll be treating a hundred times more healthy people than people with RBD and we’ll be killing as many people from the treatment as we’re saving. Better to forget the screening.
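The same arithmetic can be laid out in a few lines of Python. Everything here comes straight from the RBD thought experiment above (1-in-10,000 prevalence, a 99% accurate test, a treatment that kills 1% of healthy people who receive it):

```python
# Screening arithmetic for the hypothetical RBD example.
population = 1_000_000
prevalence = 1 / 10_000   # 1 in 10,000 people has RBD
accuracy = 0.99           # test is right 99% of the time

sick = int(population * prevalence)        # 100 people with RBD
healthy = population - sick                # 999,900 without

true_positives = sick * accuracy           # 99 correctly flagged
false_negatives = sick * (1 - accuracy)    # 1 missed case
false_positives = healthy * (1 - accuracy) # 9,999 healthy people flagged

# Chance that a person who tests positive actually has RBD:
ppv = true_positives / (true_positives + false_positives)

# Healthy people killed by the treatment's 1% fatal complication rate:
treatment_deaths = false_positives * 0.01

print(f"positive predictive value: {ppv:.1%}")        # about 1%
print(f"healthy people killed: {treatment_deaths:.0f}")  # about 100
```

The treatment kills roughly as many healthy people (about 100) as there are patients with RBD in the entire city, which is exactly why screening everyone is a bad bargain here.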

Are people in real life actually harmed by screening tests? Absolutely. Primary care doctors have all seen many patients go through unnecessary angiograms because of falsely-positive screening stress tests, unnecessary biopsies because their whole-body CT scan found some benign lumps, unnecessary sleepless nights because unproven blood tests suggested cancer that wasn’t there. The number of patients actually helped by these tests is much smaller, and the peace of mind that patients have when such tests are normal is entirely illusory. They could still develop leukemia or be hit by a truck the next day.

So keep yourself healthy. And whatever you do, don’t get tested for everything.

Learn more:

If You Feel O.K., Maybe You Are O.K. (NY Times op-ed by Dr. H. Gilbert Welch)

For a wonderful review of randomness and probability which has no math, and has a section explaining the dangers of false positives even with very accurate tests, I highly recommend The Drunkard’s Walk – How Randomness Rules Our Lives by Leonard Mlodinow.

In the RBD example, above, the probability that I have RBD if I test positive is 1%, but the probability of the test being positive if I have the disease is 99%. The fact that these two numbers are not the same is very counterintuitive. We owe our understanding of these related probabilities to Thomas Bayes, an eighteenth-century English mathematician and minister. Bayes’ theorem and Bayesian statistics have transformed our understanding of risk in general and medical testing in particular.
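For the mathematically inclined, Bayes’ theorem makes the distinction explicit. Plugging in the RBD numbers (99% test accuracy, 1-in-10,000 prevalence):

$$P(\text{RBD} \mid \text{positive}) = \frac{P(\text{positive} \mid \text{RBD})\,P(\text{RBD})}{P(\text{positive} \mid \text{RBD})\,P(\text{RBD}) + P(\text{positive} \mid \text{no RBD})\,P(\text{no RBD})} = \frac{0.99 \times 0.0001}{0.99 \times 0.0001 + 0.01 \times 0.9999} \approx 1\%$$

The 99% accuracy sits in the numerator, but it is swamped by the much larger crowd of healthy people who can test falsely positive.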


Untreatable Gonorrhea – The Next Infectious Threat

CDC / Joe Millar via Wikipedia

Our old nemesis, the clap, is in the news again this month.

Gonorrhea is the second most common sexually transmitted disease in the US, with more than 600,000 cases annually. In men it usually causes pain on urination, penile discharge, or sore throat. In women it may not cause symptoms or may cause painful urination, vaginal discharge, or sore throat. If untreated, gonorrhea can spread to the fallopian tubes, joints, and heart valves. I know that most readers simply can’t hear enough about penile discharge (especially if they’re reading this over lunch), so to the right I’ve included a microscopic image of exactly that. The gonorrhea bacteria are visible as the small dark dots.

With the discovery of penicillin in the 1940s the treatment of gonorrhea was revolutionized. But ever since that major victory gonorrhea has won several important battles. Gonorrhea developed resistance to sulfanilamide in the 1940s and to penicillins and tetracyclines in the 1980s. When I trained in internal medicine in the mid 1990s, Cipro (an antibiotic in the family called fluoroquinolones) was the preferred treatment for gonorrhea. In the 2000s some fluoroquinolone-resistant strains of gonorrhea appeared and by 2007 resistance was widespread.

Third generation cephalosporins are now the last antibiotic family to which gonorrhea is susceptible. But, as with fluoroquinolones a decade ago, sensitivity to cephalosporins is slowly decreasing, especially in the western US. Though no strain in the US has become resistant yet, a strain isolated from a patient in Japan in 2009 was highly resistant to cephalosporins.

The downward creeping cephalosporin sensitivity of gonorrhea prompted CDC researchers to sound the alarm in an editorial in the New England Journal of Medicine earlier this month. The editorial warns that if the early signs of decreasing sensitivity are analogous to what we observed with fluoroquinolones in the ‘90s, then we may be only a few years away from strains of gonorrhea that are untreatable by any antibiotics.

The authors make sound recommendations to accelerate development of new antibiotics and increase surveillance of gonorrhea antibiotic sensitivity. But it’s entirely possible that these efforts will fail, and that the only defense against gonorrhea will be a vaccine, which is not expected any time soon.

I’ve written before about the emerging problem of bacterial antibiotic resistance. Our grandchildren may study the period from the 1940s to the 2040s as the antibiotic century. Unless antibiotic development stays a step ahead of the wily microorganisms we may reach a time when sexually transmitted infections are managed the way they were a hundred years ago – promoting the use of condoms and corny public health posters encouraging men to keep their flies zipped.

Learn more:

CDC Warns Untreatable Gonorrhea is On the Way (Chicago Tribune)
Gonorrhea Could Join Growing List of Untreatable Diseases (Scientific American)
Antibiotic-Resistant Gonorrhea (Centers for Disease Control and Prevention)
The Emerging Threat of Untreatable Gonococcal Infection (New England Journal of Medicine)
Gonorrhea (U.S. National Library of Medicine)
