Wednesday, March 7, 2012

Cancer Screening Statistics Not Completely Understood

This is a great article on why physicians misunderstand the statistics used to present cancer screening results.

By Todd Neale, Senior Staff Writer, MedPage Today
Published: March 06, 2012
Reviewed by Dori F. Zaleznik, MD; Associate Clinical Professor of Medicine, Harvard Medical School, Boston

Many primary care physicians in the U.S. accept misleading statistics as proof that cancer screening saves lives, a survey showed.
About three-quarters (76%) of respondents incorrectly said that increased five-year survival and early detection of cancer prove that a screening test saves lives, according to Odette Wegwarth, PhD, of the Max Planck Institute for Human Development in Berlin, and colleagues.
That rate was similar to the proportion who correctly stated that a reduction in mortality in a randomized trial proves the efficacy of a screening test (81%), the researchers reported in the March 6 issue of the Annals of Internal Medicine.

"Misunderstanding of statistics ... matters, because it may influence how physicians discuss screening with their patients or how they teach trainees," the authors wrote. "To better understand the true contribution of specific tests, physicians need to be made aware that in the context of screening, survival and early detection rates are biased metrics and that only decreased mortality in a randomized trial is proof that screening has a benefit."
Although improved survival rates and earlier detection of cancer are often used to demonstrate the efficacy of screening for cancer, those measures are subject to lead-time and overdiagnosis biases, according to Wegwarth and colleagues.

For example, they wrote, in a cohort of individuals who will die at age 70, the five-year survival rate for those diagnosed with cancer because of symptoms at age 67 will be 0%, whereas the five-year survival rate for those diagnosed through screening at age 60 will be 100%.
"Yet, despite this dramatic improvement in survival ... nothing has changed about how many people die or when," the authors explained.
Similarly, screening that detects cancer that will not ultimately progress also can inflate survival rates without having any effect on mortality.
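The authors' lead-time example can be sketched in a few lines of code. This is an illustrative toy calculation using only the numbers given in the article (death at 70, symptomatic diagnosis at 67, screen-detected diagnosis at 60); the function name and structure are hypothetical, not from the study.

```python
# Lead-time bias, per the article's example: everyone dies at age 70
# no matter what, yet screening "improves" five-year survival.

AGE_AT_DEATH = 70            # same for screened and unscreened
DIAGNOSIS_AGE_SYMPTOMS = 67  # cancer found from symptoms
DIAGNOSIS_AGE_SCREENING = 60 # same cancer found earlier by screening

def five_year_survival(diagnosis_age, death_age):
    """Fraction of patients alive 5 years after diagnosis (here, 0 or 1)."""
    return 1.0 if death_age - diagnosis_age >= 5 else 0.0

# Diagnosed at 67, dead at 70: 0% five-year survival.
print(five_year_survival(DIAGNOSIS_AGE_SYMPTOMS, AGE_AT_DEATH))   # 0.0

# Diagnosed at 60, dead at 70: 100% five-year survival.
print(five_year_survival(DIAGNOSIS_AGE_SCREENING, AGE_AT_DEATH))  # 1.0

# The survival statistic jumped from 0% to 100%, but the age at death
# never changed: earlier diagnosis only started the clock sooner.
```

The point of the sketch is that the five-year survival rate depends on when the clock starts, not on when the patient dies, which is exactly why it cannot prove a mortality benefit.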

Mortality rates in a randomized trial, however, are not affected by these types of biases, and a National Cancer Institute committee concluded that reduced mortality in a randomized trial is the only measure that can reliably prove that a screening test saves lives.
To find out whether primary care physicians -- who often recommend screening tests to their patients -- understand which statistics are most meaningful, Wegwarth and colleagues conducted an online survey of a national sample of 412 U.S. physicians.
The physicians were asked general knowledge questions about cancer screening statistics and were presented with two hypothetical scenarios based on real-world prostate cancer data.

The first scenario described a screening test that improved five-year survival from 68% to 99% and increased the early detection of cancer (considered irrelevant evidence). The second described a screening test that reduced the cancer mortality rate from 2 to 1.6 per 1,000 people (considered relevant evidence).
The respondents were more supportive of the screening test backed by the irrelevant evidence, as illustrated by the percentage who said the evidence proves that the test saves lives (80% for the test backed by irrelevant evidence versus 60% for the test backed by relevant evidence, P<0.001).
When presented with the irrelevant evidence of improved five-year survival, 69% of physicians said they would definitely recommend the screening test. Only 23% said they would definitely recommend the test that was based on the relevant evidence.
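To put the "relevant" mortality scenario in perspective, its absolute effect can be worked out from the article's own figures (2 vs. 1.6 deaths per 1,000). The article does not state the risk reduction or number needed to screen; the arithmetic below is a straightforward derivation from those two numbers, not a result reported by the study.

```python
# Absolute effect implied by the second scenario's numbers
# (2 deaths vs. 1.6 deaths per 1,000 people screened).

deaths_per_1000_without = 2.0
deaths_per_1000_with = 1.6

# Absolute risk reduction, per 1,000 people.
arr_per_1000 = deaths_per_1000_without - deaths_per_1000_with
print(round(arr_per_1000, 1))   # 0.4 fewer deaths per 1,000 screened

# Number needed to screen to avert one cancer death.
number_needed_to_screen = 1000 / arr_per_1000
print(round(number_needed_to_screen))  # 2500
```

Framed this way, the modest-sounding mortality reduction is real evidence of benefit, while the dramatic-sounding jump in five-year survival is not evidence at all.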

"We believe that many of the physicians mistakenly interpreted survival in screening as if it were survival in the context of a treatment trial," the authors wrote, noting that in the context of screening, the starting point for survival calculations is different for screened and unscreened populations.
In an accompanying editorial, Virginia Moyer, MD, MPH, of Baylor College of Medicine in Houston, said that the study suggests that physicians do not understand statistical concepts well.
She highlighted two possible solutions for the problem: "Medical journal editors should carefully monitor publications about screening to ensure that results are presented in such a way as to avoid misinterpretation, and medical educators should improve the quality of teaching about screening tests."

Even together, however, those solutions likely will not be enough, she wrote, noting that journalists and the general public for whom they write also should be targets of education about screening statistics.
Wegwarth and colleagues acknowledged some limitations of their study, including the fact that recommendations were based on hypothetical scenarios and not actual practice, the lack of information on the effect of subjective factors like the fear of malpractice on the interpretation of the evidence, and the lack of information on testing harms in the scenarios.

