To the Editor:
The recent study by Argamany et al1 concluded that the incidence of and hospital mortality from Clostridium difficile infection (CDI) differed between major regions of the United States and across seasons of the year. However, these conclusions were not supported by the data in their study, because the authors based them exclusively on statistical significance without considering the effect size of their findings. As explained below, the effect sizes of region and season on CDI were very low or near zero, contradicting their conclusion.

The effect sizes for U.S. region (Northeast, Midwest, South, and West) and season (winter, spring, summer, and fall) were estimated using the data for patients overall presented in their Figures 1-5.1 The population rates provided were first converted into a contingency table of population counts, from which the Pearson χ² statistic was calculated. The χ² was then converted to Pearson's contingency coefficient, r, a standard estimator of effect size,2 using the conversion formula r = √[χ² / (χ² + N)], which applies when the degrees of freedom are >1. The effect sizes were interpreted using Cohen's recommended criteria.3
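For illustration, this procedure can be sketched in a few lines of Python. The counts shown are hypothetical placeholders, not data from the study, and the scipy library is assumed to be available; the sketch simply demonstrates the χ²-to-r conversion described above.

    # Minimal sketch of the reanalysis: build a contingency table of counts,
    # compute Pearson's chi-square, and convert it to an effect size r via
    # r = sqrt(chi2 / (chi2 + N)). All counts below are hypothetical.
    import math

    from scipy.stats import chi2_contingency

    # Rows: CDI cases vs. non-CDI hospitalizations;
    # columns: Northeast, Midwest, South, West (hypothetical counts).
    table = [
        [12_000, 15_000, 20_000, 9_000],
        [520_000, 610_000, 790_000, 400_000],
    ]

    chi2, p, dof, _expected = chi2_contingency(table)
    n = sum(sum(row) for row in table)

    # Pearson's contingency coefficient, appropriate when df > 1.
    r = math.sqrt(chi2 / (chi2 + n))
    print(f"chi2 = {chi2:.1f}, p = {p:.3g}, df = {dof}, r = {r:.3f}")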
The reanalysis showed that the effect size (r) for the effect of U.S. region on CDI incidence was r = 0.016, and the effect of season was r = 0.003. The effect of region on CDI hospital mortality was r = 0.014, and the effect of season was r = 0.023. By Cohen's criteria,3 an effect size of r = 0.10 is considered small, and these four estimates are only a small fraction of that value, approaching zero. Although statistically significant because of the enormous sample size in the study (N = 2,279,004), the differences in CDI incidence and patient mortality are, as the effect size estimates indicate, so trivial that the regional and seasonal differences can be safely ignored. This is consistent with the small percentage differences reported in the study: CDI mortality for all patients differed by only approximately 1% between regions or seasons, and incidence differed across seasons by a fraction of a percent.
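To make the comparison with Cohen's benchmarks concrete, the four estimates above can be classified in a few lines of Python (a sketch; the cohen_label helper is our own naming, and the 0.10/0.30/0.50 cutoffs are Cohen's conventional small/medium/large benchmarks3).

    # Classify effect sizes against Cohen's conventional benchmarks:
    # r >= 0.10 small, r >= 0.30 medium, r >= 0.50 large.
    def cohen_label(r: float) -> str:
        if r >= 0.50:
            return "large"
        if r >= 0.30:
            return "medium"
        if r >= 0.10:
            return "small"
        return "below small (negligible)"

    # The four effect sizes reported in our reanalysis.
    effects = {
        "region vs. incidence": 0.016,
        "season vs. incidence": 0.003,
        "region vs. mortality": 0.014,
        "season vs. mortality": 0.023,
    }
    for name, r in effects.items():
        print(f"{name}: r = {r:.3f} -> {cohen_label(r)}")

All four values fall well below the 0.10 cutoff.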
An additional complication is that there was little agreement on the riskiest region and season. For overall CDI mortality, the highest-risk seasons were fall for adults and winter for older adults; for CDI incidence, by contrast, the riskiest season was spring for both adults and older adults. The incidence and mortality measures therefore disagree, which leaves open the question of whether any single season can be identified as the highest-risk season. The same is true of regional differences: the Northeast had the highest risk for CDI incidence, whereas the Midwest had the worst hospital mortality, and the lowest risks were in the Northeast for mortality and the West for incidence. Therefore, as with seasonality, no single region can be identified as the overall high-risk region.
To sum up, the conclusion that CDI incidence and hospital mortality differed significantly among regions and seasons is contradicted by the calculated effect sizes and by other inconsistencies in the findings. Our additional analysis does not support the authors' suggestion that “These results underscore the need for improved infection control and antimicrobial stewardship measures to prevent CDI and its transmission, particularly in high-risk regions and seasons.” Instead, our near-zero effect size estimates support the conclusion that CDI incidence and hospital mortality are nearly the same across all 4 regions and seasons. We would suggest, as a replacement conclusion, that CDI risk is very similar across regions and seasons, with the possible exception of particular subsets (eg, older adults).
The study by Argamany et al1 has many strengths, including a large sample size, a clear study design, excellent writing, an excellent discussion section with a thorough examination of the limitations of the data set, and a focus on an important problem in infection control. Despite these strengths, because of the near-zero effect sizes of the findings, the study's conclusion that CDI incidence and patient mortality differed significantly across regions and seasons is not sufficiently supported. We suggest that there is no need at this time to redirect resources or implement targeted control measures according to region and season.

References
1. Argamany JR, et al. Regional and seasonal variation in Clostridium difficile infections among hospitalized patients in the United States, 2001-2010. Am J Infect Control. 2015;43:435-440.
2. Maher JM, Markey JC, Ebert-May D. The other half of the story: effect size analysis in quantitative research. CBE Life Sci Educ. 2013;12:345-351.
3. Cohen J. Statistical power analysis for the behavioral sciences. 2nd ed. Hillsdale, NJ: L. Erlbaum Associates; 1988.
Conflicts of interest: None to report.