1004L- CRITICAL READING OF A JOURNAL ARTICLE

Presented at a workshop on evidence-based decision making organized by the Ministry of Health, Kingdom of Saudi Arabia, Riyadh, 24-26 April 2010, by Professor Omar Hasan Kasule MB ChB (MUK), MPH (Harvard), DrPH (Harvard), Professor of Epidemiology and Bioethics, Faculty of Medicine, King Fahd Medical College.



1.0 WHY CRITICAL READING?
Before public health practitioners can use an article as a source of evidence for decision making, they must read it critically to assess its quality. Critical reading of the scientific literature requires tools for analyzing the methodology and the data analysis before accepting the conclusions.

2.0 COMMONEST PROBLEMS IN PUBLISHED PAPERS
Common problems in published studies are incomplete documentation, design deficiencies, improper significance testing, and faulty interpretation of results.

3.0 PROBLEMS OF THE TITLE, ABSTRACT, AND INTRODUCTION
The main problem of the title is irrelevance to the body of the article. Problems of the abstract are failure to show the focus of the study and to provide sufficient information to assess the study (design, analysis, and conclusions). Problems of the introduction are failures of the following: stating the reason for the study, reviewing previous studies, indicating potential contribution of the present study, giving the background and historical perspective, stating the study population, and stating the study hypothesis.

4.0 PROBLEMS OF STUDY DESIGN
Problems of study design are the following: going on a fishing expedition without a prior hypothesis, a study design that is not appropriate for the hypothesis tested, lack of a comparison group, use of an inappropriate comparison group, and a sample size too small to answer the research questions (a sample-size sketch follows below).
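
The hypothetical Python sketch below illustrates the sample-size point using the standard normal-approximation formula for comparing two proportions; the proportions, significance level, and power are assumed figures chosen only for illustration, not values from any particular study.

```python
# Sample-size sketch (hypothetical figures): subjects needed per group to
# detect a difference between two proportions, normal-approximation formula.
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided comparison of two proportions."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the significance level
    z_beta = norm.ppf(power)            # critical value for the desired power
    p_bar = (p1 + p2) / 2               # average of the two proportions
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p1 - p2) ** 2

# Assumed example: detecting a fall in a complication rate from 20% to 10%
print(round(n_per_group(0.20, 0.10)))   # about 199 subjects per group
```

With these assumed figures roughly 199 subjects per group are needed; a study that enrolls far fewer cannot answer the research question, and its negative result is uninformative.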

5.0 PROBLEMS OF DATA COLLECTION
Problems in data collection are: missing data due to incomplete coverage, loss of information due to censoring and loss to follow-up, poor documentation of data collection, and methods of data collection inappropriate to the study design.

6.0 PROBLEMS OF DATA ANALYSIS
Problems of data analysis include the following: failure to state the type of hypothesis testing (p value or confidence interval), use of the wrong statistical tests, drawing inappropriate conclusions, use of parametric tests for non-normal data, multiple comparisons or multiple significance testing, failure to assess errors, failure to assess the normality of the data, failure to use appropriate scales and tests, use of the wrong statistical formula, and confusing continuous and discrete scales.
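
The sketch below, on simulated data, illustrates two of the checks listed above: assessing normality before choosing between a parametric and a non-parametric two-group test, and adjusting p values for multiple comparisons (the Bonferroni method is used here as one simple option). The data, seed, and raw p values are assumptions made only for illustration.

```python
# Simulated-data illustration of two checks from the list above:
# (1) assess normality before choosing a parametric or non-parametric test,
# (2) adjust p values for multiple comparisons (Bonferroni, one simple option).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.exponential(scale=2.0, size=40)   # skewed, non-normal data
group_b = rng.exponential(scale=3.0, size=40)

# Shapiro-Wilk test of normality; a small p value suggests the data are not normal
if stats.shapiro(group_a).pvalue < 0.05 or stats.shapiro(group_b).pvalue < 0.05:
    stat, p = stats.mannwhitneyu(group_a, group_b)   # non-parametric alternative
    test_used = "Mann-Whitney U"
else:
    stat, p = stats.ttest_ind(group_a, group_b)      # parametric two-sample t-test
    test_used = "two-sample t-test"
print(f"{test_used}: p = {p:.4f}")

# Bonferroni correction: multiply each raw p value by the number of tests performed
raw_p = [0.010, 0.040, 0.200]                        # hypothetical p values from 3 tests
adjusted = [min(1.0, x * len(raw_p)) for x in raw_p]
print("Bonferroni-adjusted p values:", adjusted)
```

A reader cannot rerun such checks, but can ask whether the authors report having performed them and whether the chosen tests match the distribution and scale of the data.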

7.0 PROBLEMS OF THE RESULTS SECTION
Problems in reporting results are: selective reporting of favorable results, numerators without denominators, inappropriate denominators, numbers that do not add up, tables not labeled properly or completely, numerical inconsistency (rounding, decimals, and units), stating results as mean +/- 2SD for non-normal data, stating p values as inequalities instead of exact values, and missing degrees of freedom and confidence limits.
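
To make the reporting points concrete, the sketch below (simulated data, illustrative only) prints an exact p value with its degrees of freedom, a 95% confidence interval for a mean difference, and a median with interquartile range for a skewed variable rather than mean +/- 2SD.

```python
# Reporting sketch on simulated data: exact p value with degrees of freedom,
# a 95% confidence interval for a mean difference, and median (IQR) for a
# skewed variable instead of mean +/- 2SD.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
treated = rng.normal(loc=5.0, scale=1.0, size=30)
control = rng.normal(loc=4.5, scale=1.0, size=30)

t, p = stats.ttest_ind(treated, control)              # pooled two-sample t-test
n1, n2 = len(treated), len(control)
df = n1 + n2 - 2
diff = treated.mean() - control.mean()
sp2 = ((n1 - 1) * treated.var(ddof=1) + (n2 - 1) * control.var(ddof=1)) / df
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
ci_low = diff - stats.t.ppf(0.975, df) * se
ci_high = diff + stats.t.ppf(0.975, df) * se
print(f"t({df}) = {t:.2f}, p = {p:.3f}, difference = {diff:.2f}, "
      f"95% CI {ci_low:.2f} to {ci_high:.2f}")        # exact p, not just 'p < 0.05'

skewed = rng.exponential(scale=2.0, size=50)          # a non-normal variable
q1, median, q3 = np.percentile(skewed, [25, 50, 75])
print(f"median {median:.2f} (IQR {q1:.2f} to {q3:.2f})")   # rather than mean +/- 2SD
```

Reporting the exact p value together with the degrees of freedom and the confidence interval lets the reader judge both the size and the precision of the effect.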

8.0 PROBLEMS OF THE CONCLUSION
Problems of the conclusion include the following: merely repeating the results section, failure to discuss the consistency of the conclusions with the data and the hypothesis, extrapolation beyond the data, failure to discuss the shortcomings and limitations of the study, failure to evaluate the statistical conclusions in view of testing errors, failure to assess bias (misclassification, selection, and confounding), failure to assess precision (lack of random error), and failure to assess validity (lack of systematic error).

Internal validity is achieved when the study is internally consistent and the results and conclusions reflect the data. External validity is generalizability (i.e., how far the findings of the present study can be applied to other situations) and is achieved by several independent studies showing the same result. The outcome of interest may go undetected because of an insufficient period of follow-up, an inadequate sample size, or inadequate power.

9.0 ABUSE OR MISUSE OF STATISTICS
Statistics can be abused by incomplete and inaccurate documentation of results, and by selecting a favorable rate while ignoring unfavorable ones. This is done by 'playing' with either the numerator or the denominator. The scales of numerators and denominators can be made artificially wider or narrower, giving false and misleading impressions. A small worked example follows.
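
The numerator/denominator point can be shown with entirely made-up figures: the same twenty deaths yield very different-looking rates depending on which denominator is quoted.

```python
# Made-up figures: the same 20 deaths reported against different denominators
# give rates ranging from 5 to 40, each numerically correct but easily misleading.
deaths = 20                                            # the same numerator throughout
rates = [
    ("per 1,000 hospital admissions",      2_000,   1_000),
    ("per 1,000 occupied bed-days",          500,   1_000),
    ("per 100,000 catchment population", 400_000, 100_000),
]
for label, denominator, scale in rates:
    print(f"{deaths / denominator * scale:.1f} deaths {label}")
```

A critical reader should therefore check that the denominator is stated, appropriate to the question, and consistent across the rates being compared.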

Statistical results are misleading in the following situations: (a) violating the principle of parsimony, (b) study objective unclear and not reflected in the study hypothesis, (c) fuzzy, inconsistent, and subjective definitions (of cases, non-cases, the exposed, the non-exposed, comparison groups, exposure, and methods of measurement), and (d) incomplete information on response rates and missing data.