
200204P - CRITICAL READING OF A JOURNAL ARTICLE


Presented at a CRC course held at King Fahad Medical City, Riyadh, on 26 January 2020, 11:00 am - 12:00 pm, by Professor Omar Hasan Kasule Sr., MB ChB (MUK), MPH (Harvard), DrPH (Harvard), Chairman of the Institutional Review Board, King Fahad Medical City.

 

INTRODUCTION:

} For a critical reading of scientific literature, the reader must be equipped with tools to analyze a study's methodology and data analysis critically before accepting its conclusions.

} Common problems in published studies are incomplete documentation, design deficiencies, and improper significance testing and interpretation.

 

ISSUES IN THE TITLE:

} The main problem with a title is irrelevance to the body of the article.

} A title that is too long or too short.

} A title lacking the key words of the research.

 

PROBLEMS OF THE ABSTRACT:

} Failure to show the focus of the study.

} Failure to provide sufficient information to assess the study (design, analysis, and conclusions).

} Focusing on only one part of the study.

} Including a literature review that takes up too much space.

 

PROBLEMS OF THE INTRODUCTION:

} Failure to state the reason for the study.

} Failure to review previous studies in a succinct and relevant manner.

} A laundry list of previous studies instead of a synthesized review.

} Failure to point out the gap in knowledge and the contribution of the present study.

} Failure to provide the background and historical perspective.

} Failure to state the study population.

} Failure to state the study hypothesis.

 

PROBLEMS OF STUDY DESIGN:

} Going on a fishing expedition without a prior hypothesis.

} Study design not appropriate for the hypothesis tested.

} Lack of a comparison group.

} Use of an inappropriate comparison group.

} Berkson's fallacy (selection bias arising when cases and controls are both drawn from hospitalized patients).

} Selection of cases and controls from different populations.

} Sample size not big enough to answer the research questions.
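To make the last point concrete, here is a minimal sketch of a sample size calculation in Python with statsmodels, assuming a two-arm comparison of proportions; the event rates, alpha, and target power are illustrative assumptions rather than values from any particular study.

```python
# Illustrative only: required sample size per arm for comparing two
# proportions (30% vs 20% event rates) at alpha = 0.05 and power = 0.80.
# The rates, alpha, and power are assumptions, not values from the lecture.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.30, 0.20)  # Cohen's h for the two rates
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"Required sample size per arm: {n_per_arm:.0f}")
```

A reviewer can repeat such a calculation with a study's reported event rates to judge whether the sample size could plausibly answer the research question.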

 

PROBLEMS OF CONFUSED TERMINOLOGY:

} ‘Measurement’ uses instruments. ‘Calculation’ deals with numbers and formulas.

} ‘Estimation’ is used in two senses: as an approximation in measurement or as the computation of statistical parameters.

} ‘Determination’ is a general term for reaching a conclusion by use of the methods above.

} The term ‘study’ is generic and should not be confused with ‘experiment’, which refers to only some types of studies.


PROBLEMS IN DATA COLLECTION:

} Missing data due to incomplete coverage,

} Loss of information due to censoring and loss to follow-up,

} Poor documentation of data collection,

} Methods of data collection inappropriate to the study design.
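As a practical illustration of the points above, the following minimal sketch uses pandas to summarize missing data and loss to follow-up before analysis; the data frame and its column names are hypothetical.

```python
# Illustrative only: summarising missing data and loss to follow-up
# before analysis. The column names (age, outcome, last_visit) are
# hypothetical placeholders, not from any particular study.
import pandas as pd

df = pd.DataFrame({
    "age":        [54, 61, None, 47, 70],
    "outcome":    [1, None, 0, 1, None],
    "last_visit": ["2020-01-10", None, "2020-01-08", "2020-01-12", None],
})

# Proportion of missing values per variable: report this rather than hide it.
print(df.isna().mean())

# Participants with no recorded follow-up visit (lost to follow-up).
lost = df["last_visit"].isna().sum()
print(f"Lost to follow-up: {lost} of {len(df)}")
```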

 

PROBLEMS OF DATA ANALYSIS:

} Failure to state the type of hypothesis testing (p value or confidence interval).

} Use of the wrong statistical tests, mostly because of confusing discrete and continuous data.

} Drawing inappropriate conclusions.

} Use of parametric tests for non-normal data (see the sketch after this list).

} Multiple comparisons or multiple significance testing.

} Failure to assess testing errors (type I and type II).

} Failure to assess normality of data.

} Failure to use the appropriate data scale: qualitative/quantitative/discrete/continuous.

} Using the wrong statistical formula.
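The sketch below illustrates the point about parametric tests and non-normal data using scipy, on simulated data; the Shapiro-Wilk p > 0.05 threshold is a common convention, not a definitive test of normality.

```python
# Illustrative only: assess normality before choosing between a parametric
# test (t-test) and a non-parametric one (Mann-Whitney U). The data are
# simulated; the Shapiro-Wilk p > 0.05 rule is a convention, not a proof
# of normality.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(5.0, 1.0, size=40)   # roughly normal
group_b = rng.exponential(5.0, size=40)   # clearly skewed

normal_a = stats.shapiro(group_a).pvalue > 0.05
normal_b = stats.shapiro(group_b).pvalue > 0.05

if normal_a and normal_b:
    res = stats.ttest_ind(group_a, group_b)
    print(f"t-test: t = {res.statistic:.2f}, p = {res.pvalue:.3f}")
else:
    res = stats.mannwhitneyu(group_a, group_b)
    print(f"Mann-Whitney U: U = {res.statistic:.1f}, p = {res.pvalue:.3f}")
```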

 

PROBLEMS IN REPORTING RESULTS:

} Selective reporting of favorable results,

} Numerators without denominators,

} Inappropriate denominators,

} Numbers that do not add up,

} Tables not labeled properly or completely,

} Numerical inconsistency (rounding, decimals, and units),

} Stating results as mean ± 2 SD for non-normal data,

} Stating p values as inequalities instead of the exact values,

} Missing degrees of freedom and confidence limits.
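The following minimal sketch, on simulated data, shows one way to report an exact p value with its degrees of freedom and a 95% confidence interval for the mean difference, addressing the last points above; the pooled-variance formula assumes roughly equal variances.

```python
# Illustrative only: report the exact p value, degrees of freedom, and a
# 95% confidence interval for the mean difference instead of "p < 0.05".
# Data are simulated; the pooled-variance formula assumes equal variances.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
treated = rng.normal(10.5, 2.0, size=30)
control = rng.normal(9.8, 2.0, size=30)

t_stat, p_value = stats.ttest_ind(treated, control)
df = len(treated) + len(control) - 2

# 95% confidence interval for the difference in means (pooled variance).
diff = treated.mean() - control.mean()
pooled_var = (((len(treated) - 1) * treated.var(ddof=1)
               + (len(control) - 1) * control.var(ddof=1)) / df)
se = np.sqrt(pooled_var * (1 / len(treated) + 1 / len(control)))
margin = stats.t.ppf(0.975, df) * se

print(f"t({df}) = {t_stat:.2f}, p = {p_value:.3f}")
print(f"Difference in means: {diff:.2f} "
      f"(95% CI {diff - margin:.2f} to {diff + margin:.2f})")
```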

 

PROBLEMS OF THE CONCLUSION - 1:

} Repeating the results section,

} Failure to discuss the consistency of conclusions with the data and the hypothesis,

} Extrapolations beyond the data,

} Failure to discuss shortcomings and limitations of the study,

} Failure to evaluate statistical conclusions in view of testing errors,

} Failure to assess bias (misclassification, selection, and confounding).

 

PROBLEMS OF THE CONCLUSION - 2:

} Failure to assess precision (lack of random error) and validity (lack of systematic error).

} Failure to appreciate the difference between internal and external validity: internal validity is achieved when the study is internally consistent and the results and conclusions reflect the data; external validity is generalizability (i.e., how far the findings of the present study are applicable to other situations) and is achieved by several independent studies showing the same result.

} Inability to detect the outcome of interest due to insufficient period of follow-up, inadequate sample size, and inadequate power.
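The last point can be made concrete with a quick power calculation. The sketch below uses statsmodels and assumes a two-group comparison of means; the effect size and sample size are chosen purely to illustrate an underpowered design.

```python
# Illustrative only: achieved power for detecting a modest effect
# (Cohen's d = 0.3) with 30 participants per arm. These numbers are
# assumptions chosen to show an underpowered design.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().solve_power(
    effect_size=0.3, nobs1=30, alpha=0.05, alternative="two-sided"
)
print(f"Achieved power: {power:.2f}")  # well below the conventional 0.80
```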

 

ABUSE OR MISUSE OF STATISTICS:

} Incomplete and inaccurate documentation of results.

} Selection of a favorable rate and ignoring unfavorable ones. This is done by 'playing' either with the numerator or the denominator.

} The scales of numerators and denominators can be made artificially wider or narrower, giving false and misleading impressions.
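As a simple illustration of 'playing' with the denominator, the sketch below reports the same invented count of deaths against three different denominators.

```python
# Illustrative only: the same 50 deaths reported against different
# denominators give very different rates. All numbers are invented.
deaths = 50

denominators = {
    "admitted patients":          1_000,
    "patients in the ICU":          200,
    "patients on the study drug":    80,
}

for label, n in denominators.items():
    print(f"Deaths per 100 {label}: {100 * deaths / n:.1f}")
```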

 

MISLEADING STATISTICS:

} Violating the principle of parsimony.

} Study objective unclear and not reflected in the study hypothesis.

} Fuzzy, inconsistent, and subjective definitions (of cases, non-cases, the exposed, the non-exposed, comparison groups, exposure, and methods of measurement).

} Incomplete information on response rates and missing data.