
180219P - GUIDELINES FOR CRITICAL READING OF A JOURNAL ARTICLE - OHK


For critical reading of the scientific literature, the reader must be equipped with tools to analyze a study's methodology and data analysis critically before accepting its conclusions. Prepared by Prof OHK, MBChB (MUK), MPH (Harvard), DrPH (Harvard), Professor of Epidemiology and Bioethics.


Common problems in published studies are incomplete documentation, design deficiencies, improper significance testing, and faulty interpretation.


The main problem of the title is irrelevance to the body of the article. Problems of the abstract are failure to show the focus of the study and failure to provide sufficient information to assess the study (design, analysis, and conclusions). Problems of the introduction are failures in the following: stating the reason for the study, reviewing previous studies, indicating the potential contribution of the present study, giving the background and historical perspective, stating the study population, and stating the study hypothesis.


Problems of study design are the following: going on a fishing expedition without a prior hypothesis, a study design not appropriate for the hypothesis tested, lack of a comparison group, use of an inappropriate comparison group, Berkson's fallacy, selection of cases and controls from different populations, and a sample size not big enough to answer the research questions. The following terms are often confused with one another. 'Measurement' is using instruments. 'Calculation' deals with numbers and formulas. 'Estimation' is used in two senses: as an approximation in measurement or as a computation of statistical parameters. 'Determination' is a general term for reaching a conclusion by use of the methods above. The term 'study' is generic and can be confused with 'experiment', which refers to only some types of studies.
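As a hedged illustration of the sample-size point, the sketch below (assuming Python with the statsmodels package; the effect size, alpha, power target, and the figure of 40 subjects per group are hypothetical) computes the per-group sample size needed to detect a given standardized effect, and the power actually achieved by a smaller study.

```python
# Minimal sketch, assuming statsmodels is installed; all numbers are
# hypothetical illustration values, not recommendations.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Per-group sample size needed to detect a standardized effect of 0.4
# with a two-sided alpha of 0.05 and 80% power.
n_required = analysis.solve_power(effect_size=0.4, alpha=0.05, power=0.80)
print(f"Required sample size per group: {n_required:.0f}")

# Power actually achieved if only 40 subjects per group were recruited.
achieved = analysis.power(effect_size=0.4, nobs1=40, alpha=0.05)
print(f"Power with 40 per group: {achieved:.2f}")
```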


Problems in data collection are missing data due to incomplete coverage, loss of information due to censoring and loss to follow-up, poor documentation of data collection, and methods of data collection inappropriate to the study design. 
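To illustrate the point about censoring and loss to follow-up, the sketch below (assuming Python with the lifelines package; the follow-up times and event indicators are invented) treats subjects lost to follow-up as censored observations rather than discarding them.

```python
# Minimal sketch, assuming the lifelines package; durations and event
# indicators are invented for illustration.
from lifelines import KaplanMeierFitter

durations = [5, 8, 12, 12, 15, 20, 20, 24]   # months of follow-up per subject
events    = [1, 1,  0,  1,  0,  1,  0,  0]   # 1 = outcome observed, 0 = censored/lost

km = KaplanMeierFitter()
km.fit(durations, event_observed=events)      # censored subjects still contribute
print(km.survival_function_)

# A crude proportion of events ignores follow-up time and censoring, and can
# misstate the risk compared with the Kaplan-Meier estimate above.
crude = sum(events) / len(events)
print(f"Crude event proportion: {crude:.2f}")
```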


Problems of data analysis include failure to state the type of hypothesis testing used (p-value or confidence interval), use of the wrong statistical tests, drawing inappropriate conclusions, use of parametric tests for non-normal data, multiple comparisons or multiple significance testing without adjustment, failure to assess errors, failure to assess the normality of the data, failure to use appropriate scales and tests, use of the wrong statistical formula, and confusion of continuous and discrete scales.
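The sketch below (assuming Python with NumPy and SciPy; the data are simulated) illustrates two of these pitfalls: checking normality before choosing a parametric test, and adjusting p-values when several hypotheses are tested.

```python
# Minimal sketch with simulated data; not a prescription for any real analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.exponential(scale=2.0, size=30)   # skewed, clearly non-normal data
group_b = rng.exponential(scale=3.0, size=30)

# Check normality before reaching for a parametric test (Shapiro-Wilk).
print(f"Shapiro-Wilk p, group A: {stats.shapiro(group_a).pvalue:.3f}")

# For non-normal data a rank-based test is preferable to the t-test.
t_stat, t_p = stats.ttest_ind(group_a, group_b)      # parametric, questionable here
u_stat, u_p = stats.mannwhitneyu(group_a, group_b)   # non-parametric alternative
print(f"t-test p = {t_p:.3f}; Mann-Whitney p = {u_p:.3f}")

# With several significance tests, adjust the p-values (Bonferroni shown).
raw_p = [0.012, 0.034, 0.049]                        # hypothetical raw p-values
adjusted = [min(p * len(raw_p), 1.0) for p in raw_p]
print("Bonferroni-adjusted p-values:", adjusted)
```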


Problems in reporting results are selective reporting of favorable results, numerators without a denominator, inappropriate denominators, numbers that do not add up, tables not labeled properly or completely, numerical inconsistency (rounding, decimals, and units), stating results as mean ± 2 SD for non-normal data, stating p-values as inequalities instead of exact values, and missing degrees of freedom and confidence limits.
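As a hedged illustration of these reporting points, the sketch below (Python with NumPy and SciPy; the data are simulated) summarizes a skewed variable as median and interquartile range rather than mean ± 2 SD, reports an exact p-value, and attaches a 95% confidence interval and degrees of freedom to the mean.

```python
# Minimal sketch with simulated, skewed data; all numbers are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.lognormal(mean=1.0, sigma=0.8, size=50)
group_b = rng.lognormal(mean=1.3, sigma=0.8, size=50)

# For non-normal data, median (IQR) is a better summary than mean +/- 2 SD.
q1, med, q3 = np.percentile(group_a, [25, 50, 75])
print(f"Group A median (IQR): {med:.2f} ({q1:.2f}-{q3:.2f})")

# Report the exact p-value rather than an inequality such as "p < 0.05".
result = stats.mannwhitneyu(group_a, group_b)
print(f"Mann-Whitney U = {result.statistic:.0f}, p = {result.pvalue:.3f}")

# Attach a 95% confidence interval to the mean and state the degrees of freedom.
df = len(group_a) - 1
ci_low, ci_high = stats.t.interval(0.95, df=df,
                                   loc=group_a.mean(), scale=stats.sem(group_a))
print(f"Group A mean (95% CI): {group_a.mean():.2f} "
      f"({ci_low:.2f} to {ci_high:.2f}), df = {df}")
```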


Problems of the conclusion are the following: merely repeating the results section, failure to discuss the consistency of the conclusions with the data and the hypothesis, extrapolation beyond the data, failure to discuss the shortcomings and limitations of the study, failure to evaluate statistical conclusions in view of testing errors, and failure to assess bias (misclassification, selection, and confounding), precision (lack of random error), and validity (lack of systematic error).


Internal validity is achieved when the study is internally consistent and the results and conclusions reflect the data. External validity is generalizability (i.e. how far the findings of the present study can be applied to other situations) and is achieved by several independent studies showing the same result. A further problem is the inability to detect the outcome of interest because of an insufficient period of follow-up, an inadequate sample size, or inadequate power.
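As a final hedged sketch (Python with statsmodels; the figure of 25 subjects per group and the 80% power target are hypothetical), one can compute the smallest standardized effect a completed study could reliably have detected, which helps judge whether a 'negative' finding simply reflects inadequate power.

```python
# Minimal sketch, assuming statsmodels; n = 25 per group and the 80% power
# target are hypothetical values.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Smallest standardized effect detectable with 80% power given 25 per group.
min_effect = analysis.solve_power(nobs1=25, alpha=0.05, power=0.80)
print(f"Minimum detectable standardized effect: {min_effect:.2f}")
```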