220905P - RELIABILITY (VALIDITY AND PRECISION): DEFINITION AND MEASUREMENT

Presented at a Webinar on Research Methodology in Health Sciences at Northern Area Armed Forces Hospital (NAAFH) on 6th September 2022. Prof. Omar Hasan Kasule Sr. MB ChB (MUK), MPH (Harvard) DrPH (Harvard) Chairman, Institutional Review Board - KFMC


MEASUREMENT IN EPIDEMIOLOGY

  • An epidemiological study can itself be regarded as a kind of measurement, characterized by parameters for validity and precision.
  • Validity is the degree to which a measurement measures what it purports to measure. It is assessed with measures of central tendency such as the mean.
  • Precision is a measure of the lack of random error. It is assessed with measures of spread such as the standard deviation.
  • Reliability is reproducibility, i.e., does the instrument of measurement produce the same result under the same conditions every time?
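The distinction between validity (mean close to the true value) and precision (small standard deviation) can be illustrated with a small simulation. This is a sketch, not from the lecture: the true value, biases, and noise levels below are invented for illustration.

```python
# Illustrative simulation: validity is assessed via the mean (closeness to
# the true value), precision via the standard deviation (spread).
import random
import statistics

random.seed(42)
TRUE_VALUE = 100.0  # assumed true value of the quantity being measured

def simulate(bias, noise_sd, n=1000):
    """Return n simulated readings with a systematic bias and random noise."""
    return [TRUE_VALUE + bias + random.gauss(0, noise_sd) for _ in range(n)]

valid_precise   = simulate(bias=0.0, noise_sd=1.0)   # mean near 100, small SD
biased_precise  = simulate(bias=5.0, noise_sd=1.0)   # mean near 105, small SD
valid_imprecise = simulate(bias=0.0, noise_sd=10.0)  # mean near 100, large SD

for name, data in [("valid and precise", valid_precise),
                   ("biased but precise", biased_precise),
                   ("valid but imprecise", valid_imprecise)]:
    print(f"{name:20s} mean={statistics.mean(data):7.2f} "
          f"sd={statistics.stdev(data):6.2f}")
```

The biased instrument is precise but not valid (its mean is shifted away from the true value); the noisy instrument is valid but not precise (its mean is right but its spread is large).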


SOURCE OF ERROR

  • Instrument error.
  • Digit preference.
  • Observer variation.
  • Variation in individual response.
  • True biological variation.
  • Bias, positive and negative, including confounding.
  • Observer variation arises in two ways: within observers and between observers.
  • Within-observer variation is largely random; random subject variation on repeated measurement regresses toward the mean.
  • Between-observer variation is usually systematic (biased) subject variation.
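Regression to the mean, mentioned above for random variation on repeat measurement, can be demonstrated by simulation. The population mean, spread, error size, and the cutoff of 140 below are all invented for illustration (loosely evoking a blood-pressure screen):

```python
# Hypothetical simulation: subjects measured twice with random
# within-observer error. Subjects selected for an extreme first reading
# give, on average, a less extreme second reading - regression to the mean.
import random
import statistics

random.seed(1)
POP_MEAN, POP_SD, ERROR_SD = 120.0, 10.0, 8.0  # assumed values

true_values = [random.gauss(POP_MEAN, POP_SD) for _ in range(5000)]
first  = [t + random.gauss(0, ERROR_SD) for t in true_values]
second = [t + random.gauss(0, ERROR_SD) for t in true_values]

# Select subjects whose first reading was extreme (above an arbitrary cutoff).
extreme = [i for i, x in enumerate(first) if x > 140.0]
mean_first  = statistics.mean(first[i]  for i in extreme)
mean_second = statistics.mean(second[i] for i in extreme)
print(f"selected subjects: first reading {mean_first:.1f}, "
      f"repeat reading {mean_second:.1f}")
```

The repeat mean falls back toward the population mean even though nothing about the subjects changed; only the random component of the first measurement was, on average, unusually large in the selected group.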


TYPE OF ERROR

  • Systematic errors lead to bias and therefore to invalid parameter estimates. Systematic (biased) errors are known as 'dirty dirt' because they bias conclusions and are therefore epidemiologically fatal. They are not decreased by increasing the sample size; they are difficult to recognize and hard to quantify, so it is difficult to compensate for them in the analysis.
  • Random (non-systematic) errors lead to imprecise parameter estimates. Random error leads to misclassification. It is less serious because it affects both comparison groups equally, and epidemiological study is concerned with making comparisons. Random errors lead to large standard errors in parameter estimates and can be controlled by increasing the sample size.
  • If the size of the random error is known and is small, the error can be tolerated as 'clean dirt'.
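The asymmetry between the two error types can be made concrete: the random component of error in a mean shrinks as 1/sqrt(n), while a fixed systematic bias is untouched no matter how large the sample becomes. The SD and bias values below are assumed for illustration.

```python
# Sketch: standard error (random error of the mean) falls with sample size;
# a fixed systematic bias does not.
import math

SD = 15.0    # assumed random variability of individual measurements
BIAS = 4.0   # assumed fixed systematic error

for n in (25, 100, 400, 1600):
    se = SD / math.sqrt(n)   # random component shrinks as 1/sqrt(n)
    print(f"n={n:5d}  standard error={se:5.2f}  systematic bias={BIAS:.1f}")
```

Quadrupling the sample size halves the standard error, but the bias column never moves, which is why systematic error cannot be fixed by recruiting more subjects.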


VALIDITY: INTERNAL AND EXTERNAL

  • Internal validity concerns the results of each individual study; it is impaired by study bias.
  • External validity concerns whether the inference is pertinent to the general population. Traditionally, results are generalized if the sample is representative of the population. In practice, generalizability is achieved by looking at the results of several studies, each of which is individually internally valid. It is therefore not the objective of each individual study to be generalizable, because that would require assembling a representative sample.


PRECISION

  • Precision is a measure of the lack of random error.
  • An effect measure with a narrow confidence interval is said to be precise. An effect measure with a wide confidence interval is imprecise.
  • Precision is increased in three ways: increasing the study size, increasing study efficiency, and care taken in the measurement of variables to decrease mistakes.
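The first route to precision, increasing the study size, can be quantified through the width of a confidence interval. A minimal sketch, using the normal-approximation 95% interval for a mean (the SD of 12.0 is assumed):

```python
# Width of an approximate 95% confidence interval for a mean,
# 2 * 1.96 * SD / sqrt(n): it narrows as the study size grows.
import math

def ci_width_95(sd, n):
    """Approximate width of a 95% CI for a mean (normal approximation)."""
    return 2 * 1.96 * sd / math.sqrt(n)

for n in (50, 200, 800):
    print(f"n={n:4d}  95% CI width = {ci_width_95(sd=12.0, n=n):.2f}")
```

A narrower interval at larger n is exactly what "an effect measure with a narrow confidence interval is precise" means in practice: quadrupling n halves the interval width.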


TYPES OF QUESTIONNAIRE VALIDITY:

  • CRITERION VALIDITY: This is how well a questionnaire item predicts the outcome. Predictive validity studies take a long time and require a large sample size.
  • FACE VALIDITY: This is a subjective judgment of whether the questionnaire is relevant, reasonable, clear, etc. Experts judge each item as 'yes/no', and the kappa statistic of inter-rater agreement between assessors is applied to determine face validity. Face validity is a weak form of validation because it is based on subjectivity.
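The kappa statistic mentioned above corrects raw agreement for agreement expected by chance. A self-contained sketch of Cohen's kappa for two experts' yes/no judgements (the ratings below are invented):

```python
# Cohen's kappa for two raters: (observed agreement - chance agreement)
# divided by (1 - chance agreement).
def cohen_kappa(rater1, rater2):
    """Inter-rater agreement corrected for chance, for two label sequences."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    labels = set(rater1) | set(rater2)
    # Chance agreement: product of each rater's marginal label proportions.
    expected = sum((rater1.count(lab) / n) * (rater2.count(lab) / n)
                   for lab in labels)
    return (observed - expected) / (1 - expected)

r1 = ["yes", "yes", "no", "yes", "no", "yes", "yes", "no"]
r2 = ["yes", "yes", "no", "no",  "no", "yes", "yes", "yes"]
print(f"kappa = {cohen_kappa(r1, r2):.2f}")
```

Here the experts agree on 6 of 8 items (75%), but because chance agreement is about 53%, kappa is only about 0.47, illustrating why raw percent agreement overstates consensus.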


TYPES OF VALIDITY:

  • CONTENT VALIDITY: how well the questionnaire items reflect the universe of content to which the instrument will be applied. Content validity is based on literature review and expert opinion. A statistical test is applied to the expert assessments.
  • CONSTRUCT VALIDITY: how well a concept or idea has been transformed into a functioning reality, demonstrated either by discrimination from other constructs or by convergence with related constructs. Factor analysis techniques are used to test for construct validity.
  • RELIABILITY: how consistent the results are.
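One common way to quantify the expert assessments used for content validity is the item-level content validity index (I-CVI), the proportion of experts rating an item as relevant. This is an illustrative sketch, not the specific test the lecture has in mind, and the ratings below are invented:

```python
# Item-level content validity index: proportion of experts who rate
# the item as relevant.
def item_cvi(ratings):
    """Fraction of experts rating the item 'relevant'."""
    return sum(r == "relevant" for r in ratings) / len(ratings)

experts = ["relevant", "relevant", "relevant", "not relevant", "relevant"]
print(f"I-CVI = {item_cvi(experts):.2f}")
```

An item endorsed by 4 of 5 experts has an I-CVI of 0.80; a commonly cited convention is to retain items with an I-CVI of about 0.78 or higher.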