The theory of signal detectability typically fits data from Yes-No detection experiments by assuming a particular form for the noise and signal plus noise distributions of the Observer. Previous work suggests that estimates of the Observer's sensitivity are little affected by small discrepancies between the assumed distributions (usually Gaussian) and the Observer's true underlying distributions. Possibly for this reason, estimates of the Observer's choice of criterion or likelihood ratio suggesting suboptimal performance have also been taken at face value. It is, for example, commonly accepted that human Observers are conservative: They are said to choose criteria corresponding to likelihood ratios that are closer to 1 than the ratios produced by optimal criteria. We demonstrate that estimates of likelihood ratio can be markedly biased when the distributions assumed in estimation are not the Observer's true distributions. We derive necessary and sufficient conditions for an optimal Observer to appear conservative when fitted by distributions different from those governing his choices. These results raise a fundamental question: What information about the Observer's underlying noise and signal plus noise distributions does the Observer's performance in a Yes-No detection task provide? We demonstrate that a small number of isosensitivity (ROC) curves completely determines the Observer's underlying noise and signal plus noise distributions for many familiar forms of the theory of signal detectability. These results open up the possibility of a semiparametric theory of signal detectability.
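The central claim — that an optimal Observer can look conservative when fitted with the wrong distributional family — can be illustrated numerically. The sketch below is not taken from the paper: the logistic true distributions, the separation `MU`, and the optimal criterion `BETA_OPT` are illustrative assumptions. It simulates an Observer whose true noise and signal plus noise distributions are logistic, who places the criterion at the optimal likelihood ratio, and whose hit and false-alarm rates are then fitted with the standard equal-variance Gaussian model.

```python
import math
from statistics import NormalDist

# Illustrative assumptions (not the paper's derivation):
MU = 2.0        # separation of the true logistic distributions
BETA_OPT = 3.0  # optimal likelihood-ratio criterion (e.g. unequal priors)

def logistic_cdf(x, loc=0.0):
    return 1.0 / (1.0 + math.exp(-(x - loc)))

def logistic_pdf(x, loc=0.0):
    e = math.exp(-(x - loc))
    return e / (1.0 + e) ** 2

def likelihood_ratio(x):
    # f(x | signal + noise) / f(x | noise) for the logistic family
    return logistic_pdf(x, loc=MU) / logistic_pdf(x)

# The logistic likelihood ratio is monotone in x, so the optimal
# criterion (where LR = BETA_OPT) can be found by bisection.
lo, hi = -20.0, 20.0
for _ in range(200):
    mid = (lo + hi) / 2.0
    if likelihood_ratio(mid) < BETA_OPT:
        lo = mid
    else:
        hi = mid
x_c = (lo + hi) / 2.0

# True hit and false-alarm rates of this optimal Observer.
hit = 1.0 - logistic_cdf(x_c, loc=MU)
fa = 1.0 - logistic_cdf(x_c)

# Fit with the (mismatched) equal-variance Gaussian model: recover the
# implied criterion from the z-transformed rates and read off beta.
z = NormalDist().inv_cdf
z_h, z_f = z(hit), z(fa)
beta_hat = math.exp((z_f ** 2 - z_h ** 2) / 2.0)

print(f"true optimal beta = {BETA_OPT}, Gaussian-estimated beta = {beta_hat:.3f}")
```

With these assumed parameters the Gaussian fit returns a likelihood ratio between 1 and `BETA_OPT`, i.e. the optimal Observer is misdiagnosed as conservative — the direction of bias the abstract describes, though the general conditions under which it arises are the paper's subject.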