Word-position-independent and word-position-dependent n-gram probabilities were estimated from a large corpus of English text. A text-recognition problem was simulated, and, using the estimated n-gram probabilities, four experiments were conducted with the following methods of classification: the context-free Bayes algorithm, the recursive Bayes algorithm of Raviv, the modified Viterbi algorithm, and a heuristic approximation to the recursive Bayes algorithm. The methods are compared on the basis of the probabilities of misclassification estimated in the four experiments. The heuristic approximation to the recursive Bayes algorithm reduced computation without degrading performance.
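To illustrate the kind of contextual classification the abstract describes, the following is a minimal sketch of Viterbi decoding of a character sequence using bigram transition probabilities. It is not the paper's modified algorithm; the channel (confusion) probabilities and bigram values below are purely illustrative assumptions.

```python
import math

def viterbi_decode(obs_logprobs, trans_logprobs):
    """Most probable letter sequence given per-position channel
    log-likelihoods and bigram transition log-probabilities.

    obs_logprobs: list over positions; each entry maps a candidate
        letter to log P(observation | letter).
    trans_logprobs: dict mapping (prev_letter, letter) to the log
        bigram probability; missing pairs are treated as near-impossible.
    """
    # Initialize paths with the first position's channel scores.
    best = {c: (lp, [c]) for c, lp in obs_logprobs[0].items()}
    for scores in obs_logprobs[1:]:
        new_best = {}
        for c, lp in scores.items():
            # Pick the predecessor maximizing path score plus bigram term.
            prev, (score, _) = max(
                best.items(),
                key=lambda kv: kv[1][0] + trans_logprobs.get((kv[0], c), -1e9),
            )
            new_best[c] = (
                score + trans_logprobs.get((prev, c), -1e9) + lp,
                best[prev][1] + [c],
            )
        best = new_best
    return max(best.values())[1]

# Toy example: the last character is ambiguous between 'e' and 'c';
# English bigram context favors "the" over "thc".
obs = [
    {"t": math.log(0.9)},
    {"h": math.log(0.9)},
    {"e": math.log(0.4), "c": math.log(0.6)},
]
trans = {
    ("t", "h"): math.log(0.9),
    ("h", "e"): math.log(0.5),
    ("h", "c"): math.log(0.01),
}
print(viterbi_decode(obs, trans))  # → ['t', 'h', 'e']
```

The decoder resolves the channel's preference for 'c' in favor of 'e' because the bigram ("h", "e") is far more probable than ("h", "c"), which is the essence of using n-gram context to reduce misclassification.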