Humans use visual as well as auditory speech signals to recognize spoken words. A variety of systems have been investigated for performing this task. The main purpose of this research was to systematically compare the performance of a range of dynamic visual features on a speechreading task. We found that normalization of images to eliminate variation due to translation, scale, and planar rotation yielded substantial improvements in generalization performance regardless of the visual representation used. In addition, the dynamic information in the difference between successive frames yielded better performance than optical-flow-based approaches, and compression by local low-pass filtering surprisingly outperformed global principal components analysis (PCA). These results are examined and possible explanations are explored.
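The two best-performing choices named above (successive-frame differences as dynamic features, followed by local low-pass compression) can be illustrated with a minimal NumPy sketch. This is an assumed, simplified pipeline for illustration only, not the authors' implementation: block averaging stands in for local low-pass filtering plus downsampling, and the array shapes are arbitrary.

```python
import numpy as np

def frame_differences(frames):
    # frames: (T, H, W) array of grayscale mouth-region images.
    # Dynamic features: pixelwise difference between successive frames.
    return np.diff(frames.astype(np.float64), axis=0)

def local_lowpass(frames, factor=4):
    # Compress each frame by averaging non-overlapping factor x factor
    # blocks -- a simple local low-pass filter followed by downsampling.
    T, H, W = frames.shape
    f = frames[:, :H - H % factor, :W - W % factor].astype(np.float64)
    return f.reshape(T, f.shape[1] // factor, factor,
                     f.shape[2] // factor, factor).mean(axis=(2, 4))

# Hypothetical input: 10 frames of 32x32 (already normalized for
# translation, scale, and planar rotation).
frames = np.random.rand(10, 32, 32)
deltas = frame_differences(frames)        # shape (9, 32, 32)
features = local_lowpass(deltas, factor=4)  # shape (9, 8, 8)
```

Note that differencing drops one frame (T-1 feature vectors from T frames), and block averaging reduces each frame's dimensionality by a factor of 16 here, analogous to the compression the abstract compares against global PCA.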