Proc. SPIE. 7240, Human Vision and Electronic Imaging XIV
KEYWORDS: Detection and tracking algorithms, Image segmentation, Image processing, Human vision and color perception, Motion models, Electronic imaging
Current automatic sign language recognition (ASLR) seldom uses perceptual knowledge about how humans recognize
sign language. Using such knowledge can improve ASLR because it can indicate which elements
or phases of a sign are important for its meaning. Moreover, the current generation of data-driven ASLR methods
has shortcomings that may not be solvable without knowledge of human sign language processing.
Handling variation in the precise execution of signs is one such shortcoming: data-driven methods
(which include almost all current methods) have difficulty recognizing signs that deviate too much from the
examples used to train the method. Insight into human sign processing is needed to solve these
problems. Perceptual research on sign language can provide such insights. This paper discusses knowledge
derived from a set of sign perception experiments, and the application of such knowledge in ASLR. Among
the findings are that not all phases and elements of a sign are equally informative, that defining the
'correct' form of a sign is not trivial, and that statistical ASLR methods do not necessarily arrive at sign
representations that resemble those of human beings. Apparently, current ASLR methods differ markedly
from human observers: their way of learning gives them different sign definitions, they treat every moment
and element of a sign as equally important, and they employ a single definition of 'correct' for all circumstances.
If the goal is for an ASLR method to handle natural sign language, then the insights from sign perception
research must be integrated into ASLR.