The BLSTM-CTC is a novel recurrent neural network architecture that has outperformed previous state-of-the-art algorithms in tasks such as speech recognition and handwriting recognition. It is able to model long-term dependencies in temporal signals in order to label unsegmented data. This paper describes different ways of combining features within a BLSTM-CTC architecture. We explore not only low-level combination (in the feature space) but also mid-level combination (in the internal system representation) and high-level combination (at decoding time). The approaches are compared on the RIMES word database. Our results show that the low-level combination works best, thanks to the powerful data modeling of the LSTM neurons.
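To make the low-level (feature-space) combination concrete, the sketch below simply concatenates two per-frame feature streams and feeds the result to a single BLSTM trained with CTC. The framework (PyTorch), layer sizes, feature dimensions, and alphabet size are illustrative assumptions, not the original experimental setup.

```python
# Minimal sketch (assumed PyTorch implementation) of low-level feature combination:
# two per-frame feature streams are concatenated and fed to one BLSTM + CTC.
import torch
import torch.nn as nn

class EarlyFusionBLSTMCTC(nn.Module):
    def __init__(self, dim_a=20, dim_b=36, hidden=100, n_chars=80):
        super().__init__()
        # One BLSTM sees the concatenation of both feature sets (feature-space fusion).
        self.blstm = nn.LSTM(input_size=dim_a + dim_b, hidden_size=hidden,
                             bidirectional=True, batch_first=True)
        # Linear layer maps to the character classes plus one CTC blank.
        self.out = nn.Linear(2 * hidden, n_chars + 1)

    def forward(self, feats_a, feats_b):
        x = torch.cat([feats_a, feats_b], dim=-1)    # (batch, frames, dim_a + dim_b)
        h, _ = self.blstm(x)
        return self.out(h).log_softmax(-1)           # per-frame label posteriors for CTC

# Toy usage: 4 word images, 150 frames each, with hypothetical feature dimensions.
model = EarlyFusionBLSTMCTC()
feats_a = torch.randn(4, 150, 20)                    # e.g. pixel-column features
feats_b = torch.randn(4, 150, 36)                    # e.g. handcrafted geometric features
log_probs = model(feats_a, feats_b).transpose(0, 1)  # CTCLoss expects (frames, batch, classes)

targets = torch.randint(1, 81, (4, 10))              # dummy character label sequences
loss = nn.CTCLoss(blank=0)(log_probs, targets,
                           input_lengths=torch.full((4,), 150),
                           target_lengths=torch.full((4,), 10))
loss.backward()
```

The mid-level and high-level variants would instead merge the BLSTM hidden representations of separate networks, or combine their decoded outputs, respectively.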
In this article, we propose a hybrid model for spotting words and regular expressions (REGEX) in handwritten documents. The model combines a state-of-the-art BLSTM (Bidirectional Long Short-Term Memory) neural network, which recognizes and segments characters, with an HMM that builds line models able to spot the desired sequences. Experiments on the RIMES database show very promising results.
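As a rough illustration of the hybrid idea (not the authors' exact model), the BLSTM provides per-frame character posteriors and a line model then scores the target word against a "filler" path that absorbs everything else. The sketch below does this with a simple Viterbi-style dynamic program; all names, dimensions, and the scoring scheme are simplifying assumptions.

```python
# Illustrative sketch (assumed, simplified) of word spotting on top of per-frame
# character posteriors: a Viterbi-style alignment of the target word is compared
# against a filler path that may emit any character at each frame.
import numpy as np

def align_word(log_probs, word_ids):
    """Best log-score of aligning `word_ids` (character indices) to all T frames,
    each character occupying at least one consecutive frame, in order."""
    T, _ = log_probs.shape
    K = len(word_ids)
    dp = np.full((T + 1, K + 1), -np.inf)
    dp[0, 0] = 0.0
    for t in range(1, T + 1):
        for k in range(1, K + 1):
            emit = log_probs[t - 1, word_ids[k - 1]]
            # Either stay in character k, or enter it from character k-1.
            dp[t, k] = emit + max(dp[t - 1, k], dp[t - 1, k - 1])
    return dp[T, K]

def spotting_score(log_probs, word_ids):
    """Log-odds of the word model against a filler model that freely emits
    the most likely character at every frame."""
    filler = log_probs.max(axis=1).sum()
    return align_word(log_probs, word_ids) - filler

# Toy usage with hypothetical BLSTM outputs: 60 frames, 27 character classes.
rng = np.random.default_rng(0)
log_probs = np.log(rng.dirichlet(np.ones(27), size=60))  # per-frame posteriors
print(spotting_score(log_probs, word_ids=[2, 0, 19]))     # e.g. the word "cat"
```

A REGEX query would be handled the same way, except that the single word model is replaced by a small automaton over character classes built from the expression.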