An End-to-End Trainable Neural Network for Image-based Sequence Recognition and Its Application to Scene Text Recognition (PAMI 2017)

Baoguang Shi, Xiang Bai, Cong Yao
IEEE Transactions on Pattern Analysis and Machine Intelligence [pdf]

Abstract
Image-based sequence recognition has been a longstanding research topic in computer vision. In this paper, we investigate the problem of scene text recognition, which is among the most important and challenging tasks in image-based sequence recognition. A novel neural network architecture, which integrates feature extraction, sequence modeling and transcription into a unified framework, is proposed. Compared with previous systems for scene text recognition, the proposed architecture possesses four distinctive properties: (1) It is end-to-end trainable, in contrast to most existing algorithms whose components are separately trained and tuned. (2) It naturally handles sequences of arbitrary lengths, involving no character segmentation or horizontal scale normalization. (3) It is not confined to any predefined lexicon and achieves remarkable performance in both lexicon-free and lexicon-based scene text recognition tasks. (4) It produces an effective yet much smaller model, which is more practical for real-world application scenarios. Experiments on standard benchmarks, including the IIIT-5K, Street View Text and ICDAR datasets, demonstrate the superiority of the proposed algorithm over the prior art. Moreover, the proposed algorithm performs well on the task of image-based music score recognition, which clearly verifies its generality.


Method

Figure 1. The network architecture. The architecture consists of three parts:
1) convolutional layers, which extract a feature sequence from the input image;
2) recurrent layers, which predict a label distribution for each frame;
3) transcription layer, which translates the per-frame predictions into the final label sequence.
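The transcription layer above can be illustrated with a minimal sketch of CTC-style best-path decoding: per-frame predictions are collapsed by merging consecutive repeats and then dropping the blank symbol. The function and symbol names below are illustrative, not taken from the paper's code.

```python
BLANK = "-"  # hypothetical blank symbol used by the CTC-style decoder

def greedy_transcribe(per_frame_labels):
    """Best-path decoding: merge consecutive repeats, then remove blanks."""
    out = []
    prev = None
    for lab in per_frame_labels:
        # Keep a label only when it differs from the previous frame
        # and is not the blank symbol.
        if lab != prev and lab != BLANK:
            out.append(lab)
        prev = lab
    return "".join(out)

# Example: 16 per-frame predictions collapse to a 5-character word.
print(greedy_transcribe(list("--hh-e-ll-lloo--")))  # -> "hello"
```

In practice the per-frame labels are the argmax of the recurrent layers' label distributions; full CTC training additionally marginalizes over all frame alignments rather than taking only the best path.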
Figure 2. The receptive field.
Figure 3. The LSTM structure used in the paper.

Results

Table 1. Recognition accuracies (%) on four datasets.
In the second row, “50”, “1k”, “50k” and “Full” denote the lexicon used, and “None” denotes recognition without a lexicon.
Figure 4. Blue line graph: recognition accuracy as a function of the parameter δ.
Red bars: lexicon search time per sample. Tested on the IC03 dataset with the 50k lexicon.
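The role of δ can be sketched as follows: in lexicon-based recognition, the lexicon-free prediction is matched against lexicon words within edit distance δ, and the nearest candidate is returned. This is an illustrative linear-scan version (the paper accelerates the search with a BK-tree); all names here are assumptions.

```python
def edit_distance(a, b):
    """Levenshtein distance via the standard dynamic-programming recurrence."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            # dp[j]: deletion, dp[j-1]: insertion, prev: match/substitution
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[-1]

def lexicon_search(prediction, lexicon, delta):
    """Return the lexicon word nearest to the lexicon-free prediction,
    considering only words within edit distance delta."""
    candidates = [w for w in lexicon if edit_distance(prediction, w) <= delta]
    if not candidates:
        return prediction  # fall back to the unconstrained prediction
    return min(candidates, key=lambda w: edit_distance(prediction, w))

print(lexicon_search("hcllo", ["hello", "world"], delta=1))  # -> "hello"
```

A larger δ admits more candidates, which raises accuracy but also the per-sample search time, matching the trade-off shown in Figure 4.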

BibTeX

@article{shi2017end,
  title={An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition},
  author={Shi, Baoguang and Bai, Xiang and Yao, Cong},
  journal={IEEE transactions on pattern analysis and machine intelligence},
  volume={39},
  number={11},
  pages={2298--2304},
  year={2017},
  publisher={IEEE}
}