Recent efforts in biomedical visual question answering (VQA) research rely on combined information from the image content and the surrounding text that supports the figure. Biomedical journals are a rich source of information for such multimodal content indexing. For multipanel figures in these journals, it is critical to develop automatic figure panel splitting and label recognition algorithms that associate individual panels with text metadata in the figure caption and the body of the article. Challenges in this task include large variations in figure panel layout, label location, size, contrast against the background, and so on. In this work, we propose a deep convolutional neural network that splits the panels and recognizes the panel labels in a single step. Visual features are extracted from several layers at various depths of the backbone network and organized into a feature pyramid. These features are fed into classification and regression networks to generate candidate panels and candidate labels, and the candidates are then merged into the final panel segmentation result by a beam search algorithm. We evaluated the proposed algorithm on the ImageCLEF data set and achieved better performance than the results reported in the literature. To investigate the proposed algorithm more thoroughly, we also collected and annotated our own data set of 10,642 figures. Experiments in which the model was trained on 9,642 figures and evaluated on the remaining 1,000 show that panel splitting and panel label recognition mutually benefit each other.
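
The abstract describes the pipeline only at a high level; as a rough, non-authoritative sketch of the final merging step, the Python snippet below pairs panel-box candidates with panel-label candidates using a small beam search over joint detection scores. All names here (Candidate, merge_candidates, the center-containment compatibility test, and the beam width of 5) are illustrative assumptions rather than the authors' actual implementation.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) in image coordinates

@dataclass
class Candidate:
    box: Box
    score: float                 # detection confidence from the network heads
    label: Optional[str] = None  # recognized character for label candidates, None for panels

def center(box: Box) -> Tuple[float, float]:
    x1, y1, x2, y2 = box
    return (x1 + x2) / 2.0, (y1 + y2) / 2.0

def contains(panel: Box, point: Tuple[float, float]) -> bool:
    x, y = point
    x1, y1, x2, y2 = panel
    return x1 <= x <= x2 and y1 <= y <= y2

def merge_candidates(panels, labels, beam_width=5):
    """Beam search over joint (panel, label) assignments.

    Each beam state is (assignments, used_label_indices, total_score).
    At every step a panel is either left unlabeled or paired with an
    unused label whose center falls inside the panel box."""
    beams = [([], frozenset(), 0.0)]
    for panel in sorted(panels, key=lambda c: -c.score):
        expanded = []
        for assign, used, total in beams:
            # Option 1: keep the panel without a label.
            expanded.append((assign + [(panel, None)], used, total + panel.score))
            # Option 2: pair it with a compatible, still-unused label.
            for i, lab in enumerate(labels):
                if i in used or not contains(panel.box, center(lab.box)):
                    continue
                expanded.append((assign + [(panel, lab)], used | {i},
                                 total + panel.score + lab.score))
        # Keep only the top-scoring partial assignments.
        beams = sorted(expanded, key=lambda b: -b[2])[:beam_width]
    return beams[0][0]

if __name__ == "__main__":
    panels = [Candidate((0, 0, 100, 100), 0.9), Candidate((100, 0, 200, 100), 0.8)]
    labels = [Candidate((5, 5, 15, 15), 0.95, "A"), Candidate((105, 5, 115, 15), 0.92, "B")]
    for panel, lab in merge_candidates(panels, labels):
        print(panel.box, lab.label if lab else "unlabeled")
```

Keeping several partial assignments alive at each step, rather than committing greedily, is what lets a merging stage of this kind trade off panel confidence against label confidence when layouts are ambiguous.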
               