Two open-source projects for image aesthetic quality assessment

Here we introduce two open-source projects for image aesthetic quality assessment. The first is ILGnet, an open-source project for the aesthetic evaluation of images based on a convolutional neural network. The second is CJS-CNN, an open-source project for predicting the aesthetic score distribution of human ratings.

ILGnet. An image aesthetic classification method based on a deep convolutional neural network can classify the aesthetic quality of an image as "good" or "bad". This method combines local and global features in a newly designed deep convolutional neural network, ILGnet, which learns the image aesthetic classification model from approximately 230,000 images in the large-scale database for aesthetic visual analysis (AVA). The AVA dataset contains 255,530 valid images collected from a well-known online photography community; each image is scored by 78–539 registered members, with an average of 210 raters per image. The dataset is a recognized benchmark in the field of image aesthetic evaluation: the annotations are of high quality and support studies of aesthetic classification, aesthetic scoring, and aesthetic score distributions. ILGnet automatically distinguishes images of high aesthetic quality from images of low aesthetic quality. It uses a new convolutional network structure for image classification and fuses information from different receptive fields to extract aesthetic features. Compared with previous methods, the classification accuracy is greatly improved: we achieve 81.68% accuracy on the AVA dataset, and up to 82.66% when using the Inception V4 module [1, 2].

CJS-CNN. Classification and scoring approaches to image aesthetic assessment usually express the aesthetic quality of an image as a single scalar and largely ignore the diversity, subjectivity, and individuality of human aesthetics within a broad consensus. Nearly all image recognition tasks have standard answers, but aesthetic judgments rarely do; this is the biggest difference between aesthetic evaluation and general image recognition. The probability distribution of an image's aesthetic scores can describe this subjectivity to a certain extent: the variance reflects the degree of human consensus on an image, and the kurtosis reflects its popularity. In this study, a convolutional neural network based on the cumulative distribution with Jensen-Shannon divergence (CJS-CNN) is proposed to predict the score distribution of image aesthetic quality, in contrast to previous evaluations that output a single scalar. This method gives the distribution of aesthetic scores for an image rather than a single score.
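To make the local/global fusion idea behind ILGnet concrete, here is a minimal PyTorch sketch. It is a toy illustration only: the class name, layer sizes, branch depths, and the simple concatenation head are assumptions chosen for brevity, not the released ILGnet architecture, which builds on Inception-style modules.

```python
import torch
import torch.nn as nn

class LocalGlobalAestheticNet(nn.Module):
    """Toy ILGnet-style classifier: features from an early (local) stage and
    a late (global) stage are pooled, concatenated, and fed to a binary
    good/bad aesthetic head."""
    def __init__(self):
        super().__init__()
        # Shallow layers capture local texture/composition cues.
        self.local_branch = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Deeper layers capture global, semantic-level cues.
        self.global_branch = nn.Sequential(
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Fused local + global features -> {low, high} aesthetic quality.
        self.classifier = nn.Linear(64 + 256, 2)

    def forward(self, x):
        local_feat = self.local_branch(x)
        global_feat = self.global_branch(local_feat)
        fused = torch.cat([self.pool(local_feat).flatten(1),
                           self.pool(global_feat).flatten(1)], dim=1)
        return self.classifier(fused)

# A batch of 224x224 RGB images yields one logit pair per image.
logits = LocalGlobalAestheticNet()(torch.randn(4, 3, 224, 224))
print(logits.shape)  # torch.Size([4, 2])
```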
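The abstract reads the variance of the human score distribution as a measure of consensus and the kurtosis as a rough popularity indicator. The NumPy sketch below shows how these statistics are obtained from an AVA-style rating histogram; the vote counts are made up for illustration (210 simulated raters over the 1-10 scale).

```python
import numpy as np

def score_distribution_stats(vote_counts):
    """Summary statistics of an AVA-style rating histogram.

    vote_counts[i] is the number of raters who gave score i+1 (scores 1..10).
    Returns the mean score, the variance (lower variance ~ stronger consensus),
    and the excess kurtosis (used here as a rough popularity indicator).
    """
    counts = np.asarray(vote_counts, dtype=float)
    p = counts / counts.sum()                  # normalize to a probability distribution
    scores = np.arange(1, len(counts) + 1)
    mean = np.sum(p * scores)
    var = np.sum(p * (scores - mean) ** 2)
    kurt = np.sum(p * (scores - mean) ** 4) / var ** 2 - 3.0
    return mean, var, kurt

# Hypothetical histogram: 210 raters, scores concentrated around 6-7.
print(score_distribution_stats([1, 2, 5, 15, 35, 60, 55, 25, 9, 3]))
```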
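CJS-CNN compares predicted and ground-truth score distributions through their cumulative distributions. The sketch below applies a Jensen-Shannon-style divergence to the two cumulative histograms; it is a simplified reading of that idea, not necessarily the exact loss formulation used in the CJS-CNN project.

```python
import numpy as np

def cumulative_js_divergence(p, q, eps=1e-12):
    """Simplified sketch: a Jensen-Shannon-style divergence applied to the
    cumulative distributions of two score histograms."""
    P = np.cumsum(np.asarray(p, dtype=float) / np.sum(p))   # predicted CDF
    Q = np.cumsum(np.asarray(q, dtype=float) / np.sum(q))   # ground-truth CDF
    M = 0.5 * (P + Q)

    def kl(a, b):
        # Elementwise a*log(a/b), summed; eps avoids log(0).
        return np.sum(a * np.log((a + eps) / (b + eps)))

    return 0.5 * kl(P, M) + 0.5 * kl(Q, M)

# Predicted vs. human score histograms over the 1-10 rating scale (made-up data).
pred = [0, 1, 4, 12, 30, 55, 60, 30, 6, 2]
true = [1, 2, 5, 15, 35, 60, 55, 25, 9, 3]
print(cumulative_js_divergence(pred, true))
```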

Keywords: aesthetic quality; image; convolutional neural network; image aesthetic; open source

Journal Title: Science China Information Sciences
Year Published: 2018
