Abstract Images are an essential feature of many social networking services, such as Facebook, Instagram, and Twitter. Through brand-related images, consumers communicate about brands with each other and link the brand with rich contextual and consumption experiences. However, previous marketing research has concentrated on deriving brand information from textual user-generated content and has largely neglected brand-related images. Analyzing brand-related images poses at least two challenges: first, the content displayed in images is heterogeneous, and second, images rarely reveal what users think and feel in or about the situations displayed. To meet these challenges, this article presents a two-step approach that involves collecting, labeling, clustering, aggregating, mapping, and analyzing brand-related user-generated content. The collected data are brand-related images, caption texts, and social tags posted on Instagram. Clustering images labeled via the Google Cloud Vision API makes it possible to identify the heterogeneous contents (e.g., products) and contexts (e.g., situations) that consumers create content about. Aggregating and mapping the textual information for the resulting image clusters in the form of associative networks enables marketers to derive meaningful insights by inferring what consumers think and feel about their brand with respect to different contents and contexts.
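The two-step approach described in the abstract can be sketched as follows. This is a minimal, self-contained illustration, not the authors' implementation: the posts, labels, and captions are invented stand-in data (real labels would come from the Google Cloud Vision API), and a simple greedy Jaccard-overlap grouping stands in for whatever clustering method the article actually uses.

```python
from collections import Counter
from itertools import combinations

# Hypothetical stand-in data: image labels as an image-labeling service
# might return them, plus the caption text posted with each image.
posts = [
    {"labels": ["coffee", "cup", "table"],    "caption": "morning ritual with my favourite brand"},
    {"labels": ["coffee", "mug", "latte"],    "caption": "best latte ever love this brand"},
    {"labels": ["shoe", "sneaker", "street"], "caption": "new sneakers feel amazing"},
    {"labels": ["sneaker", "running", "road"],"caption": "running in my new shoes"},
]

def jaccard(a, b):
    """Label-set overlap between two images."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Step 1: cluster images by label overlap (a simplified stand-in for
# clustering over label vectors). Each cluster is a list of post indices.
clusters = []
for i, post in enumerate(posts):
    for cluster in clusters:
        # Join an existing cluster if the image shares enough labels
        # with that cluster's first member; 0.2 is an arbitrary threshold.
        if jaccard(post["labels"], posts[cluster[0]]["labels"]) >= 0.2:
            cluster.append(i)
            break
    else:
        clusters.append([i])

# Step 2: aggregate caption words per image cluster into word
# co-occurrence counts, which serve as the edge weights of a simple
# associative network for that content/context cluster.
networks = []
for cluster in clusters:
    edges = Counter()
    for i in cluster:
        words = sorted(set(posts[i]["caption"].split()))
        edges.update(combinations(words, 2))
    networks.append(edges)
```

With this toy data, the two coffee posts and the two sneaker posts each form one cluster, and each cluster's network links caption words such as "brand" and "latte", giving a rough picture of what consumers say in each consumption context.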