Stina Westman of the Helsinki University of Technology presented an image categorization study she had completed. The work examined contextual factors in categorization, specifically the effect of surrounding page context on how images are categorized. Prior research has shown that people evaluate images with high-level semantic descriptors, and that people describe image content on interpretational rather than perceptual grounds. The goal of the study was to determine whether context affects categorization and, if so, how strong the effect is. They worked with professional magazine image archivists, split into two groups: one with context, one without. The participants began with a free sorting exercise, followed by reassignment of images to multiple categories.
The number of categories, the time taken to sort the photos, and the number of times an image was placed into a category did not differ significantly between groups. The types of categories created, however, did differ significantly. Added context resulted in more categories based on theme and story, as opposed to function or the objects in the photo. With context present, images of people were grouped as fictional or real, and nonliving items were grouped as symbolic, objects, or scenes. Without context, people were grouped as posed photos versus action photos, and nonliving items were grouped as interiors, objects, or scenes. Without context, images were also more often placed into multifaceted categories, with more hierarchy in the structure.
Text was seen either to anchor the image, explaining it and why it was published, or to elaborate on and extend the image. This means the text accompanying an image can steer how archivists categorize it, so we can influence how an image will be categorized. It also implies that text data mining can be applied to image categorization through automated software.