Variational saliency maps for dynamic image sequences

Author(s)
Aniello Raffaele Patrone, Christian Valuch, Ulrich Ansorge, Otmar Scherzer
Abstract

Saliency maps are an important tool for modeling and predicting human eye movements, but their application to dynamic image sequences, such as videos, is still limited. In this work we propose a variational approach for determining saliency maps in dynamic image sequences. The main idea is to merge information from static saliency maps, computed on every single frame of a video (Itti & Koch, 2001), with the optical flow (Horn & Schunck, 1981; Weickert & Schnörr, 2001) representing motion between successive video frames. Including motion information in saliency maps is not novel in itself, but our variational approach offers a new solution to the problem, one that is also conceptually compatible with successive stages of visual processing in the human brain. We present the basic concept and compare our modeling results to classical methods of saliency and optical flow computation. In addition, we present preliminary eye-tracking results from an experiment in which 24 participants viewed 80 real-world dynamic scenes. Our data suggest that the proposed algorithm allows feasible and computationally cheap modeling of human attention in videos.
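The abstract does not spell out the energy functional, but the general recipe (a per-frame static saliency map combined with a motion cue derived from optical flow between frames) can be illustrated with a short sketch. The following Python snippet is a minimal, non-variational stand-in: it uses OpenCV's spectral-residual saliency in place of the Itti-Koch model and Farnebäck optical flow in place of a variational Horn-Schunck flow, and the additive blend with the hypothetical weight ALPHA is an assumption for illustration, not the authors' method.

# Illustrative sketch only, not the paper's algorithm: blends per-frame
# static saliency with optical-flow magnitude as a crude dynamic
# saliency map. Requires opencv-contrib-python for cv2.saliency.
import cv2
import numpy as np

ALPHA = 0.5  # hypothetical weight between static and motion cues

saliency = cv2.saliency.StaticSaliencySpectralResidual_create()

def dynamic_saliency(prev_frame, curr_frame):
    """Blend static saliency of the current frame with motion energy."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)

    # Static saliency of the current frame (values in [0, 1]).
    ok, s_static = saliency.computeSaliency(curr_frame)
    s_static = s_static.astype(np.float32)

    # Dense optical flow between consecutive frames; the Farneback
    # parameters below are OpenCV's commonly used defaults.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    motion = np.linalg.norm(flow, axis=2).astype(np.float32)
    motion /= motion.max() + 1e-8  # normalize motion magnitude to [0, 1]

    # Simple convex combination of static and motion cues (assumed form).
    return (1.0 - ALPHA) * s_static + ALPHA * motion

In the variational setting of the paper, such a combination would presumably arise as the minimizer of an energy functional coupling a static-saliency data term with a flow-driven temporal term, rather than as a fixed pixel-wise blend.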

Organisation(s)
Vienna Cognitive Science Hub, Department of Cognition, Emotion, and Methods in Psychology
Pages
224
Publication date
2015
Austrian Fields of Science 2012
101028 Mathematical modelling
Portal url
https://ucris.univie.ac.at/portal/en/publications/variational-saliency-maps-for-dynamic-image-sequences(f8893028-dc77-4134-aa20-781580afadcb).html