Mihir Jain, Jan C. van Gemert, Hervé Jégou, Patrick Bouthemy, and Cees G. M. Snoek. Action localization by tubelets from motion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Columbus, Ohio, USA, June 2014. [ bib | .pdf ]
This paper considers the problem of action localization, where the objective is to determine when and where certain actions appear. We introduce a sampling strategy to produce 2D+t sequences of bounding boxes, called tubelets. Compared to state-of-the-art alternatives, this drastically reduces the number of hypotheses that are likely to include the action of interest. Our method is inspired by a recent technique introduced in the context of image localization. Beyond considering this technique for the first time for videos, we revisit this strategy for 2D+t sequences obtained from super-voxels. Our sampling strategy advantageously exploits a criterion that reflects how action-related motion deviates from background motion. We demonstrate the merit of our approach by extensive experiments on two public datasets: UCF Sports and MSR-II. Our approach significantly outperforms the state-of-the-art on both datasets, while restricting the search of actions to a fraction of possible bounding box sequences.

 
Thomas Mensink, Efstratios Gavves, and Cees G. M. Snoek. Costa: Co-occurrence statistics for zero-shot classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Columbus, Ohio, USA, June 2014. [ bib | .pdf ]
In this paper we aim for zero-shot classification, that is, visual recognition of an unseen class by using knowledge transfer from known classes. Our main contribution is COSTA, which exploits co-occurrences of visual concepts in images for knowledge transfer. These inter-dependencies arise naturally between concepts, and are easy to obtain from existing annotations or web-search hit counts. We estimate a classifier for a new label as a weighted combination of related classes, using the co-occurrences to define the weights. We propose various metrics to leverage these co-occurrences, and a regression model for learning a weight for each related class. We also show that our zero-shot classifiers can serve as priors for few-shot learning. Experiments on three multi-labeled datasets reveal that our proposed zero-shot methods approach and occasionally outperform fully supervised SVMs. We conclude that co-occurrence statistics suffice for zero-shot classification.
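A minimal sketch of the core idea as stated in the abstract: a classifier for an unseen label built as a co-occurrence-weighted combination of known-class classifiers. The linear classifiers, the toy co-occurrence counts, and the plain normalization are assumptions for illustration, not the paper's estimators or metrics.

```python
import numpy as np

# Hypothetical linear classifiers (weight vectors) for K known concepts,
# all operating on the same D-dimensional image feature.
rng = np.random.default_rng(0)
K, D = 4, 16
known_classifiers = rng.normal(size=(K, D))   # one weight vector per known concept

# Hypothetical co-occurrence counts between the unseen label and each known concept,
# e.g. harvested from existing annotations or web-search hit counts.
cooccurrence = np.array([120.0, 5.0, 60.0, 1.0])

# Weight each known classifier by its (normalized) co-occurrence with the new label.
weights = cooccurrence / cooccurrence.sum()
zero_shot_classifier = weights @ known_classifiers   # (D,) weight vector for the unseen class

# Score a batch of images without any training example of the new label.
images = rng.normal(size=(10, D))
scores = images @ zero_shot_classifier
print(scores.round(2))
```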

 
Koen E. A. van de Sande, Cees G. M. Snoek, and Arnold W. M. Smeulders. Fisher and vlad with flair. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Columbus, Ohio, USA, June 2014. [ bib | .pdf ]
A major computational bottleneck in many current algorithms is the evaluation of arbitrary boxes. Dense local analysis and powerful bag-of-word encodings, such as Fisher vectors and VLAD, lead to improved accuracy at the expense of increased computation time. Where a simplification in the representation is tempting, we exploit novel representations while maintaining accuracy. We start from state-of-the-art, fast selective search, but our method applies to any initial box-partitioning. By representing the picture as sparse integral images, one per codeword, we achieve a Fast Local Area Independent Representation. FLAIR allows for very fast evaluation of any box encoding and still enables spatial pooling. In FLAIR we achieve exact VLAD difference coding, even with l2 and power-norms. Finally, by multiple codeword assignments, we achieve exact and approximate Fisher vectors with FLAIR. The result is an 18x speedup, which enables us to set a new state-of-the-art on the challenging 2010 PASCAL VOC objects and the fine-grained categorization of the CUB-2011 200 bird species. Plus, we rank number one in the official ImageNet 2013 detection challenge.
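A simplified illustration of the decomposition that makes FLAIR fast: with one integral image per codeword, the pooled encoding of any box reduces to a few lookups. The hard-assignment bag-of-words case below is an assumed simplification; the paper extends the idea to VLAD and Fisher vectors with normalization.

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, K = 100, 150, 8               # grid of local descriptors, K codewords

# Hypothetical hard assignments of each local descriptor to a codeword.
assign = rng.integers(0, K, size=(H, W))
counts = np.eye(K)[assign]          # (H, W, K) one-hot codeword indicators

# One integral image per codeword (zero-padded on top/left).
integral = np.zeros((H + 1, W + 1, K))
integral[1:, 1:] = counts.cumsum(0).cumsum(1)

def box_histogram(y0, x0, y1, x1):
    """Bag-of-words histogram of the box [y0:y1, x0:x1) in four lookups per codeword."""
    return (integral[y1, x1] - integral[y0, x1]
            - integral[y1, x0] + integral[y0, x0])

# Any candidate box can now be encoded without revisiting its descriptors.
print(box_histogram(10, 20, 60, 90))
```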

 
Ran Tao, Efstratios Gavves, Cees G. M. Snoek, and Arnold W. M. Smeulders. Locality in generic instance search from one example. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Columbus, Ohio, USA, June 2014. [ bib | .pdf ]

 
Julien van Hout, Eric Yeh, Dennis Koelma, Cees G. M. Snoek, Chen Sun, Ramakant Nevatia, Julie Wong, and Gregory Myers. Late fusion and calibration for multimedia event detection using few examples. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing. Florence, Italy, May 2014. [ bib | .pdf ]
The state-of-the-art in example-based multimedia event detection (MED) rests on heterogeneous classifiers whose scores are typically combined in a late-fusion scheme. Recent studies on this topic have failed to reach a clear consensus as to whether machine learning techniques can outperform rule-based fusion schemes with varying amounts of training data. In this paper, we present two parametric approaches to late fusion: a normalization scheme for arithmetic mean fusion (logistic averaging) and a fusion scheme based on logistic regression, and compare them to widely used rule-based fusion schemes. We also describe how logistic regression can be used to calibrate the fused detection scores to predict an optimal threshold given a detection prior and costs on errors. We discuss the advantages and shortcomings of each approach when the amount of positives available for training varies from 10 positives (10Ex) to 100 positives (100Ex). Experiments were run using video data from the NIST TRECVID MED 2013 evaluation, and results were reported in terms of a ranking metric, mean average precision (mAP), and R0, a cost-based metric introduced in TRECVID MED 2013.
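A small sketch contrasting the two families of late fusion discussed above: rule-based arithmetic-mean fusion of per-classifier scores versus a learned logistic-regression fuser, which also yields probabilities that can be thresholded given a prior and error costs. The data layout is hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_videos, n_classifiers = 200, 5

# Hypothetical detection scores from heterogeneous classifiers, plus event labels.
scores = rng.random((n_videos, n_classifiers))
labels = rng.integers(0, 2, size=n_videos)

# Rule-based late fusion: arithmetic mean of the per-classifier scores.
mean_fused = scores.mean(axis=1)

# Learned late fusion: logistic regression on the score vector, which also gives
# calibrated posteriors that can be thresholded given a prior and error costs.
fuser = LogisticRegression().fit(scores, labels)
lr_fused = fuser.predict_proba(scores)[:, 1]

print(mean_fused[:5].round(2), lr_fused[:5].round(2))
```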

 
Amirhossein Habibian, Thomas Mensink, and Cees G. M. Snoek. Composite concept discovery for zero-shot video event detection. In Proceedings of the ACM International Conference on Multimedia Retrieval. Glasgow, UK, April 2014. [ bib | .pdf ]
We consider automated detection of events in video without the use of any visual training examples. A common approach is to represent videos as classification scores obtained from a vocabulary of pre-trained concept classifiers. Where others construct the vocabulary by training individual concept classifiers, we propose to train classifiers for combinations of concepts composed with Boolean logic operators. We call these concept combinations composite concepts and contribute an algorithm that automatically discovers them from existing video-level concept annotations. We discover composite concepts by jointly optimizing the accuracy of concept classifiers and their effectiveness for detecting events. We demonstrate that by combining concepts into composite concepts, we can train more accurate classifiers for the concept vocabulary, which leads to improved zero-shot event detection. Moreover, we demonstrate that by using different logic operators, namely "AND" and "OR", we discover different types of composite concepts, which are complementary for zero-shot event detection. We perform a search for 20 events in 41K web videos from two test sets of the challenging TRECVID Multimedia Event Detection 2013 corpus. The experiments demonstrate the superior performance of the discovered composite concepts, compared to present-day alternatives, for zero-shot event detection.

 
Amirhossein Habibian and Cees G. M. Snoek. Stop-frame removal improves web video classification. In Proceedings of the ACM International Conference on Multimedia Retrieval. Glasgow, UK, April 2014. [ bib | .pdf ]
Web videos available in sharing sites like YouTube are becoming an alternative to manually annotated training data, which are necessary for creating video classifiers. However, when looking into web videos, we observe they contain several irrelevant frames that may randomly appear in any video, i.e., blank and overexposed frames. We call these irrelevant frames stop-frames and propose a simple algorithm to identify and exclude them during classifier training. Stop-frames might appear in any video, so it is hard to recognize their category. Therefore we identify stop-frames as those frames which are commonly misclassified by any concept classifier. Our experiments demonstrate that using our algorithm improves classification accuracy by 60% in terms of mean average precision for an event and concept detection benchmark.

 
Masoud Mazloom, Xirong Li, and Cees G. M. Snoek. Few-example video event retrieval using tag propagation. In Proceedings of the ACM International Conference on Multimedia Retrieval. Glasgow, UK, April 2014. [ bib | .pdf ]
An emerging topic in multimedia retrieval is to detect a complex event in video using only a handful of video examples. Different from existing work, which learns a ranker from positive video examples and hundreds of negative examples, we aim to query web video for events using zero or only a few visual examples. To that end, we propose in this paper a tag-based video retrieval system which propagates tags from a tagged video source to an unlabeled video collection without the need of any training examples. Our algorithm is based on weighted frequency neighbor voting using concept vector similarity. Once tags are propagated to unlabeled video, we can rely on off-the-shelf language models to rank these videos by tag similarity. We study the behavior of our tag-based video event retrieval system by performing three experiments on web videos from the TRECVID multimedia event detection corpus, with zero, one, and multiple query examples, and show that it beats a recent alternative.
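A toy sketch of the propagation step described above: tags are transferred from a tagged source collection to an unlabeled video by frequency-weighted voting among its nearest neighbors in concept-vector space. The concept vectors, the tag vocabulary, and the cosine similarity are assumptions for illustration.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)
n_source, n_concepts = 50, 20

# Hypothetical concept-score vectors for tagged source videos and one unlabeled video.
source_vectors = rng.random((n_source, n_concepts))
source_tags = [rng.choice(["beach", "dog", "party", "bike", "cooking"],
                          size=3, replace=False).tolist() for _ in range(n_source)]
query_vector = rng.random(n_concepts)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Vote over the k most similar source videos in concept space.
k = 10
sims = np.array([cosine(query_vector, v) for v in source_vectors])
neighbors = np.argsort(-sims)[:k]

votes = Counter()
for idx in neighbors:
    for tag in source_tags[idx]:
        votes[tag] += sims[idx]          # tag frequency weighted by neighbor similarity

print(votes.most_common(3))              # propagated tags for the unlabeled video
```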

 
Chen Sun, Brian Burns, Ram Nevatia, Cees G. M. Snoek, Bob Bolles, Greg Myers, Wen Wang, and Eric Yeh. Isomer: Informative segment observations for multimedia event recounting. In Proceedings of the ACM International Conference on Multimedia Retrieval. Glasgow, UK, April 2014. [ bib | .pdf ]
This paper describes a system for multimedia event detection and recounting. The goal is to detect a high level event class in unconstrained web videos and generate event oriented summarization for display to users. For this purpose, we detect informative segments and collect observations for them, leading to our ISOMER system. We combine a large collection of both low level and semantic level visual and audio features for event detection. For event recounting, we propose a novel approach to identify event oriented discriminative video segments and their descriptions with a linear SVM event classifier. User friendly concepts including objects, actions, scenes, speech and optical character recognition are used in generating descriptions. We also develop several mapping and filtering strategies to cope with noisy concept detectors. Our system performed competitively in the TRECVID 2013 Multimedia Event Detection task with nearly 100,000 videos and was the highest performer in the TRECVID 2013 Multimedia Event Recounting task.

 
Efstratios Gavves, Basura Fernando, Cees G. M. Snoek, Arnold W. M. Smeulders, and Tinne Tuytelaars. Fine-grained categorization by alignments. In Proceedings of the IEEE International Conference on Computer Vision. Sydney, Australia, December 2013. [ bib | .pdf ]
The aim of this paper is fine-grained categorization without human interaction. Different from prior work, which relies on detectors for specific object parts, we propose to localize distinctive details by roughly aligning the objects using just the overall shape, since implicit to fine-grained categorization is the existence of a super-class shape shared among all classes. The alignments are then used to transfer part annotations from training images to test images (supervised alignment), or to blindly yet consistently segment the object in a number of regions (unsupervised alignment). We furthermore argue that in the distinction of fine-grained sub-categories, classification-oriented encodings like Fisher vectors are better suited for describing localized information than popular matching oriented features like HOG. We evaluate the method on the CUB-2011 Birds and Stanford Dogs fine-grained datasets, outperforming the state-of-the-art.

 
Zhenyang Li, Efstratios Gavves, Koen E. A. van de Sande, Cees G. M. Snoek, and Arnold W. M. Smeulders. Codemaps segment, classify and search objects locally. In Proceedings of the IEEE International Conference on Computer Vision. Sydney, Australia, December 2013. [ bib | .pdf ]
In this paper we aim for segmentation and classification of objects. We propose codemaps that are a joint formulation of the classification score and the local neighborhood it belongs to in the image. We obtain the codemap by reordering the encoding, pooling and classification steps over lattice elements. Unlike existing linear decompositions, which emphasize only the efficiency benefits for localized search, we make three novel contributions. As a preliminary, we provide a theoretical generalization of the sufficient mathematical conditions under which image encodings and classification become locally decomposable. As first novelty we introduce l2 normalization for arbitrarily shaped image regions, which is fast enough for semantic segmentation using our Fisher codemaps. Second, using the same lattice across images, we propose kernel pooling which embeds nonlinearities into codemaps for object classification by explicit or approximate feature mappings. Results demonstrate that l2 normalized Fisher codemaps improve the state-of-the-art in semantic segmentation for PASCAL VOC. For object classification the addition of nonlinearities brings us on par with the state-of-the-art, but is 3x faster. Because of the codemaps' inherent efficiency, we can reach significant speed-ups for localized search as well. We exploit the efficiency gain for our third novelty: object segment retrieval using a single query image only.

 
Xirong Li and Cees G. M. Snoek. Classifying tag relevance with relevant positive and negative examples. In Proceedings of the ACM International Conference on Multimedia. Barcelona, Spain, October 2013. [ bib | .pdf ]
Image tag relevance estimation aims to automatically determine whether what people label about images is factually present in the pictorial content. Different from previous works, which either use only positive examples of a given tag or use positive and random negative examples, we argue the importance of relevant positive and relevant negative examples for tag relevance estimation. We propose a system that selects positive and negative examples deemed most relevant with respect to the given tag from crowd-annotated images. While applying models for many tags could be cumbersome, our system trains efficient ensembles of Support Vector Machines per tag, enabling fast classification. Experiments on two benchmark sets show that the proposed system compares favorably against five present-day methods. Given extracted visual features, for each image our system can process up to 3,787 tags per second. The new system is both effective and efficient for tag relevance estimation.

 
Masoud Mazloom, Amirhossein Habibian, and Cees G. M. Snoek. Querying for video events by semantic signatures from few examples. In Proceedings of the ACM International Conference on Multimedia. Barcelona, Spain, October 2013. [ bib | .pdf ]
We aim to query web video for complex events using only a handful of video query examples, where the standard approach learns a ranker from hundreds of examples. We consider a semantic signature representation, consisting of off-the-shelf concept detectors, to capture the variance in semantic appearance of events. Since it is unknown what similarity metric and query fusion to use in such an event retrieval setting, we perform three experiments on unconstrained web videos from the TRECVID event detection task. It reveals that: retrieval with semantic signatures using normalized correlation as similarity metric outperforms a low-level bag-of-words alternative, multiple queries are best combined using late fusion with an average operator, and event retrieval is preferred over event classification when fewer than eight positive video examples are available.
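A minimal example of the ingredients the experiments favor: normalized correlation between semantic signatures as the similarity metric, and average-operator late fusion over multiple query examples. The signatures below are synthetic.

```python
import numpy as np

def normalized_correlation(a, b):
    """Normalized correlation between two semantic signatures (concept-score vectors)."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(a @ b) / len(a)

rng = np.random.default_rng(4)
queries = rng.random((3, 100))       # semantic signatures of a few query examples
database = rng.random((1000, 100))   # signatures of unlabeled web videos

# Late fusion with the average operator: mean similarity over the query examples.
scores = np.array([[normalized_correlation(q, v) for q in queries]
                   for v in database]).mean(axis=1)
ranking = np.argsort(-scores)
print(ranking[:5])
```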

 
Svetlana Kordumova, Xirong Li, and Cees G. M. Snoek. Evaluating sources and strategies for learning video concepts from social media. In International Workshop on Content-Based Multimedia Indexing. Veszprém, Hungary, June 2013. [ bib | www: ]

 
Amirhossein Habibian, Koen E. A. van de Sande, and Cees G. M. Snoek. Recommendations for video event recognition using concept vocabularies. In Proceedings of the ACM International Conference on Multimedia Retrieval, pages 89-96. Dallas, Texas, USA, April 2013. [ bib | .pdf ]
Representing videos using vocabularies composed of concept detectors appears promising for event recognition. While many have recently shown the benefits of concept vocabularies for recognition, the important question of what concepts to include in the vocabulary is ignored. In this paper, we study how to create an effective vocabulary for arbitrary event recognition in web video. We consider four research questions related to the number, the type, the specificity and the quality of the detectors in concept vocabularies. A rigorous experimental protocol using a pool of 1,346 concept detectors trained on publicly available annotations, a dataset containing 13,274 web videos from the Multimedia Event Detection benchmark, 25 event groundtruth definitions, and a state-of-the-art event recognition pipeline allows us to analyze the performance of various concept vocabulary definitions. From the analysis we arrive at the recommendation that for effective event recognition the concept vocabulary should i) contain more than 200 concepts, ii) be diverse by covering object, action, scene, people, animal and attribute concepts, iii) include both general and specific concepts, and iv) increase the number of concepts rather than improve the quality of the individual detectors. We consider the recommendations for video event recognition using concept vocabularies the most important contribution of the paper, as they provide guidelines for future work.

 
Masoud Mazloom, Efstratios Gavves, Koen E. A. van de Sande, and Cees G. M. Snoek. Searching informative concept banks for video event detection. In Proceedings of the ACM International Conference on Multimedia Retrieval, pages 255-262. Dallas, Texas, USA, April 2013. [ bib | .pdf ]
An emerging trend in video event detection is to learn an event from a bank of concept detector scores. Different from existing work, which simply relies on a bank containing all available detectors, we propose in this paper an algorithm that learns from examples what concepts in a bank are most informative per event. We model finding this bank of informative concepts out of a large set of concept detectors as a rare event search. Our proposed approximate solution finds the optimal concept bank using a cross-entropy optimization. We study the behavior of video event detection based on a bank of informative concepts by performing three experiments on more than 1,000 hours of arbitrary internet video from the TRECVID multimedia event detection task. Starting from a concept bank of 1,346 detectors we show that 1.) some concept banks are more informative than others for specific events, 2.) event detection using an automatically obtained informative concept bank is more robust than using all available concepts, 3.) even for small amounts of training examples an informative concept bank outperforms a full bank and a bag-of-word event representation, and 4.) we show qualitatively that the informative concept banks make sense for the events of interest, without being programmed to do so. We conclude that for concept banks it pays to be informative.
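A compact sketch of cross-entropy optimization for picking an informative concept bank: sample candidate banks from per-concept inclusion probabilities, score them, and re-fit the probabilities on the elite samples. The scoring function here is a stand-in; in the paper the objective is event-detection performance on training examples.

```python
import numpy as np

rng = np.random.default_rng(5)
n_concepts, bank_size = 100, 10

# Hypothetical per-concept "informativeness" for one event; the real objective would
# be cross-validated event-detection performance using the sampled concept bank.
true_value = rng.random(n_concepts)

def objective(bank):
    return true_value[bank].sum()

probs = np.full(n_concepts, bank_size / n_concepts)     # inclusion probabilities
for _ in range(50):
    samples = [rng.choice(n_concepts, size=bank_size, replace=False,
                          p=probs / probs.sum()) for _ in range(200)]
    scores = np.array([objective(s) for s in samples])
    elites = [samples[i] for i in np.argsort(-scores)[:20]]   # best candidate banks
    counts = np.bincount(np.concatenate(elites), minlength=n_concepts)
    probs = 0.7 * probs + 0.3 * (counts / counts.sum() * bank_size)  # smoothed update

best_bank = np.argsort(-probs)[:bank_size]
print(sorted(best_bank.tolist()))
```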

 
Davide Modolo and Cees G. M. Snoek. Can object detectors aid internet video event retrieval? In Proceedings of the IS&T/SPIE Symposium on Electronic Imaging. San Francisco, CA, USA, February 2013. [ bib | .pdf ]
The problem of event representation for automatic event detection in Internet videos is acquiring increasing importance, due to its applicability to a large number of applications. Existing methods focus on representing events in terms of either low-level descriptors or domain-specific models suited for a limited class of video only, ignoring the high-level meaning of the events. Ultimately aiming for a more robust and meaningful representation, in this paper we question whether object detectors can aid video event retrieval. We propose an experimental study that investigates the utility of present-day local and global object detectors for video event search. By evaluating object detectors optimized for high-quality photographs on low-quality Internet video, we establish that present-day detectors can successfully be used for recognizing objects in web videos. We use an object-based representation to re-rank the results of an appearance-based event detector. Results on the challenging TRECVID multimedia event detection corpus demonstrate that objects can indeed aid event retrieval. While much remains to be studied, we believe that our experimental study is a first step towards revealing the potential of object-based event representations.

 
Efstratios Gavves, Cees G. M. Snoek, and Arnold W. M. Smeulders. Convex reduction of high-dimensional kernels for visual classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Providence, Rhode Island, USA, June 2012. [ bib | .pdf ]
Limiting factors of fast and effective classifiers for large sets of images are their dependence on the number of images analyzed and the dimensionality of the image representation. Considering the growing number of images as a given, we aim to reduce the image feature dimensionality in this paper. We propose reduced linear kernels that use only a portion of the dimensions to reconstruct a linear kernel. We formulate the search for these dimensions as a convex optimization problem, which can be solved efficiently. Different from existing kernel reduction methods, our reduced kernels are faster and maintain the accuracy benefits from non-linear embedding methods that mimic non-linear SVMs. We show these properties on both the Scenes and PASCAL VOC 2007 datasets. In addition, we demonstrate how our reduced kernels allow us to compress Fisher vectors for use with non-linear embeddings, leading to high accuracy. What is more, without using any labeled examples the selected and weighted kernel dimensions appear to correspond to visually meaningful patches in the images.

 
Xirong Li, Cees G. M. Snoek, Marcel Worring, and Arnold W. M. Smeulders. Fusing concept detection and geo context for visual search. In Proceedings of the ACM International Conference on Multimedia Retrieval. Hong Kong, China, June 2012. Best paper runner-up. [ bib | .pdf ]
Given the proliferation of geo-tagged images, the question of how to exploit geo tags and the underlying geo context for visual search is emerging. Based on the observation that the importance of geo context varies over concepts, we propose a concept-based image search engine which fuses visual concept detection and geo context in a concept-dependent manner. Compared to individual content-based and geo-based concept detectors and their uniform combination, concept-dependent fusion shows improvements. Moreover, since the proposed search engine is trained on social-tagged images alone without the need of human interaction, it is flexible to cope with many concepts. Search experiments on 101 popular visual concepts justify the viability of the proposed solution. In particular, for 79 out of the 101 concepts, the learned weights yield improvements over the uniform weights, with a relative gain of at least 5% in terms of average precision.

 
Daan T. J. Vreeswijk, Koen E. A. van de Sande, Cees G. M. Snoek, and Arnold W. M. Smeulders. All vehicles are cars: Subclass preferences in container concepts. In Proceedings of the ACM International Conference on Multimedia Retrieval. Hong Kong, China, June 2012. [ bib | .pdf ]
This paper investigates the natural bias humans display when labeling images with a container label like vehicle or carnivore. Using three container concepts as subtree root nodes, and all available concepts between these roots and the images from the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset, we analyze the differences between the images labeled at these varying levels of abstraction and the union of their constituting leaf nodes. We find that for many container concepts, a strong preference for one or a few different constituting leaf nodes occurs. These results indicate that care is needed when using hierarchical knowledge in image classification: if the aim is to classify vehicles the way humans do, then cars and buses may be the only correct results.

 
Bauke Freiburg, Jaap Kamps, and Cees G. M. Snoek. Crowdsourcing visual detectors for video search. In Proceedings of the ACM International Conference on Multimedia. Scottsdale, AZ, USA, December 2011. [ bib | .pdf ]
In this paper, we study social tagging at the video fragment-level using a combination of automated content understanding and the wisdom of the crowds. We are interested in the question of whether crowdsourcing can be beneficial to a video search engine that automatically recognizes video fragments on a semantic level. To answer this question, we perform a 3-month online field study with a concert video search engine targeted at a dedicated user-community of pop concert enthusiasts. We harvest the feedback of more than 500 active users and perform two experiments. In experiment 1 we measure user incentive to provide feedback; in experiment 2 we determine the tradeoff between feedback quality and quantity when aggregated over multiple users. Results show that users provide sufficient feedback, which becomes highly reliable when a crowd agreement of 67% is enforced.

 
Xirong Li, Efstratios Gavves, Cees G. M. Snoek, Marcel Worring, and Arnold W. M. Smeulders. Personalizing automated image annotation using cross-entropy. In Proceedings of the ACM International Conference on Multimedia. Scottsdale, AZ, USA, December 2011. [ bib | .pdf ]
Annotating the increasing amounts of user-contributed images in a personalized manner is in great demand. However, this demand is largely ignored by the mainstream of automated image annotation research. In this paper we aim for personalizing automated image annotation by jointly exploiting personalized tag statistics and content-based image annotation. We propose a cross-entropy based learning algorithm which personalizes a generic annotation model by learning from a user’s multimedia tagging history. Using cross-entropy-minimization based Monte Carlo sampling, the proposed algorithm optimizes the personalization process in terms of a performance measurement which can be flexibly chosen. Automatic image annotation experiments with 5,315 realistic users in the social web show that the proposed method compares favorably to a generic image annotation method and a method using personalized tag statistics only. For 4,442 users the performance improves, where for 1,088 users the absolute performance gain is at least 0.05 in terms of average precision. The results show the value of the proposed method.

 
Xirong Li, Cees G. M. Snoek, Marcel Worring, and Arnold W. M. Smeulders. Social negative bootstrapping for visual categorization. In Proceedings of the ACM International Conference on Multimedia Retrieval. Trento, Italy, April 2011. [ bib | .pdf ]
To learn classifiers for many visual categories, obtaining labeled training examples in an efficient way is crucial. Since a classifier tends to misclassify negative examples which are visually similar to positive examples, inclusion of such informative negatives should be stressed in the learning process. However, they are unlikely to be hit by random sampling, the de facto standard in literature. In this paper, we go beyond random sampling by introducing a novel social negative bootstrapping approach. Given a visual category and a few positive examples, the proposed approach adaptively and iteratively harvests informative negatives from a large amount of social-tagged images. To label negative examples without human interaction, we design an effective virtual labeling procedure based on simple tag reasoning. Virtual labeling, in combination with adaptive sampling, enables us to select the most misclassified negatives as the informative samples. Learning from the positive set and the informative negative sets results in visual classifiers with higher accuracy. Experiments on two present-day image benchmarks employing 650K virtually labeled negative examples show the viability of the proposed approach. On a popular visual categorization benchmark our precision at 20 increases by 34%, compared to baselines trained on randomly sampled negatives. We achieve more accurate visual categorization without the need of manually labeling any negatives.
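A schematic sketch of the adaptive loop described above: in each iteration, score a large pool of virtually labeled negatives with the current model and add the most misclassified (highest-scoring) ones to the training set. The linear SVM and the synthetic data are assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(6)
D = 32
positives = rng.normal(loc=1.0, size=(50, D))         # few labeled positives
negative_pool = rng.normal(loc=0.0, size=(5000, D))   # large virtually-labeled negative pool

negatives = negative_pool[rng.choice(len(negative_pool), size=50, replace=False)]
for iteration in range(5):
    X = np.vstack([positives, negatives])
    y = np.r_[np.ones(len(positives)), np.zeros(len(negatives))]
    clf = LinearSVC(C=1.0, max_iter=5000).fit(X, y)

    # Informative negatives: pool items the current model most confidently calls positive.
    pool_scores = clf.decision_function(negative_pool)
    hardest = np.argsort(-pool_scores)[:50]
    negatives = np.vstack([negatives, negative_pool[hardest]])

print("final number of training negatives:", len(negatives))
```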

 
Wolfgang Hürst, Cees G. M. Snoek, Willem-Jan Spoel, and Mate Tomin. Size matters! how thumbnail number, size, and motion influence mobile video retrieval. In International Conference on MultiMedia Modeling. Taipei, Taiwan, January 2011. [ bib | .pdf ]
Various interfaces for video browsing and retrieval have been proposed that provide improved usability, better retrieval performance, and richer user experience compared to simple result lists that are just sorted by relevance. These browsing interfaces take advantage of the rather large screen estate on desktop and laptop PCs to visualize advanced configurations of thumbnails summarizing the video content. Naturally, the usefulness of such screen-intensive visual browsers can be called into question when applied on small mobile handheld devices, such as smart phones. In this paper, we address the usefulness of thumbnail images for mobile video retrieval interfaces. In particular, we investigate how thumbnail number, size, and motion influence the performance of humans in common recognition tasks. Contrary to the widespread belief that screens of handheld devices are unsuited for visualizing multiple (small) thumbnails simultaneously, our study shows that users are quite able to handle and assess multiple small thumbnails at the same time, especially when they show moving images. Our results give suggestions for appropriate video retrieval interface designs on handheld devices.

 
Efstratios Gavves and Cees G. M. Snoek. Landmark image retrieval using visual synonyms. In Proceedings of the ACM International Conference on Multimedia. Firenze, Italy, October 2010. [ bib | .pdf ]
In this paper, we consider the incoherence problem of the visual words in bag-of-words vocabularies. Different from existing work, which performs assignment of words based solely on closeness in descriptor space, we focus on identifying pairs of independent, distant words - the visual synonyms - that are still likely to host image patches with similar appearance. To study this problem, we focus on landmark images, where we can examine whether image geometry is an appropriate vehicle for detecting visual synonyms. We propose an algorithm for the extraction of visual synonyms in landmark images. To show the merit of visual synonyms, we perform two experiments. We examine closeness of synonyms in descriptor space and we show a first application of visual synonyms in a landmark image retrieval setting. Using visual synonyms, we perform on par with the state-of-the-art, but with six times fewer visual words.

 
Wolfgang Hürst, Cees G. M. Snoek, Willem-Jan Spoel, and Mate Tomin. Keep moving! revisiting thumbnails for mobile video retrieval. In Proceedings of the ACM International Conference on Multimedia. Firenze, Italy, October 2010. [ bib | .pdf ]
Motivated by the increasing popularity of video on handheld devices and the resulting importance for effective video retrieval, this paper revisits the relevance of thumbnails in a mobile video retrieval setting. Our study indicates that users are quite able to handle and assess small thumbnails on a mobile's screen - especially with moving images - suggesting promising avenues for future research in design of mobile video retrieval interfaces.

 
Koen E. A. van de Sande, Theo Gevers, and Cees G. M. Snoek. Accelerating visual categorization with the GPU. In ECCV Workshop on Computer Vision on GPU. Crete, Greece, September 2010. [ bib | www: ]

 
Bouke Huurnink, Cees G. M. Snoek, Maarten de Rijke, and Arnold W. M. Smeulders. Today's and tomorrow's retrieval practice in the audiovisual archive. In Proceedings of the ACM International Conference on Image and Video Retrieval, pages 18-25. Xi'an, China, July 2010. [ bib | .pdf ]
Content-based video retrieval is maturing to the point where it can be used in real-world retrieval practices. One such practice is the audiovisual archive, whose users increasingly require fine-grained access to broadcast television content. We investigate to what extent content-based video retrieval methods can improve search in the audiovisual archive. In particular, we propose an evaluation methodology tailored to the specific needs and circumstances of the audiovisual archive, which are typically missed by existing evaluation initiatives. We utilize logged searches and content purchases from an existing audiovisual archive to create realistic query sets and relevance judgments. To reflect the retrieval practice of both the archive and the video retrieval community as closely as possible, our experiments with three video search engines incorporate archive-created catalog entries as well as state-of-the-art multimedia content analysis results. We find that incorporating content-based video retrieval into the archive’s practice results in significant performance increases for shot retrieval and for retrieving entire television programs. Our experiments also indicate that individual content-based retrieval methods yield approximately equal performance gains. We conclude that the time has come for audiovisual archives to start accommodating content-based video retrieval methods into their daily practice.

 
Xirong Li and Cees G. M. Snoek. Visual categorization with negative examples for free. In Proceedings of the ACM International Conference on Multimedia. Beijing, China, October 2009. [ bib | .pdf ]
Automatic visual categorization is critically dependent on labeled examples for supervised learning. As an alternative to traditional expert labeling, social-tagged multimedia is becoming a novel yet subjective and inaccurate source of learning examples. Different from existing work focusing on collecting positive examples, we study in this paper the potential of substituting social tagging for expert labeling for creating negative examples. We present an empirical study using 6.5 million Flickr photos as a source of social tagging. Our experiments on the PASCAL VOC challenge 2008 show that with a relative loss of only 4.3% in terms of mean average precision, expert-labeled negative examples can be completely replaced by social-tagged negative examples for consumer photo categorization.

 
Arjan T. Setz and Cees G. M. Snoek. Can social tagged images aid concept-based video search?. In Proceedings of the IEEE International Conference on Multimedia & Expo, pages 1460-1463. June-July 2009. Invited paper. [ bib | .pdf ]
This paper seeks to unravel whether commonly available social tagged images can be exploited as a training resource for concept-based video search. Since social tags are known to be ambiguous, overly personalized, and often error prone, we place special emphasis on the role of disambiguation. We present a systematic experimental study that evaluates concept detectors based on social tagged images, and their disambiguated versions, in three application scenarios: within-domain, cross-domain, and together with an interacting user. The results indicate that social tagged images can aid concept-based video search indeed, especially after disambiguation and when used in an interactive video retrieval setting. These results open up interesting avenues for future research.

 
Xirong Li, Cees G. M. Snoek, and Marcel Worring. Annotating images by harnessing worldwide user-tagged photos. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing. Taipei, Taiwan, April 2009. Invited paper. [ bib | .pdf ]
Automatic image tagging is important yet challenging due to the semantic gap and the lack of learning examples to model a tag's visual diversity. Meanwhile, social user tagging is creating rich multimedia content on the web. In this paper, we propose to combine the two tagging approaches in a search-based framework. For an unlabeled image, we first retrieve its visual neighbors from a large user-tagged image database. We then select relevant tags from the result images to annotate the unlabeled image. To tackle the unreliability and sparsity of user tagging, we introduce a joint-modality tag relevance estimation method which efficiently addresses both textual and visual clues. Experiments on 1.5 million Flickr photos and 10,000 Corel images verify the proposed method.

 
Daragh Byrne, Aiden R. Doherty, Cees G. M. Snoek, Gareth J. F. Jones, and Alan F. Smeaton. Validating the detection of everyday concepts in visual lifelogs. In Proceedings of the International Conference on Semantic and Digital Media Technologies, SAMT 2008, Koblenz, Germany, December 3-5, 2008, LNCS, pages 15-30. Springer-Verlag, Berlin, Germany, December 2008. [ bib | .pdf ]
The Microsoft SenseCam is a small lightweight wearable camera used to passively capture photos and other sensor readings from a user's day-to-day activities. It can capture up to 3,000 images per day, equating to almost 1 million images per year. It is used to aid memory by creating a personal multimedia lifelog, or visual recording of the wearer's life. However the sheer volume of image data captured within a visual lifelog creates a number of challenges, particularly for locating relevant content. Within this work, we explore the applicability of semantic concept detection, a method often used within video retrieval, on the novel domain of visual lifelogs. A concept detector models the correspondence between low-level visual features and high-level semantic concepts (such as indoors, outdoors, people, buildings, etc.) using supervised machine learning. By doing so it determines the probability of a concept's presence. We apply detection of 27 everyday semantic concepts on a lifelog collection composed of 257,518 SenseCam images from 5 users. The results were then evaluated on a subset of 95,907 images, to determine the precision for detection of each semantic concept and to draw some interesting inferences on the lifestyles of those 5 users. We additionally present future applications of concept detection within the domain of lifelogging.

 
Xirong Li, Cees G. M. Snoek, and Marcel Worring. Learning tag relevance by neighbor voting for social image retrieval. In Proceedings of the ACM International Conference on Multimedia Information Retrieval, pages 180-187. Vancouver, Canada, October 2008. [ bib | .pdf ]
Social image retrieval is important for exploiting the increasing amounts of amateur-tagged multimedia such as Flickr images. Since amateur tagging is known to be uncontrolled, ambiguous, and personalized, a fundamental problem is how to reliably interpret the relevance of a tag with respect to the visual content it is describing. Intuitively, if different persons label similar images using the same tags, these tags are likely to reflect objective aspects of the visual content. Starting from this intuition, we propose a novel algorithm that scalably and reliably learns tag relevance by accumulating votes from visually similar neighbors. Further, treated as tag frequency, learned tag relevance is seamlessly embedded into current tag-based social image retrieval paradigms. Preliminary experiments on one million Flickr images demonstrate the potential of the proposed algorithm. Overall comparisons for both single-word queries and multiple-word queries show substantial improvement over the baseline by learning and using tag relevance. Specifically, compared with the baseline using the original tags, on average, retrieval using improved tags increases mean average precision by 24%, from 0.54 to 0.67. Moreover, simulated experiments indicate that performance can be improved further by scaling up the amount of images used in the proposed neighbor voting algorithm.
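A toy version of the neighbor-voting rule: the relevance of a tag for an image is the number of votes it receives from the image's visual neighbors, corrected for the tag's prior frequency in the collection. Features, tags, and the Euclidean neighbor search are fabricated for illustration.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(7)
n_images, D, k = 2000, 64, 100
features = rng.random((n_images, D))
vocab = ["sunset", "cat", "car", "2008", "friends"]
tags = [set(rng.choice(vocab, size=2, replace=False)) for _ in range(n_images)]

prior = Counter(t for ts in tags for t in ts)           # global tag frequency

def tag_relevance(query_idx):
    dists = np.linalg.norm(features - features[query_idx], axis=1)
    neighbors = np.argsort(dists)[1:k + 1]              # k visual neighbors, excluding itself
    votes = Counter(t for i in neighbors for t in tags[i])
    # Relevance = neighbor votes minus the votes expected from the tag prior alone.
    return {t: votes[t] - k * prior[t] / n_images for t in tags[query_idx]}

print(tag_relevance(0))
```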

 
Ork de Rooij, Cees G. M. Snoek, and Marcel Worring. Balancing thread based navigation for targeted video search. In Proceedings of the ACM International Conference on Image and Video Retrieval, pages 485-494. Niagara Falls, Canada, July 2008. [ bib | .pdf ]
Various query methods for video search exist. Because of the semantic gap each method has its limitations. We argue that for effective retrieval query methods need to be combined at retrieval time. However, switching query methods often involves a change in query and browsing interface, which puts a heavy burden on the user. In this paper, we propose a novel method for fast and effective search through large video collections by embedding multiple query methods into a single browsing environment. To that end we introduced the notion of query threads, which contain a shot-based ranking of the video collection according to some feature-based similarity measure. On top of these threads we define several thread-based visualizations, ranging from fast targeted search to very broad exploratory search, with the ForkBrowser as the balance between fast search and video space exploration. We compare the effectiveness and efficiency of the ForkBrowser with the CrossBrowser on the TRECVID 2007 interactive search task. Results show that different query methods are needed for different types of search topics, and that the ForkBrowser requires significantly fewer user interactions to achieve the same result as the CrossBrowser. In addition, both browsers rank among the best interactive retrieval systems currently available.

 
Koen E. A. van de Sande, Theo Gevers, and Cees G. M. Snoek. A comparison of color features for visual concept classification. In Proceedings of the ACM International Conference on Image and Video Retrieval, pages 141-149. Niagara Falls, Canada, July 2008. [ bib | .pdf ]
Concept classification is important to access visual information on the level of objects and scene types. So far, intensity-based features have been widely used. To increase discriminative power, color features have been proposed only recently. As many features exist, a structured overview is required of color features in the context of concept classification. Therefore, this paper studies 1. the invariance properties and 2. the distinctiveness of color features in a structured way. The invariance properties of color features with respect to photometric changes are summarized. The distinctiveness of color features is assessed experimentally using an image and a video benchmark: the PASCAL VOC Challenge 2007 and the Mediamill Challenge. Because color features cannot be studied independently from the points at which they are extracted, different point sampling strategies based on Harris-Laplace salient points, dense sampling and the spatial pyramid are also studied. From the experimental results, it can be derived that invariance to light intensity changes and light color changes affects concept classification. The results reveal further that the usefulness of invariance is concept-specific.

 
Koen E. A. van de Sande, Theo Gevers, and Cees G. M. Snoek. Evaluation of color descriptors for object and scene recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Anchorage, Alaska, June 2008. [ bib | .pdf ]
Image category recognition is important to access visual information on the level of objects and scene types. So far, intensity-based descriptors have been widely used. To increase illumination invariance and discriminative power, color descriptors have been proposed only recently. As many descriptors exist, a structured overview of color invariant descriptors in the context of image category recognition is required. Therefore, this paper studies the invariance properties and the distinctiveness of color descriptors in a structured way. The invariance properties of color descriptors are shown analytically using a taxonomy based on invariance properties with respect to photometric transformations. The distinctiveness of color descriptors is assessed experimentally using two benchmarks from the image domain and the video domain. From the theoretical and experimental results, it can be derived that invariance to light intensity changes and light color changes affects category recognition. The results reveal further that, for light intensity changes, the usefulness of invariance is category-specific.

 
Koen E. A. van de Sande, Theo Gevers, and Cees G. M. Snoek. Color descriptors for object category recognition. In Proceedings of the IS&T European Conference on Colour in Graphics, Imaging, and Vision. Terrassa-Barcelona, Spain, June 2008. [ bib | .pdf ]
Category recognition is important to access visual information on the level of objects. A common approach is to compute image descriptors first and then to apply machine learning to achieve category recognition from annotated examples. As a consequence, the choice of image descriptors is of great influence on the recognition accuracy. So far, intensity-based (e.g. SIFT) descriptors computed at salient points have been used. However, color has been largely ignored. The question is, can color information improve accuracy of category recognition? Therefore, in this paper, we will extend both salient point detection and region description with color information. The extension of color descriptors is integrated into the framework of category recognition, enabling the selection of both intensity and color variants. Our experiments on an image benchmark show that category recognition benefits from the use of color. Moreover, the combination of intensity and color descriptors yields a 30% improvement over intensity features alone.

 
Ork de Rooij, Cees G. M. Snoek, and Marcel Worring. Query on demand video browsing. In Proceedings of the ACM International Conference on Multimedia, pages 811-814. Augsburg, Germany, September 2007. [ bib | .pdf ]
This paper describes a novel method for browsing a large collection of news video by linking various forms of related video fragments together as threads. Each thread contains a sequence of shots with high feature-based similarity. Two interfaces are designed which use threads as the basis for browsing. One interface shows a minimal set of threads, and the other as many as possible. Both interfaces are evaluated in the TRECVID interactive retrieval task, where they ranked among the best interactive retrieval systems currently available. The results indicate that the use of threads in interactive video search is very beneficial. We have found that in general the query result and the timeline are the most important threads. However, having several additional threads allows a user to find unique results which cannot easily be found by using query results and time alone.

 
Arnold W. M. Smeulders, Jan C. van Gemert, Bouke Huurnink, Dennis C. Koelma, Ork de Rooij, Koen E. A. van de Sande, Cees G. M. Snoek, Cor J. Veenman, and Marcel Worring. Semantic video search. In International Conference on Image Analysis and Processing. Modena, Italy, September 2007. [ bib | .pdf ]
In this paper we describe the current performance of our MediaMill system as presented in the TRECVID 2006 benchmark for video search engines. The MediaMill team participated in two tasks: concept detection and search. For concept detection we use the MediaMill Challenge as experimental platform. The MediaMill Challenge divides the generic video indexing problem into a visual-only, textual-only, early fusion, late fusion, and combined analysis experiment. We provide a baseline implementation for each experiment together with baseline results. We extract image features, on global, regional, and keypoint level, which we combine with various supervised learners. A late fusion approach of visual-only analysis methods using geometric mean was our most successful run. With this run we surpass the Challenge baseline by more than 50%. Our concept detection experiments have resulted in the best score for three concepts: desert, flag us, and charts. What is more, using LSCOM annotations, our visual-only approach generalizes well to a set of 491 concept detectors. To handle such a large thesaurus in retrieval, an engine is developed which allows users to select relevant concept detectors based on interactive browsing using advanced visualizations. Similar to previous years our best interactive search runs yield top performance, ranking 2nd and 6th overall.

 
Cees G. M. Snoek, Marcel Worring, Arnold W. M. Smeulders, and Bauke Freiburg. The role of visual content and style for concert video indexing. In Proceedings of the IEEE International Conference on Multimedia & Expo, pages 252-255. Beijing, China, July 2007. [ bib | .pdf ]
This paper contributes to the automatic indexing of concert video. In contrast to traditional methods, which rely primarily on audio information for summarization applications, we explore how a visual-only concept detection approach could be employed. We investigate how our recent method for news video indexing - which takes into account the role of content and style - generalizes to the concert domain. We analyze concert video on three levels of visual abstraction, namely: content, style, and their fusion. Experiments with 12 concept detectors, on 45 hours of visually challenging concert video, show that the automatically learned best approach is concept-dependent. Moreover, these results suggest that the visual modality provides ample opportunity for more effective indexing and retrieval of concert video when used in addition to the auditory modality.

 
Cees G. M. Snoek and Marcel Worring. Are concept detector lexicons effective for video search? In Proceedings of the IEEE International Conference on Multimedia & Expo, pages 1966-1969. Beijing, China, July 2007. [ bib | .pdf ]
Until now, systematic studies on the effectiveness of concept detectors for video search have been carried out using less than 20 detectors, or in combination with other retrieval techniques. We investigate whether video search using just large concept detector lexicons is a viable alternative for present day approaches. We demonstrate that increasing the number of concept detectors in a lexicon yields improved video retrieval performance indeed. In addition, we show that combining concept detectors at query time has the potential to boost performance further. We obtain the experimental evidence on the automatic video search task of TRECVID 2005 using 363 machine learned concept detectors.

 
Marcel Worring, Cees G. M. Snoek, Ork de Rooij, Giang P. Nguyen, and Arnold W. M. Smeulders. The MediaMill semantic video search engine. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing. Honolulu, Hawaii, USA, April 2007. Invited paper. [ bib | .pdf ]
In this paper we present the methods underlying the MediaMill semantic video search engine. The basis for the engine is a semantic indexing process which is currently based on a lexicon of 491 concept detectors. To support the user in navigating the collection, the system defines a visual similarity space, a semantic similarity space, a semantic thread space, and browsers to explore them. We compare the different browsers and their utility within the TRECVID benchmark. In 2005, we obtained a top-3 result for 19 out of 24 search topics; in 2006, for 14 out of 24.

 
Giang P. Nguyen, Marcel Worring, and Arnold W. M. Smeulders. Similarity learning via dissimilarity space in CBIR. In Proceedings of the ACM SIGMM International Workshop on Multimedia Information Retrieval, pages 107-116. Santa Barbara, USA, October 2006. [ bib | .pdf ]
In this paper, we introduce a new approach to learn dissimilarity for interactive search in content based image retrieval. In literature, dissimilarity is often learned via the feature space by feature selection, feature weighting or a parameterized function of the features. Different from existing techniques, we use relevance feedback to adjust dissimilarity in a dissimilarity space. To create a dissimilarity space, we use Pekalska’s method [15]. After the user gives feedback, we apply active learning with one-class SVM on this space. Results on a Corel dataset of 10000 images and a TrecVid collection of 43907 keyframes show that our proposed approach can improve the retrieval performance over the feature space based approach.
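A minimal illustration of learning in a dissimilarity space, following the abstract rather than the exact formulation of [15]: each image becomes a vector of distances to a fixed prototype set, and a one-class SVM is trained on the user's relevance feedback in that space. All data and the prototype choice are synthetic assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(8)
n_images, D, n_prototypes = 500, 40, 25

features = rng.random((n_images, D))
prototypes = features[rng.choice(n_images, size=n_prototypes, replace=False)]

# Dissimilarity space: image i -> vector of Euclidean distances to the prototypes.
dissim = np.linalg.norm(features[:, None, :] - prototypes[None, :, :], axis=2)

# Positive relevance feedback from the user (indices are made up).
relevant = dissim[[3, 10, 42, 77]]
model = OneClassSVM(nu=0.3, gamma="scale").fit(relevant)

# Rank the collection by similarity to the relevant set in the dissimilarity space.
ranking = np.argsort(-model.decision_function(dissim))
print(ranking[:10])
```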

 
Cees G. M. Snoek, Marcel Worring, Jan C. van Gemert, Jan-Mark Geusebroek, and Arnold W. M. Smeulders. The challenge problem for automated detection of 101 semantic concepts in multimedia. In Proceedings of the ACM International Conference on Multimedia, pages 421-430. Santa Barbara, USA, October 2006. [ bib | .pdf ]
We introduce the challenge problem for generic video indexing to gain insight in intermediate steps that affect performance of multimedia analysis methods, while at the same time fostering repeatability of experiments. To arrive at a challenge problem, we provide a general scheme for the systematic examination of automated concept detection methods, by decomposing the generic video indexing problem into 2 unimodal analysis experiments, 2 multimodal analysis experiments, and 1 combined analysis experiment. For each experiment, we evaluate generic video indexing performance on 85 hours of international broadcast news data, from the TRECVID 2005/2006 benchmark, using a lexicon of 101 semantic concepts. By establishing a minimum performance on each experiment, the challenge problem allows for component-based optimization of the generic indexing issue, while simultaneously offering other researchers a reference for comparison during indexing methodology development. To stimulate further investigations in intermediate analysis steps that influence video indexing performance, the challenge offers to the research community a manually annotated concept lexicon, pre-computed low-level multimedia features, trained classifier models, and five experiments together with baseline performance, which are all available at http://www.mediamill.nl/challenge/.

 
Jan C. van Gemert, Cees G. M. Snoek, Cor Veenman, and Arnold W. M. Smeulders. The influence of cross-validation on video classification performance. In Proceedings of the ACM International Conference on Multimedia, pages 695-698. Santa Barbara, USA, October 2006. [ bib | .pdf ]
Digital video is sequential in nature. When video data is used in a semantic concept classification task, the episodes are usually summarized with shots. The shots are annotated as containing, or not containing, a certain concept resulting in a labeled dataset. These labeled shots can subsequently be used by supervised learning methods (classifiers) where they are trained to predict the absence or presence of the concept in unseen shots and episodes. The performance of such automatic classification systems is usually estimated with cross-validation. By taking random samples from the dataset for training and testing as such, part of the shots from an episode are in the training set and another part from the same episode is in the test set. Accordingly, data dependence between training and test set is introduced, resulting in too optimistic performance estimates. In this paper, we experimentally show this bias, and propose how this bias can be prevented using "episode-constrained" cross-validation. Moreover, we show that a 15% higher classifier performance can be achieved by using episode constrained cross-validation for classifier parameter tuning.
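A small sketch of the bias being described: random cross-validation can place shots from the same episode in both train and test folds, whereas episode-constrained splitting keeps every episode on one side. GroupKFold from scikit-learn is used here as an assumed stand-in for the paper's episode-constrained procedure.

```python
import numpy as np
from sklearn.model_selection import KFold, GroupKFold

rng = np.random.default_rng(9)
n_shots = 600
episode_of_shot = rng.integers(0, 30, size=n_shots)   # hypothetical episode id per shot
X = rng.random((n_shots, 10))

# Plain cross-validation: shots of one episode can land in both train and test.
plain = KFold(n_splits=5, shuffle=True, random_state=0)
# Episode-constrained cross-validation: all shots of an episode stay on one side.
constrained = GroupKFold(n_splits=5)

for name, splitter in [("plain", plain), ("episode-constrained", constrained)]:
    kwargs = {"groups": episode_of_shot} if name.startswith("episode") else {}
    leaks = 0
    for train_idx, test_idx in splitter.split(X, **kwargs):
        shared = set(episode_of_shot[train_idx]) & set(episode_of_shot[test_idx])
        leaks += len(shared)
    print(name, "- episodes shared between train and test folds:", leaks)
```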

 
Jan-Mark Geusebroek. Compact object descriptors from local colour invariant histograms. In British Machine Vision Conference. Edinburgh, UK, September 2006. [ bib | .pdf ]
Much emphasis has recently been placed on the detection and recognition of locally (weak) affine invariant region descriptors for object recognition. In this paper, we take recognition one step further by developing features for non-planar objects. We consider the description of objects with locally smoothly varying surface. For this class of objects, colour invariant histogram matching has proven to be very encouraging. However, matching many local colour cubes is computationally demanding. We propose a compact colour descriptor, which we call Wiccest, requiring only 12 numbers to locally capture colour and texture information. The Wiccest features are shown to be fairly insensitive to photometric effects like shadow, shading, and illumination colour. Moreover, we demonstrate the features to be applicable to highly compressed images while retaining discriminative power.

 
Marcel Worring, Cees G. M. Snoek, Ork de Rooij, Giang P. Nguyen, and Dennis C. Koelma. Lexicon-based browsers for searching in news video archives. In Proceedings of the International Conference on Pattern Recognition, pages 1256-1259. Hong Kong, China, August 2006. [ bib | .pdf ]
In this paper we present the methods and visualizations used in the MediaMill video search engine. The basis for the engine is a semantic indexing process which derives a lexicon of 101 concepts. To support the user in navigating the collection, the system defines a visual similarity space, a semantic similarity space, a semantic thread space, and browsers to explore them. The search system is evaluated within the TRECVID benchmark. We obtain a top-3 result for 19 out of 24 search topics. In addition, we obtain the highest mean average precision of all search participants.

 
Cees G. M. Snoek, Marcel Worring, Dennis C. Koelma, and Arnold W. M. Smeulders. Learned lexicon-driven interactive video retrieval. In H. Sundaram et al., editors, Proceedings of the International Conference on Image and Video Retrieval, CIVR 2006, Tempe, Arizona, July 13-15, 2006, volume 4071 of LNCS, pages 11-20. Springer-Verlag, Heidelberg, Germany, July 2006. [ bib | .pdf ]
We combine in this paper automatic learning of a large lexicon of semantic concepts with traditional video retrieval methods into a novel approach to narrow the semantic gap. The core of the proposed solution is formed by the automatic detection of an unprecedented lexicon of 101 concepts. From there, we explore the combination of query-by-concept, query-by-example, query-by-keyword, and user interaction into the MediaMill semantic video search engine. We evaluate the search engine against the 2005 NIST TRECVID video retrieval benchmark, using an international broadcast news archive of 85 hours. Top ranking results show that the lexicon-driven search engine is highly effective for interactive video retrieval.

 
Cees G. M. Snoek, Marcel Worring, Jan-Mark Geusebroek, Dennis C. Koelma, Frank J. Seinstra, and Arnold W. M. Smeulders. The semantic pathfinder for generic news video indexing. In Proceedings of the IEEE International Conference on Multimedia & Expo. Toronto, Canada, July 2006. [ bib | .pdf ]
This paper presents the semantic pathfinder architecture for generic indexing of video archives. The pathfinder automatically extracts semantic concepts from video based on the exploration of different paths through three consecutive analysis steps, closely linked to the video production process, namely: content analysis, style analysis, and context analysis. The virtue of the semantic pathfinder is its learned ability to find a best path of analysis steps on a per-concept basis. To show the generality of this indexing approach we develop detectors for a lexicon of 32 concepts and we evaluate the semantic pathfinder against the 2004 NIST TRECVID video retrieval benchmark, using a news archive of 64 hours. Top ranking performance indicates the merit of the semantic pathfinder.
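
Illustrative aside (an assumed simplification, not the original pathfinder code): the per-concept path selection can be read as a small model-selection loop that trains each candidate analysis path and keeps the one with the best validation average precision for that concept. Data, detectors, and paths below are placeholders.

    # Assumed illustration of per-concept path selection in a pathfinder-style
    # architecture; detectors and data are placeholders.
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import average_precision_score

    rng = np.random.default_rng(1)
    X_train, X_val = rng.normal(size=(200, 20)), rng.normal(size=(100, 20))
    y_train = {"sports": rng.integers(0, 2, 200)}
    y_val = {"sports": rng.integers(0, 2, 100)}

    # One candidate estimator per analysis path (placeholders for the real
    # content / style / context analysis steps).
    paths = {
        "content": LinearSVC(),
        "content+style": LogisticRegression(max_iter=1000),
    }

    def select_best_path(concept):
        best = (None, -1.0, None)
        for name, model in paths.items():
            model.fit(X_train, y_train[concept])
            scores = model.decision_function(X_val)
            ap = average_precision_score(y_val[concept], scores)
            if ap > best[1]:
                best = (name, ap, model)
        return best

    print(select_best_path("sports")[:2])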

 
Jan C. van Gemert, Jan-Mark Geusebroek, Cor J. Veenman, Cees G. M. Snoek, and Arnold W. M. Smeulders. Robust scene categorization by learning image statistics in context. In Int'l Workshop on Semantic Learning Applications in Multimedia, in conjunction with CVPR'06. New York, USA, June 2006. [ bib | .pdf ]
We present a generic and robust approach for scene categorization. A complex scene is described by proto-concepts like vegetation, water, fire, sky etc. These proto-concepts are represented by low level features, where we use natural image statistics to compactly represent color invariant texture information by a Weibull distribution. We introduce the notion of contextures which preserve the context of textures in a visual scene with an occurrence histogram (context) of similarities to proto-concept descriptors (texture). In contrast to a codebook approach, we use the similarity to all vocabulary elements to generalize beyond the code words. Visual descriptors are attained by combining different types of contexts with different texture parameters. The visual scene descriptors are generalized to visual categories by training a support vector machine. We evaluate our approach on 3 different datasets: 1) 50 categories for the TRECVID video dataset; 2) the Caltech 101-object images; 3) 89 categories being the intersection of the Corel photo stock with the Art Explosion photo stock. Results show that our approach is robust over different datasets, while maintaining competitive performance.
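
Illustrative aside (not the paper's implementation): the contrast with a hard codebook can be made concrete by accumulating, for every region, its similarity to all proto-concept prototypes instead of counting only the nearest one. The Gaussian similarity, prototypes, and region features below are assumptions.

    # Minimal sketch of a similarity (soft-assignment) histogram versus a hard
    # codebook histogram; prototypes and region features are placeholders.
    import numpy as np

    def hard_codebook_histogram(regions, prototypes):
        d = np.linalg.norm(regions[:, None, :] - prototypes[None, :, :], axis=2)
        hist = np.bincount(d.argmin(axis=1), minlength=len(prototypes))
        return hist / hist.sum()

    def contexture_histogram(regions, prototypes, sigma=1.0):
        d = np.linalg.norm(regions[:, None, :] - prototypes[None, :, :], axis=2)
        sim = np.exp(-(d ** 2) / (2 * sigma ** 2))   # similarity to *all* prototypes
        hist = sim.sum(axis=0)
        return hist / hist.sum()

    rng = np.random.default_rng(2)
    prototypes = rng.normal(size=(15, 12))   # e.g. vegetation, water, sky, ...
    regions = rng.normal(size=(50, 12))      # descriptors of image regions
    print(hard_codebook_histogram(regions, prototypes))
    print(contexture_histogram(regions, prototypes))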

 
Arnold W. M. Smeulders, Jan van Gemert, Jan-Mark Geusebroek, Cees Snoek, and Marcel Worring. Browsing for the national dutch video archive. In ISCCSP2006. Marrakech, Morocco, March 2006. [ bib | .pdf ]
Pictures have always been a prime carrier of Dutch culture. But pictures take a new form. We live in times of broad- and narrowcasting through Internet, of passive and active viewers, of direct or delayed broadcast, and of digital pictures being delivered in the museum or at home. At the same time, the picture and television archives turn digital. Archives are going to be swamped with information requests unless they swiftly adapt to partially automatic annotation and digital retrieval. Our aim is to provide faster and more complete access to picture archives by digital analysis. Our approach consists of a multimedia analysis of features of pictures in tandem with the language that describes those pictures, under the guidance of a visual ontology. The general scientific paradigm we address is the detection of directly observable features, fused into semantic features learned from large repositories of digital video. We use invariant, natural-image-statistics-based contextual feature sets for capturing the concepts of images, and integrate them with text as early as possible. The system consists of a set of visual concepts, large for science yet small for practice, permitting the retrieval of semantically formulated queries. We will demonstrate a PC-based, off-line trained, state-of-the-art system for browsing broadcast news archives.

 
Giang P. Nguyen and Marcel Worring. Scenario optimization for interactive category search. In Proceedings of the ACM SIGMM International Workshop on Multimedia Information Retrieval. Singapore, November 2005. [ bib | .pdf ]
Most of the existing work in interactive content-based retrieval concentrates on machine learning methods for effective use of relevance feedback. On the other end of the spectrum, the information visualization community focuses on effective methods for conveying information to the user. What is lacking is research that considers information visualization and interactive content-based retrieval as truly integrated parts of one search system. In such an integrated system there are many degrees of freedom, like the number of images to display, the image size, different visualization modes, and possible feedback modes. Finding optimal values for all of these through user studies is infeasible. We therefore develop scenarios in which tasks and user actions are simulated. These are then optimized based on objective constraints and evaluation criteria. In this manner the degrees of freedom are reduced, and the remaining ones can be evaluated in user studies. In this paper we present a system which integrates advanced similarity-based visualization with active learning. We have performed extensive scenario-based experimentation on an interactive category search task. The results show that the use of advanced visualization and active learning indeed pays off.

 
Laura Hollink, Marcel Worring, and Guus Schreiber. Building a visual ontology for video retrieval. In Proceedings of the ACM International Conference on Multimedia, pages 479-482. Singapore, November 2005. [ bib | .pdf ]
To ensure access to growing video collections, annotation is becoming more and more important. Using background knowledge in the form of ontologies or thesauri is a way to facilitate annotation in a broad domain. Current ontologies are not suitable for (semi-)automatic annotation of visual resources as they contain little visual information about the concepts they describe. We investigate how an ontology that does contain visual information can facilitate annotation in a broad domain and identify requirements that a visual ontology has to meet. Based on these requirements, we create a visual ontology out of two existing knowledge corpora (WordNet and MPEG-7) by creating links between visual and general concepts. We test the performance of the ontology on 40 shots of news video, and discuss the added value of each visual property.

 
Cees G. M. Snoek, Marcel Worring, and Arnold W. M. Smeulders. Early versus late fusion in semantic video analysis. In Proceedings of the ACM International Conference on Multimedia, pages 399-402. Singapore, November 2005. [ bib | .pdf ]
Semantic analysis of multimodal video aims to index segments of interest at a conceptual level. Reaching this goal requires an analysis of several information streams. At some point in the analysis these streams need to be fused. In this paper, we consider two classes of fusion schemes, namely early fusion and late fusion. The former fuses modalities in feature space, the latter fuses modalities in semantic space. We show by experiment, on 184 hours of broadcast video data and for 20 semantic concepts, that late fusion tends to give slightly better performance for most concepts. However, for those concepts where early fusion performs better, the difference is more significant.
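
Illustrative aside (not the paper's implementation): the two schemes differ only in where the streams meet. Early fusion concatenates the per-modality features before learning one classifier; late fusion learns one classifier per modality and combines their scores. The feature matrices and the score-averaging rule below are assumptions.

    # Minimal sketch of early versus late fusion (assumed features and a simple
    # score-averaging rule for late fusion).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    X_vis, X_txt = rng.normal(size=(400, 64)), rng.normal(size=(400, 32))
    y = rng.integers(0, 2, size=400)
    tr, te = slice(0, 300), slice(300, 400)

    # Early fusion: fuse in feature space, one classifier on the concatenation.
    early = LogisticRegression(max_iter=1000).fit(
        np.hstack([X_vis[tr], X_txt[tr]]), y[tr])
    early_scores = early.predict_proba(np.hstack([X_vis[te], X_txt[te]]))[:, 1]

    # Late fusion: one classifier per modality, fuse in semantic (score) space.
    vis_clf = LogisticRegression(max_iter=1000).fit(X_vis[tr], y[tr])
    txt_clf = LogisticRegression(max_iter=1000).fit(X_txt[tr], y[tr])
    late_scores = 0.5 * (vis_clf.predict_proba(X_vis[te])[:, 1]
                         + txt_clf.predict_proba(X_txt[te])[:, 1])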

 
Cees G. M. Snoek, Marcel Worring, Jan-Mark Geusebroek, Dennis C. Koelma, and Frank J. Seinstra. On the surplus value of semantic video analysis beyond the key frame. In Proceedings of the IEEE International Conference on Multimedia & Expo. Amsterdam, The Netherlands, July 2005. [ bib | .pdf ]
Typical semantic video analysis methods aim for classification of camera shots based on extracted features from a single key frame only. In this paper, we sketch a video analysis scenario and evaluate the benefit of analysis beyond the key frame for semantic concept detection performance. We developed detectors for a lexicon of 26 concepts, and evaluated their performance on 120 hours of video data. Results show that, on average, detection performance can increase by almost 40% when the analysis method takes more visual content into account.
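
Illustrative aside (an assumed simplification): analysis beyond the key frame can be pictured as scoring several frames of a shot and aggregating, instead of scoring the key frame alone. The aggregation rule and the frame scores below are placeholders.

    # Minimal sketch: key-frame-only scoring versus aggregating scores over
    # several frames of the same shot (aggregation rule is an assumption).
    import numpy as np

    def shot_score_keyframe(frame_scores, key_index):
        return frame_scores[key_index]

    def shot_score_multiframe(frame_scores, agg=np.mean):
        return agg(frame_scores)

    frame_scores = np.array([0.2, 0.7, 0.9, 0.4])   # placeholder detector outputs
    print(shot_score_keyframe(frame_scores, key_index=1))
    print(shot_score_multiframe(frame_scores))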

 
Giang P. Nguyen and Marcel Worring. Similarity based visualization of image collections. In Proceedings of the 7th International Workshop of the EU Network of Excellence DELOS on Audio-visual Content and Information Visualization in Digital Libraries. Cortona, Italy, May 2005. [ bib | .pdf ]
In the literature, few content-based multimedia retrieval systems use visualization as a tool for exploring the collection. However, when searching for images without examples to start with, one needs to explore the data set. Up to now, most available systems just show random collections of images in a 2D grid. More recently, advanced techniques have been developed for browsing based on similarity. However, none of them analyze the problems that occur when visualizing large visual collections. In this paper, we make these problems explicit. From there, we establish three general requirements: overview, visibility, and data structure preservation. Solutions for each requirement are proposed. Finally, a system is presented and experimental results are given to demonstrate our theory and approach.

 
Frank J. Seinstra, Cees G. M. Snoek, Dennis C. Koelma, Jan-Mark Geusebroek, and Marcel Worring. User transparent parallel processing of the 2004 NIST TRECVID data set. In Proceedings of the 19th International Parallel & Distributed Processing Symposium. Denver, USA, April 2005. [ bib | .pdf ]
The Parallel-Horus framework, developed at the University of Amsterdam, is a unique software architecture that allows non-expert parallel programmers to develop fully sequential multimedia applications for efficient execution on homogeneous Beowulf-type commodity clusters. Previously obtained results for realistic, but relatively small-sized applications have shown the feasibility of the Parallel-Horus approach, with parallel performance consistently being found to be optimal with respect to the abstraction level of message passing programs. In this paper we discuss the most serious challenge Parallel-Horus has had to deal with so far: the processing of over 184 hours of video included in the 2004 NIST TRECVID evaluation, i.e. the de facto international standard benchmark for content-based video retrieval. Our results and experiences confirm that Parallel-Horus is a very powerful support tool for state-of-the-art research and applications in multimedia processing.

 
Cees G. M. Snoek and Marcel Worring. Multimedia pattern recognition in soccer video using time intervals. In Classification the Ubiquitous Challenge, Proceedings of the 28th Annual Conference of the Gesellschaft fur Klassifikation e.V., University of Dortmund, March 9-11, 2004, Studies in Classification, Data Analysis, and Knowledge Organization, pages 97-108. Springer-Verlag, Berlin, Germany, 2005. Invited paper. [ bib | www: ]
We focus on the problem of learning rich semantic patterns from the multimedia data associated with broadcast video documents. In this talk we propose a generic and flexible framework for produced video classification that is capable of learning semantic concepts from multimodal sources based on analyzed style elements. Four properties that are indicative of style are identified, i.e. layout, content, capture, and concept context. The framework allows for robust classification of different semantic concepts in produced video by using a fixed core of common layout, content, and capture elements in combination with varying concept-specific context elements. Concepts are classified using a Stacked Probabilistic Support Vector Machine. Results on 120 hours of video data from the 2003 TRECVID benchmark show that, by using the proposed framework, several rich semantic concepts in broadcast news can be classified with state-of-the-art accuracy.
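
Illustrative aside (an assumed stand-in, not the paper's implementation): the stacked classifier can be pictured as two levels, with one probabilistic classifier per style property feeding a second-level SVM. The sketch below uses scikit-learn's StackingClassifier; the feature column ranges and classifiers are placeholders.

    # Assumed two-level (stacked) classifier over style properties; feature
    # column ranges and classifiers are placeholders, not the original setup.
    import numpy as np
    from sklearn.compose import ColumnTransformer
    from sklearn.ensemble import StackingClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    rng = np.random.default_rng(4)
    X = rng.normal(size=(300, 40))           # columns 0-9 layout, 10-19 content,
    y = rng.integers(0, 2, size=300)         # 20-29 capture, 30-39 context

    def property_clf(cols):
        """Probabilistic SVM that only looks at one property's feature columns."""
        select = ColumnTransformer([("sel", "passthrough", cols)])
        return make_pipeline(select, SVC(probability=True))

    stacked = StackingClassifier(
        estimators=[("layout", property_clf(list(range(0, 10)))),
                    ("content", property_clf(list(range(10, 20)))),
                    ("capture", property_clf(list(range(20, 30)))),
                    ("context", property_clf(list(range(30, 40))))],
        final_estimator=SVC(probability=True))

    stacked.fit(X, y)
    print(stacked.predict_proba(X[:5])[:, 1])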

 
Laura Hollink, Giang Nguyen, Guus Schreiber, Jan Wielemaker, Bob Wielinga, and Marcel Worring. Adding spatial semantics to image annotations. In International Workshop on Knowledge Markup and Semantic Annotation. Hiroshima, Japan, November 2004. [ bib | .pdf ]
In this paper we discuss the support of users in semi-automatically adding spatial information to annotations of images. Descriptions of objects depicted in an image are extended with information about the position of those objects. We distinguish two types of spatial concepts: absolute positions of objects (e.g., east, west) and relative spatial relations between objects (e.g., left, above). We show the use of a tool for a collection of art paintings with preexisting RDF annotations, including a list of image objects. First, the tool segments a painting into regions. The user selects regions and labels these with objects from the existing annotation. Then, the tool computes absolute positions and relative spatial relations of the selected regions, and adds these to the annotation. A small evaluation study is reported in which annotations generated by the tool are compared to manual annotations by ten volunteers.
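
Illustrative aside (thresholds and vocabulary are assumptions, not the tool's actual rules): both types of spatial concepts can be derived from region centroids, absolute positions from where a centroid falls in the image and relative relations from comparing two centroids.

    # Minimal sketch of deriving absolute positions and relative spatial
    # relations from region centroids (thresholds and vocabulary are assumed).
    def absolute_position(cx, cy, width, height):
        """Map a region centroid to a coarse absolute position label."""
        horiz = "west" if cx < width / 3 else "east" if cx > 2 * width / 3 else "centre"
        vert = "north" if cy < height / 3 else "south" if cy > 2 * height / 3 else "centre"
        return vert if horiz == "centre" else horiz if vert == "centre" else f"{vert}-{horiz}"

    def relative_relations(c1, c2):
        """Relations of region 1 with respect to region 2 (image y-axis points down)."""
        relations = []
        if c1[0] < c2[0]: relations.append("left of")
        if c1[0] > c2[0]: relations.append("right of")
        if c1[1] < c2[1]: relations.append("above")
        if c1[1] > c2[1]: relations.append("below")
        return relations

    print(absolute_position(50, 40, 600, 400))       # 'north-west'
    print(relative_relations((50, 40), (300, 200)))  # ['left of', 'above']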

 
Giang P. Nguyen and Marcel Worring. A user based framework for salient detail extraction. In Proceedings of the IEEE International Conference on Multimedia & Expo. Taipei, Taiwan, June 2004. [ bib | .pdf ]
In this paper, we consider the interaction with salient details in the image, i.e., points, lines, and regions. Interactive salient detail definition goes further than summarizing the image into a set of salient details, since the saliency of details depends on the context, the application, and the user. We propose an interaction framework for salient details from the perspective of the user, which dynamically updates the user- and context-dependent definition of saliency based on relevance feedback. A number of instantiations of the framework are presented.

 
Giang P. Nguyen and Marcel Worring. Optimizing similarity based visualization in content based image retrieval. In Proceedings of the IEEE International Conference on Multimedia & Expo. Taipei, Taiwan, June 2004. [ bib | .pdf ]
In any CBIR system, visualization is important, either to show the final result to the user or to form the basis for interaction. Advanced systems use 2-dimensional similarity-based visualization, which shows not only the information of each image itself but also the relations between images. A problem in interactive 2D visualization is the overlap between the images displayed. This obviously reduces the search capability. Simply spreading the images over the screen space will not preserve the relations between them. In this paper, we propose a visualization scheme which reduces the overlap as well as preserves the general distribution of the images displayed. Results show that an effective balance between display of structure and limited overlap can be achieved.
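
Illustrative aside (an assumed illustration of the idea, not the paper's algorithm): one way to trade off overlap against structure preservation is to keep each thumbnail anchored near its similarity-based position while iteratively pushing apart pairs that lie closer than a thumbnail width.

    # Assumed illustration of overlap reduction that roughly preserves a
    # similarity-based 2D layout; not the paper's actual scheme.
    import numpy as np

    def reduce_overlap(positions, thumb_size=0.05, anchor_weight=0.1, iters=100):
        pos = positions.copy()
        for _ in range(iters):
            # Push apart pairs whose distance is below the thumbnail size.
            for i in range(len(pos)):
                for j in range(i + 1, len(pos)):
                    diff = pos[i] - pos[j]
                    dist = np.linalg.norm(diff) + 1e-9
                    if dist < thumb_size:
                        shift = 0.5 * (thumb_size - dist) * diff / dist
                        pos[i] += shift
                        pos[j] -= shift
            # Pull every image back towards its original (similarity) position.
            pos += anchor_weight * (positions - pos)
        return pos

    layout = np.random.rand(40, 2)           # placeholder similarity-based layout
    print(reduce_overlap(layout)[:3])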

 
Cees G. M. Snoek, Marcel Worring, and Alexander G. Hauptmann. Detection of TV news monologues by style analysis. In Proceedings of the IEEE International Conference on Multimedia & Expo. Taipei, Taiwan, June 2004. [ bib | .pdf ]
We propose a method for detection of semantic concepts in produced video based on style analysis. Recognition of concepts is done by applying a classifier ensemble to the detected style elements. As a case study we present a method for detecting the concept of news subject monologues. Our approach had the best average precision performance amongst 26 submissions in the 2003 TRECVID benchmark.

 
Marcel Worring, Giang P. Nguyen, Laura Hollink, Jan C. van Gemert, and Dennis C. Koelma. Accessing video archives using interactive search. In Proceedings of the IEEE International Conference on Multimedia & Expo. Taipei, Taiwan, June 2004. [ bib | .pdf ]
We present a system for interactive search in video archives. In our view interactive search is a four-step process composed of indexing, filtering, browsing, and ranking. We have experimentally verified, using 22 groups of two participants each, how users apply these steps in interactive search and how well they perform.

 
Laura Hollink, Giang P. Nguyen, Dennis Koelma, Guus Schreiber, and Marcel Worring. User strategies in video retrieval: a case study. In P. Enser, Y. Kompatsiaris, N. E. O'Connor, A. F. Smeaton, and A. W. M. Smeulders, editors, Proceedings of the International Conference on Image and Video Retrieval, CIVR 2004, Dublin, Ireland, July 21-23, 2004, volume 3115 of LNCS, pages 6-14. Springer-Verlag, Heidelberg, Germany, 2004. [ bib | .pdf ]
In this paper we present the results of a user study that was conducted in combination with a submission to TRECVID 2003. Search behavior of students querying an interactive video-retrieval system was analyzed. A total of 242 searches by 39 students on 24 topics were assessed. Questionnaire data, logged user actions on the system, and a quality measure of each search provided by TRECVID were studied. Analysis of the results at various stages in the retrieval process suggests that retrieval based on transcriptions of the speech in video data adds more to the average precision of the result than content-based retrieval. The latter is particularly useful in providing the user with an overview of the dataset and thus an indication of the success of a search.

 
Giang P. Nguyen and Marcel Worring. Query definition using interactive saliency. In Proceedings of the ACM SIGMM International Workshop on Multimedia Information Retrieval. Berkeley, USA, November 2003. [ bib | .pdf ]
Content-based image retrieval (CBIR) has been under investigation for a long time, with many systems built to meet different application demands. However, in all systems there is still a big gap between the user's expectation and the system's retrieval capabilities. Therefore, user interaction is an essential component of any CBIR system. Interaction up to now has mostly focused on global image features or similarities. We consider the interaction with salient details in the image, i.e., points, lines, and regions. Interactive salient detail definition goes further than automatically summarizing the image into a set of salient details. We aim to dynamically update the user- and context-dependent definition of saliency based on relevance feedback from the user. In this paper, we propose an interaction framework for salient details from the perspective of the user.

 
Cees G. M. Snoek and Marcel Worring. Time interval maximum entropy based event indexing in soccer video. In Proceedings of the IEEE International Conference on Multimedia & Expo, pages 481-484. Baltimore, USA, July 2003. [ bib | .pdf ]
Multimodal indexing of events in video documents poses problems with respect to representation, inclusion of contextual information, and synchronization of the heterogeneous information sources involved. In this paper we present the Time Interval Maximum Entropy (TIME) framework that tackles the aforementioned problems. To demonstrate the viability of TIME for event classification in multimodal video, an evaluation was performed on the domain of soccer broadcasts. It was found that by applying TIME, the amount of video a user has to watch in order to see almost all highlights can be reduced considerably.
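
Illustrative aside (an assumed simplification, not the TIME framework itself): a maximum entropy model over time intervals can be approximated by logistic regression (a maximum entropy model for binary outcomes) on binary features that indicate whether a multimodal event occurs within a given interval around the candidate segment. The event types, intervals, and training data below are placeholders.

    # Assumed simplification of a maximum entropy classifier over binary
    # time-interval features; the features below are placeholders.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def interval_features(event_times, segment_time, intervals=((-10, 0), (0, 10), (-30, 30))):
        """One binary feature per (event type, interval): does the event occur
        within that interval around the candidate segment?"""
        feats = []
        for times in event_times:                      # one list of times per event type
            for lo, hi in intervals:
                hit = any(segment_time + lo <= t <= segment_time + hi for t in times)
                feats.append(1.0 if hit else 0.0)
        return np.array(feats)

    # Placeholder training data: excited-speech and close-up detections (seconds).
    segments = [(30.0, 1), (80.0, 0), (120.0, 1), (200.0, 0)]
    events = [[28.0, 118.0, 125.0], [35.0, 121.0]]
    X = np.stack([interval_features(events, t) for t, _ in segments])
    y = np.array([label for _, label in segments])

    maxent = LogisticRegression(max_iter=1000).fit(X, y)
    print(maxent.predict_proba(X)[:, 1])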

 
Cees G. M. Snoek and Marcel Worring. A review on multimodal video indexing. In Proceedings of the IEEE International Conference on Multimedia & Expo, volume 2, pages 21-24. Lausanne, Switzerland, August 2002. [ bib | .pdf ]
Efficient and effective handling of video documents depends on the availability of indexes. Manual indexing is unfeasible for large video collections. Efficient, single-modality-based video indexing methods have appeared in the literature. Effective indexing, however, requires a multimodal approach in which either the most appropriate modality is selected or the different modalities are used in a collaborative fashion. In this paper we present a framework for multimodal video indexing, which views a video document from the perspective of its author. The framework serves as a blueprint for a generic and flexible multimodal video indexing system, and generalizes different state-of-the-art video indexing methods. It furthermore forms the basis for categorizing these different methods.

 
Marcel Worring, Andrew Bagdanov, Jan van Gemert, Jan-Mark Geusebroek, Minh Hoang, Guus Schreiber, Cees G. M. Snoek, Jeroen Vendrig, Jan Wielemaker, and Arnold W. M. Smeulders. Interactive indexing and retrieval of multimedia content. In Proceedings of the 29th Annual Conference on Current Trends in Theory and Practice of Informatics, volume 2540 of Lecture Notes in Computer Science, pages 135-148. Springer-Verlag, Milovy, Czech Republic, 2002. [ bib | .pdf ]
The indexing and retrieval of multimedia items is difficult due to the semantic gap between the user's perception of the data and the descriptions we can derive automatically from the data using computer vision, speech recognition, and natural language processing. In this contribution we consider the nature of the semantic gap in more detail and show examples of methods that help in limiting the gap. These methods can be automatic, but in general the indexing and retrieval of multimedia items should be a collaborative process between the system and the user. We show how to employ the user's interaction for limiting the semantic gap.