ACM International Conference on Image and Video Retrieval, July 9-11, 2007
University of Amsterdam, Amsterdam, The Netherlands

Image Retrieval Showcase

Contact: Allan Hanbury, Vienna University of Technology, Austria.

Database

The dataset is an extended version of the IAPR TC12 collection of 20,000 vacation images. The version used in the ImageCLEF 2006 campaign can be obtained from the IAPR TC12 Benchmark page. This collection is a good simulation of the kind of photo collection that could be stored on anybody's hard disk. The version used in the showcase event will be modified as follows:

  • New images will be added.
  • The Description field will be removed from the image annotations (all other fields will remain).
  • Only the English annotations will remain.
  • A few fields for some images will be left blank, to simulate the sparse annotation typical of personal photo collections.
The dataset for use in the image retrieval showcase will be made available in mid-April.
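
The preprocessing described above will be carried out by the organizers; as a rough illustration only, the following Python sketch shows the kind of transformation involved, namely blanking the Description field while keeping all other annotation fields. It assumes per-image XML annotation files containing a DESCRIPTION element and an annotations_eng directory; these names are assumptions made for this sketch, not details of the actual release.

    # Illustrative sketch only: the organizers will distribute the modified
    # dataset themselves. Assumes per-image XML annotation files containing a
    # DESCRIPTION element; the tag name and the "annotations_eng" directory
    # are assumptions, not part of the official release format.
    import pathlib
    import xml.etree.ElementTree as ET

    def blank_description(xml_path: pathlib.Path) -> None:
        """Empty the DESCRIPTION field, leaving all other fields untouched."""
        tree = ET.parse(xml_path)
        for desc in tree.getroot().iter("DESCRIPTION"):
            desc.text = ""  # the field stays present but carries no text
        tree.write(xml_path, encoding="utf-8")

    if __name__ == "__main__":
        # Only the English annotations are kept, so only that directory is touched.
        for path in pathlib.Path("annotations_eng").glob("**/*.xml"):
            blank_description(path)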

Types of Queries

For an idea of the types of queries, see the queries used in the ImageCLEF 2006 campaign, also available on the IAPR TC12 Benchmark page. For the showcase event, two types of queries will be provided:

  • Text-only queries: e.g. "Find images of snowy mountains", "Find images of hotels with swimming pools", "Find images with three people on a beach".
  • Text queries with example images: e.g. "What is the name of the monument in the provided image?" (i.e. find other images of the same monument and look at the text annotation), "Find other images of Uncle Fred (see attached image)".
Some example queries will also be made available in mid-April.
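
To make the distinction between the two query forms concrete, here is a minimal sketch of how a participating system might represent them internally. The Query class, its field names, and the example image path are illustrative assumptions, not a format specified by the showcase organizers.

    # Purely illustrative representation of the two query forms; nothing here
    # is prescribed by the showcase organizers.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Query:
        text: str                                                 # natural-language request
        example_images: List[str] = field(default_factory=list)   # empty for text-only queries

    # Text-only query.
    q1 = Query(text="Find images of snowy mountains")

    # Text query with an example image attached (path is hypothetical).
    q2 = Query(text="Find other images of Uncle Fred",
               example_images=["queries/uncle_fred.jpg"])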

Organization

Each participant should bring a laptop computer with their image retrieval system and the dataset installed, and set it up at the event. Internet access will not be available. The queries will be handed out one by one at the event. The "searcher" designated by each team will then use the system to find the images from the dataset matching the query (note that the same searcher must do all the queries for a team). The searches may be as interactive as desired and go through as many iterations as needed. The searches on all the systems will happen in parallel.

Kudos will be awarded for the systems that:

  • find the first correct image the fastest.
  • find the most correct responses in the time limit.
  • find the most correct images that other systems did not find.
  • have the best-looking user interface.
  • find the most images of Uncle Fred.
  • ...

During the second part of the event, queries on the systems will be tried out by a few members of the audience. The most user-friendly system, the easiest-to-learn system, the best result display, ... will be chosen by the audience members.

None of the evaluation results will be published; the emphasis is on demonstrating the capabilities of the technology for a well-defined task that interests many people: what is the best way to search my home photo collection?