Invited Speakers

Prof. Nicolai Petkov. University of Groningen, The Netherlands.

Nicolai Petkov has been professor of computer science at the University of Groningen since 1991, where he holds the chair in intelligent systems and parallel computing. From 1998 to 2009 he was scientific director of the Institute for Mathematics and Computer Science. He applies machine learning and pattern recognition to a variety of problems. www.cs.rug.nl/is

Talk: Representation learning with trainable COSFIRE filters


Abstract

In order to be effective, traditional pattern recognition methods typically require careful manual design of features, which involves considerable domain knowledge and effort by experts. The recent popularity of deep learning is largely due to the automatic configuration of effective early and intermediate representations of the data presented to the network. The downside of deep learning is that it requires a huge number of training examples.

Trainable COSFIRE filters are an alternative to deep networks for extracting effective representations of data. COSFIRE stands for Combinations of Shifted Filter Responses. Their design was inspired by the function of certain shape-selective neurons in areas V4 and TEO of visual cortex. A COSFIRE filter is configured by the automatic analysis of a single pattern. The highly non-linear filter response is computed as a combination of the responses of simpler filters, such as Difference of (color) Gaussians or Gabor filters, taken at different positions within the pattern concerned. The identification of the parameters of the simpler filters that are needed, and of the positions at which their responses are taken, is done automatically. An advantage of this approach is its ease of use: it requires no programming effort and little computation, since the parameters of a filter are derived automatically from a single training pattern. Hence, a large number of such filters can be configured effortlessly, and selected responses can be arranged in feature vectors that are fed into a traditional classifier.
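
As a rough illustration only, and not the speaker's implementation, the following Python sketch configures a COSFIRE-like filter from a single prototype and computes its response on a new image. It assumes Gabor filters from scikit-image as the simpler filters, keeps one response position per frequency-orientation pair, and combines the shifted responses with a geometric mean; all function and parameter names are illustrative.

    # Rough sketch, not the published COSFIRE method: Gabor energy as the
    # "simpler filter" response, one position kept per (frequency, orientation)
    # pair, and a geometric mean as the nonlinear combination.
    import numpy as np
    from scipy.ndimage import shift as nd_shift
    from skimage.filters import gabor

    def gabor_energy(image, frequency, theta):
        """Magnitude of the Gabor response at one frequency and orientation."""
        real, imag = gabor(image, frequency=frequency, theta=theta)
        return np.hypot(real, imag)

    def configure_cosfire(prototype, center, frequencies, thetas):
        """Analyse a single prototype pattern: for each (frequency, theta),
        record the offset (dx, dy) from `center` at which the Gabor energy
        peaks. Each kept tuple is (frequency, theta, dx, dy)."""
        tuples = []
        for f in frequencies:
            for t in thetas:
                energy = gabor_energy(prototype, f, t)
                y, x = np.unravel_index(np.argmax(energy), energy.shape)
                tuples.append((f, t, x - center[1], y - center[0]))
        return tuples

    def cosfire_response(image, tuples):
        """Shift each Gabor energy map so that the recorded offset aligns with
        the filter center, then combine the shifted maps with a geometric mean.
        The result peaks where the learned arrangement of responses recurs."""
        shifted = []
        for f, t, dx, dy in tuples:
            energy = gabor_energy(image, f, t)
            shifted.append(nd_shift(energy, (-dy, -dx), order=1, mode="nearest"))
        stack = np.maximum(np.stack(shifted), 1e-12)   # avoid log(0)
        return np.exp(np.log(stack).mean(axis=0))      # geometric mean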

This approach is illustrated by the automatic configuration of COSFIRE filters that respond to randomly selected parts of many handwritten digits. We automatically configure up to 5,000 such filters and use their maximum responses to a given image of a handwritten digit to form a feature vector that is fed to a classifier. The COSFIRE approach is further illustrated by the detection and identification of traffic signs and of sounds of interest in audio signals.
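
In the same hedged spirit, a minimal sketch of the digit-classification pipeline described above might look as follows; the choice of an RBF-kernel SVM and the helper names are assumptions for illustration, not part of the talk.

    # Hypothetical pipeline sketch: one max-pooled response per configured
    # filter forms the feature vector; any conventional classifier can then be
    # trained on these vectors. `filters` is a sequence of callables mapping an
    # image to a response map, e.g. built from cosfire_response above.
    import numpy as np
    from sklearn.svm import SVC

    def feature_vector(image, filters):
        """Maximum response of each COSFIRE-style filter over the image."""
        return np.array([filt(image).max() for filt in filters])

    def train_digit_classifier(train_images, train_labels, filters):
        """Fit a standard classifier (here an RBF-kernel SVM) on the vectors."""
        X = np.stack([feature_vector(img, filters) for img in train_images])
        clf = SVC(kernel="rbf")
        clf.fit(X, train_labels)
        return clf

    def predict_digit(clf, image, filters):
        """Classify a new digit image from its COSFIRE feature vector."""
        return clf.predict(feature_vector(image, filters).reshape(1, -1))[0]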

The COSFIRE approach to representation learning and classification yields performance results that are comparable to the best results obtained with deep networks, but with much less computational effort. Notably, COSFIRE representations can be obtained from numbers of training examples that are many orders of magnitude smaller than those used by deep networks.


Prof. John Tsotsos. York University, Canada.

John K. Tsotsos is Professor of Vision Science at York University. He is Director of the Centre for Innovation in Computing at Lassonde, holds the Canada Research Chair in Computational Vision, and is a Fellow of the Royal Society of Canada. His current research focuses on developing a comprehensive theory of visual attention in humans and on embodying elements of the theory in the vision systems of mobile robots.

Talk: It’s all about the constraints

Abstract

Our ever-growing knowledge about the human brain and human behavior is opening doors to increasingly impressive technological achievements. This neurobiological inspiration has a significant history and involves almost equal parts neuroscience, computation, engineering, and art. This presentation offers a selective and highly condensed snapshot of some of these inspirations for computational and machine vision, pointing the way to where new inspirations may lead. How one chooses these inspirations is critical, much more so if one seeks a model or system that is biologically plausible. Which aspects of biological visual processing are used to constrain a model, and which aspects are left out, are critical questions. A case study of how such constraints are derived and applied in our current STAR (Selective Tuning Attentive Reference) model will illustrate how biological and computational constraints can be effectively integrated.