
Machine Learning

CADICS Workshop, Feb. 13, 13-17, 2014


Location

Visionen, B-building, Campus Valla.

Workshop program

The workshop program consists of two invited talks and three LiU-internal
talks.

13.15    Opening
13.20    Thomas Mensink, UvA: Learning new classes at near-zero cost
14.15    Mattias Villani, LiU: MCMC for tall data
14.40    Nenad Markuš, LiU: Pixel Intensity Comparisons Organized in Decision Trees for Object Detection and Landmark Point Localization
15.05    Break
15.25    Carl Henrik Ek, KTH: Manifold Relevance Determination
16.20    Fahad Khan, LiU: Learning Multi-Cue Visual Vocabularies for Fine-Grained Object Recognition
16.45    Closing words

Abstracts of the invited talks may be found below.

Welcome!

Contact person

Michael Felsberg, email: michael.felsberg@liu.se

Abstracts of invited talks

Carl Henrik Ek - Manifold Relevance Determination

Learning consolidated models of multiple views is a challenging problem,
especially from a generative perspective. The main difficulty stems from the
fact that all of the variation in the data needs to be represented within the
model, while often only a subset of the variation in the views is actually
shared. This means that in many scenarios one view pollutes the representation
of the other, leading to a worse representation of the data. One approach in
such scenarios is to learn a factorised latent representation, which allows
different parts of the latent representation to be responsible for generating
different parts of the data. In this talk I will describe Manifold Relevance
Determination, a non-parametric latent variable model based on Gaussian
process priors that learns factorised latent representations in a Bayesian
learning framework.
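
For readers who want to experiment, Manifold Relevance Determination has an implementation in the GPy library. The sketch below is a minimal, hedged example assuming GPy's GPy.models.MRD interface (the toy data, latent dimensionality, and parameter values here are made up for illustration, and the exact API may differ between GPy versions).

```python
import numpy as np
import GPy  # assumes GPy is installed; GPy.models.MRD implements this model

rng = np.random.default_rng(0)

# Toy data: two views driven by a shared 1-D signal plus view-specific parts.
t = np.linspace(0, 4 * np.pi, 100)[:, None]
Y1 = np.hstack([np.sin(t), np.cos(t)]) + 0.05 * rng.normal(size=(100, 2))
Y2 = np.hstack([np.sin(t), t / t.max()]) + 0.05 * rng.normal(size=(100, 2))

Q = 4  # latent dimensionality; shared and private dimensions both live here
kern = GPy.kern.RBF(Q, ARD=True)  # ARD lengthscales give per-view relevance

# One shared latent space for both views, with per-view relevance weights.
m = GPy.models.MRD([Y1, Y2], input_dim=Q, num_inducing=20, kernel=kern)
m.optimize(messages=False, max_iters=500)

# m.plot_scales()  # (if matplotlib is available) visualises per-view relevance
```

The per-view ARD relevance weights are what perform the factorisation: latent dimensions relevant to both views capture shared variation, while dimensions relevant to only one view capture its private variation.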

Thomas Mensink - Learning new classes at near-zero cost

In this talk I'll present recent research on the ability to learn classifiers
for new classes at negligible cost. For this we make use of distance-based
classifiers, such as the k-Nearest Neighbours (kNN) and Nearest Class Means
(NCM) classifiers, since these methods can incorporate new classes and
training images continuously over time at negligible cost. This is not
possible with the popular one-vs-rest SVM approach, but is essential when
dealing with real-life open-ended datasets.

For the NCM classifier, which assigns an image to the class with the closest
mean, we introduce a new metric learning approach based on multi-class
logistic discrimination. During training we enforce that an image from a class
is closer to its class mean than to any other class mean in the projected
space. Experiments on the ImageNet 2010 challenge dataset, which contains over
1 million training images of a thousand classes, show that, surprisingly, the
NCM classifier compares favorably to the non-linear kNN classifier. Moreover,
the NCM performance is comparable to that of linear SVMs, which obtain current
state-of-the-art performance. Experimentally we also study the generalization
performance to classes that were not used to learn the metrics and obtain
surprisingly good results.
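
To make the decision rule concrete, here is a minimal numpy sketch (illustrative code, not the authors' implementation): NCM classification under a low-rank projection W that stands in for the learned metric, plus the near-zero-cost addition of a new class by computing its mean.

```python
import numpy as np

def ncm_fit_means(X, y):
    """Compute one mean feature vector per class; this is all the
    per-class 'training' the NCM classifier needs."""
    classes = np.unique(y)
    means = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, means

def ncm_predict(X, classes, means, W):
    """Assign each image to the class whose mean is closest in the
    space projected by W (W would come from the metric learning above)."""
    Xp = X @ W.T                      # project images
    Mp = means @ W.T                  # project class means
    d2 = ((Xp[:, None, :] - Mp[None, :, :]) ** 2).sum(axis=-1)
    return classes[d2.argmin(axis=1)]

# Toy usage: two classes, 5-D features, random stand-in for a learned 2x5 metric.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5)) + np.repeat([[0.0], [3.0]], 20, axis=0)
y = np.repeat([0, 1], 20)
W = rng.normal(size=(2, 5))
classes, means = ncm_fit_means(X, y)
print(ncm_predict(X[:3], classes, means, W))

# Adding a new class later costs only one mean computation:
X_new = rng.normal(size=(20, 5)) - 3.0
classes = np.append(classes, 2)
means = np.vstack([means, X_new.mean(axis=0)])
```
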
If time permits, I'll discuss some recent (unpublished) work on the
adaptation of the NCM framework to multi-label image classification. In
this multi-label formulation we can also learn a low-rank metric and
obtain results similar to binary SVMs, again with the advantage of being
able to generalize to new labels by just computing class means.
