MITACS 2005 workshop on Statistical Learning
of Complex Data
with Complex Distributions

Université de Montréal
Pavillon André Aisenstadt, 2920 Chemin de la Tour
Room 6214 (Centre de Recherches Mathématiques)
Friday August 26th, 2005
Organizer: Yoshua Bengio
Scope
This workshop brings together members of the MITACS NCE group on
Statistical Learning of Complex Data with Complex Distributions. This
pan-Canadian (British Columbia to Nova Scotia) group focuses on a set
of core research questions in statistical data analysis and statistical
machine learning, whose applications range from data-mining in the
pharmaceutical industry to modeling the graph of interactions between
communicating agents. The web page for the group is the following:
www.iro.umontreal.ca/~bengioy/mitacs
Speakers
Yoshua Bengio, Université de Montréal, QC, Canada
Hugh Chipman, Acadia University, NS, Canada
Christian Léger, Université de Montréal, QC, Canada
Jim Ramsay, McGill University, QC, Canada
Dale Schuurmans, University of Alberta, AB, Canada
Will Welch, University of British Columbia, BC, Canada
Mu Zhu, University of Waterloo, ON, Canada
Schedule
Proceedings
There will be no paper proceedings, but with the authorization of the authors we will post links to the talk slides on this page (see the "Talks" section).
Talks
LAGO: A Computationally Efficient Method for Statistical Detection [pdf] (intro [pps])
Mu Zhu
Faculty of Mathematics
University of Waterloo
We study a general class of statistical detection problems
where the underlying objective is to detect items belonging
to a rare class from a very large database. We propose a
computationally efficient method to achieve this goal.
Our method consists of two steps. In the first step, we
estimate the density function of the rare class alone with
an adaptive bandwidth kernel density estimator. The
adaptive choice of the bandwidth is inspired by the ancient
Chinese board game known today as Go. In the second step,
we adjust this density locally depending on the density of
the background class nearby. We show that the amount of
adjustment needed in the second step is approximately equal
to the adaptive bandwidth from the first step, which gives
us additional computational savings. We name the resulting
method LAGO for "locally adjusted Go-kernel density
estimator." We then apply LAGO to a real drug discovery
data set and compare its performance with a number of
existing and popular methods.
This is joint work with Wanhua Su and Hugh Chipman.
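The two-step construction can be conveyed in a compact sketch. The following is a rough, hypothetical rendering: the function name, the Gaussian kernel, and the use of the mean distance to the k nearest background points as the adaptive bandwidth are assumptions for illustration, not necessarily the authors' exact choices.

```python
# Hypothetical LAGO-style two-step score (illustrative assumptions, not the exact estimator).
import numpy as np

def lago_like_score(x_query, x_rare, x_background, k=5):
    """Score query points by a locally adjusted kernel density of the rare class.

    Step 1: each rare-class point gets an adaptive bandwidth, here taken as the mean
            distance to its k nearest background points (denser background -> smaller bandwidth).
    Step 2: the local adjustment for the background density is approximated by
            multiplying each kernel term by that same bandwidth.
    """
    # distances from rare-class points to background points
    d_rb = np.linalg.norm(x_rare[:, None, :] - x_background[None, :, :], axis=2)
    r = np.sort(d_rb, axis=1)[:, :k].mean(axis=1)          # adaptive bandwidth per rare point

    # Gaussian kernel contributions of each rare point at each query point
    d_qr = np.linalg.norm(x_query[:, None, :] - x_rare[None, :, :], axis=2)
    contrib = np.exp(-0.5 * (d_qr / r[None, :]) ** 2)      # step 1: adaptive-bandwidth KDE terms
    return (contrib * r[None, :]).sum(axis=1)              # step 2: local adjustment ~ bandwidth
```

Higher scores would then be used to rank database items by their likelihood of belonging to the rare class.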
Classification for Ranking in Drug Discovery: Identifying and Aggregating Relevant Subsets of Variables [pdf]
Will Welch
Statistics Department
University of British Columbia
High-throughput screening is used in drug discovery to assay
compounds for activity against a biological target.
Models can be built to relate activity to chemical structure, as
characterized by various explanatory variables. The models are
used to predict the activity of further compounds,
and only those compounds most likely to be active are assayed in a
sequential-screening strategy.
Essentially, the modelling problem is to rank the unassayed compounds with
respect to their probabilities of activity.
Empirically, it is often found that models that perform well in identifying
active compounds make few assumptions about the functional form of the
chemical structure-activity relationship. These methods include
K-nearest neighbours and classification trees. We will describe
some adaptations of these methods based on averaging aggregates of classifiers
built from subsets of variables. Aggregating classifiers in this way
can improve predictive performance and can identify relevant variables.
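As a rough illustration of this kind of aggregation (assuming binary 0/1 activity labels, K-nearest-neighbour base classifiers from scikit-learn, and uniformly random variable subsets; the function name and settings are hypothetical, not the procedure described in the talk):

```python
# Illustrative sketch: rank unassayed compounds by averaging class probabilities
# from K-NN classifiers fit on random subsets of the explanatory variables.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def ensemble_rank(X_train, y_train, X_new, n_subsets=50, subset_size=5, k=5, seed=0):
    rng = np.random.default_rng(seed)
    p = X_train.shape[1]
    scores = np.zeros(X_new.shape[0])
    for _ in range(n_subsets):
        vars_ = rng.choice(p, size=min(subset_size, p), replace=False)  # random variable subset
        clf = KNeighborsClassifier(n_neighbors=k).fit(X_train[:, vars_], y_train)
        scores += clf.predict_proba(X_new[:, vars_])[:, 1]              # prob. of the "active" (1) class
    scores /= n_subsets
    return np.argsort(-scores)   # compound indices, most promising first
```

Tracking which variables appear in the best-performing subsets gives one simple way to flag relevant variables.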
Curse of Dimensionality, Convexity, Kernel Machines and Neural Networks [pdf]
Yoshua Bengio
Département d'Informatique et de Recherche Opérationnelle
Université de Montréal
In this talk we will connect all the words in the title in a series of
theoretical and empirical results that link neural networks and kernel
machines and shed light on both families of algorithms. First we present
several results that suggest strongly that the well-known limits of
classical kernel algorithms (the "Curse of Dimensionality") apply to modern
kernel algorithms such as SVMs, spectral manifold learning and graph-based
semi-supervised learning, when the kernel is local (i.e. in most
cases). Then we introduce an apparently unrelated result, which shows that
training neural networks with a data-dependent number of hidden units can be
cast as a convex program (with an infinite number of variables) that can
nevertheless be solved exactly, in time exponential in the number of inputs,
using a boosting-like algorithm. Using ideas from this
derivation we can see one-hidden-layer neural networks as particular kernel
machines, and when the number of hidden units becomes not only infinite but
continuous, the corresponding kernel can be computed exactly. Finally
we discuss non-local kernel density estimation algorithms that can
generalize far from the training examples, and propose research directions
towards more general learning algorithms that re-use parts in order to
generalize to completely new instances.
Joint work with Nicolas Le Roux, Olivier Delalleau and Hugo Larochelle.
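To convey the flavour of the boosting-like view (not the exact algorithm from the talk), here is a rough sketch in which hidden units are added one at a time and the output weights are refit by a convex least-squares problem after each addition; the exhaustive search over candidate hidden units is replaced by random candidates purely for illustration.

```python
# Rough sketch: grow a one-hidden-layer network one unit at a time, refitting the
# (convex) output-weight problem after each addition. The optimal inner search over
# hidden units is approximated here by random candidates (an illustrative assumption).
import numpy as np

def grow_network(X, y, n_units=20, n_candidates=200, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    H = np.ones((n, 1))                      # start from a constant "bias" unit
    units = []
    for _ in range(n_units):
        w_out, *_ = np.linalg.lstsq(H, y, rcond=None)
        resid = y - H @ w_out                # residual of the current convex fit
        best, best_score = None, -np.inf
        for _ in range(n_candidates):        # crude stand-in for the exact unit search
            w, b = rng.normal(size=d), rng.normal()
            score = abs(np.tanh(X @ w + b) @ resid)   # correlation with the residual
            if score > best_score:
                best, best_score = (w, b), score
        units.append(best)
        H = np.column_stack([H, np.tanh(X @ best[0] + best[1])])
    w_out, *_ = np.linalg.lstsq(H, y, rcond=None)     # final convex refit of output weights
    return units, w_out
```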
Convex Hidden Markov Models [ppt]
Dale Schuurmans
Department of Computing Science
University of Alberta
In this talk, I will discuss a new unsupervised algorithm for training
hidden Markov models that is convex and avoids the use of EM. The idea
is to formulate an unsupervised version of maximum margin Markov networks
(M3Ns) that can be trained via semidefinite programming. This extends our
earlier results on unsupervised support vector machines. The result is a
discriminative training criterion for hidden Markov models that remains
unsupervised and does not create local minima. Experimental results show
that the convex discriminative procedure can produce better conditional
models than conventional Baum-Welch (EM) training.
Joint work with Linli Xu.
(We also acknowledge the generous assistance of Li Cheng and Tao Wang.)
A random walk through complex statistical learning problems [pdf]
Hugh Chipman
Department of Mathematics and Statistics
Acadia University
This talk will focus on several related statistical learning problems,
with the common thread of developing and fitting probabilistic models to
cope with uncertainty. Areas discussed will include boosting-like
models, drug discovery problems, mixture models for curves, and dynamic
data on graphs and networks.
Parameter estimation for high dimensional models
Jim Ramsay
Department of Psychology
McGill University
Our work at McGill is now focused on two areas, each of which extends the capacities of functional data analysis to analyze data distributed over time, space, and other continua:
- The estimation of systems of differential equations from noisy observed data. We are looking at equations drawn from neuroscience (e.g., the Hodgkin-Huxley equations and their offspring), chemical engineering (equations modeling the dynamics of systems producing nylon and other products), and ecology (equations describing predator-prey interactions).
- The development of methods for the analysis of event/intensity data (often called marked point process data by statisticians). A classic example is the time and intensity of an earthquake. At this point there are very few flexible tools for modeling data of this kind.
Our work in these and other areas pivots on a new approach for estimating a very large number of parameters, where a smallish fraction of these are of direct interest (structural) and a larger number are essential to model the data but of peripheral interest (nuisance). This approach involves a generalization of profiled estimation, a method often used in nonlinear least squares estimation problems as well as in certain applications of maximum likelihood estimation.
I will try to explain in this short talk how profiled estimation and estimating functions may play an important role in the kinds of high dimensional data analyses that interest us in this MITACS team.
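A generic profiled (nested) estimation loop might look like the following sketch, where `loss`, the parameter names, and the choice of optimizers are placeholders rather than the authors' implementation.

```python
# Hedged sketch of profiled (nested) estimation: nuisance parameters are optimized
# out for each trial value of the structural parameters, and the outer search works
# on the resulting profiled criterion.
from scipy.optimize import minimize

def profiled_fit(loss, theta0, c0):
    """loss(theta, c) -> scalar fitting criterion;
    theta: structural parameters of direct interest (e.g. ODE rate constants);
    c:     nuisance parameters (e.g. basis coefficients) needed to fit the data."""
    def profiled_criterion(theta):
        # inner step: optimize the nuisance parameters out for this theta
        inner = minimize(lambda c: loss(theta, c), c0, method="BFGS")
        return inner.fun
    # outer step: search over the structural parameters on the profiled surface
    outer = minimize(profiled_criterion, theta0, method="Nelder-Mead")
    return outer.x
```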
Selecting Likelihood Weights by the Bootstrap
Christian Léger
Département de Mathématiques et de Statistique
Université de Montréal
Weighted likelihood was introduced to provide a methodology to unify a
variety of statistical procedures that trade bias for precision. It
combines all relevant information while inheriting many of the
desirable features of the classical likelihood procedures, including
good asymptotic properties. However, in order to be effective, the
weights involved in its construction must be appropriately chosen from
the data. Wang and Zidek (2005) have studied the use of
cross-validation, which works well when there is bias, i.e., when the
relevant parameter of the alternative sources of information differs
from that of the main source; it does not work when they are
the same. We study the use of the bootstrap in choosing the
likelihood weights. By downsizing the bootstrap estimator of squared
bias, we get consistent estimators of the optimal weights, even when
the relevant parameters of the main and alternative sources of
information are the same. We also see that when we do not downsize
the squared bias term and the parameters are identical, the bootstrap
choice is no longer consistent, but it gives good results, unlike
cross-validation.
These preliminary results are joint with Steven Wang of York University.
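For concreteness, here is a toy sketch of the idea for a simple weighted-mean estimator combining a main and an alternative sample; the grid of weights, the downsizing factor, and all names are assumptions made for illustration and are not the procedure studied in the talk.

```python
# Illustrative sketch: choose the weight given to an alternative sample by minimizing
# a bootstrap MSE estimate whose squared-bias term is deliberately downsized.
import numpy as np

def choose_weight(x_main, x_alt, lambdas=np.linspace(0, 1, 21), B=500, seed=0):
    rng = np.random.default_rng(seed)
    n, m = len(x_main), len(x_alt)
    downsize = n ** -0.5                       # assumed downsizing factor (illustrative)
    theta_main = x_main.mean()                 # reference: main-sample-only estimator
    best_lam, best_mse = 0.0, np.inf
    for lam in lambdas:
        est = np.empty(B)
        for b in range(B):
            xm = rng.choice(x_main, n, replace=True)
            xa = rng.choice(x_alt, m, replace=True)
            est[b] = (1 - lam) * xm.mean() + lam * xa.mean()   # weighted estimator
        mse = est.var() + downsize * (est.mean() - theta_main) ** 2   # downsized squared bias
        if mse < best_mse:
            best_lam, best_mse = lam, mse
    return best_lam
```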