15-30 April 1996
H. Bourlard
Recently it has been shown that Artificial Neural Networks (ANNs)
can be used to augment speech recognizers whose underlying structure
is essentially that of Hidden Markov Models (HMMs). In particular, we
have shown that fairly simple ANN structures can be discriminatively
trained to estimate emission probabilities for HMMs.
Many (relatively simple) speech recognition systems based on this approach,
and generally referred to as hybrid HMM/ANN systems, have proved, in
controlled tests, to be both effective in terms of accuracy
(recent results show this hybrid approach slightly ahead of more
traditional HMM systems when evaluated on both British and American English
tasks, using a 20,000 word vocabulary and a trigram language model)
and efficient in terms of CPU and memory run-time requirements.
In this talk, after a description of the HMM basics and of the HMM/ANN
approach, we will discuss some of the issues that were raised by this
approach, including: the use of temporal information, the role of prior probabilities
vs. likelihoods, and language information vs. acoustic information.
We will then discuss some current research topics on extending these results
to somewhat more complex systems, including new theoretical and experimental
developments on transition-based recognition systems and training of HMM/ANN
hybrids to directly maximize the global posterior probabilities.
This talk will assume some background in both hidden Markov models and
artificial neural networks.
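
A minimal sketch of the emission-probability idea (illustrative code, not the actual system described in the talk): an ANN with softmax outputs trained on framewise state targets estimates posteriors p(q|x), and dividing by the state priors yields scaled likelihoods p(x|q)/p(x) that can be plugged into HMM decoding.

    import numpy as np

    def scaled_likelihoods(posteriors, state_priors):
        """Turn ANN state posteriors p(q|x) into scaled likelihoods
        p(x|q)/p(x) = p(q|x)/p(q), usable as HMM emission scores."""
        return posteriors / state_priors

    # toy frame: 3 HMM states; posteriors from a softmax output layer,
    # priors estimated as relative state frequencies in the training data
    posteriors = np.array([0.7, 0.2, 0.1])
    priors = np.array([0.5, 0.3, 0.2])
    print(scaled_likelihoods(posteriors, priors))
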
P. Dayan
The Helmholtz machine is a learning architecture for doing density
estimation. It works on data sets that can be described as being
generated by interacting underlying causes. It builds a top-down
probabilistic model of the density, and, simultaneously, a bottom-up
probabilistic inverse to that model.
The simplest version of the Helmholtz machine is purely linear and
performs a well-known statistical technique called factor analysis. I
will describe the relationship between factor analysis and principal
components analysis, show how we can use factor analysis to understand
something about the success of the wake-sleep learning algorithm,
which is one of the training methods for the Helmholtz machine, and
show some results of using the linear Helmholtz machine.
This is joint work with Radford Neal, Geoff Hinton and Mike Revow.
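
For concreteness, factor analysis itself can be fitted with a few lines of EM under the usual model x = Lambda z + noise, z ~ N(0, I) (a generic sketch, not the wake-sleep implementation):

    import numpy as np

    def factor_analysis_em(X, k, n_iter=50):
        """Minimal EM for factor analysis: x = Lambda z + eps, z ~ N(0, I),
        eps ~ N(0, diag(Psi))."""
        n, d = X.shape
        X = X - X.mean(0)
        rng = np.random.default_rng(0)
        Lam = rng.normal(size=(d, k)) * 0.1
        Psi = X.var(0) + 1e-6
        for _ in range(n_iter):
            # E-step: Gaussian posterior over factors for each data point
            G = np.linalg.inv(np.eye(k) + Lam.T @ (Lam / Psi[:, None]))
            Ez = X @ (Lam / Psi[:, None]) @ G          # posterior means, n x k
            Ezz = n * G + Ez.T @ Ez                    # sum of E[z z^T | x]
            # M-step: update loadings and uniquenesses
            Lam = (X.T @ Ez) @ np.linalg.inv(Ezz)
            Psi = np.maximum((X * X).sum(0) / n
                             - (Lam * (X.T @ Ez / n)).sum(1), 1e-6)
        return Lam, Psi
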
P. Dayan
Temporal difference algorithms learn to predict future outcomes in
Markov environments. They are believed to trade off bias and variance
by learning partly from their own (initially biased) predictions and
partly from the (highly variable) outcomes of sample paths through the
Markov environment. Although they have been empirically highly
successful, it is not at all clear why -- or even whether there is any real
advantage to be had from this trade off.
We have calculated analytical expressions for how bias and
variance of the estimates change as a result of batch updating using
three different TD algorithms in absorbing Markov chains with terminal
returns. We have studied the resulting analytical learning curves,
which reveal the performance of the algorithms, and allow us to see
the effects of different schedules for changing the position on this
trade off and changing the learning rates. I will discuss the
different algorithms and show how they perform.
This is joint work with Satinder Singh.
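
A minimal sketch of the setting studied (tabular TD(lambda) in an absorbing Markov chain with a terminal return; illustrative code, not the analytical machinery of the talk):

    import numpy as np

    def td_lambda_episode(w, states, reward, lam=0.9, alpha=0.1):
        """One episode of tabular TD(lambda), undiscounted, with a single
        terminal return delivered on absorption."""
        e = np.zeros_like(w)                     # eligibility trace
        for t in range(len(states)):
            s = states[t]
            v_next = reward if t == len(states) - 1 else w[states[t + 1]]
            delta = v_next - w[s]                # TD error
            e[s] += 1.0
            w += alpha * delta * e
            e *= lam
        return w

    # random walk on states 0..4, absorbing at both ends, return 1 on the right
    w, rng = np.zeros(5), np.random.default_rng(0)
    for _ in range(100):
        s, path = 2, [2]
        while 0 < s < 4:
            s += rng.choice([-1, 1])
            path.append(s)
        w = td_lambda_episode(w, path[:-1], reward=float(s == 4))
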
G. Dreyfus
The modeling of dynamical systems is an increasingly important area of
applications of neural networks. Their universal approximation property,
together with their parsimony, makes them attractive candidates for
performing such tasks. However, there is much more to dynamical system modeling
than just function approximation. My lectures will describe the problems arising
in the (inevitable) presence of noise, and will present both theoretical results
and a general methodology, together with illustrative examples from academic and
industrial problems. The important question of "semi-physical modeling", i.e. the
use of prior domain knowledge for designing the neural model will also be
addressed.
B. Giraud
Flexibility, robustness and algebraic convenience of
neural nets with neurons having a window-like response function
Abstract: Usual formal neural nets assume that the response function of
each neuron is either a step function, or a smoother sigmoid. Actually, there
is an advantage in copying biological systems where neurons are protected
against overexcitation by inhibitory ``interneurons'', so that the
basic units of the system are neuron pairs whose responses are window-like.
The lecture explains how such ``windows'' can be programmed to achieve any task,
how holographic robustness is present, and how a ``Lorentzian'' parametrization
of the window reduces the training of the neural nets to just manipulations of
(big) polynomials.
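
A minimal sketch of such a window response (parameter names are illustrative): the difference of two shifted sigmoids, i.e. an excitatory/inhibitory pair, gives a bump, and a Lorentzian gives a rational parametrization of the same shape.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def window_unit(x, center=0.0, width=1.0, gain=5.0):
        """Excitatory/inhibitory neuron pair: the difference of two shifted
        sigmoids yields a window-like (bump) response."""
        return (sigmoid(gain * (x - (center - width)))
                - sigmoid(gain * (x - (center + width))))

    def lorentzian_window(x, center=0.0, width=1.0):
        """A ``Lorentzian'' parametrization of the window, rational in x."""
        return 1.0 / (1.0 + ((x - center) / width) ** 2)
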
Spectrum recognition (chemical mass spectra, nuclear
fuels, etc.) via the pseudo-inverse method and optimal background subtraction.
Abstract: Feed-forward neural nets are known to be programmable as classifiers.
The lecture explains the so-called ``pseudo-inverse'' calculation of the
synaptic weights, and discusses which background subtraction is optimal
when the spectra to be sorted out are so close to one another that they
behave as ``twins''. It also discusses a strategy available when the
spectra under study cluster strongly into distinct groups.
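
A minimal sketch of the pseudo-inverse calculation on toy data (the background-subtraction question is not addressed here):

    import numpy as np

    def pseudo_inverse_weights(X, T):
        """Least-squares synaptic weights W such that X @ W ~= T, computed
        in one shot via the Moore-Penrose pseudo-inverse."""
        return np.linalg.pinv(X) @ T

    # toy spectra: 20 spectra with 50 channels, 4 classes (one-hot targets)
    rng = np.random.default_rng(0)
    X = rng.random((20, 50))
    T = np.eye(4)[rng.integers(0, 4, size=20)]
    W = pseudo_inverse_weights(X, T)
    predicted_class = np.argmax(X @ W, axis=1)
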
N. Intrator
The utility of drawing decisions and predictions from an ensemble of
predictors has been widely recognized. However, training methods for
optimal performance of an ensemble of estimators are only just emerging.
Several issues will be discussed: the effect of noise injection vs.
the effect of smoothing, the importance of stabilizing the
ensemble predictors, optimal stopping rules for ensembles, and ways to
alleviate the effect of error correlation between the estimators on
ensemble performance.
Some applications and the specific details of neural network
implementations will be described.
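
A minimal sketch of noise injection and ensemble averaging (the base learner and all names are illustrative, not the talk's methods):

    import numpy as np

    def fit_ridge(X, y, lam=1e-2):
        """A simple base learner: ridge regression returning a predictor."""
        w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
        return lambda Z: Z @ w

    def train_ensemble(X, y, n_members=10, noise_std=0.1, seed=0):
        """Train each member on a noise-injected copy of the inputs; the
        injected noise acts like smoothing and decorrelates the members."""
        rng = np.random.default_rng(seed)
        return [fit_ridge(X + rng.normal(0, noise_std, X.shape), y)
                for _ in range(n_members)]

    def ensemble_predict(models, X):
        """Average the member predictions."""
        return np.mean([m(X) for m in models], axis=0)
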
M. Jordan
Graphical models are probabilistic graphs that have interesting
relationships to neural networks. Undirected graphical models
are closely related to Boltzmann machines and Markov random
fields. Directed graphical models (the more popular variety)
are related to feedforward neural networks, but have a stronger
probabilistic semantics. Many interesting network models,
including HMMs, mixture models, Kalman filters, hierarchical
mixtures of experts and factor analytic models can be viewed
as special cases of graphical models.
In the area of inference (i.e., the calculation of posterior
probabilities of certain nodes when other nodes are fixed to
particular values), the research on graphical models is quite mature.
The inference algorithms provide a clean probabilistic framework
for, e.g., calculating posterior probabilities of input nodes
given output nodes, calculating posterior probabilities of
hidden nodes given input and output nodes, and calculating most
probable configurations. In the area of learning, there have
been interesting developments in the area of structural learning
(deciding which links and which nodes to include in the graph)
and learning in the presence of hidden variables.
I will present an introductory lecture on learning and inference
in graphical models, emphasizing the connections to neural networks.
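
As a toy instance (all numbers invented), exact inference in a three-node directed graph A -> B -> C can be done by enumerating the unobserved nodes:

    # A tiny directed graphical model A -> B -> C over binary variables.
    pA = {0: 0.6, 1: 0.4}
    pB = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.3, 1: 0.7}}    # pB[a][b] = p(B=b | A=a)
    pC = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.25, 1: 0.75}}  # pC[b][c] = p(C=c | B=b)

    def joint(a, b, c):
        return pA[a] * pB[a][b] * pC[b][c]

    # exact inference: posterior p(A | C=1), enumerating the hidden node B
    unnorm = [sum(joint(a, b, 1) for b in (0, 1)) for a in (0, 1)]
    posterior = [u / sum(unnorm) for u in unnorm]
    print(posterior)

Real inference algorithms (e.g. the junction tree) avoid this exponential enumeration; the point here is only what "posterior given evidence" means.
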
M. Jordan
Exact inference and learning in graphical models is NP-hard and
the exact methods can become overly slow for highly interconnected
networks. I describe several methods which are aimed at approximate
inference and learning in such networks. The basic idea behind
these methods---known collectively as ``mean-field methods''---is
to replace a complex, intractable graph with a simplified, tractable
graph. A parameterized family of simplified graphs is used and
parameters are chosen so as to match the simplified graph as
closely as possible to the complex graph. Learning and inference
are then based on quantities computed in the simplified graph.
The most sophisticated variants of these methods make use of the
exact methods as subroutines operating on the simplified graph.
I describe several examples of this methodology, focusing on
cases in which the simplified architectures are tree-like or
chain-like.
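
As a minimal instance of the idea (illustrative; here the simplified graph is fully factorized, for a Boltzmann machine):

    import numpy as np

    def mean_field(W, b, n_iter=100):
        """Mean-field approximation for a Boltzmann machine with symmetric
        weights W (zero diagonal) and biases b: replace the intractable
        joint over binary units by a factorized distribution with means mu,
        and iterate the fixed-point equations
            mu_i = sigmoid(sum_j W_ij mu_j + b_i)."""
        mu = np.full(len(b), 0.5)
        for _ in range(n_iter):
            mu = 1.0 / (1.0 + np.exp(-(W @ mu + b)))
        return mu
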
M. Marchand
We present simple statistical methods that efficiently learn
(in the probably approximately correct (PAC) sense) some classes of
nonoverlapping perceptron networks when the distribution that generates the
input examples is a member of the family of product distributions.
These networks (also known as mu-perceptron networks
or read-once formulas over a weighted threshold basis) are loop-free
neural nets in which each node has only one outgoing weight.
The learner is able to discover the connectivity (or skeleton)
of these networks by using a new statistical test which exploits the
strong unimodality property of sums of independent random variables.
We will show how (and under which conditions) subclasses of
these networks can be exactly learned.
I will first explain the basics of the PAC learning model of Valiant
and then present some results we obtained for neural nets.
J. Moody
This tutorial will present an overview of both classical and
nonlinear approaches to analyzing and predicting time series. The
presentation will begin with a review of linear time series models
(e.g. AR, MA, ARIMAX), followed by simple nonlinear models (e.g.
bilinear, regime switching, ARCH). I will summarize the relevant
results of chaos theory, including the limits of predictability,
strange attractors, and "embeddology". Next, I'll discuss general
nonlinear models, including both feed forward and recurrent neural
networks. Finally, I will present several time series modeling
case studies from economics, physics, and medicine.
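
As a concrete starting point, an AR(p) model can be fitted by ordinary least squares (a generic sketch; names illustrative):

    import numpy as np

    def fit_ar(x, p):
        """Least-squares fit of an AR(p) model:
        x_t = a_1 x_{t-1} + ... + a_p x_{t-p} + e_t."""
        rows = np.stack([x[p - i - 1:len(x) - i - 1] for i in range(p)], axis=1)
        # rows[j] = (x_{t-1}, ..., x_{t-p}) aligned with the target x[p:]
        coeffs, *_ = np.linalg.lstsq(rows, x[p:], rcond=None)
        return coeffs

    # recover the coefficients of a simulated AR(2) process
    rng = np.random.default_rng(0)
    x = np.zeros(500)
    for t in range(2, 500):
        x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()
    print(fit_ar(x, 2))   # approximately [0.6, -0.3]
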
M. Mozer
Connectionist or _subsymbolic_ techniques have been effective in modeling
primitive cognitive processes, such as perception and memory, but the theory
of higher-level processes such as reasoning and language remains dominated by
_symbolic_ computation (the rule-based manipulation of structured arrays
of symbols). The artificial intelligence community has recognized the
importance of integrating symbolic and subsymbolic models. A popular trend
has been to develop--or at least pay lip service to--hybrid models that have
both symbolic and subsymbolic components. I will describe an alternative
approach that involves using domain-specific symbolic mechanisms and
representations to constrain connectionist network architectures, dynamics,
and training procedures. I will illustrate with three models. The first model
learns explicit condition-action rules over categorized instances. The second
model learns rewrite rules that--in conjunction with an external stack--allow
it to parse strings in context-free grammars. The third model induces
finite-state grammars by means of restrictions on its internal state space.
The symbolic constraints yield more robust and better solutions, and allow for the
interpretation of the resulting models.
This work was performed in collaboration with Clayton McMillan, Paul Smolensky,
Jay Alexander, and Sreerupa Das.
J. Shawe-Taylor
A generalisation of the SRM principle is introduced which allows
an estimation of the class hierarchy based on the training data.
The principle is couched in terms of a luckiness function defined
on dichotomies of a particular set of data points. The requirement
for the approach to work is termed `probable smoothness' of the
luckiness function meaning that the luckiness of the target
classification can be reliably estimated using only a small amount of
data. If the target is `lucky' then with high probability we will
deduce this from the training sample and be able to conclude that the
generalisation error will be significantly lower than that predicted
by the full hypothesis dimension. In order to apply the approach
to the maximal margin hyperplane example, a real valued generalisation
of the Vapnik-Chervonenkis dimension termed the level fat-shattering
dimension is introduced and estimated for wide margin hyperplanes.
The analysis suggests many alternative examples of luckiness functions.
Some of these will be discussed and approaches towards proving
their probable smoothness suggested. The overall strategy of the
luckiness principle will be reviewed in the light of `no free lunch' results.
P. Simard
Nearest neighbor concepts have had a considerable impact on the field
of pattern recognition. One must not be fooled, however, by their
initial conceptual simplicity. In all but extreme cases, performance
requirements and computational limitations require the addition of
sophisticated variations in order to ensure competitiveness with
other algorithms (such as neural networks, for instance).
In this talk, I will present several of these variations, such as
Parzen windows, RBF, special norms, preprocessing, prototype selection,
K-means, LVQ, kd-trees, etc.
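
As one concrete example from this list, a Parzen-window classifier needs only a few lines (a generic sketch, not the talk's implementation):

    import numpy as np

    def parzen_classify(x, X_train, y_train, n_classes, h=1.0):
        """Parzen-window classifier: per-class density estimate with a
        Gaussian kernel of bandwidth h; predict the class whose estimated
        density at x is highest."""
        d2 = np.sum((X_train - x) ** 2, axis=1)
        k = np.exp(-d2 / (2 * h * h))
        scores = np.array([k[y_train == c].sum() for c in range(n_classes)])
        return int(np.argmax(scores))
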
P. Simard
Memory-based classification algorithms such as Radial Basis Functions
or K-nearest neighbors often rely on simple distances (Euclidean
distance, Hamming distance, etc.), which are rarely meaningful on
pattern vectors (e.g. images). More complex, better suited distance
measures are often expensive and rather ad-hoc (elastic matching,
deformable templates). We propose a new distance measure which (a)
can be made locally invariant to any set of transformations of the
input and (b) can be computed efficiently. We tested the method on
large handwritten character databases provided by the Post Office and
the NIST. Using invariances with respect to translation, rotation,
scaling, skewing and line thickness, the method outperformed all other
systems on small (less than 10,000 patterns) databases and was
competitive on our largest (60,000 patterns) database.
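
A one-sided sketch of the proposed distance (the full method is two-sided and uses precomputed tangent vectors of the chosen transformations):

    import numpy as np

    def tangent_distance(x, y, T):
        """One-sided tangent distance: distance from y to the tangent plane
        {x + T a} spanned by the columns of T, the tangent vectors at x of
        the transformations (e.g. translation, rotation, thickness)."""
        a, *_ = np.linalg.lstsq(T, y - x, rcond=None)  # best coefficients
        return np.linalg.norm(x + T @ a - y)

Because the minimization over the transformation coefficients a is linear least squares, the distance stays cheap while being locally invariant to the chosen transformations.
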
V. Vapnik
Theory of consistency of learning processes
I would like to describe the conceptual theory of learning
and generalization. The main goal of this seminar is to introduce a
complete conceptual model of learning, that is,
to define the necessary and sufficient conditions for the consistency
of learning processes. These concepts will be used for constructing
the quantitative learning theory.
Non-asymptotic bounds on the rate of
convergence of learning processes
This seminar is devoted to describing the main bounds on the rate of
convergence that are constructed using concepts that give the necessary
and sufficient conditions for consistency of learning processes.
I will present both the bound for the pattern recognition problem and
the bounds for the regression estimation problem. Along with constructive
bounds based on the VC dimension of the set of functions (indicator or real)
I will also discuss nonconstructive distribution dependent and nonconstructive
distribution independent bounds which are the basis for any improvement of
bounds on the rate of learning processes.
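
For orientation, one representative constructive bound of this type for the pattern recognition (zero-one loss) case is the standard VC bound (a textbook form; the seminar's exact statements may differ). With probability at least 1 - \eta, every function of a class with VC dimension h satisfies, for a sample of size \ell,

    R(\alpha) \;\le\; R_{\mathrm{emp}}(\alpha)
      + \sqrt{\frac{h\left(\ln\frac{2\ell}{h} + 1\right) + \ln\frac{4}{\eta}}{\ell}}

where R is the expected risk and R_emp the empirical risk.
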
The learning algorithms
This seminar is devoted to constructing learning algorithms based on induction
inferences for small sample size.
Along with classical algorithms that are based on the Empirical Risk
Minimization induction principle, I will discuss algorithms based on the
Structural Risk Minimization principle. In particular I will consider
Support Vector Machines both for pattern recognition and regression
estimation problems.
Robert Tibshirani
I will discuss a new method for estimation and model selection in linear and
generalized linear models. The ``lasso'' minimizes the residual sum
of squares subject to the sum of the absolute value of the coefficients
being bounded by a constant. Because of the nature of this constraint
it tends to produce some coefficients that are exactly zero and hence
gives interpretable models. Our simulation studies suggest that the
lasso enjoys some of the favourable properties of both subset selection
and ridge regression. It produces interpretable models like subset
selection and exhibits the stability of ridge regression.
The lasso idea is quite general and can be applied to many models: I will
illustrate applications to logistic regression, the proportional hazards model
and tree-based models.
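
As a sketch, the penalized form of the lasso, (1/2n)||y - Xb||^2 + lam ||b||_1 (equivalent to the constrained form for a suitable bound), can be solved by cyclic coordinate descent with soft-thresholding; this particular algorithm is a later standard, given here purely for illustration:

    import numpy as np

    def soft_threshold(z, t):
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def lasso_cd(X, y, lam, n_iter=200):
        """Lasso by cyclic coordinate descent (assumes roughly
        standardized columns)."""
        n, d = X.shape
        b = np.zeros(d)
        col_sq = (X ** 2).sum(0) / n
        for _ in range(n_iter):
            for j in range(d):
                r = y - X @ b + X[:, j] * b[j]    # partial residual
                b[j] = soft_threshold(X[:, j] @ r / n, lam) / col_sq[j]
        return b

The soft-threshold step is what sets some coefficients exactly to zero, which is the source of the interpretability mentioned above.
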
Robert Tibshirani
We propose a bootstrap-based method for searching through a space of
models. The technique is well suited to complex, adaptively fitted
models: it provides a convenient method for finding better local
minima, for resistant fitting, and for optimization under constraints.
Applications to regression, classification and density estimation are
described. The collection of models can also be used to form a
confidence set for the true underlying model, using a generalization of
Efron's percentile interval. We also provide results on the
asymptotic behaviour of bumping estimates. This is joint work with
Keith Knight.
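
A minimal sketch of the procedure (fit and loss are user-supplied callables; names illustrative): fit the model on bootstrap samples and keep the fit that does best on the original training set.

    import numpy as np

    def bump(X, y, fit, loss, n_boot=20, seed=0):
        """Bumping: refit on bootstrap resamples, then select the candidate
        with the smallest loss on the ORIGINAL data."""
        rng = np.random.default_rng(seed)
        candidates = [fit(X, y)]                   # include the ordinary fit
        for _ in range(n_boot):
            idx = rng.integers(0, len(y), size=len(y))
            candidates.append(fit(X[idx], y[idx]))
        return min(candidates, key=lambda m: loss(m, X, y))
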
S. Bengio
In this presentation, we will introduce Hidden Markov Models for tasks
such as speech recognition. We will show the learning algorithm for HMMs,
which is a special case of EM, and the recognition
algorithm, which is an efficient dynamic programming recurrence.
We will also stress the advantages and disadvantages of such models.
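
For concreteness, the dynamic programming recurrence at the heart of HMM evaluation is the forward recursion (a standard sketch; Baum-Welch, the EM instance mentioned above, reuses these quantities):

    import numpy as np

    def forward_likelihood(pi, A, B, obs):
        """HMM forward recursion: after step t, alpha[i] = p(o_1..o_t, q_t=i).
        pi: initial state probabilities (n,); A[i, j] = p(q_t=j | q_{t-1}=i);
        B[i, k] = p(o_t=k | q_t=i); obs: sequence of symbol indices."""
        alpha = pi * B[:, obs[0]]
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
        return alpha.sum()    # likelihood p(o_1..o_T)

Replacing the sum implicit in alpha @ A by a maximum, with back-pointers, gives the Viterbi decoding recurrence.
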
S. Bengio
In this second presentation, we will introduce Input/Output Hidden Markov
Models, show their relation with classical HMMs as well as with connectionist
models such as recurrent mixtures of experts. Again, we will give a learning
algorithm as well as a recognition algorithm for IOHMMs. Then we will introduce
a special version for the asynchronous case, which can be used for applications
such as speech recognition.
G. Hinton
Supervised neural networks generalize well when the amount of information in
the weights is considerably less than the amount of predictable information in
the output vectors of the training cases. So during learning, it helps to
keep the weights simple by penalizing the amount of information they contain.
The amount of information in a weight can be controlled by adding Gaussian
noise and the noise level can be adapted during learning to optimize the
trade-off between the expected squared error and the information in the
weights. I will describe a method of computing the derivatives of the
expected squared error and of the amount of information in the noisy weights
in a network that contains a layer of non-linear hidden units. Provided the
output units are linear, the derivatives can be computed efficiently without
time-consuming Monte Carlo simulations.
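
One standard way to make "the amount of information in a noisy weight" concrete (a sketch in the spirit of the talk, not its exact adaptive scheme) is the KL divergence between the weight's noisy distribution and a Gaussian prior:

    import numpy as np

    def weight_information(mu, sigma, sigma0=1.0):
        """Information (in nats) carried by a noisy weight q = N(mu, sigma^2)
        relative to a Gaussian prior p = N(0, sigma0^2), measured as
        KL(q || p) = log(sigma0/sigma) + (sigma^2 + mu^2)/(2 sigma0^2) - 1/2.
        Raising sigma (more noise) lowers the information in the weight."""
        return (np.log(sigma0 / sigma)
                + (sigma ** 2 + mu ** 2) / (2 * sigma0 ** 2) - 0.5)
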
G. Hinton
For hierarchical generative models that use distributed representations in
their hidden variables, there are exponentially many ways in which the model
can produce each data point. It is therefore intractable to compute the
posterior distribution over the hidden distributed representations given a
datapoint and so there is no obvious way to use EM or gradient methods for
fitting the model to data. A Helmholtz machine consists of a generative model
that uses distributed representations and a recognition model that computes an
approximation to the posterior distribution over representations. The machine
is trained to minimize a Helmholtz free energy which is equal to the negative
log probability of the data if the recognition model computes the correct
posterior distribution. If the recognition model computes a more tractable,
but incorrect distribution, the Helmholtz free energy is an upper bound on the
negative log probability of the data, so it acts as a tractable and useful
Lyapunov function for learning a good generative model. It also encourages
generative models that give rise to nice simple posterior distributions, which
makes perception a lot easier. Several different methods have been developed
for minimizing the Helmholtz free energy. I will focus on the "wake-sleep"
algorithm, which is easy to implement with neurons, and give some examples of
it learning probability density functions in high dimensional spaces. I will
also briefly describe two other algorithms.
This talk describes joint work with Peter Dayan, Brendan Frey and
Radford Neal.
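
A minimal sketch of one wake-sleep update for a one-hidden-layer binary Helmholtz machine (layer sizes, names and learning rates are illustrative):

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def wake_sleep_step(x, R, G, g_bias, lr=0.05, rng=None):
        """R: recognition weights (visible -> hidden); G: generative weights
        (hidden -> visible); g_bias: generative biases of the hidden units."""
        rng = rng or np.random.default_rng()
        # wake phase: recognize the data, then fit the generative model to it
        h = (rng.random(R.shape[1]) < sigmoid(x @ R)).astype(float)
        G += lr * np.outer(h, x - sigmoid(h @ G))
        g_bias += lr * (h - sigmoid(g_bias))
        # sleep phase: fantasize from the generative model, then fit the
        # recognition model to the fantasy
        h_f = (rng.random(len(g_bias)) < sigmoid(g_bias)).astype(float)
        x_f = (rng.random(G.shape[1]) < sigmoid(h_f @ G)).astype(float)
        R += lr * np.outer(x_f, h_f - sigmoid(x_f @ R))
        return R, G, g_bias

Both phases are purely local delta rules, which is why the algorithm is described as easy to implement with neurons.
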
T. Hastie
In this series we show how standard statistical tools for
classification can be enhanced to perform under modern regimes
of large and high-dimensional datasets. In particular we introduce
enhancements for linear discriminant analysis, mixture models and
nearest neighbor classification.
Y. Bengio
This introductory seminar will present the basic notions of learning
theory in an intuitive way: generalization, capacity, learning
curves. Some of the basic approaches common to many learning algorithms
will be discussed, especially iterative learning algorithms such
as gradient-based learning.
Y. Bengio
This introductory seminar will present the basic elements of popular
learning algorithms for artificial neural networks, in particular
for multi-layer networks trained with the back-propagation algorithm,
for pattern recognition and non-linear regression problems.
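
For readers following along, one back-propagation step for a two-layer net fits in a few lines (a generic sketch, not the seminar's material):

    import numpy as np

    def train_step(x, y, W1, W2, lr=0.1):
        """One back-propagation step for a net with tanh hidden units and a
        linear output layer, under squared error loss."""
        h = np.tanh(W1 @ x)              # forward pass
        y_hat = W2 @ h
        err = y_hat - y                  # gradient of 0.5 * ||y_hat - y||^2
        dW2 = np.outer(err, h)
        dh = W2.T @ err * (1 - h ** 2)   # chain rule through tanh
        dW1 = np.outer(dh, x)
        W1 -= lr * dW1
        W2 -= lr * dW2
        return W1, W2
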
Y. Bengio
This seminar will present some advanced notions in the application of
neural networks to financial and economic time-series. We will
first summarize the problem of learning to represent context in
sequential data, and solutions based on states at multiple time scales.
Then we will show how decisions can be improved by training a neural
network predictor with respect to a financial decision-making
criterion rather than a prediction criterion.
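
As a hedged illustration of the last point (everything here, including the cost model, is invented for the example), the training criterion can be the negative of a simulated trading profit rather than a prediction error:

    import numpy as np

    def negative_profit_loss(positions, returns, cost=1e-3):
        """Decision-oriented loss: minus the profit implied by the positions
        the network outputs, with a simple proportional transaction cost."""
        trades = np.abs(np.diff(positions, prepend=0.0))
        profit = np.sum(positions * returns) - cost * np.sum(trades)
        return -profit
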
J.P. Nadal
I will show how a study of the performance of a learning algorithm
as a function of the size of the training set allows good
control of the algorithmic strategy, in order to tune several
parameters of the algorithm. This also allows one to estimate
the performance one can expect when the algorithm is
applied to a larger data set.
F. Pineda
Little is known theoretically about the TD-lambda algorithm. Indeed, all
known results are derived for the trivial "look-up table" representation.
This representation is equivalent to requiring that the observation vectors
be linearly independent. The results presented here finally go beyond the
look-up table representation and apply to more general
function-approximation representations.
I describe an average case analysis of the TD-lambda algorithm. In
particular the following novel results are obtained within an absorbing
Markov setting and without restrictive linear-independence assumptions on
the observation vectors: (1) A general equation for the equilibrium
TD-lambda prediction is derived. (2) A closed form solution is obtained
for the equilibrium equation under the assumption that the approximation
function is linear. (3) It is shown that the solution exists and is unique.
(4) I prove that the corresponding mean-field dynamics is convergent and
that a Lyapunov function exists if the transition probabilities between
transient states are symmetric. (5) I prove convergence of batched linear
TD-lambda for general observation vectors. (6) Finally, the equilibrium
solutions predicted by the mean-field calculation are shown to reproduce
generalization curves obtained by actual TD-lambda learning trials.
F. Pineda
We describe an architecture for acoustic transient classification. The
real-time low-power analog architecture is based on time-frequency analysis
by an electronic cochlea, followed by feature-extraction and template
matching.
The acoustic classifier we are developing will have one cochlea and memory
for 6 templates -- all on a *single* small chip. The correlator/memory
layout in the design is comparable in density to dynamic RAM. The reason
for this high density can be traced directly to the algorithm design and
illustrates the importance of taking into account implementation
constraints when developing neural algorithms.
We have performed preliminary classification experiments using the
digitized output of an actual electronic cochlea. The cochlea performs
real-time time-frequency decomposition while dissipating only 5.5
milliwatts. The transients gathered for this initial investigation
consisted of bangs, slams, claps and snaps. Our correlation algorithm,
applied to a dataset with 221 samples in 10 classes, yielded a correct
classification rate of 92.8% on out-of-sample exemplars.
L. Bottou
This talk discusses the generalization properties of local algorithms. I will
first describe a simple experiment in handwriting recognition. This
experiment will illustrate theoretical aspects related to the adjustment of
the number of training examples, neighborhood size and local regularization.
Links with non-parametric statistics and kernel-based methods will be
thoroughly reviewed.
L. Bottou
Stochastic gradient descent is a poor optimization method. Yet it has been
found to be an adequate learning method for a significant family of
problems. We will describe the mathematical aspects of this learning algorithm
(convergence and generalization properties). These aspects allow us to explain
which learning problems are well handled by stochastic algorithms.
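
The algorithm itself fits in a few lines (a generic sketch): update on one randomly drawn example at a time rather than on the full-batch gradient.

    import numpy as np

    def sgd(w, grad, examples, lr=0.01, n_epochs=10, seed=0):
        """Plain stochastic gradient descent; grad(w, example) returns the
        gradient of the per-example loss."""
        rng = np.random.default_rng(seed)
        for _ in range(n_epochs):
            for i in rng.permutation(len(examples)):
                w = w - lr * grad(w, examples[i])
        return w
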
F. Girosi
In the first part of this tutorial I will discuss a number of
approximation techniques, from splines to Multilayer Perceptrons to
kernel regression. The problem of approximation will be formulated in
the framework of the Empirical Risk Minimization principle. Different
approximation techniques correspond to different choices of the set of
functions over which the Empirical Risk Minimization functional is
minimized. The similarity and common limitations of these techniques
will be discussed, together with the relationship between global and
local models.
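
In symbols, the Empirical Risk Minimization formulation picks, for a loss V and a hypothesis class \mathcal{H} (splines, multilayer perceptrons, kernel expansions, ...),

    \hat{f} \;=\; \arg\min_{f \in \mathcal{H}} \;
      \frac{1}{n} \sum_{i=1}^{n} V\bigl(y_i, f(x_i)\bigr)

so that the different approximation techniques above correspond to different choices of \mathcal{H}.
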
In the second part of this tutorial I will discuss the problem of
estimating the generalization capabilities of an approximation
technique as a function of the number of data points, the number of
free parameters and of the properties of the function underlying the
data. The generalization error can always be bounded by the sum of an
approximation and an estimation error, which are quantities related to
bias and variance. General properties of these two different types of
error will be discussed, together with their tradeoff. Specific
results about Multilayer Perceptrons and Radial Basis Functions models
will be presented and compared.
M. Gori
Function optimization seems to be a ubiquitous formulation of
an impressive number of different problems.
In this talk we introduce the concept of suspiciousness
to address typical troubles arising in continuous function optimization.
The concept is closely related to the absence of local minima,
that is, to the classic case in which there is no suspicion that properly
designed numerical algorithms will get stuck and fail to reach the
optimal solution.
We point out that suspiciousness is inherently related to the problem
associated with the function at hand, and show that there are some
intriguing links with computational complexity.
We give an optimal algorithm for solving non-suspect problems
that is based on a canonical form of gradient descent.
We show that the knowledge of the lower bound on the complexity
of a given problem can be used in order to establish the
suspiciousness of the proposed formulation.
Finally, we show the application of the theory to learning in
multilayer perceptrons and to problem solving by Hopfield networks.
M. Gori
Automatic number-plate recognition is becoming a relevant problem
in several settings where one needs to check access
to specific services or to produce automatic reports
concerning vehicles that are committing some kind of infraction.
In this talk, I discuss different approaches to solving the problem
and focus on the description of a neural-based system which relies
on the ``hypothesize and verify'' paradigm.
The system produces hypotheses on the position of the plate
and performs the segmentation of the characters, which are subsequently
recognized by a module that is also in charge of predicting the
recognition accuracy. This evaluation of the recognition process allows us
to verify the segmentation of the characters and, if necessary, to ask for
additional hypotheses.
We give theoretical arguments suggesting that a multilayered
autoassociator network (MAN) is well suited to both the
recognition and segmentation phases. In particular, MANs
allow us to score the recognition accuracy
and to refine very successfully the segmentation carried out by
traditional image processing techniques.
Despite the advantage of intrinsic modularity, a society of MANs may not
have enough discrimination power, especially if some classes are
very similar.
For this reason we have also introduced a set of discriminant
networks (trained on both positive and negative examples) based on a
multilayer architecture, aimed at discriminating minimal pairs
such as ``U-V,'' or ``O-D,'' or ``M-N.''
A software tool based on these ideas has recently been developed
under MS-DOS and has also been ported to several Unix platforms, including
IBM 6000 and Digital Alpha. Massive experimentation was carried out in a
real highway environment.
The performance of the system compares favorably to the experimental
performance obtained by humans, warranting practical application of the
system. A demonstration of the actual system behavior in the highway
environment, running on a PC platform, is scheduled at the workshop.
H.P. Graf
A large number of new neural net circuits are still
published every year, with a majority of them using analog
circuit techniques at least to some extent. During the last
two years there has been a strong trend towards more cost-effective
and versatile circuits rather than just maximizing raw speed.
Several neural net chips were introduced into consumer products
recently, such as a neural net retina in the latest track ball from
Logitech or the neural net circuit in the touch-pad produced by Synaptics.
In this lecture I will review the circuit techniques applied
in neural network implementations and go over recent examples of
neural net circuits.
Y. Le Cun
The role of learning techniques and neural networks in the design of pattern
recognition systems has become increasingly important over the last few years.
While simple multilayer neural nets and other "knowledge free" techniques are
the methods of choice for simple pattern recognition problems, more
specialized techniques must be used for tasks with high class variability, or
tasks involving multiple objects and contextual constraints, such as the
recognition of handwritten words. Low-level information (raw pixels, or
features) can be handled by convolutional neural networks, while the higher
level must be handled by model-matching methods such as Hidden Markov Models,
or other elastic matching techniques. With gradient-based learning
algorithms, all the modules can be trained simultaneously to optimize a global
performance measure. To illustrate the point, a comparison of shape
recognition methods will be presented, and a complete handwriting recognition
system will be described.
Y. Le Cun
Training a large Neural Network typically involves solving a non-linear
least-squares optimization problem with thousands of parameters. While the
non-linear optimization literature is full of efficient algorithms, none of
them seem to apply to neural-net training. In fact, for large networks,
nothing seems to work significantly better than a carefully tuned "on-line"
(or stochastic) gradient descent. The dynamics of learning in multi-layer nets
will be analyzed, and various algorithms and tricks will be presented for
improving the speed and reliability of neural-net training. Specific topics
include: finding the optimal learning rate, computing and using the second
derivative information, decorrelating the variables...
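
One widely used trick of this kind (an illustrative sketch, not necessarily the talk's exact recipe) estimates the largest Hessian eigenvalue by a finite-difference power iteration and sets the learning rate near its inverse:

    import numpy as np

    def max_hessian_eigenvalue(grad, w, n_iter=20, eps=1e-4, seed=0):
        """Power iteration on the Hessian using the finite difference
        H v ~ (grad(w + eps*v) - grad(w)) / eps.  A common rule of thumb
        then sets the learning rate near 1 / lambda_max."""
        rng = np.random.default_rng(seed)
        v = rng.normal(size=w.shape)
        v /= np.linalg.norm(v)
        g0 = grad(w)
        lam = 0.0
        for _ in range(n_iter):
            Hv = (grad(w + eps * v) - g0) / eps
            lam = np.linalg.norm(Hv)
            v = Hv / (lam + 1e-12)
        return lam
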
J. Pollack
In research on the reconciliation of symbolic and biological accounts
of complex cognitive behavior, our approach has been to develop neural
network architectures which include the useful qualities of symbolic
models, without giving up the promises of Connectionism through
hybridization or direct implementation. One of these architectures
supported a computational theory of learned language recognition based
on a discrete-time dynamical system, an initial condition, and a simple
(threshold) decision function. There is a clear analogy between such
dynamical recognizers and finite state automata, a field which is now
constantly plowed, but the analogy breaks down when the dynamical
system is nonlinear and shows sensitivity to initial conditions, or
obtains an infinite fractal limit set. Further research led to the
hypothesis that dynamical systems may be the direct substrate for
cognitive faculties, such as generative language and mental imagery,
without any intervening symbol processing level.
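
A toy dynamical recognizer (illustrative; the maps and decision function here are invented): each input symbol indexes a map that moves the state, and the string is accepted if the final state passes a threshold test.

    def dynamical_recognizer(symbols, maps, z0, accept):
        """Iterate the symbol-indexed maps from the initial condition z0,
        then apply the threshold decision function."""
        z = z0
        for s in symbols:
            z = maps[s](z)
        return accept(z)

    # two affine maps on [0, 1]; the decision reads the leading "bit"
    maps = {0: lambda z: 0.5 * z, 1: lambda z: 0.5 * z + 0.5}
    print(dynamical_recognizer([1, 0, 1], maps, z0=0.0, accept=lambda z: z > 0.5))
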
J. Pollack
The one clear principle of machine learning is that in order to
succeed, a learner has to be very tuned to its task. So to avoid
working on toy problems, researchers must spend all their time
crafting the algorithm, training environment, and gradient functions in
order to get something "big" to work. Scientific claims of success
which focus on the benefits of a learning method, for either
engineering or psychological modeling, become totally suspect when
further analysis shows the explicit or implicit tuning of the
"inductive bias." Coevolution, which involves dynamically increasing
the difficulty of a task in response to improvement by a learner, may
be the way out, and will be illustrated with several examples.
Prof. J. Cloutier
We present in this talk the implementation of a SIMD multiprocessor
that is built with large field-programmable logic devices (FPGAs).
The SIMD architecture, together with a 2D torus connection topology,
is well suited for image processing, pattern recognition and neural
network algorithms. This board can be programmed on-line at the logic
level, allowing optimal hardware dedication to any given algorithm.
We will also discuss testing new hardware-friendly
algorithms on this processor. Such algorithms simplify the hardware
implementation by replacing, for example, costly floating-point
computations with fixed-point operations. Doing so makes it possible to
increase the number of processing elements, and thus reduce the processing time.
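
A minimal sketch of the fixed-point idea (formats and bit widths are illustrative): represent reals as scaled integers, multiply in integer arithmetic, and rescale.

    import numpy as np

    def to_fixed(x, frac_bits=8):
        """Quantize to signed fixed-point with frac_bits fractional bits."""
        return np.round(x * (1 << frac_bits)).astype(np.int32)

    def fixed_mul(a, b, frac_bits=8):
        """Multiply two fixed-point values, rescaling the result."""
        return (a.astype(np.int64) * b) >> frac_bits

    a, b = to_fixed(np.array([1.5])), to_fixed(np.array([-0.25]))
    print(fixed_mul(a, b) / (1 << 8))   # approximately -0.375
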