DIRO Colloquia: Fall 2008

General information

Organizer for fall 2008: Miklós Csűrös.

Our colloquia normally take place on Thursdays at 15:30, in room 6214 or room 5340 of the Pavillon André-Aisenstadt.

The colloquia are listed in Google Calendar under the ID pan495jpitq56sh4r47tuflp5c@group.calendar.google.com. You can access it as an RSS feed or include the colloquium calendar in your electronic calendar in iCal format.

Calendar

September 18, 2008, 15:30. AA 5340. Tom Schrijvers, Katholieke Universiteit Leuven.
Introduction to Constraint Programming and Constraint Handling Rules
Constraint Programming (CP) is a declarative programming paradigm. The programmer specifies the properties (constraints) of a solution rather than the algorithm to find it. To find the solution the CP language or library provides a generic algorithm, called a constraint solver. Well-known constraint domains and solvers are those for booleans (SAT solvers), reals (simplex solvers) and integers (finite domain solvers). CP has a wide range of applications from planning and scheduling to program analysis and type checking.
Constraint Handling Rules (CHR) is a high-level logic-based rewriting language for writing your own constraint solvers. While typical CP systems offer a fixed set of constraints and constraint domains, CHR supports writing and studying custom constraint solvers for new application domains.
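As a minimal illustration of the propagation idea behind finite-domain solvers (a hand-rolled Python sketch under simplifying assumptions, not CHR syntax and not any particular solver's API), consider pruning the domains of two integer variables under the constraint x < y:

    # Illustrative sketch only: domain pruning ("propagation") for the constraint x < y.
    # Domains are plain Python sets of integers; assumes the constraint is satisfiable.
    # Real solvers interleave propagation like this with search over remaining values.
    def propagate_less_than(dom_x, dom_y):
        new_x = {v for v in dom_x if v < max(dom_y)}   # x must stay below some y
        new_y = {v for v in dom_y if v > min(new_x)}   # y must stay above some x
        return new_x, new_y

    dom_x, dom_y = propagate_less_than({1, 2, 3, 4}, {1, 2, 3})
    print(dom_x, dom_y)   # {1, 2} {2, 3}: values with no support were pruned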
October 2, 2008, 15:30. AA 5340. Daniel Poulin and Marc-André Morissette, LexUM, Université de Montréal.
LexUM: 15 years of legal informatics at UdeM: current research perspectives
Daniel Poulin and Marc-André Morissette are both DIRO alumni. The former is a professor in the Faculty of Law; the latter is senior analyst at the Faculty's LexUM laboratory. LexUM is one of the leading university laboratories in legal informatics: about ten computer scientists work there developing documentary systems for the law, notably the CanLII resource.
The talk will begin with an overview of legal information systems and of LexUM's main achievements. The second half will then present various research directions, most of them quite concrete, with the aim of discussing them with the audience. The problems that interest us centre on natural language processing: automatic classification, information extraction, summarization and keyword extraction, among others.
October 16, 2008, 15:30. AA 6214. Geoffrey Hinton, University of Toronto.
Visualizing high-dimensional data using t-SNE
(The talk will be given in English.)
Over the last decade, many new methods have been developed for visualizing high-dimensional data by giving each data-point a location in a two-dimensional map. The goal is to represent the separations of pairs of data-points by the separations of their corresponding map-points, with an emphasis on representing the small separations accurately. I will describe a new method, called t-SNE, that is based on two ideas. The first idea is to convert the set of pairwise distances between data-points into a set of probabilities of selecting pairs of data-points. The selection probability of a pair of points is proportional to a Gaussian function of their separation. If the distances between map-points are converted into pairwise probabilities in the same way, any given arrangement of map-points can be evaluated by measuring the divergence between the probability distributions obtained from the data-points and the map-points. A good arrangement of map-points is then found by performing gradient descent in this divergence.
Unfortunately, if the probabilities of pairs of map-points are computed using a Gaussian function of their separation, the difference between the distributions of pairwise distances in high-dimensional and low-dimensional spaces causes the map-points to be crowded together in the center of the map. This problem can be largely overcome by using a heavy-tailed t-distribution when computing the selection probabilities of pairs of map-points. This leads to maps that look much better than those produced by other recent methods. In particular, t-SNE is very good at preserving clusters in the data at many different scales simultaneously.
The talk describes joint work with Laurens van der Maaten that will appear in the Journal of Machine Learning Research.
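To make the construction concrete, here is a small numpy sketch (not the authors' implementation) of the two affinity computations and the cost described above; real t-SNE additionally tunes a per-point Gaussian bandwidth to a target perplexity and minimizes the cost with momentum-based gradient descent:

    import numpy as np

    def gaussian_affinities(X, sigma=1.0):
        # P[i,j] proportional to exp(-||x_i - x_j||^2 / (2 sigma^2)); a single fixed
        # bandwidth here, whereas t-SNE fits a per-point sigma via a perplexity target.
        D = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        P = np.exp(-D / (2 * sigma ** 2))
        np.fill_diagonal(P, 0.0)
        return P / P.sum()

    def student_t_affinities(Y):
        # Q[i,j] proportional to (1 + ||y_i - y_j||^2)^(-1): the heavy tails leave
        # room for moderate distances in the map, relieving the crowding problem.
        D = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
        Q = 1.0 / (1.0 + D)
        np.fill_diagonal(Q, 0.0)
        return Q / Q.sum()

    def kl_divergence(P, Q, eps=1e-12):
        # The cost that gradient descent minimizes: KL(P || Q).
        mask = P > 0
        return np.sum(P[mask] * np.log((P[mask] + eps) / (Q[mask] + eps)))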
October 30, 2008, 15:30. AA 6214. Wolfram Luther, Universität Duisburg-Essen.
Rule-based search in text databases with nonstandard orthography
(The talk will be given in French.)
The presentation describes ongoing research in the RSNSR (Regelbasierte Suche in Textdatenbanken mit nichtstandardisierter Rechtschreibung) project, funded by the German research council (DFG) and carried out in collaboration with U. Ammon and N. Fuhr (Duisburg). The project focuses on making historical text documents digitally available, and consequently examines the challenges this poses for digitization procedures and subsequent retrieval operations such as fuzzy full-text search. Difficulties arise from scans of low-quality facsimiles, old font types, inconsistent transcriptions, and especially from typical optical character recognition (OCR) errors and spelling variation.
The seminar discusses recent solutions to these problems, concentrating on stochastic string edit distance measures, on so-called evidences, and on the avoidance of static dictionaries, with approaches for spelling variation in both German and English historical texts. The common remedy for unstandardized spellings, the use of large historical dictionaries, is costly. Instead, we use linguistic evidence transferred to previously unknown spellings: we manually collected more than 12,800 word pairs linking spelling variants or recognition errors to their standard spellings. Stochastic training on such evidences allows the development of reliable topic-related search engine modules; since their quality depends heavily on the amount of available training data, we also developed algorithms and interfaces to automate parts of this work.
Finally, by presenting visualization approaches for retrieval in and browsing of historical databases and nonstandard text documents, we show that the prototypical software Metric Evaluation Tool helps in evaluating distance measures. The combination of overview, detail and interactivity eases the complex task of determining good problem-specific distance measures and points toward further uses of information visualization in linguistics.
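To illustrate the flavour of such distance measures, here is a minimal Python sketch of a cost-weighted edit distance; the substitution-cost table below is invented for illustration, whereas the project learns its costs stochastically from the collected evidence pairs:

    # Sketch of a weighted Levenshtein distance. Substitution costs would normally be
    # learned from evidence pairs (variant spelling, standard spelling), so that
    # historically common confusions such as 'y' <-> 'i' become cheap. Costs are made up.
    SUB_COST = {('y', 'i'): 0.1, ('i', 'y'): 0.1, ('v', 'u'): 0.2, ('u', 'v'): 0.2}

    def weighted_edit_distance(a, b, ins=1.0, dele=1.0):
        m, n = len(a), len(b)
        d = [[0.0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            d[i][0] = i * dele
        for j in range(1, n + 1):
            d[0][j] = j * ins
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                sub = 0.0 if a[i-1] == b[j-1] else SUB_COST.get((a[i-1], b[j-1]), 1.0)
                d[i][j] = min(d[i-1][j] + dele, d[i][j-1] + ins, d[i-1][j-1] + sub)
        return d[m][n]

    print(weighted_edit_distance("dayes", "daies"))   # 0.1: a cheap, plausible variant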
November 6, 2008, 15:30. AA 6214. Michel Dumontier, Carleton University.
Biological knowledge management using semantic Web technologies
(The talk will be given in English.)
The slides from Michel Dumontier's talk are available on the web.
Bioinformatics will forever be burdened with the challenge of managing exponentially growing and highly dynamic biological data. Over the past 20 years, researchers have struggled with this, developing less-than-satisfactory solutions using a wide variety of formats and technologies. The standardization of the Web Ontology Language (OWL), as part of the W3C's Semantic Web effort, offers exciting new opportunities for biological knowledge management. In this talk, I will discuss the basic ideas behind the initiative and show how one creates and uses sophisticated knowledge bases to represent, integrate and query biological knowledge. In particular, I will highlight our work on representing and reasoning about biochemical structure and function, and recent efforts to capture and answer questions about the pharmacogenomics of depression.
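As a toy illustration of the triple-based representation and querying that OWL/RDF knowledge bases support, here is a small sketch using the rdflib Python library; the namespace and the biological "facts" are invented, and the knowledge bases discussed in the talk are of course far richer:

    # Toy sketch: biological assertions as RDF triples, queried with SPARQL via rdflib.
    # The namespace and all facts below are invented for illustration.
    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF

    BIO = Namespace("http://example.org/bio#")
    g = Graph()
    g.add((BIO.TP53, RDF.type, BIO.Protein))
    g.add((BIO.TP53, BIO.participatesIn, BIO.Apoptosis))
    g.add((BIO.CYP2D6, RDF.type, BIO.Protein))
    g.add((BIO.CYP2D6, BIO.metabolizes, BIO.Fluoxetine))  # pharmacogenomics-style fact

    # Which proteins metabolize the drug?
    q = """
        PREFIX bio: <http://example.org/bio#>
        SELECT ?p WHERE { ?p a bio:Protein ; bio:metabolizes bio:Fluoxetine . }
    """
    for row in g.query(q):
        print(row.p)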
November 13, 2008, 15:30. AA 6214. Nicolas Lartillot, Université de Montréal.
Monte Carlo methods for Bayesian inference in molecular evolution
Bayesian inference is increasingly used in statistical biology. Beyond the conceptual and foundational arguments that traditionally oppose classical and Bayesian statistics, what has driven the rise of Bayesian inference is probably the great flexibility offered by Monte Carlo methods, which make it possible to explore a wide range of models. However, the growing complexity of the models under consideration requires more sophisticated Monte Carlo methods that allow the samplers to converge in reasonable time. I will present some recent developments in this direction, in particular conjugate Gibbs sampling, which makes it possible to implement efficient nonparametric phylogenetic models. I will also touch on avenues for adapting these Monte Carlo methods to maximum likelihood inference, and propose a perspective that tries to combine the advantages of the latter approach with those of Bayesian inference.
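As a concrete illustration of conjugate Gibbs sampling (a minimal Python/numpy sketch on the textbook Normal model, not the phylogenetic setting of the talk), note that when every full conditional belongs to a known family, each update is a direct draw rather than a Metropolis step:

    import numpy as np

    # Gibbs sampler for x_i ~ Normal(mu, 1/tau) with conjugate priors
    # mu ~ Normal(0, 1/kappa) and tau ~ Gamma(a, b). Both full conditionals
    # are available in closed form, so every update is an exact draw.
    rng = np.random.default_rng(0)
    x = rng.normal(2.0, 1.0, size=100)          # synthetic data
    n, kappa, a, b = len(x), 1e-3, 1.0, 1.0
    mu, tau = 0.0, 1.0
    samples = []
    for _ in range(5000):
        # mu | tau, x : Normal with precision kappa + n*tau
        prec = kappa + n * tau
        mu = rng.normal(tau * x.sum() / prec, 1.0 / np.sqrt(prec))
        # tau | mu, x : Gamma(a + n/2, rate b + sum((x - mu)^2)/2)
        tau = rng.gamma(a + n / 2, 1.0 / (b + 0.5 * np.sum((x - mu) ** 2)))
        samples.append((mu, tau))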
WEDNESDAY, November 19, 2008, 15:30. AA 6214. Jerzy Filar, University of South Australia.
Controlled Markov chains, graphs and Hamiltonicity
(The talk will be given in English.)
This presentation summarizes a line of research that maps certain notoriously hard problems of discrete mathematics, such as the Hamiltonian Cycle and Traveling Salesman Problems, into convex domains where continuum analysis can be carried out. Arguably, the inherent difficulty of these now-classical problems stems precisely from the discrete nature of the domains in which they are posed. The convexification of domains underpinning the reported results is achieved by assigning a probabilistic interpretation to key elements of the original deterministic problems.
In particular, the approaches summarized here build on a technique that embeds the Hamiltonian Cycle and Traveling Salesman Problems in a structured, singularly perturbed Markov Decision Process. The unifying idea is to interpret the subgraphs traced out by deterministic policies as extreme points of a convex polytope in a space filled with randomized policies. Special attention is devoted to the subset of the latter that corresponds to the doubly stochastic probability transition matrices induced by a graph. By the famous Birkhoff-von Neumann theorem, that subset is the convex hull of the permutation matrices induced by the given graph. Clearly, Hamiltonian cycles (if any) are among the extreme points of this doubly stochastic polytope.
The topic has now evolved to the point where there are many results, both theoretical and algorithmic, that exploit the nexus between graph-theoretic structures and the probabilistic and algebraic entities of related Markov chains. These include moments of first return times, limiting frequencies of visits to nodes, and the spectra of certain matrices traditionally associated with the analysis of Markov chains. There are also interesting connections with the singular perturbation theory of mathematical programs and with the cross-entropy method for the estimation of rare events. A number of open questions and problems will be described in the presentation, including a conjecture suggesting that graph instances where the underlying NP-hardness of the Hamiltonian cycle problem is a substantive issue are exceptionally rare.
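The following small numpy sketch (an illustration of the probabilistic interpretation, not the speaker's construction) shows how a deterministic policy on a graph induces a 0-1 transition matrix, which is doubly stochastic precisely when it is a permutation matrix, and how to check whether it traces a Hamiltonian cycle:

    import numpy as np

    def policy_matrix(n, policy):
        # A deterministic policy picks one successor per node, giving a 0-1 transition
        # matrix: an extreme point of the polytope of randomized policies.
        P = np.zeros((n, n))
        for node, succ in policy.items():
            P[node, succ] = 1.0
        return P

    def is_hamiltonian_policy(P):
        # The policy traces a Hamiltonian cycle iff following it from node 0
        # returns to 0 only after visiting every node exactly once.
        n, node, seen = len(P), 0, set()
        for _ in range(n):
            seen.add(node)
            node = int(np.argmax(P[node]))
        return node == 0 and len(seen) == n

    # The 4-cycle 0 -> 1 -> 2 -> 3 -> 0: a permutation matrix, hence doubly stochastic.
    P = policy_matrix(4, {0: 1, 1: 2, 2: 3, 3: 0})
    print(P.sum(axis=0), P.sum(axis=1))   # all ones: doubly stochastic
    print(is_hamiltonian_policy(P))       # True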
November 27, 2008, 15:30. AA 6214. Jérôme Waldispühl, Massachusetts Institute of Technology.
Modeling Structural Ensembles of Transmembrane Proteins
Computational protein structure modeling plays an important role in molecular biological research. In addition to well-established algorithms for the interpretation of experimental data (such as X-ray crystal diffraction and NMR), homology-based protein structure prediction tools have become accurate enough to significantly contribute to the understanding of a protein's structure, function, and interactions. Unfortunately, many protein families, such as transmembrane beta-barrels (found in the outer membrane of Gram-negative bacteria, mitochondria, and chloroplasts), are experimentally difficult to study using crystallography or NMR, and few homologues are fully characterized, rendering existing methods insufficient.
In this talk, Jérôme Waldispühl and Charles O'Donnell introduce a new family of algorithms, implemented in the tool partiFold, for investigating the folding landscape of transmembrane beta-barrel proteins based only on sequence information, broad investigator knowledge, and a statistical-mechanical approach using the Boltzmann partition function. This yields predictions of all possible structural conformations that might arise in vivo, along with their relative likelihoods of occurrence. Using a parameterizable grammatical model, these algorithms combine high-level information, such as membrane thickness, with an energy function based on stacked amino-acid pair statistical potentials to predict ensemble properties, such as the likelihood of two residues pairing in a beta-sheet or the per-residue X-ray crystal structure B-value. Complete conformations can also be sampled from the ensemble, providing a good picture of the subset of low-energy structures [1].
In more recent work, this framework has also been extended to combine these ensemble predictions with classical sequence alignment algorithms, yielding high-quality alignments for non-homologous transmembrane beta-barrel protein pairs [2].
  • [1] Modeling Ensembles of Transmembrane Beta-barrel Proteins. Jerome Waldispuhl*, Charles W. O'Donnell*, Srinivas Devadas, Peter Clote, Bonnie Berger. (*equal contribution). Proteins: Structure, Function, and Bioinformatics, Volume 71, Issue 3, pp. 1097-1112
  • [2] Simultaneous Folding and Alignment of Membrane Proteins. Jerome Waldispuhl, Charles W. O'Donnell, Sebastian Will, Srinivas Devadas, Rolf Backofen, Bonnie Berger. (Submitted.)
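As a toy illustration of the statistical-mechanical idea behind partiFold (invented conformations and energies, not the tool's actual model), the Boltzmann partition function converts conformation energies into ensemble probabilities from which properties and samples can be derived:

    import math, random

    # Toy conformational ensemble: (label, energy in arbitrary units).
    conformations = [("A", -3.0), ("B", -2.5), ("C", -1.0)]
    RT = 0.6   # roughly kcal/mol at ~300 K

    # Partition function Z = sum over conformations of exp(-E / RT).
    Z = sum(math.exp(-E / RT) for _, E in conformations)

    # Boltzmann probability of each conformation; ensemble properties
    # (e.g. residue-pairing likelihoods) are expectations under these weights.
    probs = {name: math.exp(-E / RT) / Z for name, E in conformations}
    print(probs)

    # Sampling conformations in proportion to their Boltzmann weight.
    names, weights = zip(*probs.items())
    print(random.choices(names, weights=weights, k=5))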
December 4, 2008, 15:30. AA 6214. Geňa Hahn, Université de Montréal.
Games of cops and robbers
Pursuit games on graphs have seen an avalanche of interest after a relative calm between the first papers of the 1980s and (roughly) the end of the century. Since then, they turn up everywhere. This talk will survey the basic problems and some of their solutions, as well as the main variants and open problems.
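As one concrete taste of this area, here is a Python sketch of a classical characterization (not necessarily covered in the talk): a finite graph is cop-win for a single cop exactly when it is dismantlable, that is, reducible to one vertex by repeatedly deleting a vertex whose closed neighbourhood is contained in another vertex's:

    # Sketch: test whether a graph is cop-win (one cop always catches the robber)
    # via dismantlability: repeatedly remove a "dominated" vertex u, one whose
    # closed neighbourhood N[u] lies inside N[v] for some other vertex v.
    def is_cop_win(adj):
        # adj: dict mapping each vertex to the set of its neighbours
        vertices = set(adj)
        while len(vertices) > 1:
            closed = {u: ({u} | adj[u]) & vertices for u in vertices}
            dominated = next((u for u in vertices for v in vertices
                              if u != v and closed[u] <= closed[v]), None)
            if dominated is None:
                return False
            vertices.remove(dominated)
        return True

    path = {0: {1}, 1: {0, 2}, 2: {1}}                      # paths are cop-win
    cycle4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}   # the 4-cycle is not
    print(is_cop_win(path), is_cop_win(cycle4))             # True False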

Colloquia from past years