DIRO colloquia: Fall 2009 - Winter 2010

General information

Coordinator for Fall 2009 and Winter 2010: Miklós Csűrös.

Calendar

May 20, 2010, 10:30, AA 3195. Pierre Bellec, Centre de recherche de l'Institut de Gériatrie de Montréal.
Unsupervised clustering in large databases: design, validation and optimization of processing pipelines.
The volume and richness of the databases acquired by government agencies pose a challenge for the design and validation of analysis models. Unsupervised clustering techniques make it possible to explore the structures present in large data tables, in order to simplify their interpretation and generate scientific hypotheses. I will present my research directions in this area:
  • Studying the stability of a clustering process through resampling [1] (see the sketch after this abstract).
  • Developing databases of realistic simulations for the optimization and validation of the entire data-processing pipeline [2].
  • Developing software tools that allow complex processing pipelines to be implemented rigorously and easily, and applied to large databases in a distributed computing environment [3].
The main application domain of my research is functional magnetic resonance imaging (fMRI), which provides a non-invasive measure of the vascular consequences of neuronal activity with good temporal and spatial resolution. The clustering algorithms then aim to map distributed networks of brain regions whose activity shows a high degree of temporal coherence. These functional networks help us better understand human brain organization and can serve as early markers for various pathologies, notably Alzheimer's disease.
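As context for the first research axis, the following minimal Python sketch (a generic illustration, not the multi-level bootstrap method of [1]) measures clustering stability by resampling: k-means is run on bootstrap replicates of the data, and we record how often each pair of observations ends up in the same cluster.

```python
# Minimal illustration of clustering stability via bootstrap resampling.
# This is a generic sketch, not the multi-level bootstrap method of reference [1].
import numpy as np
from sklearn.cluster import KMeans

def stability_matrix(X, n_clusters=3, n_boot=100, seed=0):
    """Fraction of bootstrap replicates in which each pair of observations
    is assigned to the same cluster (counted only when both are drawn)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    together = np.zeros((n, n))
    sampled = np.zeros((n, n))
    for _ in range(n_boot):
        idx = rng.choice(n, size=n, replace=True)          # bootstrap sample
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=0).fit_predict(X[idx])
        uniq = np.unique(idx)
        # first label seen for each distinct observation in this replicate
        first = {i: labels[np.where(idx == i)[0][0]] for i in uniq}
        for a in uniq:
            for b in uniq:
                sampled[a, b] += 1
                if first[a] == first[b]:
                    together[a, b] += 1
    return together / np.maximum(sampled, 1)

if __name__ == "__main__":
    # Two well-separated blobs: pairs within a blob should be highly stable.
    X = np.vstack([np.random.randn(20, 2), np.random.randn(20, 2) + 5])
    S = stability_matrix(X)
    print("mean within-block stability:", S[:20, :20].mean())
```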
  1. P. Bellec, P. Rosa-Neto, O. C. Lyttelton, H. Benali, A. C. Evans. Multi-level bootstrap analysis of stable clusters in resting-state fMRI. NeuroImage 51 (2010), pp. 1126-1139.
  2. P. Bellec, V. Perlbarg, A. C. Evans. Bootstrap generation and evaluation of an fMRI simulation database. Magnetic Resonance Imaging 27 (2009), pp. 1382-1396.
  3. http://code.google.com/p/niak/

Announcement in PDF

Past winter talks

April 29, 2010, 15:30, AA 3195. Dan Brown, University of Waterloo, Ontario.
From DNA to hip hop: how ideas from bioinformatics can automate finding rhymes in rap music
Dan Brown is Associate Professor of Computer Science at the University of Waterloo, where he has been since 2001. From 2000 to 2001, he worked on the human and mouse genome projects at the Whitehead/MIT Center for Genome Research. His interests are in algorithms for understanding the information in discrete sequences, particularly identifying patterns in DNA and protein sequences.
Unlike most kinds of music, the core of rap music is found in the rhythm and rhyme of its lyrics. Different artists or subgenres will use different kinds of rhyme, which in some cases can be extremely complicated: the end of one line may rhyme with several parts of the previous line, and in some cases, rhymes may be imperfect.
Detecting these complex rhyme patterns manually is time-consuming and tedious. We have designed a system for automatic rhyme annotation. Our approach is founded on several bioinformatics ideas. First, using a test corpus of known rhymes, we develop a probabilistic model of rhymed and unrhymed syllables. Then, we use that model to build a log-likelihood ratio scoring matrix for identifying what is and is not a rhyme. Finally, we create a local alignment procedure to find high-scoring lyric segments.
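As a rough illustration of the pipeline just described (a toy sketch: the scores below are made up and stand in for the trained log-likelihood ratio matrix), syllable pairs can be scored and a Smith-Waterman-style local alignment used to locate high-scoring rhyming segments:

```python
# Sketch of the alignment step: Smith-Waterman-style local alignment over syllable
# sequences, with a toy stand-in for the log-likelihood scoring matrix. The
# probabilistic model trained on a rhyme corpus (as in the talk) is not shown.

def toy_score(a, b):
    """Hypothetical log-odds score: reward shared vowel sounds, penalize mismatches."""
    if a == b:
        return 3.0
    if a[0] == b[0]:          # same vowel nucleus (first character in this toy encoding)
        return 1.0
    return -2.0

def local_align(line1, line2, gap=-1.5):
    """Return the best local alignment score between two syllable sequences."""
    n, m = len(line1), len(line2)
    H = [[0.0] * (m + 1) for _ in range(n + 1)]
    best = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            H[i][j] = max(0.0,
                          H[i - 1][j - 1] + toy_score(line1[i - 1], line2[j - 1]),
                          H[i - 1][j] + gap,
                          H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

# Toy syllables encoded as vowel+coda strings; a high score suggests a rhyme.
print(local_align(["i", "at", "ation"], ["o", "at", "ation"]))
```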
Our procedure has high sensitivity and specificity in identifying true rhymes in an annotated corpus; essentially, it identifies most complex rhymes, and identifies few false rhymes. We can use it to characterize artists, and then to develop classifiers for individual artists with surprising success.
Joint work with MMath student Hussein Hirjee.
Announcement in PDF
Monday, April 12, 2010, 15:30, AA 3195. Alain Denise, LRI and IGM, Université Paris-Sud 11.
Comparing RNA structures, comparing trees, comparing graphs
Over the past few years, the role of RNA in the various cellular processes has been considerably re-evaluated. There is therefore, more than ever, a crucial need for computational tools to help manipulate and model RNAs. The talk will focus on the problem of comparing RNA structures, which is closely related to tree and graph comparison problems. It will notably address the theoretical complexity of these comparison problems, as well as algorithms developed recently in Orsay and elsewhere.
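As a minimal example of RNA structure comparison (the simple base-pair distance, far more basic than the tree and graph algorithms the talk covers), one can parse dot-bracket notation into sets of base pairs and count the symmetric difference:

```python
# Base-pair distance between two RNA secondary structures in dot-bracket notation.
# A deliberately simple measure; the talk covers richer tree- and graph-based comparisons.

def base_pairs(dot_bracket):
    """Parse '(((...)))'-style notation into a set of (i, j) base pairs."""
    stack, pairs = [], set()
    for i, c in enumerate(dot_bracket):
        if c == '(':
            stack.append(i)
        elif c == ')':
            pairs.add((stack.pop(), i))
    return pairs

def bp_distance(s1, s2):
    """Number of base pairs present in exactly one of the two structures."""
    return len(base_pairs(s1) ^ base_pairs(s2))

print(bp_distance("((..((...))..))", "((..((....)).))"))   # prints 4
```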
Announcement in PDF
March 25, 2010, 15:30, AA 3195. Christopher Anand, McMaster University, Hamilton, Ontario.
Coconut: An experiment in safe, high-performance code generation
I will describe the Coconut (COde CONstructing User Tool) project, from initial goals and design philosophy to successes and failures in implementation. Our hope has been that the apparent opposition of safety and performance is an artifact of the way programming languages and compilers have evolved, and by rethinking some of the underlying processes and intermediate objects, we can achieve both. Our correctness strategy is simply to allow only code optimizations in the form of semantics-preserving (hyper-)graph transformations. I will show how even software pipelining can fit into this framework. Our programmability strategy is to use typically pure functional syntax and semantics as much as possible. I will show some examples of this. Our high performance strategy is to expose the tools programmers need, like wide-register formats and instructions to access them, while allowing the programmer to capture patterns of efficient implementation in higher-order functions. At the lowest level of parallelization, this leads to declarative assembly language.
Finally, I will show that incomplete as it is, our tool is already able to outperform the competition by a wide margin on the generation of special function libraries.
Announcement in PDF
January 28, 2010, 15:30, AA 6214. Dannie Durand, Carnegie Mellon University, Pittsburgh, Pennsylvania.
Reconciliation of non-binary evolutionary trees
Disagreement between trees representing the evolution of genes and species is evidence that these entities are not simply co-evolving by vertical descent. The problem of reconciling binary gene trees with binary species trees to infer gene duplication, gene loss and/or horizontal gene transfer is well studied. However, when the species tree is non-binary, disagreement can also arise from genetic variation in ancestral populations. I will present algorithms for reconciling binary gene trees with non-binary species trees that take population genetic factors into account and discuss their application to resolving uncertainty in species trees.
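For context, here is a hedged sketch of the classical binary-binary case: the LCA-mapping reconciliation that flags gene duplications. The non-binary algorithms presented in the talk, which additionally model ancestral population effects, are not reproduced here.

```python
# Sketch of the classical LCA-mapping reconciliation of a *binary* gene tree with a
# *binary* species tree, counting gene duplications. Trees are nested tuples of leaf labels.

def parents(tree, parent=None, acc=None):
    """Map each node (tuple or leaf label) of a nested-tuple tree to its parent."""
    if acc is None:
        acc = {}
    acc[tree] = parent
    if isinstance(tree, tuple):
        for child in tree:
            parents(child, tree, acc)
    return acc

def ancestors(node, parent_map):
    path = []
    while node is not None:
        path.append(node)
        node = parent_map[node]
    return path

def lca(a, b, parent_map):
    anc_a = ancestors(a, parent_map)
    for x in ancestors(b, parent_map):
        if x in anc_a:
            return x
    raise ValueError("nodes are not in the same tree")

def reconcile(gene_tree, species_parent, species_of, duplications):
    """Return the species-tree node each gene-tree node maps to; record duplications."""
    if not isinstance(gene_tree, tuple):                  # gene-tree leaf
        return species_of[gene_tree]
    left = reconcile(gene_tree[0], species_parent, species_of, duplications)
    right = reconcile(gene_tree[1], species_parent, species_of, duplications)
    m = lca(left, right, species_parent)
    if m == left or m == right:                           # a child maps to the same species node
        duplications.append(gene_tree)
    return m

# Species tree ((A,B),C); a gene tree whose root maps to the same species node as a child.
species_tree = (("A", "B"), "C")
sp_parent = parents(species_tree)
species_of = {"a1": "A", "a2": "A", "b1": "B", "c1": "C"}
dups = []
reconcile((("a1", "b1"), ("a2", "c1")), sp_parent, species_of, dups)
print("duplications:", len(dups))   # prints 1
```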
Announcement in PDF

Past fall talks

December 10, 2009, 15:30, AA 6214. Patrick Meyer, Université Libre de Bruxelles, Belgium.
Information-theoretic network inference methods applied to the analysis of transcriptional regulation
A typical problem in bioinformatics is extracting structured information from microarray data. Microarray data sets often comprise a very large number of variables, very few samples and a lot of noise. Analyzing such data is therefore one of the major current challenges for machine learning methods. Network inference is a machine learning technique that aims to determine the dependencies between the variables of a data set and to represent them with a graph. Applied to microarray data, this technique makes it possible to recover the transcriptional regulatory network of a cell and to identify specific genes involved in various diseases. This presentation focuses on network inference methods that use information theory to infer the dependencies between variables. Two main categories of methods exist: i) methods that attempt to detect causal relationships from conditional dependencies, and ii) methods that rely on the mutual information between all pairs of variables. These methods will be discussed in detail in this presentation.
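As a minimal illustration of the second family of methods (a bare-bones relevance-network sketch, much simpler than the methods discussed in the talk), one can estimate the mutual information between every pair of discretized expression profiles and keep the pairs that exceed a threshold:

```python
# Minimal relevance-network sketch: pairwise mutual information between discretized
# expression profiles, keeping edges above a threshold. Real methods add corrections
# for indirect dependencies and for estimation bias with few samples.
from collections import Counter
from math import log

def mutual_information(x, y):
    """Empirical mutual information (in nats) between two discrete sequences."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum((c / n) * log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def relevance_network(profiles, threshold=0.5):
    """Return gene pairs whose mutual information exceeds the threshold."""
    genes = list(profiles)
    edges = []
    for i, g1 in enumerate(genes):
        for g2 in genes[i + 1:]:
            if mutual_information(profiles[g1], profiles[g2]) > threshold:
                edges.append((g1, g2))
    return edges

# Toy discretized (low/medium/high) expression profiles over 8 samples.
profiles = {
    "geneA": [0, 0, 1, 2, 2, 1, 0, 2],
    "geneB": [0, 0, 1, 2, 2, 1, 0, 2],   # co-regulated with geneA
    "geneC": [2, 1, 0, 0, 1, 2, 2, 0],
}
print(relevance_network(profiles))       # [('geneA', 'geneB')]
```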
Announcement in PDF
Tuesday, December 8, 2009, 15:30, AA 6214. Guillaume Bourque, Genome Institute of Singapore.
Impact of retrotransposons on genome evolution, cancer and transcriptional regulation
Joint seminar with the Centre Robert-Cedergren.
Genomic repeats, and retrotransposons in particular, have been present in vertebrate genomes for millions of years. While repetitive DNA was long regarded as parasitic DNA, it is now increasingly recognized as a major source of change during genome evolution. My presentation is divided into three parts. In the first, I present a new method for identifying genomic rearrangements that occurred during evolution. The results obtained from mammalian genomes illustrate the extent to which retrotransposons have been important catalysts of genomic rearrangements. In the second part, I show, using data from the sequencing of cancerous tumors, that the same catalytic effect also underlies many somatic mutations. Finally, in the third part, I present results on the evolution of transcriptional regulation in human and mouse based on ChIP-Seq (chromatin immunoprecipitation sequencing) data obtained in stem cell lines. These data indicate that species-specific retrotransposons have profoundly reshaped the transcriptional circuits of pluripotent cells.
Announcement in PDF
November 26, 2009, 15:30, AA 3195. Rouba Ibrahim, Columbia University, New York.
Real-time delay estimation in customer service systems
Joint seminar with the Canada Research Chair in Stochastic Simulation and Optimization.
Rouba Ibrahim is a doctoral student at the Industrial Engineering and Operations Research (IEOR) department of Columbia University. Her doctoral thesis advisor is Professor Ward Whitt. Her research interests lie in stochastic modeling, call centers, simulation, queuing science, and healthcare operations.
Motivated by the desire to make delay announcements to arriving customers, we study alternative ways of estimating customer delay in many-server service systems. Our delay estimators differ in the type and amount of information that they use about the system. We introduce estimators that effectively cope with real-life phenomena, such as customer abandonment (impatience), time-varying arrival rates, and general service-time distributions. We use computer simulation and heavy-traffic analysis to verify that our proposed estimators outperform several natural alternatives.
Joint work with Ward Whitt, IEOR department, Columbia University.
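To give a flavor of the simplest history-based estimators studied in this line of work (a sketch of the well-known last-to-enter-service idea, not necessarily the exact estimators of the talk), an arriving customer can be told the waiting time of the last customer who entered service:

```python
# Sketch of the "last customer to enter service" (LES) delay announcement, one of the
# simple history-based estimators in this area (illustration only).

def les_estimate(history, now):
    """history: list of (arrival_time, service_start_time) for past customers.
    Announce the waiting time of the last customer to have entered service by `now`."""
    started = [(start, start - arrival)
               for arrival, start in history if start <= now]
    if not started:
        return 0.0
    _, wait = max(started)            # customer with the latest service start
    return wait

# Past customers arrived at 0, 1, 2 and started service at 0.5, 3.0, 6.0.
history = [(0.0, 0.5), (1.0, 3.0), (2.0, 6.0)]
print(les_estimate(history, now=4.0))   # announces 2.0 (the wait of the second customer)
```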
November 19, 2009, 15:30, AA 6214. David G. Stork, Ricoh Innovations & Stanford University.
Computer vision in the study of art: New rigorous approaches to the study of paintings and drawings
Dr. David G. Stork, Chief Scientist of Ricoh Innovations, is a graduate in physics of the Massachusetts Institute of Technology and the University of Maryland at College Park; he studied art history at Wellesley College and was Artist-in-Residence through the New York State Council of the Arts. He is a Fellow of the International Association for Pattern Recognition and has published six books/proceedings volumes, with one forthcoming, including Seeing the Light: Optics in nature, photography, color, vision and holography (Wiley), Computer image analysis in the study of art (SPIE), Pattern Classification (2nd ed., Wiley), and HAL's Legacy: 2001's computer as dream and reality (MIT).
New rigorous computer algorithms have been used to shed light on a number of recent controversies in the study of art. For example, illumination estimation and shape-from-shading methods developed for robot vision and digital photograph forensics can reveal the accuracy and the working methods of masters such as Jan van Eyck and Caravaggio. Computer box-counting methods for estimating fractal dimension have been used in authentication studies of paintings attributed to Jackson Pollock. Computer wavelet analysis has been used for attribution of the contributors in Perugino's Holy Family and works of Vincent van Gogh. Computer methods can dewarp the images reflected in convex mirrors depicted in famous paintings such as Jan van Eyck's Arnolfini portrait to reveal new views into artists' studios and shed light on their working methods. New principled, rigorous methods for estimating perspective transformations outperform traditional and ad hoc methods and yield new insights into the working methods of Renaissance masters. Sophisticated computer graphics recreations of tableaus allow us to explore "what if" scenarios and reveal the lighting and working methods of masters such as Caravaggio.
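As an illustration of one of the techniques mentioned above (a generic box-counting sketch, not the specific procedure used in the Pollock authentication studies), a fractal dimension can be estimated by counting occupied boxes at several scales and fitting the slope of log N(ε) versus log(1/ε):

```python
# Generic box-counting estimate of fractal dimension for a set of 2-D points
# (an illustration of the technique only).
import math
import random

def box_count_dimension(points, box_sizes):
    """Fit the slope of log N(eps) vs log(1/eps) by least squares."""
    xs, ys = [], []
    for eps in box_sizes:
        boxes = {(math.floor(x / eps), math.floor(y / eps)) for x, y in points}
        xs.append(math.log(1.0 / eps))
        ys.append(math.log(len(boxes)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# A filled unit square of random points should give a dimension close to 2.
random.seed(0)
pts = [(random.random(), random.random()) for _ in range(20000)]
print(box_count_dimension(pts, box_sizes=[0.2, 0.1, 0.05, 0.025]))
```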
November 12, 2009, 15:30, AA 6214. Allan Borodin, University of Toronto, Ontario.
The power and limitations of greedy mechanism design for combinatorial auctions
We study combinatorial allocation problems in which objects are sold to selfish agents, each having a private valuation function that expresses values for sets of objects. In particular, we consider the social welfare objective, where the goal of the auctioneer is to allocate the objects so as to maximize the sum of the values of the allocated sets. Our specific interest is to understand the power and limitations of conceptually simple allocation algorithms in this regard. For single-minded combinatorial auctions (where agents only value and bid on a single set), there is a known truthful deterministic mechanism (Lehmann, O'Callaghan and Shoham) using a greedy allocation that achieves an O(√m) approximation ratio to the social welfare, where m is the total number of objects. However, for general (multi-minded) combinatorial auctions the best known deterministic truthful efficient mechanism has approximation ratio O(m/√log m) (Blumrosen and Nisan). We model greedy algorithms by priority algorithms and show that no truthful priority mechanism obtains a sub-linear approximation ratio for two-minded agents. In contrast, we show that every c-approximation monotone greedy algorithm for a combinatorial allocation problem can be implemented as a mechanism that achieves a c + O(log c) approximation at every Bayesian Nash equilibrium, and if we assume that agents do not overbid, then this "price of anarchy" is c+1. Finally, we discuss recent results by Brendan Lucier concerning the use of such simple algorithms in repeated games.
Joint work with Brendan Lucier.
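For reference, here is a hedged sketch of the greedy allocation rule for single-minded bidders in the spirit of Lehmann, O'Callaghan and Shoham: rank bids by value divided by the square root of the bundle size and accept greedily. The critical-value payments needed for truthfulness are omitted.

```python
# Sketch of the greedy allocation rule for single-minded combinatorial auctions:
# rank bids by v / sqrt(|S|) and accept any bid whose bundle is still free.
# Critical-value payments, required for truthfulness, are not shown.
from math import sqrt

def greedy_allocation(bids):
    """bids: list of (bidder, value, frozenset_of_items). Return accepted bidders."""
    allocated_items, winners = set(), []
    for bidder, value, items in sorted(bids, key=lambda b: b[1] / sqrt(len(b[2])),
                                       reverse=True):
        if allocated_items.isdisjoint(items):
            winners.append(bidder)
            allocated_items |= items
    return winners

bids = [
    ("alice", 10.0, frozenset({"a", "b", "c", "d"})),   # 10 / 2    = 5.0
    ("bob",    6.0, frozenset({"a"})),                  #  6 / 1    = 6.0
    ("carol",  5.0, frozenset({"b", "c"})),             #  5 / 1.41 ≈ 3.5
]
print(greedy_allocation(bids))   # ['bob', 'carol'] under this ranking
```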
October 29, 2009, 15:30, AA 6214. Walid Taha, Rice University, Texas.
Mathematical equations as executable models of mechanical systems
Walid Taha is an assistant professor at Rice University, Houston, TX. His interests span programming language semantics, type systems, compilers, program generation, real-time systems, and physically safe computing. He is the principal investigator on a number of NSF, Texas ATP, and SRC research grants and contracts, including an NSF CAREER Award. He is the principal designer of MetaOCaml, Acumen, and the Verilog Preprocessor system. He founded the ACM Conference on Generative Programming and Component Engineering (GPCE), the IFIP Working Group on Program Generation (WG 2.11), and the Middle Earth Programming Languages Seminar (MEPLS).
Increasingly, hardware and software systems are being developed for applications where they must interact directly with a physical environment. This trend significantly complicates testing computational systems and necessitates modeling and simulation of physical environments. While numerous tools provide differing types of assistance in simulating physical systems, there are surprising gaps in the support that these tools provide. Focusing on mechanics as an important class of physical environment, we address two such gaps, namely, the poor integration between different existing tools and the performance limitations of mainstream symbolic computing engines. We combine our solutions to these problems to create a domain-specific language that embodies a novel approach to modeling mechanical systems that is natural to engineers. The new language, called Acumen, enables describing mechanical systems as mathematical equations. These equations are transformed by a fast, well-defined sequence of steps into executable code. Key design features of Acumen include support for acausal modeling, point-free (or implicit time) notation, efficient symbolic differentiation, and families of equations. Our experience suggests that Acumen provides a promising example of balancing the needs of the engineering problem solving process and the available computational methods.
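To illustrate the general idea of treating model equations as executable code (a plain Python sketch of an assumed mass-spring-damper model, not Acumen syntax), the equation m·x'' = -k·x - c·x' can be integrated directly with a fixed-step scheme:

```python
# Illustration of "equations as executable models": a mass-spring-damper
#   m * x'' = -k * x - c * x'
# integrated with a fixed-step explicit scheme. This is a plain Python sketch,
# not Acumen code; Acumen additionally supports acausal models and symbolic steps.

def simulate(m=1.0, k=4.0, c=0.5, x=1.0, v=0.0, dt=0.001, t_end=5.0):
    """Semi-implicit (symplectic) Euler integration of the second-order equation."""
    t = 0.0
    while t < t_end:
        a = (-k * x - c * v) / m     # acceleration from the model equation
        v += a * dt                  # update velocity first
        x += v * dt                  # then position
        t += dt
    return x, v

print(simulate())
```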
September 17, 2009, 15:30, AA 3195. Rocco Oliveto, Università di Salerno, Italy.
Recovering traceability links via information retrieval methods: Challenges and opportunities
Dr. Rocco Oliveto received the Laurea (cum laude) in Computer Science from the University of Salerno (Italy) in 2004. From October 2006 to February 2007 he was a visiting student at University College London, UK, under the supervision of Prof. Anthony Finkelstein. He received the PhD in Computer Science from the University of Salerno (Italy) in 2008. Dr. Oliveto has participated in the organization of several international conferences. He was Program Co-Chair of the International Workshop on Traceability in Emerging Forms of Software Engineering, 2009 (Vancouver, Canada). He co-organized the working session "Software Artefact Traceability: the Never-ending Challenge" at the 23rd IEEE International Conference on Software Maintenance, Paris, France, 2007. He has served on the program committees of the International Conference on Program Comprehension, the European Conference on Software Maintenance and Reengineering, the International Conference on Enterprise Information Systems, and the International Conference on Computer Supported Education. He is a reviewer for the International Journal of Software Maintenance and Evolution: Research and Practice, and Information Processing Letters. He is currently a research fellow at the Department of Mathematics and Informatics of the University of Salerno. Moreover, since 2005 he has also been a contract lecturer at the Faculty of Science of the University of Molise. His research interests include traceability management, information retrieval, software maintenance, program comprehension, cooperative support for software engineering, empirical software engineering and e-learning.
Software artefact traceability is the ability to identify related artefacts created during the development of a software system. Traceability information is particularly important for a variety of software engineering tasks, such as impact analysis, program comprehension, and more encompassing tasks such as reverse engineering for redevelopment and systematic reuse. Unfortunately, traceability links are rarely explicit, so they have to be identified and maintained by developers during software development. Since this task is time consuming, traceability information is very often not kept up to date during the software life cycle.
Extensive effort in the software engineering community has gone into providing methods and tools for traceability link recovery. The talk focuses on the use of Information Retrieval (IR) techniques for recovering traceability links between artefacts of different types. Such methods recover traceability links on the basis of the similarity between the text contained in the software artefacts. Indeed, the conjecture is that artefacts with high textual similarity probably share several concepts, and so are likely good candidates to be traced to one another. Besides presenting the background on IR-based traceability recovery, the talk will also present and discuss open issues and lessons learned from a family of experiments carried out to statistically analyse how the tracing accuracy of the software engineer is affected by the use of an IR-based traceability recovery tool. The talk will conclude by highlighting opportunities and challenges in traceability link recovery.
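As a minimal illustration of the IR-based approach (a toy vector-space sketch, not a complete traceability recovery tool; all artefact names below are hypothetical), artefacts can be indexed with TF-IDF vectors and candidate links ranked by cosine similarity:

```python
# Toy IR-based traceability sketch: TF-IDF vectors over artefact text and cosine
# similarity to rank candidate links. Artefact names and texts are invented.
import math
from collections import Counter

def tfidf_vectors(documents):
    """documents: dict name -> text. Return dict name -> {term: tf-idf weight}."""
    tokenized = {name: text.lower().split() for name, text in documents.items()}
    n_docs = len(documents)
    df = Counter(term for toks in tokenized.values() for term in set(toks))
    vectors = {}
    for name, toks in tokenized.items():
        tf = Counter(toks)
        vectors[name] = {t: tf[t] * math.log(n_docs / df[t]) for t in tf}
    return vectors

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

artefacts = {
    "UC1 (requirement)": "the user shall reset the account password via email",
    "PasswordReset.java": "class password reset send email token account user",
    "Invoice.java": "class invoice compute total tax amount",
}
vecs = tfidf_vectors(artefacts)
for code in ["PasswordReset.java", "Invoice.java"]:
    print(code, round(cosine(vecs["UC1 (requirement)"], vecs[code]), 3))
```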

Colloquia from previous years