Montreal-Toronto Computer Vision Workshop 2005

McGill

Trevor Ahmedali (poster)

Online Person Detection for Parallel Distributed Surveillance Camera Arrays

Video surveillance systems are increasingly relied upon as a cost-effective way of monitoring large areas. Since the tedious nature of this work can make security personnel prone to error, it is desirable to have a network of multiple video surveillance cameras that can automatically detect individuals and notify security personnel. Such person-detection systems often have several drawbacks: there is a lengthy offline training period to create a classifier, which is then fixed, and these classifiers are dependent on the position and viewpoint of the camera, making training a large network of cameras impractical. We therefore seek to create a low-cost decentralized camera array that uses an online-learning system to learn a person-detection classifier "on the fly", built using inexpensive commercial off-the-shelf cameras and embedded microprocessor boards. Cameras learn their classifiers simultaneously, eliminating lengthy offline training periods. Processing is done on each camera module, so only high-level data and occasional images of detected people are transmitted. A computationally efficient offline classifier was first developed using the Viola-Jones algorithm, which applies a cascade of small classifiers based on image features. A new online-learning classifier was then built on the Winnow learning algorithm, using the same principles and image features. To overcome the problems of obstructions and false positives, the algorithm was extended across multiple cameras, allowing them to pool their results and increase the likelihood of a correct prediction.
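The Winnow update at the heart of such an online classifier is simple enough to sketch. The following is a generic textbook Winnow over binary feature responses, not the poster's exact variant; the promotion factor and threshold are illustrative assumptions, and the Viola-Jones-style rectangle features are abstracted into a 0/1 vector.

import numpy as np

class Winnow:
    """Minimal Winnow learner over binary feature vectors (a sketch,
    not the poster's implementation)."""

    def __init__(self, n_features, alpha=2.0):
        self.w = np.ones(n_features)   # multiplicative weights, init to 1
        self.alpha = alpha             # promotion/demotion factor (assumed)
        self.theta = n_features        # a common threshold choice

    def predict(self, x):
        # Linear threshold unit over the binary feature vector x.
        return self.w @ x >= self.theta

    def update(self, x, y):
        """Online step: on a mistake, multiplicatively promote or demote
        the weights of the features that were active."""
        pred = self.predict(x)
        if pred and not y:             # false positive: demote
            self.w[x > 0] /= self.alpha
        elif not pred and y:           # false negative: promote
            self.w[x > 0] *= self.alpha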

Isabelle Bégin (poster)

Multi-Scale Technique and Similarity Metric Comparison for Blind Super-Resolution

Most existing super-resolution algorithms assume knowledge of the camera degradation parameters. In this research, we address the more general problem of generating a high-resolution image from a low-resolution image degraded by an unknown camera. Quality assessment is also addressed by comparing a variety of existing similarity measures. An existing learning-based algorithm using Markov networks was enhanced with Laplacian pyramids. The pyramids are used to obtain the high- and mid-frequency images as well as to capture the relationship between a high-resolution image and its low-resolution version. Experiments were performed on many types of images. Results show that the algorithm reliably recovers the super-resolved image as well as the point-spread function (PSF) of the camera. The metric comparison shows that results vary greatly with the type of metric chosen. Metrics such as the Structural Similarity Measure are thought to be more suitable for quality assessment in the context of super-resolution, since measures based on pixel-to-pixel statistics do not necessarily represent what is perceived by the human eye.
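For readers unfamiliar with the decomposition, a minimal Laplacian pyramid can be sketched as follows. The blur kernel, decimation scheme and level count here are illustrative choices, not the ones used in the paper, and a grayscale 2D image is assumed.

import numpy as np
from scipy import ndimage

def laplacian_pyramid(img, levels=3):
    """Build a Laplacian pyramid: each level stores the band-pass detail
    lost between an image and its blurred, downsampled version."""
    pyramid, current = [], img.astype(float)
    for _ in range(levels):
        blurred = ndimage.gaussian_filter(current, sigma=1.0)
        down = blurred[::2, ::2]                         # decimate by 2
        up = ndimage.zoom(down, 2.0, order=1)            # re-expand
        up = up[:current.shape[0], :current.shape[1]]    # match odd sizes
        pyramid.append(current - up)                     # high/mid band
        current = down
    pyramid.append(current)                              # low-freq residual
    return pyramid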

Francois Cayouette (poster)

Generic Real-Time Object Tracking Method in a Semi-Dynamic Environment

We present a method able to track multiple objects in a generic scene. The system is implemented in a very generic way, so it does not need to know which objects are expected in the scene. Our tracker uses three image features to help detect new objects and track them while they stay in the camera's field of view: motion, edges and colour. The importance of these features in the tracker changes over time, as some features are used to recover from perceived tracking errors made when using another feature. Simple methods are used throughout in order to keep the tracker as near to real-time as possible. A background learning and recognition algorithm has been implemented to cope with changes in the background.

Paul Di Marco (poster)

not available

Philippe Giguere (poster)

Towards autonomous amphibious locomotion

Aqua is the underwater extension of the successful hexapod platform RHex, with six flippers replacing the compliant legs. Currently, Aqua is tele-operated via a fiber optic cable. More autonomous behavior would, in the short term, increase the usability of the robot, and in the longer term render the communication link obsolete. The autonomy problem can be split into planning, localization, and acting; within the scope of this project, we are concerned with the last part. In most legged robots, gait selection is essential to achieving good locomotion. The optimal gait depends on the environment (underwater, staircases, inclined surfaces); hence gait selection requires good environment sensing. With the help of machine learning techniques (reinforcement learning), we hope to achieve automatic gait selection via environment sensing.

Maria Nadia Hilario (poster)

Occlusion Detection in Front Projection Environments Based on Camera-Projector Calibration

Camera-projector systems are increasingly being used to create large interactive displays for data visualization, virtual environments and mixed reality. Front projection displays, however, suffer from occlusions, which cast shadows onto the display and light onto the user. Researchers have begun addressing the issue of occlusion detection to enable dynamic shadow removal and human-computer interaction. An occlusion detection technique for front projection environments is presented. The approach is based on camera-projector geometric and color calibration, which enables dynamic camera-view synthesis of the projected scene. Occluded display regions are then detected through pixel-wise differencing between predicted and captured camera images.
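The final differencing step is straightforward to sketch, assuming the calibrated prediction of the camera view is already available; the threshold tau below is a hypothetical parameter, not one from the paper.

import numpy as np

def occlusion_mask(predicted, captured, tau=30.0):
    """Flag display pixels where the captured camera image disagrees with
    the view predicted from camera-projector calibration."""
    diff = np.abs(captured.astype(float) - predicted.astype(float))
    if diff.ndim == 3:           # accumulate over color channels if present
        diff = diff.sum(axis=2)
    return diff > tau            # boolean mask of occluded display regions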

Saul Simhon (poster)

A Pen-Based Interface for Controlling Systems with Multiple DOF

In past research, most approaches for modeling constraints on curves consist of specialized models based on rules and preferences for a given domain. Can we learn such rules by simply examining several example curves that exhibit the desired properties? In this work we present a machine learning framework for the automatic classification and refinement of hand-drawn 2D curves. The main objective is to develop a dynamic computational model for processing gesture-based inputs (such as a pen stroke) and inferring the user's intent under different contexts (i.e., a 'smart user interface'). This is exemplified by two distinct applications: tele-operated robot path planning and sketch beautification. In the robotics application, we present a sketch-based robotic control system where users can simply sketch out the path they wish the robot to take without having to worry about the low-level details. Using supervised learning, the system automatically synthesizes kinematically correct paths that avoid obstacles, without explicitly modeling the dynamics of the robot. This avoids the difficulties of modeling complex, multi-DOF systems such as the AQUA robot. In the sketch beautification application, the same framework is applied to synthesize novel full-colored illustrations from coarse outlines. In both applications, we demonstrate that constraints on curves can be learned from a set of examples and applied to augment the rudimentary gesture information from a human operator. Further, we demonstrate that, in the maximum likelihood sense, we can identify which class of examples the human input belongs to, allowing automatic selection of the most appropriate type of refinement. In cases where the gesture information has already been rendered to an image, we also show that the same methodology can be used to detect and extract the most likely parametric curve from the image.

Sandra Skaff and Carmen Au (poster)

Anomaly Detection for Video Surveillance Applications

Monitoring activity within a building is one common application of video surveillance. In this activity, security guards are primed to detect anomalous situations. In our work, we consider the problem of detecting when an image is similar to an image seen in the past, or conversely, when an image is different from the images seen in the past. The latter case allows for anomaly detection. We propose to use a similarity measure from Bennett et al. that can remove most of the mutual information between data sets. Images were obtained from a surveillance camera viewing a hallway at our research centre. We threshold the relative reduction in size of the compressed concatenation of two images, as compared with the sum of the sizes of the individually compressed images. Results show that as the threshold increases, the number of novel images detected decreases. Moreover, as more images are viewed, the rate at which novel images are detected decreases. This reflects our a priori intuition that as we are exposed to more images, fewer and fewer novel images are observed.
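The similarity computation can be made concrete with a small sketch; here zlib stands in for whatever compressor the system actually used, images are assumed to be numpy arrays, and the threshold value is illustrative.

import zlib

def compression_similarity(img_a, img_b):
    """Relative reduction in compressed size of the concatenation of two
    images versus compressing them separately (larger = more similar)."""
    a, b = img_a.tobytes(), img_b.tobytes()
    ca = len(zlib.compress(a))
    cb = len(zlib.compress(b))
    cab = len(zlib.compress(a + b))
    return (ca + cb - cab) / float(ca + cb)

def is_novel(img, past_images, threshold=0.1):
    """An image is flagged as novel if its similarity to every past image
    falls below the chosen threshold."""
    return all(compression_similarity(img, p) < threshold
               for p in past_images)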

Jianfeng Yin (poster)

A New Photo Consistency Test for Voxel Coloring

Volumetric scene reconstruction is an important task for many vision applications. Most voxel coloring or space carving techniques used for this purpose suffer from the coupled problems of visibility and photo consistency. We propose a new photo consistency measure that implicitly solves the visibility problem, thus permitting an efficient, single-scan voxel inspection that can be parallelized.

Tina Ehtiati (oral)

Interacting Scene and Object Identification Processes: Strongly Coupled Priors vs. Strongly Coupled Likelihoods

The identification of scenes and the identification of objects are not independent operations; the results of each process influence and facilitate the other. Psychophysical studies provide evidence that the architecture of the human visual system allows interaction between the scene and object identification processes. Our main objective is to study possible Bayesian models for relating two processes that deal with different levels of abstract concepts. We present and compare two such probabilistic models. The capability of the models to improve scene identification is demonstrated through a comparison with results from a probabilistic solution with no feedback.

Michael Langer (oral)

Multilayered motion

Many types of visual motion involve layers. Examples include transparency, fog, dense 3D clutter such as foliage and falling snow, and specularities. Here I will review some recent analyses of layered motion. First, I will consider the motion of specularities on smooth random mirror surfaces and show that these motions produce motion parallax similar to that seen by an observer in a cluttered scene. Second, I will review some of what is known about layered motion perception, in particular sensitivity to gradients and discontinuities, and discuss what this implies about how well we can synthesize layered motion in the context of graphics and visualization. This is work with my students Y. Farasat, J. Pereira, D. Rekhi and A. Bhatia.

Junaed Sattar (oral)

A Visual Servoing System for an Aquatic Swimming Robot

This presentation describes a visual servoing system for an underwater legged robot named AQUA, and initial experiments with the system performed in the open sea. A large class of significant applications can be enabled by allowing such a robot to follow a diver or some other moving target. The robot uses a suite of sensing technologies, primarily based on computer vision, to navigate in shallow-water environments. The visual servoing system described here allows the robot to track and follow a given target underwater. The servo package is made up of two distinct parts: a tracker and a feedback controller. The system has been evaluated in sea water under natural lighting conditions. The servo system has been tested underwater, and with minor modifications it can be used while the robot walks on the ground as well.

Matthew Toews (oral)

Mutual Information Matching in the Presence of Sparse Data

The mutual information (MI) similarity measure is useful for image matching in multi-modal imaging contexts, where other measures of similarity such as correlation or sum of squared differences do not apply. MI similarity is based on an estimate of the joint distribution of pixel intensities in the images to be matched, which can result in spurious registration in the presence of noisy or sparse data. Viewed from the perspective of statistical parameter estimation, this sensitivity is due to the fact that most estimators used are variants of maximum likelihood (ML) estimation, which is unreliable given sparse data. We propose a general maximum a posteriori (MAP) estimation technique based on a maximum entropy prior, which is well defined for sparse data or even in the absence of data altogether. We apply our technique to the task of magnetic resonance (MR) to ultrasound (US) registration, a multi-modal image matching domain notorious for sparse, noisy data.
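As an illustration, MI can be estimated from a joint intensity histogram. Adding uniform pseudo-counts to the histogram, as below, is one simple MAP-style regularizer in the spirit of a maximum entropy prior; the paper's exact estimator may differ.

import numpy as np

def mutual_information(x, y, bins=32, pseudo=1.0):
    """MI between two intensity images from their joint histogram.
    The uniform pseudo-count pulls sparse cells toward the maximum
    entropy (uniform) distribution instead of trusting raw ML counts."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    joint += pseudo                          # MAP-style smoothing
    pxy = joint / joint.sum()                # joint distribution
    px = pxy.sum(axis=1, keepdims=True)      # marginals
    py = pxy.sum(axis=0, keepdims=True)
    return np.sum(pxy * np.log(pxy / (px * py)))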

Luz Abril Torres-Mendez (oral)

Visual and Range Image Statistics for Mobile Robot Environment Modeling

We present a novel statistical learning method that infers the 3D layout of an unknown environment from only a few images and a very small amount of range data. Inferring the 3D layout of space is a critical problem in robotics and computer vision. In robotics, a huge amount of range data must normally be acquired to extract the geometry of the scene. This task is physically demanding and slow for many real systems, and speeding up the acquisition process comes at the price of noisy measurements and/or a low-resolution range image. Our approach considerably reduces the complexity of the acquisition process by acquiring a small amount of range data and a few intensity images. We exploit the assumption that intensity and range data are correlated, albeit in potentially complicated ways that nonetheless exhibit useful structure. The scientific issue is to represent this correlation such that it can be used to recover range data where it is missing. Markov Random Fields are used to model the local statistics of intensity and range. Contrary to previous work, our method does not depend on prior knowledge of reflectance, surface smoothness, or even surface integrability. Experiments on data taken in our own lab are conducted under different configurations to demonstrate the feasibility of the method.

Juan Zhang (oral)

Medial Surfaces for Matching 3-D Models

We consider the use of medial surfaces to represent symmetries of 3-D objects. This allows for a qualitative abstraction based on a directed acyclic graph of components and also a degree of invariance to a variety of transformations including the articulation and deformation of parts. We demonstrate the use of this representation for both indexing and matching 3-D object models. Our formulation uses the geometric information associated with each node along with an eigenvalue labeling of the adjacency matrix of the subgraph rooted at that node. We provide empirical results comparing our algorithm with two other popular approaches in the graphics community (harmonic spheres and shape distributions) on a database of over 300 object models organized by class. The results demonstrate the ability of medial surface-based representations and their graph spectra to provide superior performance, particularly in the case of articulated models. Joint work with K. Siddiqi (McGill), D. Macrini (Toronto), A. Shokoufandeh (Drexel), S. Dickinson (Toronto).

Toronto

Alex Levinshtein (oral)

Learning Decompositional Shape Models from Examples

We present an algorithm for automatically constructing a decompositional shape model from examples. Unlike current approaches to structural model acquisition, in which one-to-one correspondences among appearance-based features are used to construct an exemplar-based model, we search for many-to-many correspondences among qualitative shape features (multi-scale ridges and blobs) to construct a generic shape model. Since such features are highly ambiguous, their structural context must be exploited in computing correspondences, which are often many-to-many. The result is a Marr-like abstraction hierarchy, in which a shape feature at a coarser scale can be decomposed into a collection of attached shape features at a finer scale. We systematically evaluate all components of our algorithm, and demonstrate it on the task of recovering a decompositional model of a human torso from example images containing different subjects with dissimilar local appearance.

Divyang Masrani (poster)

Expanding Stereo-Disparity Range in an FPGA System While Keeping Resource Utilization Low

Stereo disparity estimation is a prime application for embedded computer vision systems. Since stereo can provide depth information, it has potential uses in navigation systems, robotics, object recognition and surveillance systems, amongst others. Solutions based on reconfigurable hardware have the desirable property of allowing the designer to take advantage of the parallelism inherent in many computer vision problems, not the least of which is stereo disparity estimation.

In this work, a stereo algorithm based on phase correlation [1] is extended to handle a larger disparity range without significantly increasing hardware resource utilization compared to a previous implementation [2] of the algorithm. Modifications to the original algorithm are made so as to produce optimal performance at frame rate (30 fps) on an Altera Stratix S80 FPGA. We have made two changes to the original local weighted phase-correlation algorithm, which enable us to achieve a greater range of disparity calculations while keeping hardware resource usage slightly below that of the original implementation [2]. The first modification is a tracking correlation window that uses temporal information from the previous frame to centre the correlation window at the previous frame's disparity value. The second is a roving correlation window that samples the correlation function at regularly spaced increments, searching one region per frame.

Each of these correlation windows searches an area 9 pixels wide at the finest scale, and the system can handle a disparity of 128 pixels (or more). In comparison, the previous implementation used a correlation window 20 pixels wide at the finest scale and could handle a disparity of only 20 pixels. Hardware resource utilization is directly proportional to the correlation search area, and the minimum distance from the camera at which a stereo algorithm can distinguish an object is inversely proportional to the maximum disparity range. We have therefore succeeded in decreasing the minimum distance the stereo system can distinguish from 2 m in the previous system to 30 cm (i.e., increasing the disparity range) without increasing hardware resources.

[1] David J. Fleet. Disparity from local weighted phase correlation. In International Conference on Systems, Man and Cybernetics, volume 1, pages 48--54, 1994.

[2] Ahmad Darabiha, Jonathan Rose, and W. James MacLean. Video-rate stereo depth measurement on programmable hardware. In Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision & Pattern Recognition, volume 1, pages 203--210, Madison, WI, June 2003.
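For intuition, plain global phase correlation can be sketched in a few lines; this is not the local weighted variant of [1], nor the hardware implementation of [2], just the underlying FFT-based shift estimator.

import numpy as np

def phase_correlation_shift(left, right):
    """Estimate the dominant translation between two images from the
    phase of their cross power spectrum. For a rectified stereo pair,
    the horizontal component of the peak is the dominant disparity."""
    F1 = np.fft.fft2(left.astype(float))
    F2 = np.fft.fft2(right.astype(float))
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12            # keep phase only
    corr = np.real(np.fft.ifft2(cross))       # sharp peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return dy, dx                             # shifts, modulo image size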

Sam Hasinoff (oral)

Confocal Stereo

We present confocal stereo, a new method for computing 3D shape by controlling the focus and aperture of a lens. The method is specifically designed for reconstructing scenes with high geometric complexity or fine-scale texture, such as hair and dirty transparent surfaces. To achieve this, we introduce the confocal constancy property, which states that as the lens aperture varies, the pixel intensity of an in-focus scene point will vary in a scene-independent way that can be predicted by prior radiometric lens calibration. The only requirement is that incoming radiance within the cone subtended by the largest aperture is approximately constant. Confocal constancy leads to a focus criterion that can be evaluated separately for each pixel, without making any assumptions about depth variation in the pixel's neighborhood. To exploit this criterion for reconstruction, we develop a detailed lens model that factors out the geometric and radiometric distortions observable in very high resolution digital SLR cameras (12 MP or more) with large-aperture lenses (e.g., f/1.2). We present initial reconstruction results for scenes containing hair, plants, and transparent plastic. These results suggest that our method can reconstruct complex scenes with a high level of detail.
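A toy version of the per-pixel criterion can be sketched as follows, assuming the radiometric correction has already been applied to an image stack captured across apertures at each focus setting; the array shapes and variance-based score are illustrative choices, not the paper's exact formulation.

import numpy as np

def confocal_focus_score(aperture_stack):
    """aperture_stack: (n_apertures, H, W) array of radiometrically
    corrected intensities at one focus setting. For an in-focus pixel the
    corrected intensity should be nearly constant across apertures, so a
    low variance indicates focus."""
    return np.var(aperture_stack, axis=0)

def best_focus(stacks):
    """stacks: (n_focus, n_apertures, H, W). Returns, per pixel, the index
    of the focus setting whose intensity is most aperture-constant."""
    scores = np.stack([confocal_focus_score(s) for s in stacks])
    return np.argmin(scores, axis=0)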

Midori Hyndman (oral)

Autoregressive Models for Dynamic Textures

not available

Ady Ecker (demo)

Tracking with directional moments

I'll present a tracking program based on trigonometric moments. These moments are used to summarize distributions of orientations, in this case color (in the HSI model) and edge orientation. Trigonometric moments have two advantages over histograms of orientations: they are fast to compute, and they can be steered to be relative to any reference orientation (e.g., the mean orientation). Preliminary results suggest that the color feature can be used for tracking simple motion sequences, whereas the edge orientation feature is too weak on its own.
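A minimal sketch of the moments themselves follows; the tracking machinery around them is not shown, and the optional weighting is an illustrative assumption.

import numpy as np

def directional_moments(angles, order=2, weights=None):
    """Trigonometric moments of a set of orientations (radians):
    m_k = E[exp(i*k*theta)]. |m_1| measures concentration of the
    distribution and angle(m_1) is its circular mean."""
    angles = np.asarray(angles, dtype=float)
    if weights is None:
        weights = np.ones_like(angles)
    weights = weights / weights.sum()
    return np.array([np.sum(weights * np.exp(1j * k * angles))
                     for k in range(1, order + 1)])

def steer(moments, phi):
    """Re-express the moments relative to a reference orientation phi,
    the 'steering' property that plain histograms lack."""
    k = np.arange(1, len(moments) + 1)
    return moments * np.exp(-1j * k * phi)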

Waterloo

Allan Caine (oral)

The Phase Method and Optical Snow

Optical snow is a particular kind of image sequence in which the velocities of all objects in the scene are subject to a constraint. The problem is for a computer to determine the direction of this optical snow without actually tracking the objects themselves. We present a new method that is simpler and faster than the current method for this task. The current method requires a computationally expensive 3D Fast Fourier Transform (FFT), memory to store the results, and all frames of the image sequence. Our method is simpler because it requires only two 2D FFTs, less memory, and only two frames of the image sequence. It is also faster because we have reduced the dimensionality of the problem from 3D to 2D. While the current method uses the Motion Plane Property to determine direction, we demonstrate that the direction of optical snow can be determined from the phase of the cross power spectrum.
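A simplified sketch of recovering a dominant motion direction from the cross power spectrum of two frames is given below; a pure translation d makes the cross-spectrum phase a plane w.d whose gradient is d. The talk's actual phase-based estimator may differ in detail.

import numpy as np

def motion_direction(frame1, frame2):
    """Dominant image-motion direction (radians) from two frames via the
    phase of the cross power spectrum, read off at the correlation peak."""
    F1 = np.fft.fft2(frame1.astype(float))
    F2 = np.fft.fft2(frame2.astype(float))
    cross = F1 * np.conj(F2)
    corr = np.real(np.fft.ifft2(cross / (np.abs(cross) + 1e-12)))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    dy = dy - h if dy > h // 2 else dy        # unwrap negative shifts
    dx = dx - w if dx > w // 2 else dx
    return np.arctan2(dy, dx)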

Université de Montréal

Catherine Proulx (oral)

A maximum flow approach to the volumetric reconstruction problem

We present a 3D reconstruction technique based on a maximum-flow formulation. Starting with a set of calibrated images, we globally search for the most probable 3D model given photoconsistency and spatial continuity constraints. This search is done radially from the center of the reconstruction volume, thereby imposing a radial topology. The fact that cameras are arbitrarily positioned around the scene presents challenges for managing occlusion, especially when applying global smoothing. We solve this problem by proposing an iterative occlusion management mechanism, and a new way of looking at surface smoothing and discontinuities that takes photoconsistency into account. Experiments show that our method is relatively fast and robust when dealing with simple objects, even in noisy conditions.

Mohamed Dahmane (poster)

Real-time moving object detection and shadow removal in video surveillance

In automatic video monitoring, real-time detection and, in particular, shadow elimination are critical to correct segmentation of moving objects, since shadows severely affect the surveillance process. In this study, we propose a fast and flexible approach to movement detection based on an adaptive background subtraction technique, with an effective shadow elimination model based on the color constancy principle in RGB color space. The results show the robustness of the model and particularly its capacity to work in a completely autonomous way. As with any modular design, the real performance of an algorithm must be tested in its global context; this is why a complete automatic monitoring system was developed. In this article, however, the emphasis is on the detection part.
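A common form of such a shadow test can be sketched as follows, assuming a background model is available: a shadow darkens the background roughly uniformly while leaving its chromaticity nearly unchanged. The attenuation bounds and chromaticity tolerance are illustrative parameters, not the paper's values.

import numpy as np

def shadow_mask(frame, background, lo=0.4, hi=0.9, chroma_tol=0.05):
    """Label pixels as shadow when they are a darkened but chromatically
    consistent version of the background (color constancy in RGB)."""
    f = frame.astype(float) + 1.0             # avoid division by zero
    b = background.astype(float) + 1.0
    ratio = f.sum(axis=2) / b.sum(axis=2)     # brightness attenuation
    darker = (ratio > lo) & (ratio < hi)
    # Chromaticity (r, g) should be nearly unchanged under a shadow.
    fc = f[..., :2] / f.sum(axis=2, keepdims=True)
    bc = b[..., :2] / b.sum(axis=2, keepdims=True)
    chroma_ok = np.all(np.abs(fc - bc) < chroma_tol, axis=2)
    return darker & chroma_ok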

Jean Philippe Tardif (oral)

A MRF formulation for coded structured light

Multimedia projectors and cameras make possible the use of structured light to solve problems such as 3D reconstruction, disparity map computation, and camera or projector calibration. Each projector displays patterns over a scene viewed by a camera, thereby allowing automatic computation of camera-projector pixel correspondences. This paper introduces a new algorithm to establish this correspondence in difficult cases of image acquisition. A probabilistic model formulated as a Markov Random Field uses the stripe images to find the most likely correspondences in the presence of noise. Our model is specially tailored to handle the unfavorable projector-camera pixel ratios that occur in multiple-projector, single-camera setups. For the case where more than one camera is used, we propose a robust approach to establish correspondences between the cameras and compute an accurate disparity map. To conduct experiments, a ground truth was first reconstructed from a high-quality acquisition. Various degradations were applied to the pattern images, which were then solved using our method. The results were compared to the ground truth for error analysis and showed very good performance, even near depth discontinuities.

Francois Destrempes (oral)

A Stochastic Method for Bayesian Estimation of Hidden Markov Random Field Models with Application to a Color Model

not available

Pierre-Marc Jodoin (oral)

Markovian Segmentation and Parameter Estimation on Graphics Hardware

This contribution shows how unsupervised Markovian segmentation techniques can be accelerated when implemented on graphics hardware equipped with a Graphics Processing Unit (GPU). Our strategy exploits the intrinsic properties of local interactions between sites of a Markov Random Field model together with the parallel computation ability of a GPU. This paper explains how the classical iterative site-wise-update algorithms commonly used to optimize global Markovian cost functions can be efficiently implemented in parallel by fragment shaders driven by a fragment processor. This parallel programming strategy significantly accelerates optimization algorithms such as ICM and simulated annealing. Good accelerations are also achieved for parameter estimation procedures such as K-means and ICE. The experiments reported in this contribution were obtained with a mid-range, affordable graphics card available on the market.
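The site-wise update that maps so naturally to fragment shaders can be sketched in scalar form. This is a generic Potts-model ICM sweep with a simultaneous, GPU-style update of all sites (rather than the sequential visiting order of textbook ICM), and the cost function is illustrative rather than the paper's.

import numpy as np

def icm_sweep(labels, data_cost, beta=1.0):
    """One parallel ICM sweep for a Potts MRF. Each pixel picks the label
    minimizing its data cost plus beta times the number of disagreeing
    4-neighbors; this per-site rule is what a fragment shader would
    evaluate in parallel for every pixel.

    labels: (H, W) int array; data_cost: (L, H, W) per-label data term."""
    h, w = labels.shape
    n_labels = data_cost.shape[0]
    padded = np.pad(labels, 1, mode='edge')
    neighbors = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                          padded[1:-1, :-2], padded[1:-1, 2:]])
    total = np.empty((n_labels, h, w))
    for l in range(n_labels):
        disagree = (neighbors != l).sum(axis=0)   # Potts smoothness term
        total[l] = data_cost[l] + beta * disagree
    return np.argmin(total, axis=0)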

Melissa Jourdain (poster)

3D Reconstruction of an IVUS Transducer Trajectory with a Single View in Cineangiography

During an intravascular ultrasound (IVUS) intervention, a catheter with an ultrasound transducer is introduced into the body through a blood vessel and then pulled back to image a sequence of vessel cross-sections. Unfortunately, there is no 3D information about the position and orientation of these cross-section planes. To position the IVUS images in space, some researchers have proposed complex stereoscopic procedures relying on biplane angiography to get two X-ray image sequences of the IVUS transducer trajectory along the catheter. We have developed a much simpler algorithm to recover the transducer's 3D trajectory from only a single-view X-ray image sequence, using the known pullback distance of the transducer during the IVUS intervention as a prior. Considering that biplane systems are difficult to operate, rather expensive, and uncommon in hospitals, this simple pose estimation algorithm could lead to an affordable and useful tool to better assess the 3D shape of vessels investigated with IVUS.

Gaspard Petit (oral)

Solving Motion Planes by Projection and Ring Integration

We present a new method to find motion planes in energy-based and spatio-temporal derivative optical flow. Because our method makes few assumptions about the motion model and the number of motions present in the sampling window, we are able to recover simple single motions as well as complex distributions involving transparency and occlusions. We also discuss the effects of spectral overlap in the case of energy-based methods and present some results on synthetic and natural sequences.

Marc-Antoine Drouin

Geo-consistency for Wide Multi-Camera Stereo

This paper presents a new model to overcome the occlusion problems arising in wide-baseline multiple-camera stereo. Rather than explicitly modeling occlusions in the matching cost function, it detects occlusions in the depth map obtained from standard efficient stereo matching algorithms. Occlusions are detected as inconsistencies of the depth map by computing the visibility of the map as it is reprojected into each camera. Our approach has the particularity of not discriminating between occluders and occludees. The matching cost function is modified according to the detected occlusions by removing the offending cameras from the computation of the matching cost. The algorithm gradually modifies the matching cost function according to the history of inconsistencies in the depth map, until convergence. While two graph-theoretic stereo algorithms are used in our experiments, our framework is general enough to be applied to many others. The validity of our framework is demonstrated using real imagery with different baselines.

Marc-Antoine Drouin

Fast Multiple-baseline Stereo with Occlusion

This paper presents a new and fast algorithm for multi-baseline stereo designed to handle the occlusion problem. The algorithm is a hybrid between fast heuristic algorithms that precompute an approximate visibility and slower methods that handle visibility exactly. Our approach is based on iterative dynamic programming and computes disparity and camera visibility simultaneously. Interestingly, dynamic programming makes it possible to compute part of the visibility information exactly; the remainder is obtained through heuristics. The validity of our scheme is established using real imagery with ground truth, and it compares favorably with other state-of-the-art multi-baseline stereo algorithms.

Ottawa

Anthony Whitehead (oral)

Feature-based cut detection with automatic threshold detection

Much work in recent years has concentrated on shot boundary detection algorithms. However, a truly accurate method of cut detection still eludes researchers. Cut detection methods can all be classified by the inter-frame differencing scheme they employ. In this work we present a scheme based on stable feature tracking for inter-frame differencing. Furthermore, we present a method to stabilize the differences and automatically identify a global threshold that achieves a high detection rate. We compare our scheme against other cut detection techniques on a variety of publicly available data sources that were specifically selected for the difficulties they present to other differencing techniques: quick motion, many short shots, and computer-generated effects.
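One plausible instantiation of the automatic global threshold is sketched below. It assumes the per-frame-pair difference is a feature-track survival ratio (the fraction of tracked features that persist across the pair), and uses a k-sigma rule; both are illustrative assumptions, since the paper's exact statistic and rule are not reproduced here.

import numpy as np

def detect_cuts(track_survival, k=3.0):
    """Flag likely shot cuts where the feature-track survival ratio drops
    far below the sequence-wide statistics. The threshold is derived
    automatically from the whole sequence rather than hand-tuned."""
    d = np.asarray(track_survival, dtype=float)
    threshold = d.mean() - k * d.std()
    return np.flatnonzero(d < threshold)     # indices of candidate cuts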

Xiaoyong Sun (oral)

Two View-Synthesis Methods Used in Image-Based Rendering

In this talk, our recent studies on view synthesis for image-based rendering applications will be presented, covering two different view synthesis approaches. The first is a triangulation-based view interpolation method that uses sparse matching features, and in this way differs from traditional approaches based on dense disparity. The proposed algorithms will be discussed together with our simulation results. The second approach is the column-based view synthesis method used in the Concentric Mosaics technique, in which a camera is moved on a circle and takes images in the outward radial direction. Information on the camera positions where the pre-captured images are taken is required to correctly generate new views from these images. In previously published work, the camera's rotation is precisely controlled at a constant velocity, so that the camera positions, uniformly distributed on a circle, can be determined. We have studied a new approach that does not require precise control of the camera's rotation, in order to simplify the technical requirements for implementing the Concentric Mosaics technique. In this case, the camera positions on the circle must be estimated. The camera-position estimation methods and the corresponding rendering algorithms for non-uniformly distributed pre-captured images will be given in this presentation.

Derek Bradley (demo)

Panorama visualization for image-based rendition of remote environments

not available

Johan Gottin (demo + poster)

Scene augmentation from online tensor estimation

not available

Herve Combe (poster)

Evaluation of different histogram comparison methods

not available

Ghislain Ferreol (demo + poster)

Online object pose estimation for rapid object modelling

not available