
Face Database

This page is currently a placeholder for the ideas that have been tossed around in the lab (mostly in one email discussion on lisa_labo) and it doesn't attribute credit. Feel free to edit / complete.

Motivation

Dealing with faces constitutes an interesting vision problem that doesn't require solving the whole of vision. Although a lot of work has been done (even with commercial applications), it is not a solved problem, especially for low-resolution face images and views other than the standard frontal one. Basically, it's a nice stepping stone from MNIST to general vision.

Objective

We wish to gather a database of face sequences and face images. Yoshua mentioned something on the order of 100 million images. The sequences and other groupings of face images could be exploited by some learning algorithms to help disentangle the factors of variation.

Existing databases

Existing databases with variations in pose

Methodology

Existing Code

  • OpenCV Viola-Jones implementation: detects faces and provides bounding boxes. There are two detectors: frontal and side (profile) views.
  • Joseph Turian's code: http://github.com/turian/donatefaces
  • http://www.robots.ox.ac.uk/~vgg/research/nface/ That page also has a link to Matlab code for aligning faces, which seems to be the most advanced open-source system for aligning general face images right now (although it's by no means perfect).

Data quality

Yoshua: "Since the goal is to train an unsupervised feature extractor, it may not be too bad to have a lot of false negatives, as long as it is not too systematic (e.g., if we miss all the woman faces, for example, it would be a pity). And even if we have a small fraction of false positives in the data, it should not hinder too much an unsupervised feature extractor." We can get an estimator of quality using mechanical turk.

Valid data

We need to define what counts as valid data. Minimum sequence length? Sequence coherence criteria (bounding box location, general image or bounding box properties)? Minimum resolution? We want the dataset to be reproducible with high probability. Note that Neeraj Kumar seems to think links fade away quickly.

We can deal with noisy labels (e.g., the videos or images associated with a search request have that request as a noisy label), but it would be good for the database to keep track of that information, because we could use it in probabilistic models (where we would basically learn the prior probability that faces from a given source, e.g. those from Google Images at rank k, are incorrectly associated with the person name given in the search request).

Regarding image sizes, here are my thoughts:

  • we are probably going to be using convolutional architectures, for which it is ok to have images of different sizes (but the size of the face inside the image should not vary too much, e.g., up to a factor of 2 in area would be fine, I think)
  • for computational efficiency, as we learned from Dumi's talk yesterday, it is better if we can group together images of the same size so that they can go into a minibatch. Hence we should standardize image sizes to a reasonably small set of sizes, using tricks such as resizing (by some kind of interpolation) and adding blank borders, so that we can map any image to one of these formats (see the sketch after this list). This may be the kind of job we can ask one of the undergraduates to tackle.
  • as I said verbally, I am not worried about mixing images from different datasets in which the crops do not always cover all of the face. Here is a way we could handle this: paste these smaller cropped face images into a bigger background (either monochrome or taken from a randomly selected image) and ask the Viola-Jones detector to put a rectangle around the face. It might want to put a rectangle that extends beyond the original image, and that would then give us a more standardized alignment wrt the rest of the image datasets.
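
A minimal sketch of such a standardization step, assuming PIL; the set of target sizes is purely illustrative, not a decision that has been made:

from PIL import Image

STANDARD_SIZES = [(48, 48), (64, 64), (96, 96), (128, 128)]  # illustrative sizes

def standardize(img):
    # Map an arbitrary crop to one of the standard sizes by resizing
    # (bilinear interpolation) if needed and padding with a blank border.
    w, h = img.size
    for tw, th in STANDARD_SIZES:
        if w <= tw and h <= th:
            target = (tw, th)
            break
    else:
        target = STANDARD_SIZES[-1]
        scale = min(target[0] / float(w), target[1] / float(h))
        img = img.resize((int(w * scale), int(h * scale)), Image.BILINEAR)
        w, h = img.size
    canvas = Image.new("RGB", target)          # blank (black) border
    canvas.paste(img, ((target[0] - w) // 2, (target[1] - h) // 2))
    return canvas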

Where to get the data from

  • random walk following links
  • list of requests
  • specific tags
  • some predictor of video quality based on tags + thumbnail (issue of traversal policy and online learning)
  • We also have our friends at Polytechnique who are collecting YouTube videos by using as queries the names of famous people, taken from http://en.wikipedia.org/wiki/Category:Living_people . Chris Pal thinks this yields usable videos for about 1 name in 20, unless we bias toward recent / showbiz people or by the number of images in the Wikipedia article. The faces in the videos are related to the queried name around 50% of the time.
  • pictures only: Google image searches (face filter on) with any sufficiently well-known name you can think of.

Collection and legal issues

Could we be blacklisted / throttled?

We can't make the images and videos we gather from the web directly available on a public server, but we can distribute URLs. This means imperfect reproducibility. One solution is to make sure we have a test set that is publicly available (we can derive one from existing publicly available datasets). Since we are going to train on huge quantities of data, it might not be such a terrible thing if a small fraction of the URLs disappeared.

How to save the data

  • we want to keep references (URL, frame #, bounding box location)
  • For each frame keep its bounding box? The box's size may change.
  • format?
  • metadata? keep potential tags?

How to crop images and generate samples

  • size of the training samples
  • alignment policy (resizing/cropping)

Artificial data generation

  • Talk about generating more data from the existing data.
  • Talk about using the 3D synthetic face models from David.
  • We can also generate synthetic faces (only the face patch, without hair or background) with the Active Appearance Models library

Flickr

This is to document information related to the images from the '365days' Flickr group: 20k users, 1M photos.

The '365days' Flickr group is about taking a self-portrait each day for a year. Basic guidelines: some body part is expected to be in the shot; shots with only shadows or impressions in the sand are allowed but should be a minority; digital editing is allowed.

There is a Flickr API and it has a nice Python kit (http://stuvel.eu/projects/flickrapi). You must register to get an ID to use it (and state what you plan on using it for). The terms state "You shall not Cache or store any Flickr user photos other than for reasonable periods in order to provide the service you are providing to Flickr users." API terms of use: http://www.flickr.com/services/api/tos/

The API enables:

  • searching for photos based on multiple criteria (date taken, date uploaded, tags, location, etc.), with at most 4000 images returned per query
  • downloading in various sizes (75x75, or 100, 240 or 500 pixels on the longest side, or bigger)

An attempt to get pictures from the 365days group's pool returns access denied. I could look into it, especially authentication. Another option is to use the 365days tag, which posters to the group were encouraged to use (660k images).
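
A minimal sketch of searching by tag with the flickrapi kit mentioned above. The API key and per_page value are placeholders, and the exact method names / return types vary a bit between flickrapi versions, so treat this as an untested assumption rather than working code:

import flickrapi

API_KEY = 'YOUR_API_KEY'  # placeholder: obtained by registering with Flickr
flickr = flickrapi.FlickrAPI(API_KEY)

page = 1
while True:
    # the API caps results at 4000 per query, so we page through them
    rsp = flickr.photos_search(tags='365days', per_page='500', page=str(page))
    photos = rsp.find('photos').findall('photo')
    if not photos:
        break
    for p in photos:
        # standard Flickr static-image URL scheme (medium size)
        print('http://farm%s.static.flickr.com/%s/%s_%s.jpg' % (
            p.get('farm'), p.get('server'), p.get('id'), p.get('secret')))
    page += 1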

-- PierreAntoineManzagol - 13 Oct 2010

Tasks and ideas

Models for unsupervised pretraining.
  • tiled convolutional NN (Ian)
  • mcRBM (using HMC) (James)
  • ssRBM (using Gibbs sampling) (James)
  • convolutional DAE (Ian)

Supervised phase. Supervision is taken from the context (i.e. consecutive frames, same person, same video, etc.). TODO:

  • setup efficient access to positive and negative pairs from our dataset
  • write Theano code for the supervised training

Siamese model. The idea is to train a siamese model on pairs of images: for each image, unsupervised features are extracted (with one of the unsupervised models listed above) and a scoring function is added on top. This could be:

  • a dot product
  • L1 norm
  • an MLP
Since we are interested in the probability that the two images show the same person, in the first two cases we need to convert the score into a probability, which can be done through a sigmoid: sigm( alpha * score + beta ). In the MLP case no further manipulation is needed, since the output of the last layer can already be interpreted as a probability.
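
A minimal Theano sketch of the dot-product variant, assuming the feature vectors fa and fb have already been extracted by the shared unsupervised model; alpha and beta are the calibration parameters mentioned above (their initial values here are arbitrary):

import numpy, theano
import theano.tensor as T

fa = T.vector('fa')  # features of image A (from the shared feature extractor)
fb = T.vector('fb')  # features of image B
alpha = theano.shared(numpy.asarray(1.0), name='alpha')
beta = theano.shared(numpy.asarray(0.0), name='beta')

score = T.dot(fa, fb)                          # dot-product scoring function
p_same = T.nnet.sigmoid(alpha * score + beta)  # P(same person | A, B)

prob_fn = theano.function([fa, fb], p_same)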

Noisy label model - for recognition task. We can add an extra latent variable T representing the unknown true label in contrast with the observed noisy label Y. The idea is to estimate T through the usual deep approach, but then we convert it into a probability on Y, estimating P( Y | T, context ) through a kind of look-up table learned with gradient descent.

P( Y = 1 | T, context ) = sigm( a_{0,0} * 1[T=0, context=0] + a_{1,0} * 1[T=1, context=0] + a_{0,1} * 1[T=0, context=1] + a_{1,1} * 1[T=1, context=1] + ... ) = sigm( a[ 2 * context + T ] )

where a = [ a_{0,0}, a_{1,0}, a_{0,1}, a_{1,1}, ..., a_{0,n-1}, a_{1,n-1} ] and context \in {0, 1, ..., n-1} (i.e. all possible contexts we want to take into account). Question: how should we initialize the array a? Set a[2c] = -2 and a[2c+1] = 2, so that T is mapped to Y with Y roughly equal to T, i.e. if T = 0 --> sigm(-2) \simeq 0, while if T = 1 --> sigm(2) \simeq 1.

or, equivalently, we can split the vector a into 2 parts: one for the case T=0 and one for T=1. So, we have:

P( Y = 1 | T = 1, context ) = sigm(gamma1 + a1[context])

and

P( Y = 1 | T = 0, context) = sigm(gamma0 + a0[context])

initialization: a1 = 2, a0 = -2

In general:

P( Y | x, context ) = \sum_T P( T | x ) P ( Y | T, context)

where the first term is estimated by the deep model, and the second term is taken from the "look-up table".

The supervised phase is done by minimizing this cost function (max-likelihood):

cost = - log P( Y = observed y | x, context ) = - log P( A same as B | A, B, context ) and

P ( Y = 1 | x, context ) = P( T = 1 | x) P( Y =1 | T =1, context) + ( 1 - P( T = 1 | x) ) P( Y = 1 | T = 0, context )

P ( Y = 0 | x, context ) = 1 - P( Y =1 | x, context )
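
A minimal numpy sketch of the look-up table and of the marginalization over T written above; p_t1 = P(T=1 | x) would come from the deep model, the table is initialized with the -2/+2 scheme suggested above, and the number of contexts is illustrative:

import numpy

def sigm(x):
    return 1.0 / (1.0 + numpy.exp(-x))

n_contexts = 4  # illustrative
# a[2*c] handles T=0 in context c, a[2*c + 1] handles T=1 in context c
a = numpy.tile([-2.0, 2.0], n_contexts)

def p_y1_given_t(t, context):
    # P(Y=1 | T=t, context), taken from the look-up table
    return sigm(a[2 * context + t])

def p_y1(p_t1, context):
    # P(Y=1 | x, context) = sum_T P(T | x) P(Y=1 | T, context)
    return p_t1 * p_y1_given_t(1, context) + (1.0 - p_t1) * p_y1_given_t(0, context)

def cost(y, p_t1, context):
    # negative log-likelihood for one observed noisy label y
    p1 = p_y1(p_t1, context)
    return -numpy.log(p1 if y == 1 else 1.0 - p1)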

Other option: train on triplets instead of pairs. A triplet of images A, B, C is built in this way:

  • B same as A (or, better, extracted from one of the positive contexts)
  • C not same as A (i.e. extracted from one of the negative contexts)
The supervision consists in requiring that P( Y = 1 | A, B, context ) > P( Y = 1 | A,C, context ) + margin

One possible supervised criterion is just the max-likelihood:

loss = - log P(A observed same as B | A,B,context) - log ( 1 - P(A observed same as C | A,C,context) )

or a hinge loss:

loss = max(0, margin - log P(A observed same as B | A,B,context) + log P(A observed same as C | A,C, context) )
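
A small numpy sketch of these two triplet criteria, assuming p_ab = P(Y=1 | A,B,context) and p_ac = P(Y=1 | A,C,context) have been computed as described above; the margin value is a placeholder:

import numpy

def likelihood_loss(p_ab, p_ac):
    # - log P(same | A,B,context) - log (1 - P(same | A,C,context))
    return -numpy.log(p_ab) - numpy.log(1.0 - p_ac)

def hinge_loss(p_ab, p_ac, margin=1.0):  # placeholder margin
    # max(0, margin - log P(same | A,B,context) + log P(same | A,C,context))
    return max(0.0, margin - numpy.log(p_ab) + numpy.log(p_ac))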

Pose estimation task.

  • idea 1: use a common unsupervised feature extractor and perform fine tuning using pose label
  • idea 2: use an autoencoder able to reconstruct a rotated version of the input image. This would have one special unit in the hidden representation to code the angle \theta and the \delta_\theta. The decoding part would contain at least one non-linearity in order to compute the rotated version. It could also be used as a building block for the whole siamese network: the autoencoder performs the rotation before the similarity score between the two images is computed. (Add more details for this idea if you want...)

Test benchmark on public dataset.

  • Labeled Faces in the Wild
  • rotated Olivetti
  • Multi-PIE
  • FiA (I'm missing this one...)
Note: when testing on these we don't need the context anymore; we just keep the true labels coming from the dataset. The noisy label model is useful only for our facetubes dataset.

List of possible contexts.

  • 1 next frame
  • k frames later
  • other sequence from the same video
  • other video for the same person

TODO

  • code for the context thing
  • code for siamese net
  • prepare data

-- IlariaCastelli - 15 Dec 2010

-- IlariaCastelli - 22 Dec 2010

YouTube dataset doc

Starting point: the Wikipedia living people category, 0.5 million people. We have collected the following statistics on each wiki page:
  • Wikipedia page word count, Wikipedia page image count, Wikipedia page count of images with faces (images where the Viola-Jones detector found a face), query hit count on Google, quoted query hit count on Google (query done with quotes around the name)

This information is stored in an SQLite database ( /data/lisatmp/data/facetubes/large/faces.db ) made of one table only, having the following schema:

  • id
  • name (person name)
  • url (url of the wikipedia page)
  • word_count
  • image_count
  • image_wface_count
  • query_hit_count
  • qq_hit_count (quoted query hit count)
  • dl_score (computed score for this person - see below)
  • dl_requests (number of videos we attempted to download from YouTube for this person)

People have been ranked by giving uniform weight to word count, image count, and quoted query hit count:

score = wc / max_wc + ic / max_ic + qq / max_qq
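
A minimal sketch of this ranking over the faces.db schema above; the table name 'faces' is an assumption (it is not documented above), and the normalization constants are simply the column maxima, as in the formula:

import sqlite3

conn = sqlite3.connect('/data/lisatmp/data/facetubes/large/faces.db')
cur = conn.cursor()
# 'faces' is an assumed table name
max_wc, max_ic, max_qq = cur.execute(
    'SELECT MAX(word_count), MAX(image_count), MAX(qq_hit_count) FROM faces').fetchone()

scores = []
for name, wc, ic, qq in cur.execute(
        'SELECT name, word_count, image_count, qq_hit_count FROM faces'):
    score = wc / float(max_wc) + ic / float(max_ic) + qq / float(max_qq)
    scores.append((score, name))

top1000 = sorted(scores, reverse=True)[:1000]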

The top 1000 people ranked according to the criterion above have been chosen and we requested 30 videos for each one. Since not all of them had 30 videos available, we ended up with 24778 videos correctly processed. We set a bound on the quality of the videos to be downloaded from YouTube, allowing a maximum resolution of 640x360 (to avoid the ones with a very high resolution).

The information about the downloaded videos and their association with the people's names is contained in an SQLite database (/data/lisatmp/data/facetubes/large/facetubes.db) made of 3 tables (a schema sketch follows the list):

  • queries
    • query_id
    • terms (terms of the query)
    • wiki_url
    • desired (number of videos requested for this person)
    • retrieved (number of videos effectively taken for this person)
  • videos
    • video_id
    • youtube_id (the link to the video is then http://www.youtube.com/watch?v=YOUR_YOUTUBE_ID)
    • status (0 = inserted in the db, 1 = video downloading, 2 = video downloaded, 3 = processing video, 4 = video processed, 5 = failed processing)
    • download_attempts
  • results (link between names and videos)
    • result_id
    • query_id
    • video_id

Processing. Videos have been decomposed into frames (jpeg images) using ffmpeg (trick learned here http://electron.mit.edu/~gsteele/ffmpeg/):

ffmpeg -sameq -y -r 30 -i VIDEO_FILE OUTPUT_FOLDER


Then, for each frame:
  • On each frame, both the frontal face detector and the profile face detector have been run.
  • Since sometimes the same face is detected by both, I tried to group those detections. So, for each frontal-profile pair found in the same frame, I checked if they were "near enough" to be considered the same face: I computed the distance between the bounding boxes and set a threshold on it (the x,y coordinates have been normalized by the width/height to make them independent of the video resolution). In case of a match I kept the frontal detection, but also kept track of the fact that both detectors fired (see below).
  • The tracking is done on-line, with a single pass over the video, so I kept track of a series of 'facechains' (what we later called 'facetubes'), each one corresponding to a face being tracked. Each facechain is just a sequence of detections in subsequent frames, where the detections themselves are near enough to be considered as coming from the same person.
  • At each moment, there is a list of open facechains, one for each face currently tracked.
  • For each face detected in a frame, I looked for a match within the currently open facechains. Again, this is done using a threshold on the distance between the bounding boxes: each new detection is compared with the last detection of the already existing facechains. In so doing, each new detection is temporarily associated with the facechains whose distance from it is smaller than a threshold (the threshold is small, so it's unlikely for there to be more than one candidate facechain; it never happened in the experiments I manually checked, but in principle it would be possible).
  • Since each facechain could have more than one candidate detection for its next frame, the nearest one within the list of candidates is chosen.
  • In case a detection doesn't match any of the existing chains, a new facechain is started from it.

After processing each frame, update the status of each facechain (a sketch of this update logic follows the list):
  • if it had a face matching, add it to the chain itself
  • if it didn't have a face matching, increment a counter that keeps track of how many subsequent frames have passed since the last matching detection
  • if the counter becomes greater than 3, close the chain (remove it from the ones currently open). Doing this, we allow a maximum of 3 subsequent frames without a detection, in case the chain is matched again at the 4th.
  • if the chain that is going to be closed is shorter than 35 frames, delete it, otherwise store it
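
A simplified Python sketch of the per-frame update described above; the Facechain class, the distance function and the threshold are illustrative reconstructions, not the actual code (which lives in the faces repository mentioned below):

class Facechain(object):
    def __init__(self, det):
        self.detections = [det]  # det = (x1, y1, x2, y2), normalized coords
        self.missed = 0
        self.matched = True
    def last(self):
        return self.detections[-1]
    def append(self, det):
        self.detections.append(det)
        self.missed = 0
        self.matched = True

def center_distance(a, b):
    ax, ay = (a[0] + a[2]) / 2.0, (a[1] + a[3]) / 2.0
    bx, by = (b[0] + b[2]) / 2.0, (b[1] + b[3]) / 2.0
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

MAX_MISSED = 3   # allow up to 3 subsequent frames without a detection
MIN_LENGTH = 35  # chains shorter than 35 frames are deleted

def update_chains(open_chains, closed_chains, detections, threshold):
    for c in open_chains:
        c.matched = False
    for det in detections:
        candidates = [c for c in open_chains
                      if center_distance(c.last(), det) < threshold]
        if candidates:
            min(candidates,
                key=lambda c: center_distance(c.last(), det)).append(det)
        else:
            open_chains.append(Facechain(det))   # start a new facechain
    for c in list(open_chains):
        if not c.matched:
            c.missed += 1
            if c.missed > MAX_MISSED:
                open_chains.remove(c)
                if len(c.detections) >= MIN_LENGTH:
                    closed_chains.append(c)      # keep it; otherwise drop it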

At the end of the video:
  • within the currently open chains, delete the ones shorter than 35 frames
  • delete chains where more than 10% of the detections are missing (i.e. if the sequences of up to 3 subsequent missing frames add up to more than 10% of the whole chain)
  • otherwise, interpolate the bounding boxes of the missing detections
  • for each detection within a chain I stored a flag indicating whether:
    • it was detected by the frontal detector (type = 0)
    • it was detected by the profile detector (type = 1)
    • it was detected by both (type = 2)
    • it was not detected, but interpolated from the previous and subsequent bounding boxes (type = -1)
  • delete facechains detected twice. It sometimes happened that the same face was tracked in two different facechains, totally or partially overlapping. So, for each possible pair of facechains within a video, I checked if they had detections in the same frames and if those detections overlap by more than 70% of the area of one of the two. If an overlap greater than 70% is present for more than 50% of the video, the shorter of the two facechains is discarded.
  • an enlargement factor of the bounding boxes equal to 2.2 was applied to every detection (to make the extracted images comparable with the ones in labeled faces in the wild)

For each facechain, the images have been cropped from the frames and stored in a .avi movie (facetube). In order to build a movie with all frames having the same dimensions:
  • the max and min width and height of all detections belonging to the chain have been computed
  • each image within a facechain has been padded with the original background in the video, to bring it to the maximum dimension
  • a scaling factor has been applied to all the frames in order to enlarge the face area if it is smaller than a threshold (ask Pierre-Antoine about this, he fixed this issue after the last chat with Chris Pal at the end of November).
The movie has been built with ffmpeg using the following command:

ffmpeg -y -r 30 -b 10000k -i IMG_PATH OUT_FILE


The original videos are stored in

PATH_ORIGINAL=/data/lisatmp/data/facetubes/large

The result of processing is stored in

PATH_PROCESS=/data/lisatmp/data/facetubes/large_detect

In both cases, a video (or the result of its processing) is stored on the file system in the following way:

  • apply a sha1 hash function to the youtube_id (hashlib.sha1(...).hexdigest() in Python)
  • store data in the folder: PATH_X/sha1[:2]/sha1[2:]/youtube_id (see the sketch below)
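
A minimal sketch of this path scheme (the example youtube_id is a placeholder):

import hashlib, os

def storage_path(base, youtube_id):
    # base is PATH_ORIGINAL or PATH_PROCESS, defined above
    # (under Python 3 the id would need to be encoded to bytes first)
    sha1 = hashlib.sha1(youtube_id).hexdigest()
    return os.path.join(base, sha1[:2], sha1[2:], youtube_id)

# example (placeholder id):
# storage_path('/data/lisatmp/data/facetubes/large_detect', 'SOME_YOUTUBE_ID')
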
For each video, this is what has been stored:
  • a set of .avi movies (one for each facetube)
  • a .facechains file. This is a JSON text file (a reading sketch follows this list), containing the following fields:
    • width = width of the original video
    • height = height of the original video
    • videoid = youtube_id
    • chains = the list of facechains. Each facechain is stored as a dictionary having these fields:
      • numid = id number of the facechain (note that this corresponds to an incremental number determined during the processing, so the numbers of the final facechains are not necessarily contiguous)
      • dsf = 2.2 (scaling factor applied to this facechain - actually the same for all chains, but I decided to keep track of it here - yes, it's redundant...)
      • data = a list containing info about the frames of the chain. In particular for each frame are stored:
        • the frame number within the original video
        • x1, x2, y1, y2 = coordinates of the bounding box: (x1,y1) is the up-left corner, (x2,y2) is the bottom-right corner.
  • a .facechains.dim file. It is also a JSON file, with a structure similar to .facechains. It contains info about the min and max dimensions of each facetube. This info is redundant, as it could be computed again from what is stored in .facechains. It was saved only because the scaling applied by PA to the tubes makes use of it, so it could be useful to have it "ready" in case we want to revert the scaling. It contains the fields:
    • videoid = youtube_id
    • chains = the list of facechains. Each facechains contains:
      • numid = the same id number as in .facechains
      • tube_max_width, tube_max_height, tube_min_width, tube_min_height
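
A minimal sketch of reading a .facechains file with the structure described above (the path is a placeholder, and the exact layout of each 'data' entry should be checked against an actual file):

import json

with open('/path/to/VIDEO.facechains') as f:  # placeholder path
    info = json.load(f)

print('video %s: %sx%s' % (info['videoid'], info['width'], info['height']))
for chain in info['chains']:
    print('facechain %s, scaling factor %s' % (chain['numid'], chain['dsf']))
    for entry in chain['data']:
        # each entry holds the frame number within the original video and the
        # bounding box x1, x2, y1, y2 ((x1,y1) = top-left, (x2,y2) = bottom-right)
        print(entry)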

-- IlariaCastelli - 21 Feb 2011


Code. All the code written by me and Pierre-Antoine can be obtained here (I've already cleaned some parts, the rest is in progress...):

hg clone ssh://hg@gershwin/faces

You might be interested in:

  • dataset_gen/video_reader.py: the class VideoReader has some utilities for handling a video. The only thing needed to instantiate it is the youtube_id. Then, it provides methods to:
    • access the folder where the processed data is stored
    • read the .facechains file (and so extract info about all its facetubes)
    • extract a series of frames from its facetubes (through the class TubeReader described below)
  • dataset_gen/facetube_reader.py: the class TubeReader provides utilities to extract frames from a facetube (removing the scaling factor and the bounding box enlargement factor if required) and return the images.


Data extracted. (I apologize for the verbosity, but I want to report everything that might be useful.)

Data for the experiments with olivetti:

  • /data/lisa/data/faces/posedata. (olivetti*, 1G, set* 97G, posedata_235 see below)
    • olivetti_pack.pkl.gz = all the Olivetti images, organized as follows: from each image, 240 variants have been generated.
      • 230 have been obtained using the same settings as in the Salakhutdinov & Hinton NIPS'07 paper: scaling the original image at 5 scales ([1.0, 1.2, 1.4, 1.6, 1.8]) and, for each one, generating 46 rotations (range(-90, 91, 4), i.e. -90, -86, ..., 86, 90 degrees).
      • 5 are just the frontal pose (angle = 0) taken at all the 5 scales
      • 5 are replicas of the above (frontal pose). This has been done just to have a set of images comparable with the ones we wanted to keep from the YouTube collection. Indeed, the YouTube images have been aligned with the funneling algorithm (code provided by the Polytechnique group, and available in the repository) in order to obtain a (roughly) frontal alignment. Then, both the aligned (frontal) and the original version of the images have been kept. The original image has been labeled with minus the rotation angle applied to it by the funneling algorithm to get it aligned. So, for each YouTube image I kept the 5 scales of the original image and the 5 scales of the aligned one. Since the Olivetti images were already in frontal pose, a non-aligned original version of them was not available, so I just used the frontal pose again.
    • olivetti_pack_he.pkl.gz = exactly the same thing as above, but here the images have been preprocessed with a histogram equalization (to enhance the contrast) before generating all their scaled and rotated versions. For all experiments the contrast enhanced version of the olivetti dataset has been used, since I applied the same preprocessing to the youtube images also.
    • olivetti_all_training_he.pkl.gz = training set, made of the images of the first 30 people in the dataset (there are 10 images per person, so these are 300 images, each one scaled and rotated as described above)
    • olivetti_all_test_he.pkl.gz = test set, made of the images of the last 10 people in the dataset (100 images, again scaled and rotated)
    • there are also "_blurred" versions of these data. These had been generated by adding Gaussian blurring to the olivetti images, because I wanted to verify whether the blurring present in some of the youtube images was the reason why the model didn't perform well on olivetti. So I added some noise to the olivetti images and tried using them instead of the clean ones. I didn't find significant differences in performance, so I would say this wasn't an issue.
    • olivetti_supervised_he.pkl.gz = 1000 randomly sampled examples taken from olivetti_all_training_he. In Salakhutdinov's paper, 1000 supervised patterns were used in a Gaussian process, so I did the same to reproduce his settings.
    • olivetti_test_he.pkl.gz = 1000 randomly sampled examples taken from olivetti_all_test_he. Again, for coherence with Salakhutdinov's settings.
    • olivetti_all_training_superv_he.pkl.gz and olivetti_all_training_valid_he.pkl.gz = after the tea-talk I was advised to try doing the supervised finetuning on the olivetti training set only, so I split the olivetti training set in two: the images of the first 25 people became the supervised training set (for finetuning purposes only) and the ones coming from the last 5 people became the validation set (again, during the finetuning only). In so doing I made sure no images of the same people, not even in different poses, were in both the training and validation sets, in line with other suggestions I got.
    • a bunch of files named "setN_ref.gz" and "setN_M.gz". These files contain the 400,000 YouTube images (and their rotations) together with the 300 training Olivetti images (and their rotations) constituting the mixed dataset I used to pretrain the model. These data have been packed with the aim of using them also with the siamese model, so I organized them in a way that makes it possible to pick the same image in 2 different poses (an indexing sketch follows this list). Data are organized in 40 sets, each one made of 16 packs plus a _ref file. Each set contains 10,000 of the original images, so 400,000 in total; I discarded the last few YouTube images so as not to have a small last pack of just 300 images.
      • for each set, the _ref file contains the original images (i.e. not aligned), labeled with the angle returned by the funneling algorithm. Each image appears consecutively at its 5 scales, so each _ref file contains 50,000 images in total.
      • for each set, the remaining 16 files (numbered from 0 to 15) contain the other 47 poses (the aligned/frontal one + the 46 obtained by manual rotation). Each one contains 3 different rotations of each of the images in the _ref file and, again, all of them appear consecutively at their 5 scales. So each file contains 150,000 images, except the last one (number 15), which only has 100,000 (15 files * 3 poses + 1 file * 2 poses = 47 poses).
      • To be more precise, the _ref file contains [ i1_p_s1, i1_p_s2, i1_p_s3, i1_p_s4, i1_p_s5, i2_p_s1, i2_p_s2, i2_p_s3, i2_p_s4, i2_p_s5, ..., i10000_p_s1, i10000_p_s2, i10000_p_s3, i10000_p_s4, i10000_p_s5 ], where sX is the X-th scale and p is the pose of the original non-aligned image. Each of the other files then contains 3 subsequent series of: [ i1_pY_s1, i1_pY_s2, i1_pY_s3, i1_pY_s4, i1_pY_s5, i2_pZ_s1, i2_pZ_s2, i2_pZ_s3, i2_pZ_s4, i2_pZ_s5, ..., i10000_pT_s1, i10000_pT_s2, i10000_pT_s3, i10000_pT_s4, i10000_pT_s5 ]. Note that each image is present within a file with 3 different poses (here indicated with Y, Z, ..., T for the first series), at all its scales. So rows 0 to 49,999, 50,000 to 99,999 and 100,000 to 149,999 each contain all people of the set, in the same order as in the _ref file. In this way it's easy to pick the original image from the _ref file and a random rotated copy of it from one of the other files. Also, having all the scales stored consecutively makes it possible to pick the rotated copy at the same scale, if we want it.
      • All the YouTube images have been randomly extracted across the whole collection.
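
A hedged indexing sketch for pairing an original image from a _ref file with a rotated copy at the same scale, following the row layout described above (the indices used in the example are placeholders, and the arithmetic should be checked against the actual packs):

import random

N_IMAGES = 10000    # original images per set
N_SCALES = 5
POSES_PER_PACK = 3  # packs 0..14; pack 15 only has 2 pose series

def ref_row(image_idx, scale_idx):
    # row of image image_idx (0-based) at scale scale_idx inside a _ref file
    return N_SCALES * image_idx + scale_idx

def pack_row(image_idx, scale_idx, series_idx):
    # row of the same image/scale in one of the pose series of a pack file
    return series_idx * (N_IMAGES * N_SCALES) + N_SCALES * image_idx + scale_idx

# example: image 42 at scale 2 from the _ref file, plus a random rotated copy
# of it at the same scale from one pack (which pose you get is not controlled)
i, s = 42, 2
series = random.randint(0, POSES_PER_PACK - 1)
print('%d %d' % (ref_row(i, s), pack_row(i, s, series)))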

Data for the experiments with LFIW:

  • /data/lisa/data/faces/posedata/posedata_235 (17G)
    • contains a set of files that have exactly the same configuration as the ones described above, except for the fact that here I kept 235 variants for each image, instead of 240. This is because I mixed the data with the funneled version of LFIW (i.e. already aligned with funneling) available on the website of the dataset. So, I didn't keep the original not-aligned image, but stored in the _ref file the aligned version, labeled with pose 0.0.
    • again, all the YouTube images have been extracted across the whole collection. To give them exactly the same settings as the funneled LFIW, I extracted them with the 2.2 bounding box enlargement and resized them to 250x250, like the LFIW images. Then, I ran the funneling, and ran the face detector on all of them to pick only the face part. I decided not to simply remove the 2.2 bounding box enlargement to crop the face part, but to run the face detector again, because funneling can shift the pixels as well as apply a rotation, so it's not guaranteed that exactly the central part of the funneled image is the one corresponding to the face.
  • /data/lisa/data/faces/lfw_funneled/posedata (23G)
    • contains LFIW images packed with all their rotations, like the YouTube ones. In this case, however, each set (from 0 to 9) corresponds to one of the sets into which LFIW is split (read the LFIW guidelines for more details). The sets do not all have the same size; I just used the images indicated in the files distributed together with LFIW. See the pack_set() function in dataset_gen/lfiw.py
    • the images have been obtained by running the face detector on the funneled LFIW dataset, for the same reason explained above. A mirror replica of the funneled LFIW, with images cropped to the bounding box found by the face detector, is stored in /data/lisa/data/faces/lfw_funneled/lfw_funneled_crop.

-- IlariaCastelli - 24 Feb 2011


Experiments with olivetti. The model used was a stacked DAE made of 3 layers, each with 1000 hidden units. Hyper-parameters have been chosen through a random search (see models/sda_launch.py).

1) I got a baseline result by training the model on olivetti only. This experiment is not optimized, i.e. I didn't do hyper-parameter selection, but just tried a fixed setting to get an idea of how it could perform. This means that, training on olivetti only, we could do better than this (I'll do the full experiments soon). Anyway, this is what I got:

train error = 0.0624, valid error = 0.0628, test error = 0.0628

2) I ran a bunch of jobs training with the mixed dataset, finetuning with the mixed dataset and testing on olivetti only (here the search across hyper-parameters has been done), and this is the best result I got:

train error = 0.0557, valid error = 0.0570, test error = 0.476

The model does not generalize to olivetti images.

I had planned (and implemented) to run a Gaussian process on the 1000 supervised images from olivetti (again following Salakhutdinov's settings), but I don't have results for that. The library for Gaussian processes that I used is available here: http://sysbio.mrc-bsu.cam.ac.uk/group/index.php/Gaussian_processes_in_python

3) Another thing I tried is to oversample the olivetti images within the training set. So, I trained the model alternating youtube and olivetti images in the training set (i.e. 1 youtube, 1 olivetti, 1 youtube, etc.). This experiment has not been optimized; it has been done with the same fixed setting as point 1).

train error = 0.0959, valid error = 1307, test error = 0.0687

4) The last thing done is trying to pretrain on the whole mixed dataset, and then finetune only on the part of the training set made of olivetti images. This seems very promising. Not all jobs have finished yet, but I'll report here the results from a randomly chosen model among the ones I trained. The errors I had when finetuning with the mixed dataset were:

train error = 0.0430702173302, valid error = 0.0439044918117, test error = 0.629926191634

When finetuning on olivetti only I got:

train err = 0.642707817255 (this is the same mixed dataset as above, I've computed this error just to have a comparison)

valid err = 0.0413086723034 (note that this is not the same validation set as above, this one is made of olivetti only)

test err = 0.0111506004948 (same olivetti test set as above)

The supervised finetuning has been done using /data/lisa/data/faces/posedata/olivetti_all_training_superv_he.pkl.gz as training set and /data/lisa/data/faces/posedata/olivetti_all_training_valid_he.pkl.gz as validation set (they don't contain different rotations of the same image), running 1000 epochs. The learning rate has been randomly chosen as 10**(rng.uniform(low=-3, high=-1)). All the log files I have read indicate that the epoch with the lowest validation error was the last one, which means that performance could still improve by running the supervised finetuning for more epochs.

Experiments with LFIW. Currently in progress, but the preliminary results are much better: the model trained on the mixed youtube+LFIW dataset generalizes very well to the LFIW-only test images. This seems consistent with my speculation that LFIW images have a distribution more similar to that of the youtube images. I'm doing experiments with both stacked DAEs (same architecture as above) and convolutional neural networks, and the results seem coherent.


Notes. (Just in case someone needs to use it) The code for the funneling algorithm only works with OpenCV 1.1, not with earlier releases.

-- IlariaCastelli - 25 Feb 2011
