Majority Votes
by François Laviolette
Département d'Informatique
Université Laval
In supervised learning, what can we say about democracy? Can a majority vote be almost always correct, even when it is composed only of weak voters? Many learning algorithms in fact do exactly this. Indeed, the popular Support Vector Machine (SVM) can be seen as a majority vote of weak learners; nearest neighbors, Bagging, and Boosting are other important examples.
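As a concrete illustration (not taken from the paper; the stump voters, the uniform weighting Q, and the toy data are assumptions of mine), here is a minimal Python sketch of a Q-weighted majority vote and of the average risk of its voters, i.e. the Gibbs risk:

import numpy as np

def stump(feature, threshold):
    """A weak voter: predicts +1/-1 by thresholding a single feature."""
    return lambda X: np.where(X[:, feature] > threshold, 1, -1)

def majority_vote(voters, weights, X):
    """Q-weighted majority vote: sign of the weighted sum of the votes."""
    votes = np.array([h(X) for h in voters])       # (n_voters, n_examples)
    return np.where(weights @ votes >= 0, 1, -1)   # ties broken toward +1

# Toy data: the label is the sign of the sum of the two features.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
y = np.where(X[:, 0] + X[:, 1] >= 0, 1, -1)

# Individually weak axis-aligned voters, weighted uniformly.
voters = [stump(f, t) for f in (0, 1) for t in (-0.5, 0.0, 0.5)]
Q = np.full(len(voters), 1 / len(voters))

print("majority-vote risk:", np.mean(majority_vote(voters, Q, X) != y))
print("average (Gibbs) risk:", np.mean([np.mean(h(X) != y) for h in voters]))

On data like this the vote is typically more accurate than the average of its voters, which is the phenomenon the talk addresses.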
However, there are other situations where the majority vote can be twice as bad as the average of its voters: if, on every example, slightly more than half of the voters (by weight) err, then the average voter errs only about half the time, yet the majority vote errs everywhere. Until now, very few theoretical results existed to explain whether or not a majority vote beats the average of its voters. We will present here a new bound on the majority vote classifier that depends on this average value and also on the variance of the error of its associated Gibbs classifier.
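To fix notation, let $W_Q(x,y)$ be the Q-weighted fraction of voters that err on example $(x,y)$, so that the Gibbs risk is $R(G_Q) = \mathbb{E}[W_Q]$ and the majority vote $B_Q$ errs exactly when $W_Q(x,y) \ge 1/2$. A bound of this kind (a sketch consistent with the paper's C-bound, obtained from the one-sided Chebyshev, i.e. Cantelli, inequality; see the paper for the exact statement) reads:

\[
R(B_Q) \;\le\; \frac{\operatorname{Var}(W_Q)}{\operatorname{Var}(W_Q) + \left(\frac{1}{2} - R(G_Q)\right)^{2}},
\qquad \text{provided } R(G_Q) < \frac{1}{2}.
\]

The bound is small when the Gibbs risk is well below 1/2 and the variance of $W_Q$ is low, and it degrades as that variance grows.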
Moreover, we will show how this bound can be uniformly estimated on the training data for all possible weightings of the voters. Finally, we will show how the accuracy of this estimation can be improved by using a large sample of unlabeled data.
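To make these quantities concrete, here is a minimal sketch of the empirical estimate (the function name and data layout are assumptions of mine, not the paper's):

import numpy as np

def c_bound_estimate(votes, y, weights):
    """Empirical bound from voter predictions.

    votes:   (n_voters, n_examples) matrix of +1/-1 predictions
    y:       (n_examples,) labels in {+1, -1}
    weights: (n_voters,) posterior Q over the voters (sums to 1)
    """
    # W_Q(x, y): Q-weighted fraction of voters erring on each example.
    W = weights @ (votes != y).astype(float)  # shape (n_examples,)
    gibbs_risk = W.mean()
    variance = W.var()
    if gibbs_risk >= 0.5:
        return 1.0                            # bound is vacuous here
    return variance / (variance + (0.5 - gibbs_risk) ** 2)

In the binary case the second moment of $W_Q$ can be rewritten in terms of the expected pairwise disagreement between voters, a quantity that does not involve the labels, which is how a large unlabeled sample can help sharpen the estimate.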
The presentation is a summary of a NIPS-06 paper that can be found at:
http://www2.ift.ulaval.ca/%7Elaviolette/Publications/nips06_voteMaj.pdf