Electrical Engineering and Computer Sciences

COLLEGE OF ENGINEERING

UC Berkeley

2010 Research Summary

Properties of Maximum and Maximal VC-Classes

Benjamin I. P. Rubinstein, J. Hyam Rubinstein1 and Peter Bartlett

National Science Foundation DMS-0434383

A compression scheme of size k for a concept class C consists of a compression function that maps any n-sample consistent with C to a k-subsample, and a reconstruction function that maps the subsample to a concept consistent with the original n-sample. Littlestone and Warmuth showed that having a compression scheme is sufficient for learnability [2]; whether it is also necessary inspired their sample compressibility conjecture, which predicts that every concept class of finite VC dimension d has a compression scheme of size d [2]. It has been known for some time that maximum classes (concept classes meeting Sauer's Lemma with equality) admit d-labeled compression. Recently, Kuzmin and Warmuth proposed the elegant Peeling scheme and conjectured that it is a d-unlabeled compression scheme for maximum classes [1]; a correctness proof was conjectured to follow from a graph density upper bound and a minimum degree lower bound.
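To make these definitions concrete, the following sketch (our own toy illustration, not code from any of the cited papers) implements a labeled compression scheme of size 1 for threshold concepts h_t(x) = 1[x >= t] on the real line, a class of VC dimension 1: the compression function keeps only the smallest positive example, and the reconstruction function places the threshold at that point.

# Toy illustration: a size-1 labeled compression scheme for thresholds on the real line.

def compress(sample):
    """Map a labeled sample consistent with some threshold to a subsample of
    size at most 1: the smallest positive example, if any exists."""
    positives = [x for (x, y) in sample if y == 1]
    return [(min(positives), 1)] if positives else []

def reconstruct(subsample):
    """Map the subsample back to a concept consistent with the full sample."""
    if not subsample:
        return lambda x: 0               # no positive example kept: label everything 0
    t = subsample[0][0]
    return lambda x: 1 if x >= t else 0  # threshold placed at the retained point

# Usage: any sample labeled by a threshold is recovered exactly.
sample = [(0.2, 0), (1.5, 1), (0.9, 0), (2.3, 1)]   # consistent with threshold t = 1.0
h = reconstruct(compress(sample))
assert all(h(x) == y for (x, y) in sample)

The single retained example suffices to relabel the whole sample correctly, matching the conjectured size-d compression for this d = 1 class.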

In [3] we prove the density bounds (Conjecture 9.1 of [1]), while in [4] we disprove the conjectured degree bound (Conjectures 3.1 and 9.2 of [1]). In algebraic topology, the natural generalization to d-dimensional complexes of a 1-dimensional simplicial complex (i.e., a graph) being a tree is contractibility. In [4] we generalize the characterization of maximum classes of VC dimension 1 as trees by proving that maximum classes of VC dimension d are contractible cubical complexes. This structure is special to maximum classes and is not generally shared by maximal classes. We are interested in finding further differences between maximum and maximal classes.
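For intuition on the d = 1 case, the following sketch (a toy example of ours, not code accompanying [3] or [4]) builds the one-inclusion graph of a small maximum class of VC dimension 1 and checks that it is a tree; the result above says that the correct analogue of this picture in dimension d is a contractible cubical complex.

# Toy illustration: the one-inclusion graph of a VC-dimension-1 maximum class is a tree.
from itertools import combinations

# Threshold concepts restricted to a 3-point domain, written as binary vectors;
# this is a maximum class of VC dimension 1 on 3 points (|C| = 3 + 1).
concepts = [(0, 0, 0), (0, 0, 1), (0, 1, 1), (1, 1, 1)]

# One-inclusion graph: join two concepts that disagree on exactly one point.
edges = [(u, v) for u, v in combinations(concepts, 2)
         if sum(a != b for a, b in zip(u, v)) == 1]

# A connected graph on |V| vertices with |V| - 1 edges is a tree.
assert len(edges) == len(concepts) - 1

adj = {c: [] for c in concepts}
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

seen, stack = {concepts[0]}, [concepts[0]]
while stack:
    for w in adj[stack.pop()]:
        if w not in seen:
            seen.add(w)
            stack.append(w)
assert seen == set(concepts)   # connected with |V| - 1 edges, hence a tree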

Compressing general VC classes reduces easily to compressing maximal classes (those to which no concept can be added without increasing the VC dimension); one popular approach to d-compressing general concept classes is to embed maximal classes into maximum classes with at most a small increase in VC dimension. In [5] we prove the correctness of a form of Peeling that implies the first general negative result on embedding maximal classes into maximum classes. These results are obtained via geometric characterizations of maximum classes in the form of Euclidean, hyperbolic, and piecewise-linear (PL) hyperplane arrangements. In particular, we prove that Peeling maximum classes corresponds to sweeping their PL hyperplane arrangements, resolving Conjecture 1 of [1].
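The distinction between the two notions can be checked mechanically on small finite classes. The following sketch (standard definitions only, not code from [5]) tests whether a class is maximum, i.e., meets Sauer's Lemma with equality, and whether it is maximal, i.e., no concept can be added without raising the VC dimension.

# Toy illustration: testing maximum and maximal properties of a finite class on n points.
from itertools import combinations, product
from math import comb

def vc_dimension(concepts, n):
    """Largest d such that some d-subset of the n domain points is shattered."""
    best = 0
    for d in range(1, n + 1):
        for S in combinations(range(n), d):
            patterns = {tuple(c[i] for i in S) for c in concepts}
            if len(patterns) == 2 ** d:   # all 2^d labelings realized on S
                best = d
                break
        else:
            return best
    return best

def is_maximum(concepts, n):
    """Maximum: cardinality meets Sauer's bound sum_{i<=d} C(n, i) with equality."""
    d = vc_dimension(concepts, n)
    return len(set(concepts)) == sum(comb(n, i) for i in range(d + 1))

def is_maximal(concepts, n):
    """Maximal: adding any missing concept strictly increases the VC dimension."""
    d = vc_dimension(concepts, n)
    C = set(concepts)
    return all(vc_dimension(list(C | {c}), n) > d
               for c in product((0, 1), repeat=n) if c not in C)

C = [(0, 0, 0), (0, 0, 1), (0, 1, 1), (1, 1, 1)]
print(is_maximum(C, 3), is_maximal(C, 3))   # True True: thresholds on 3 points

Every maximum class is maximal, since adding any concept would exceed Sauer's bound for dimension d, but the converse fails in general; this gap is what makes embedding maximal classes into maximum classes a nontrivial route to compression.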

[1] D. Kuzmin and M. K. Warmuth, "Unlabeled Compression Schemes for Maximum Classes," Journal of Machine Learning Research, Vol. 8, September 2007, pp. 2047-2081.
[2] N. Littlestone and M. K. Warmuth, "Relating Data Compression and Learnability," unpublished manuscript, 1986. http://www.cse.ucsc.edu/~manfred/pubs/lrnk-olivier.pdf
[3] B. I. P. Rubinstein, P. L. Bartlett, and J. H. Rubinstein, "Shifting, One-Inclusion Mistake Bounds and Tight Multiclass Expected Risk Bounds," Advances in Neural Information Processing Systems 19, 2007, pp. 1193-1200.
[4] B. I. P. Rubinstein, P. L. Bartlett, and J. H. Rubinstein, "Shifting: One-Inclusion Mistake Bounds and Sample Compression," Journal of Computer and System Sciences, Special Issue on Learning Theory 2006, invited paper, in press, 2008.
[5] B. I. P. Rubinstein and J. H. Rubinstein, "Geometric & Topological Representations of Maximum Classes with Applications to Sample Compression," 21st Annual Conference on Learning Theory (COLT'08), 2008, pp. 299-310.

1Mathematics & Statistics, Univ. Melbourne, Australia