The exact sample complexity of PAC-learning problems with unit VC dimension

Abstract

The Vapnik-Chervonenkis (VC) dimension is a combinatorial measure of the complexity of a class of machine learning problems, and it can be used to obtain upper and lower bounds on the number of training examples needed to learn to prescribed levels of accuracy. Most of the known bounds apply to the Probably Approximately Correct (PAC) framework, which is the framework within which we work in this paper. For a learning problem with known VC dimension, much is known about the order of growth of its sample-size requirement as a function of the PAC parameters. The exact value of the sample-size requirement is, however, less well understood, and depends heavily on the particular learning algorithm being used. This is a major obstacle to the practical application of the VC dimension. It is therefore important to know exactly how the sample-size requirement depends on the VC dimension, and with that in mind we describe a general algorithm for learning problems having VC dimension 1. Its sample-size requirement is minimal (as a function of the PAC parameters), and turns out to be the same for all non-trivial learning problems having VC dimension 1. While the method used cannot be naively generalised to higher VC dimension, it suggests that optimal algorithm-dependent bounds may improve substantially on current upper bounds.
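
For context, the order-of-growth claims in the abstract refer to the standard PAC sample-complexity bounds for a concept class of VC dimension d, stated below under the usual PAC definitions (accuracy parameter ε, confidence parameter δ); these are classical results predating the paper, not the paper's own contribution.

```latex
% Standard PAC sample-complexity bounds for a concept class of
% VC dimension d, accuracy \epsilon, confidence 1 - \delta.
% Upper bound (Blumer, Ehrenfeucht, Haussler, Warmuth, 1989):
m(\epsilon, \delta) = O\!\left(\frac{1}{\epsilon}\left(d \log\frac{1}{\epsilon} + \log\frac{1}{\delta}\right)\right)
% Lower bound (Ehrenfeucht, Haussler, Kearns, Valiant, 1989):
m(\epsilon, \delta) = \Omega\!\left(\frac{1}{\epsilon}\left(d + \log\frac{1}{\delta}\right)\right)
```

The gap between these two bounds (the log(1/ε) factor) is precisely the kind of slack that exact, algorithm-dependent sample-size analysis aims to close.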
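
To make the VC-dimension-1 setting concrete, here is a minimal illustrative sketch in Python of a consistent learner for threshold functions on the real line, a canonical class of VC dimension 1. The names (`label`, `learn_threshold`) and the uniform example distribution are assumptions made for illustration only; this is not the paper's general algorithm, whose exact sample-size requirement is the subject of the paper.

```python
# Illustrative sketch: a consistent learner for threshold concepts on
# the real line, a canonical class of VC dimension 1. A hypothesis h_t
# labels x positive iff x >= t.

import random

def label(x, t_true):
    """Target concept: x is positive iff x >= t_true."""
    return x >= t_true

def learn_threshold(sample):
    """Return a threshold consistent with the labelled sample.

    Any value strictly above the largest negative example and at most
    the smallest positive example is consistent; we take the midpoint
    when both kinds of example are present.
    """
    positives = [x for x, y in sample if y]
    negatives = [x for x, y in sample if not y]
    lo = max(negatives, default=float("-inf"))
    hi = min(positives, default=float("inf"))
    if lo >= hi:
        raise ValueError("sample is not consistent with any threshold")
    if lo == float("-inf"):
        return hi           # no negatives seen: smallest positive works
    if hi == float("inf"):
        return lo + 1.0     # no positives seen: anything above lo works
    return (lo + hi) / 2.0

# Draw m examples, learn, and estimate the error on fresh test points.
t_true = 0.3
m = 200
sample = [(x, label(x, t_true)) for x in (random.random() for _ in range(m))]
t_hat = learn_threshold(sample)
test = [random.random() for _ in range(10_000)]
error = sum(label(x, t_true) != (x >= t_hat) for x in test) / len(test)
print(f"learned threshold {t_hat:.4f}, estimated error {error:.4f}")
```

For this class, any consistent learner enjoys the usual VC-dimension-1 order-of-growth guarantee; the paper's concern is the exact, not merely asymptotic, number of examples required.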

Divisions: Aston University (General)
Uncontrolled Keywords: Vapnik-Chervonenkis (VC) dimension, machine learning, Probably Approximately Correct (PAC) framework, optimal algorithm-dependent bounds
PURE Output Type: Working paper
Published Date: 1996-12
Authors: Goldberg, Paul W.
