Classification complexity

Aka COLT / computational learning theory; the theory of binary classification and queries.

Themes

Consider the classification task described in the statistics survey. Computational learning theory is about the binary classification problem: getting the best classifier under 0/1 loss.

Hardness of (accurately/ approximately) learning a concept in a concept class in terms of time, space, sample size: Upper (\textbf{Efficient Learning Algorithm design}) and Lower bounds. Doing this: Agnostic or gnostic of distribution; In the presence or absence of noise (in label or in input); relative to ease of evaluation of a concept; Proper learning vs learning using a different Hypothesis concept class.

What concept classes are more general?

How many mistakes does an algorithm make whilst efficiently learning a concept class? Learning using a bunch of experts.

Judge the importance of a problem/model: consider real-life learning scenarios where it is useful; consider whether a lower/upper bound for some important scenario may be obtained.

Interplay with other areas.

Observe similarities between compression and learning: learning that is more than mere memorization (kNN being an example to the contrary) usually involves extracting something concise and general from the examples.

Cryptography and learning: opposites.

Folk theorems

The only concept classes C we know how to learn are (essentially) polynomials (e.g., PTFs).

Characterization of research effort

See computational complexity theory ref and algorithms ref; emphasis is on the former.

Notation

The sample

Universal set of possible inputs X. An example := ($x \in X$, label $l$). The label is in $\{0,1\}$ or $\{-1,1\}$ depending on context. Bits in x: $x_1 \dots x_n$. Sample set S, $|S| = m$.

Concepts and their sets

Target concept (classifier function) in a certain representation: c; Concept class C; Concept representation class R, representation dimension (f(input size)) or number of binary vars in x: n, sampling distribution D; hypothesis concept h, hypothesis class H.

$|H_{n,m}|$: H restricted to algorithms with m n-dimensional examples. \chk

Learning algorithm

Learning algorithm L, attribute/ variable list V.

The binary classifier c

For different ways of looking at c, see boolean fn ref. For VCD proofs/ measuring complexity of representation classes, it is profitable to view a hypothesis as the dichotomy it induces on X.

Representation classes of classifiers

Properties of representation classes

Error regions

Class of error regions: $\Delta(c) = \{c \,\Delta\, h : h \in H\}$.

epsilon net of examples

\textbf{ϵ net}: A set of examples which hits every error region of weight $\geq \epsilon$ under D; $\Delta_\epsilon(c)$ denotes the class of these $\epsilon$-heavy error regions. A bad hypothesis h is one with $c \,\Delta\, h \in \Delta_\epsilon(c)$.
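
A brute-force sketch of this definition over a small finite domain; the code, the names (is_eps_net, error_region), and the toy class are illustrative, not from any source:

\begin{verbatim}
import itertools

def error_region(c, h, X):
    """Points where hypothesis h disagrees with target c: the symmetric difference c Delta h."""
    return {x for x in X if c(x) != h(x)}

def is_eps_net(S, c, H, D, eps):
    """True iff S hits every error region c Delta h of weight >= eps under D.

    D is a dict mapping each point of the (finite) domain to its probability.
    """
    X = D.keys()
    for h in H:
        region = error_region(c, h, X)
        weight = sum(D[x] for x in region)
        if weight >= eps and not (region & set(S)):
            return False   # an eps-heavy error region was missed
    return True

# Tiny example: X = {0,1}^2, target = OR, a few toy hypotheses.
X = list(itertools.product([0, 1], repeat=2))
D = {x: 1 / len(X) for x in X}                      # uniform distribution
c = lambda x: x[0] | x[1]
H = [lambda x: x[0], lambda x: x[1], lambda x: x[0] & x[1], lambda x: 1]
print(is_eps_net(S=[(0, 1), (1, 0)], c=c, H=H, D=D, eps=0.3))
\end{verbatim}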

Monotonicity

To understand monotonicity of boolean functions, see boolean functions reference.

Transformation to a non-monotone R by introducing n new literals, if n is finite.

Closures

Projection closure

Take a partial assignment P; let P/A be an assignment to V that first uses P, then uses A for the unassigned vars. Projection of an example using P. Projection-closed concept class: take any f in C, fix values of some variables using P; the resulting $f_P$, defined by $f_P(A) = f(P/A)$, is again in C.
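
A small sketch of the projection $f_P(A) = f(P/A)$ for a conjunction, illustrating that the class of conjunctions is projection closed (illustrative code):

\begin{verbatim}
def project(f, P, n):
    """Return f_P, where f_P(A) = f(P/A): variables fixed by the partial
    assignment P (a dict var_index -> 0/1) take their P-value, the rest come from A."""
    def f_P(A):
        x = [P[i] if i in P else A[i] for i in range(n)]
        return f(x)
    return f_P

# A conjunction over x0..x3: x0 AND (NOT x2).
conj = lambda x: x[0] and not x[2]

# Fix x0 = 1 under P; the projection is the conjunction (NOT x2) over the remaining vars.
f_P = project(conj, {0: 1}, n=4)
print(f_P([None, 0, 0, 1]), f_P([None, 0, 1, 1]))   # True False
\end{verbatim}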

Embedding closure

Embedding one domain of variables into another; embedding a concept into a domain of variables. Embedding an example. Embedding-closed concept class: take f, rename variables, add irrelevant variables to the domain; the result is still in C.

PEC classes

Most useful concept classes are projection and embedding closed (pec), or are contained in some projection and embedding closed C.

Some representation classes

Geometric classes

Also see boolean functions ref for halfspaces etc. Axis-aligned rectangles (good basic example).

Boolean functions

See boolean functions ref: k-DNF, k-CNF, conjunctions, disjunctions, decision lists, decision trees, T-augmented decision trees, polynomial-size DNF, D-PTF. Halfspaces: a common benchmark for machine learning algs. Intervals in $\mathbb{R}$. Polyhedra in $\mathbb{R}^n$. The class of parity functions of length k: PARITY(k).

Classes of Continuous fns

See complex analysis ref.

Restricted concept classes

$C_n$: concepts depending on a set of n vars. $C_{r,n}$: concepts in $C_n$ with at most r relevant variables.

Some PAC Reductions

k-CNF to Conjunctions. k-decision list to 1-decision list.

Boolean formula to log-space Turing machine: parse the formula/circuit; have an identifier for each node.

Log-space Turing machine concept class (max $k \log n$ space) to Deterministic Finite Automaton (DFA) concept class: DFA state = ⟨TM state, possible tape content, head location⟩; transitions follow the TM's head moves (L/R) and tape writes.
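
A sketch of the first reduction above (k-CNF to conjunctions), assuming the standard construction: introduce one new feature per clause of at most k literals, then run the usual conjunction elimination learner over the expanded feature space. The code and names are illustrative.

\begin{verbatim}
from itertools import combinations, product

def clauses(n, k):
    """All disjunctions of at most k literals over x0..x_{n-1}, each as a frozenset
    of (index, sign) pairs; sign=True means the positive literal."""
    lits = [(i, s) for i in range(n) for s in (True, False)]
    return [frozenset(c) for j in range(1, k + 1) for c in combinations(lits, j)]

def expand(x, clause_list):
    """Map an n-bit example to the values of all candidate clauses."""
    return [any((x[i] == 1) == s for (i, s) in c) for c in clause_list]

def learn_conjunction(samples):
    """Standard PAC conjunction learner: start with all features, delete any feature
    that is 0 in some positive example.  samples: list of (expanded_x, label)."""
    kept = set(range(len(samples[0][0])))
    for z, label in samples:
        if label == 1:
            kept -= {j for j in kept if not z[j]}
    return kept   # hypothesis: AND of the kept expanded features

# Toy target 2-CNF over 3 vars: (x0 or x1) and (not x2).
n, k = 3, 2
target = lambda x: (x[0] or x[1]) and not x[2]
clause_list = clauses(n, k)
samples = [(expand(x, clause_list), int(target(x))) for x in product([0, 1], repeat=n)]
kept = learn_conjunction(samples)
h = lambda x: all(expand(x, clause_list)[j] for j in kept)
print(all(h(x) == target(x) for x in product([0, 1], repeat=n)))   # True on this toy domain
\end{verbatim}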

Measure complexity of C

Consider n, size(c) (log|C|). Find VCD; Plot dichotomy count DC(m): these are described in the boolean functions survey. Find sample complexity.
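
A brute-force sketch of the dichotomy count $DC(m) = \max_{|S| = m} |\{(h(x_1), \dots, h(x_m)) : h \in H\}|$ on a tiny finite class; the threshold class used below is an assumption for the example, not from the text:

\begin{verbatim}
from itertools import combinations

def dichotomy_count(H, X, m):
    """Max number of distinct labelings (dichotomies) that H induces on any m points of X."""
    best = 0
    for S in combinations(X, m):
        labelings = {tuple(h(x) for x in S) for h in H}
        best = max(best, len(labelings))
    return best

# Threshold functions on {0,...,7}: h_t(x) = 1 iff x >= t.  VCD = 1, so DC(m) = m + 1.
X = list(range(8))
H = [lambda x, t=t: int(x >= t) for t in range(9)]
for m in range(1, 5):
    print(m, dichotomy_count(H, X, m))   # expect m + 1 (consistent with Sauer's lemma)
\end{verbatim}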

SQ dimension (sqd or d)

Definition

Take $c: \{-1,1\}^n \to \{-1,1\}$; $C$ is a class of such concepts. $sqd_D(C) = \max\, d$ such that $\exists S_D = \{f_1, \dots, f_d\} \subseteq C$ with $|corr_{x \sim D}(f_i, f_j)| = |E_{x \sim D}[f_i(x) f_j(x)]| \leq d^{-1}$ for all $i \neq j$; $sqd(C) = \max_D sqd_D(C)$.

Applications: Learning in the presence of noise.
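
A brute-force sketch of $sqd_D$ for a tiny finite class under a given D (illustrative, exponential-time code; the 2-variable parity class below anticipates the next subsection):

\begin{verbatim}
from itertools import combinations, product

def corr(f, g, D):
    """E_{x~D}[f(x) g(x)] for +/-1 valued f, g on a finite weighted domain D."""
    return sum(p * f(x) * g(x) for x, p in D.items())

def sqd_D(C, D):
    """Largest d such that some {f_1,...,f_d} in C has |corr(f_i,f_j)| <= 1/d for i != j."""
    best = 1
    for d in range(2, len(C) + 1):
        for subset in combinations(C, d):
            if all(abs(corr(f, g, D)) <= 1.0 / d
                   for f, g in combinations(subset, 2)):
                best = d
                break
    return best

# Parities over {-1,1}^2 under the uniform distribution: pairwise uncorrelated, so sqd = |C| = 4.
D = {x: 0.25 for x in product([-1, 1], repeat=2)}
C = [lambda x, S=S: (x[0] if 0 in S else 1) * (x[1] if 1 in S else 1)
     for S in [set(), {0}, {1}, {0, 1}]]
print(sqd_D(C, D))   # 4
\end{verbatim}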

Bounds on SQD

Max sqd = max $|C| = 2^n$. Parity fn $p_S(x) = \prod_{i \in S} x_i$; $C = \{p_S : |S| \leq n\}$ has max sqd: if sets S, T differ in some $x_d$, then $E_{x \sim U}[p_S(x) p_T(x)] = E[x_d] \cdot E[\dots] = 0$.

$sqd(C) = \Omega(VCD(C))$: Construct the set $\{f_i\}$: take the uniform distribution on the shattered set; select two fns $\{f_i\}$ solving it for a set of 2 examples $\{x_i\}$; assume the claim for $2^i$; make the $\{f_i\}$ vs $\{x_i\}$ sign matrix M; get the matrix for $2^{i+1}$: $\left[\begin{smallmatrix} M & M \\ M & -M \end{smallmatrix}\right]$; extend the proof to sizes between $2^{i-1}$ and $2^i$ by using the matrices for powers of 2.
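
A small numeric check of the doubling step, assuming the sign matrix M has rows indexed by examples and columns by functions: the matrix $[M\,M;\ M\,{-M}]$ for $2^{i+1}$ functions keeps all distinct columns orthogonal, i.e. pairwise correlation 0 under the uniform distribution on the rows.

\begin{verbatim}
import numpy as np

def double(M):
    """From a sign matrix M for 2^i functions, build the one for 2^{i+1}:
    rows = examples, columns = functions; new matrix = [[M, M], [M, -M]]."""
    return np.block([[M, M], [M, -M]])

M = np.array([[1]])                 # base case: one function, one example
for _ in range(3):
    M = double(M)

# All distinct columns are orthogonal: empirical correlation 0 over the rows.
G = (M.T @ M) / M.shape[0]
print(M.shape, np.allclose(G, np.eye(M.shape[1])))   # (8, 8) True
\end{verbatim}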

$sqd(C) = 2^{O(VCD(C))}$. \why

Other properties

All fns in the corr-shattered set $S_D = \{f_i\}$ are linearly independent. Proof by contradiction: if d is the sqd and $\sum_i c_i f_i = 0$, take $c_m = \max_i |c_i|$; then $f_m = -\sum_{i \neq m} \frac{c_i}{c_m} f_i$, so $1 = E[f_m f_m] = -\sum_{i \neq m} \frac{c_i}{c_m} E[f_m f_i]$; but $\frac{c_i}{c_m} \in [-1,1]$ and $|E[f_m f_i]| \leq d^{-1}$, so the right-hand side has magnitude at most $(d-1)/d < 1$: absurdity.

Other measures

\textbf{Unique -ve dimension}: UND(C) = max size of a subclass $C' \subseteq C$ with unique -ve examples.

Can use covering and packing numbers. These measures of the size of sets are described in the topology ref.

Resources used by the learner

Queries and oracles

An oracle provides examples according to D. A teacher answers membership and equivalence queries.

Example oracle $EX_D(c)$. Membership query (mq): mq(x) yields c(x). Equivalence query (eq): eq(h) yields a counterexample, or "yes" if h = c. MQ(c), EQ(c) oracles.

$EX_U(c)$ can be simulated using MQ(c). In the PAC setting, EQ(c) can be simulated by the EX oracle.
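
A sketch of both simulations (illustrative code): $EX_U(c)$ from MQ(c) by querying a uniformly random point, and eq(h) from EX in the PAC setting by drawing m examples and returning any observed counterexample.

\begin{verbatim}
import random

def EX_from_MQ(mq, n):
    """Simulate EX_U(c): draw x uniformly from {0,1}^n, label it with a membership query."""
    x = tuple(random.randint(0, 1) for _ in range(n))
    return x, mq(x)

def EQ_from_EX(h, ex, m):
    """PAC-style simulation of an equivalence query: draw m examples from EX;
    return a counterexample if h errs on one, else accept h (it is then probably
    approximately correct)."""
    for _ in range(m):
        x, label = ex()
        if h(x) != label:
            return x          # counterexample
    return None               # no disagreement observed: treat eq(h) as 'yes'

# Toy use: target c = parity of the first two bits, hypothesis h = constant 0.
n = 4
c = lambda x: x[0] ^ x[1]
ex = lambda: EX_from_MQ(c, n)
print(EQ_from_EX(lambda x: 0, ex, m=50))   # almost surely prints a counterexample
\end{verbatim}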

Probabilistic oracle

PEX(f) for fn $f: \{0,1\}^n \to [-1,1]$: produces (x, b) where $x \sim U$, b is a $\pm 1$ binary RV with mean f(x). For boolean f, this is the uniform-distribution PAC (UPAC) oracle EX(f, U). Models random noise under U: PEX(tf) is an oracle for f(x) with random noise of rate $1/2 - t/2$: simply see how to make a binary RV b with mean $t \cdot f(x)$.
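
A sketch of PEX under the uniform distribution and of PEX(tf) as a noisy oracle for a boolean f; the code and the parity example are illustrative.

\begin{verbatim}
import random

def PEX(f, n):
    """One draw from PEX(f): x uniform over {0,1}^n, b a +/-1 RV with E[b | x] = f(x)."""
    x = tuple(random.randint(0, 1) for _ in range(n))
    b = 1 if random.random() < (1 + f(x)) / 2 else -1
    return x, b

# For a boolean f (range {-1,1}), PEX(t*f) looks like f with its label flipped
# with prob 1/2 - t/2, since b then has conditional mean t*f(x).
f = lambda x: 1 if x[0] ^ x[1] else -1      # +/-1 parity of the first two bits
t = 0.6
draws = [PEX(lambda x: t * f(x), 4) for _ in range(10000)]
flip_rate = sum(b != f(x) for x, b in draws) / len(draws)
print(round(flip_rate, 2))                  # close to 1/2 - t/2 = 0.2
\end{verbatim}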

Statistical dist: $D(PEX(f), PEX(g)) = E_x[|f(x) - g(x)|] \leq \|f - g\|$. So, upon querying, an algorithm notices the difference between PEX(f) and PEX(g) with prob at most $\|f - g\|$. (See probability ref.)

Non adaptive mq (namq)

Points on which mq's are made do not depend on the concept being learnt. So, many concepts can be learnt from the same set of query points: like the brain. Also, easier parallelizability.

Relative power of various queries

As they solve the iterated products problem, polynomial-sized circuits are not learnable using random examples. By modifying the circuits to encode the circuit at certain fixed points, one gets a class learnable with namq but not with random examples alone. By placing the encoding at varying locations, but encoding the location of the encodings at certain fixed points, one gets a class learnable with mq, but not with namq.

Attribute efficient (ae) learning

r: number of relevant attributes. The algorithm runs in poly time; the number of oracle calls is $p(r, |c|) \cdot h(n)$, with $h(n) = o(n)$.

Strong ae learning: $h(n) = O((\log n)^k)$. Or: the number of oracle calls is polynomial in |c|, not in n.
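
As an illustration (not from the text above): Winnow is the standard example of attribute-efficient online learning; for a monotone disjunction over r of the n variables it makes $O(r \log n)$ mistakes, so its dependence on n is only logarithmic. A minimal sketch:

\begin{verbatim}
import random

def winnow(stream, n, theta=None):
    """Winnow for monotone disjunctions: multiplicative weight updates give an
    O(r log n) mistake bound, where r = number of relevant variables."""
    theta = theta if theta is not None else n
    w = [1.0] * n
    mistakes = 0
    for x, y in stream:                     # x in {0,1}^n, y in {0,1}
        yhat = int(sum(w[i] * x[i] for i in range(n)) >= theta)
        if yhat != y:
            mistakes += 1
            for i in range(n):
                if x[i]:
                    w[i] = w[i] * 2 if y == 1 else w[i] / 2   # promote / demote
    return w, mistakes

# Toy run: target = x0 OR x3 over n = 100 variables.
n = 100
target = lambda x: int(x[0] or x[3])
stream = [(x, target(x)) for x in
          ([random.randint(0, 1) for _ in range(n)] for _ in range(500))]
w, mistakes = winnow(stream, n)
print(mistakes)   # small compared to n, in line with the O(r log n) bound
\end{verbatim}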