Linear Predictors

Time: 2023-03-09 22:04:01

In this chapter we will study the family of linear predictors, one of the most useful families of hypothesis classes. Many learning algorithms that are being widely used in practice rely on linear predictors, first and foremost because of the ability to learn them efficiently in many cases. In addition, linear predictors are intuitive, are easy to interpret, and fit the data reasonably well in many natural learning problems.

We will introduce several hypothesis classes belonging to this family – halfspaces, linear regression predictors, and logistic regression predictors – and present relevant learning algorithms: linear programming and the Perceptron algorithm for the class of halfspaces and the Least Squares algorithm for linear regression. This chapter is focused on learning linear predictors using the ERM approach; however, in later chapters we will see alternative paradigms for learning these hypothesis classes.

First, we define the class of affine functions as

$$L_d = \{ h_{w,b} : w \in \mathbb{R}^d,\ b \in \mathbb{R} \},$$

where

$$h_{w,b}(x) = \langle w, x \rangle + b = \left( \sum_{i=1}^d w_i x_i \right) + b .$$

It will be convenient also to use the notation

$$L_d = \{ x \mapsto \langle w, x \rangle + b : w \in \mathbb{R}^d,\ b \in \mathbb{R} \},$$

which reads as follows: $L_d$ is a set of functions, where each function is parameterized by $w \in \mathbb{R}^d$ and $b \in \mathbb{R}$, and each such function takes as input a vector $x$ and returns as output the scalar $\langle w, x \rangle + b$.
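As a minimal sketch of evaluating one member of this class, the snippet below computes $h_{w,b}(x) = \langle w, x \rangle + b$ in NumPy; the particular vectors $w$, $x$ and bias $b$ are illustrative values, not taken from the text.

```python
import numpy as np

# One affine predictor h_{w,b}(x) = <w, x> + b in R^3
# (w, b, and x below are arbitrary illustrative values).
w = np.array([0.5, -1.0, 2.0])
b = 0.25
x = np.array([1.0, 2.0, 3.0])

# Inner product of w and x, plus the bias term.
value = np.dot(w, x) + b
print(value)  # 0.5*1 - 1.0*2 + 2.0*3 + 0.25 = 4.75
```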

The different hypothesis classes of linear predictors are compositions of a function $\phi : \mathbb{R} \to \mathcal{Y}$ on $L_d$. For example, in binary classification we can choose $\phi$ to be the sign function, and for regression problems, where $\mathcal{Y} = \mathbb{R}$, $\phi$ is simply the identity function.
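The two compositions just mentioned can be sketched as follows; the concrete parameter values are illustrative assumptions, not from the text.

```python
import numpy as np

# Illustrative parameters for one affine function in R^2.
w, b = np.array([1.0, -2.0]), 0.5

def affine(x):
    # The underlying affine function <w, x> + b.
    return np.dot(w, x) + b

def halfspace(x):
    # Binary classification: phi = sign, composed on the affine function.
    return np.sign(affine(x))

# Regression: phi is the identity, so the predictor is affine() itself.
x = np.array([3.0, 1.0])
print(halfspace(x))  # sign(3.0 - 2.0 + 0.5) = sign(1.5) = 1.0
print(affine(x))     # 1.5
```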

It may be more convenient to incorporate $b$, called the bias, into $w$ as an extra coordinate and add an extra coordinate with a value of 1 to all $x \in \mathcal{X}$; namely, let $w' = (b, w_1, w_2, \ldots, w_d) \in \mathbb{R}^{d+1}$ and let $x' = (1, x_1, x_2, \ldots, x_d) \in \mathbb{R}^{d+1}$. Therefore,

$$h_{w,b}(x) = \langle w, x \rangle + b = \langle w', x' \rangle .$$

It follows that each affine function in $\mathbb{R}^d$ can be rewritten as a homogenous linear function in $\mathbb{R}^{d+1}$ applied over the transformation that appends the constant 1 to each input vector. Therefore, whenever it simplifies the presentation, we will omit the bias term and refer to $L_d$ as the class of homogenous linear functions $h_w(x) = \langle w, x \rangle$.
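The bias-absorption trick above can be checked numerically: prepending $b$ to $w$ and $1$ to $x$ leaves the predictor's output unchanged. The specific values of $w$, $b$, and $x$ below are illustrative assumptions.

```python
import numpy as np

# Illustrative affine parameters in R^3.
w = np.array([0.5, -1.0, 2.0])
b = 0.25
x = np.array([1.0, 2.0, 3.0])

# Homogeneous form: w' = (b, w_1, ..., w_d), x' = (1, x_1, ..., x_d).
w_prime = np.concatenate(([b], w))
x_prime = np.concatenate(([1.0], x))

# The affine value <w, x> + b equals the homogeneous value <w', x'>.
affine_value = np.dot(w, x) + b
homogeneous_value = np.dot(w_prime, x_prime)
assert np.isclose(affine_value, homogeneous_value)
```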