# (Math)

Question


Let D be the distribution over the data points (x, y), and let H be the hypothesis class, in which one would like to find a function f that has a small expected loss L(f) by minimizing the empirical loss ˆL(f). A few definitions/terminologies:
• The best function among all (measurable) functions is called Bayes hypothesis:
f* = arg inf_f L(f).
• The best function in the hypothesis class is denoted as
f_opt = arg inf_{f∈H} L(f)
• The function that minimizes the empirical loss in the hypothesis class is denoted as
ˆf_opt = arg inf_{f∈H} ˆL(f)
• The function output by the algorithm is denoted as ˆf. (It can be different from ˆf_opt since the optimization may not find the best solution.)
• The difference between the loss of f* and f_opt is called the approximation error:
x_app = L(f_opt) − L(f*)
which measures the error introduced in building the model/hypothesis class.
• The difference between the loss of f_opt and ˆf_opt is called the estimation error:
x_est = L(ˆf_opt) − L(f_opt)
which measures the error introduced by using finite data to approximate the distribution D.
• The difference between the loss of ˆf_opt and ˆf is called the optimization error:
x_opt = L(ˆf) − L(ˆf_opt)
which measures the error introduced in optimization.
• The difference between the loss of f* and ˆf is called the excess risk:
x_exc = L(ˆf) − L(f*)
which measures the distance from the output of the algorithm to the best solution possible.
(1) Show that x_exc = x_app + x_est + x_opt.

Comments: This means that to get better performance, one can: 1) build a hypothesis class closer to the ground truth; 2) collect more data; 3) improve the optimization.

(2) Typically, when one has enough data, the empirical loss concentrates around the expected loss: there exists x_con > 0 such that for any f ∈ H, |ˆL(f) − L(f)| ≤ x_con. Show that in this case, x_est ≤ 2x_con.
Comments: This means that to get a small estimation error, the number of data points should be large enough that concentration happens. The number of data points needed to achieve concentration x_con is called the sample complexity, which is an important topic in learning theory and statistics.


Step 1

Hello! Since you have posted two different questions, we are answering the first one. If you also need the second question answered, kindly re-post it as a separate question.

Step 2

(1)

From the given information,

f* = arg inf_f L(f)

f_opt = arg inf_{f∈H} L(f)

ˆf_opt = arg inf_{f∈H} ˆL(f)

x_app = L(f_opt) − L(f*)

x_est = L(ˆf_opt) − L(f_opt)

x_opt = L(ˆf) − L(ˆf_opt)

x_exc = L(ˆf) − L(f*)

Step 3

Consider the sum of the three error terms and expand each by its definition:

x_app + x_est + x_opt = [L(f_opt) − L(f*)] + [L(ˆf_opt) − L(f_opt)] + [L(ˆf) − L(ˆf_opt)]

The terms L(f_opt) and L(ˆf_opt) each appear once with a plus sign and once with a minus sign, so the sum telescopes, leaving

x_app + x_est + x_opt = L(ˆf) − L(f*) = x_exc

which is the required identity.
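Because the identity in (1) is a telescoping sum of the same four loss values, it can also be checked numerically. The sketch below uses a toy setup that is purely illustrative (not from the original problem): squared loss, x ~ Uniform(0, 1), y = 2x + Gaussian noise, the deliberately weak hypothesis class H = {constant predictors f(x) = c}, and a few gradient steps standing in for the imperfect optimizer that produces ˆf.

```python
import random

random.seed(0)

# Hypothetical toy setup: x ~ U(0,1), y = 2x + N(0, 0.1), squared loss,
# and the hypothesis class H = {constant functions f(x) = c}.
def expected_loss(c, n_mc=200_000):
    """Monte Carlo estimate of the expected loss L(f) for f(x) = c."""
    total = 0.0
    for _ in range(n_mc):
        x = random.random()
        y = 2 * x + random.gauss(0, 0.1)
        total += (y - c) ** 2
    return total / n_mc

# Bayes predictor f*(x) = E[y|x] = 2x; its risk is the noise variance.
L_star = 0.1 ** 2
# Best constant in H: c = E[y] = 1, giving L(f_opt).
L_opt = expected_loss(1.0)

# Finite training sample of n points -> empirical minimizer c = mean of the y_i.
n = 50
ys = [2 * random.random() + random.gauss(0, 0.1) for _ in range(n)]
c_hat_opt = sum(ys) / n
L_hat_opt = expected_loss(c_hat_opt)

# Imperfect optimizer: only 5 gradient steps on the empirical loss, from c = 0.
c = 0.0
for _ in range(5):
    c -= 0.1 * 2 * (c - c_hat_opt)  # gradient of mean((y_i - c)^2) w.r.t. c
L_hat = expected_loss(c)

x_app = L_opt - L_star        # approximation error
x_est = L_hat_opt - L_opt     # estimation error
x_opt = L_hat - L_hat_opt     # optimization error
x_exc = L_hat - L_star        # excess risk

# The decomposition is a telescoping sum of the same numbers, so it holds
# up to floating-point rounding regardless of the Monte Carlo noise.
print(f"x_exc = {x_exc:.6f}, x_app + x_est + x_opt = {x_app + x_est + x_opt:.6f}")
```

Note that the identity holds exactly by construction; the Monte Carlo estimates only affect the individual values of the four errors, not the fact that they telescope.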

Tagged in: Statistics