Bootstrap


Bias Correction with Bootstrap Bootstrap Confidence Intervals Bootstrap for Regression

  • The sampling distribution of θ̂ is hard to derive.
    (Or we just don’t feel like taking the time to derive it.)
  • Our sample size is not tiny, but it is too small for “asymptopia”.

Nonparametric bootstrap: Let Y = [Y1, . . . , Yn], with Yi iid ∼ FY (some unknown CDF).

  • Let F̂n(y) = P̂r(Y ≤ y) = (1/n) Σ_{i=1}^n I(Yi ≤ y) (the fraction of the sample ≤ y).

⇒ The distribution of the sample approximates the distribution of the population as n increases.

⇒ Sampling from a sufficiently large sample approximates sampling from the population.
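As a quick sketch of the first implication (illustrative, not from the slides): base R’s ecdf computes F̂n, and for a large sample it tracks the population CDF closely. The exponential population and sample size here are assumptions for this example.

```r
# Sketch: the ECDF of a large sample approximates the population CDF.
set.seed(1)
y <- rexp(5000, rate = 1)       # a large sample from the "population"
Fhat <- ecdf(y)                 # Fhat(y0) = fraction of the sample <= y0
c(Fhat(1), pexp(1, rate = 1))   # empirical vs. true CDF at y = 1
```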

Parametric bootstrap: If an estimator θ̂n is consistent and the model is correct, p(·|θ̂n) approximates p(·|θ).

⇒ Sampling from p(·|θ̂n) approximates sampling from the population.
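A minimal sketch of one parametric-bootstrap resample (the exponential model, sample size, and estimator here are illustrative assumptions, not part of the slides):

```r
# Sketch: one parametric-bootstrap resample for an exponential model.
set.seed(1)
n <- 30
y.obs <- rexp(n, rate = 2)            # observed data (assumed)
theta.hat <- 1/mean(y.obs)            # consistent estimator (the MLE)
y.star <- rexp(n, rate = theta.hat)   # resample from p(. | theta.hat)
```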

Notation:

  • Y*_b = [Y*_{b1}, . . . , Y*_{bn}], the bth “resample”, drawn with replacement from the observed vector (nonparametric) or drawn from p(·|θ̂n) (parametric).
  • θ ≡ θ(FY), the true parameter value or some population quantity, a function of FY.


Example: Let Y1, . . . , Yn iid ∼ fY(y; θ) = θe^{−θy}, for y > 0 and θ > 0.

  ℓ(θ) = Σ_{i=1}^n (log θ − θYi) = n log θ − θnȲ  ⇒  θ̂ = (Ȳ)^{−1}

But,

  • Σ_{i=1}^n Yi ∼ Ga(a = n, b = θ)
  • Ȳ ∼ Ga(a = n, b = nθ)
  • E(θ̂) = E(1/Ȳ) = E[InvGa(a = n, b = nθ)] = nθ/(n − 1) > θ

Also, by Jensen’s Inequality,

  • θ̂ = 1/Ȳ, and 1/y is a convex function
  • E(Ȳ) = 1/θ

∴ E(θ̂) = E(1/Ȳ) > 1/E(Ȳ) = θ.

Recall,

  FY, the population from which Y are drawn
  θ̂(FY), a population quantity
  θ̂n(Y), some estimator of θ̂(FY)

  biasFY[θ̂n(Y)] = EFY[θ̂n(Y)] − θ̂(FY)

  • An estimator θ̂n(Y) is biased if biasFY[θ̂n(Y)] ≠ 0.
  • Note that Wakefield (2013, Sec. 2.7.1) implicitly defines it the other way, as θ̂(FY) − EFY[θ̂n(Y)].
  • Jensen’s inequality: If it happens that E(Ȳ) = g(θ) for some g, with θ̂ = g^{−1}(Ȳ), and g is concave or convex, then θ̂ will be biased for θ.


n <- 20          # Sample size
theta.true <- 1  # Exponential distribution true rate parameter
S <- 1000        # Number of replications for the simulation
B <- 999         # Bootstrap resample size

# A function to generate a dataset
mk.data <- function(n, theta=theta.true) rexp(n, theta)

# MLE
theta.mle <- function(y) 1/mean(y)

# MLE sampling distribution
theta.mle.dist <- replicate(S, {
  y.obs <- mk.data(n)
  theta.mle(y.obs)
})

mean(theta.mle.dist) # Mean of the sampling distribution

## [1] 1.047768

mean(theta.mle.dist) - theta.true # Bias

## [1] 0.04776839

[Figure: histogram of the sampling distribution of θ̂; * = mean]

  • Suppose we have a biased estimator:

    biasFY[θ̂n(Y)] = EFY[θ̂n(Y)] − θ̂(FY) ≠ 0.

    We don’t know either expectation.

  • Idea: treat F̂n as FY.
  • Y*_b ∼ F̂n, so θ̂n(Y*_b) will be biased relative to θ̂n(Y) ≡ θ̂(F̂n) by approximately the same amount:

    biasF̂n[θ̂n(Y*)] = EF̂n[θ̂n(Y*)] − θ̂n(Y) ≈ EFY[θ̂n(Y)] − θ̂(FY) = biasFY[θ̂n(Y)]

⇒ Subtracting off (1/B) Σ_{b=1}^B θ̂n(Y*_b) − θ̂n(Y) from θ̂n(Y) can cancel some of the bias:

    θ̂n,u(Y) = θ̂n(Y) − [(1/B) Σ_{b=1}^B θ̂n(Y*_b) − θ̂n(Y)] = 2θ̂n(Y) − (1/B) Σ_{b=1}^B θ̂n(Y*_b)

# Nonparametric bootstrap sampling
myboot.np <- function(B, y.obs, theta.f)
  replicate(B, {
    y.b <- sample(y.obs, replace=TRUE)
    theta.f(y.b)
  })


# Sampling distribution of the mean of the bootstrap samples
theta.boot.dist <- replicate(S, {
  y.obs <- mk.data(n)
  mean(myboot.np(B, y.obs, theta.mle))
})

mean(theta.boot.dist) - theta.true # Biased by about twice as much

## [1] 0.1177986

# Debiased estimate and simulation
theta.bc <- function(theta.obs, theta.boot) theta.obs - (mean(theta.boot) - theta.obs)

theta.bc.dist <- replicate(S, {
  y.obs <- mk.data(n)
  theta.obs <- theta.mle(y.obs)
  theta.boot <- myboot.np(B, y.obs, theta.mle)
  theta.bc(theta.obs, theta.boot)
})

mean(theta.bc.dist) - theta.true # Much less biased.

## [1] -0.001975362

Bootstrap Confidence Intervals

Basic approaches / Refined approaches / Demonstration

Wakefield (2013, Sec. 2.7.1); Carpenter and Bithell (2000)

Given

  Y*_1, . . . , Y*_B, the B resamples of the dataset
  θ̂(Y*_b), b = 1..B, the univariate parameter estimates of interest
  θ̂*_q, the qth quantile of the θ̂(Y*_b)s
  α = 1 − CL

Basic (pivotal) interval:

  [θ̂ − (θ̂*_{1−α/2} − θ̂), θ̂ − (θ̂*_{α/2} − θ̂)]

  − Assumes that the distribution of θ̂* − θ̂ is the same as the distribution of θ̂ − θ.
  Intuition: 97.5% of the time, θ̂ will overestimate θ by less than θ̂*_{0.975} − θ̂, so θ will be less than θ̂ − (θ̂*_{0.975} − θ̂) only 2.5% of the time.
  − Often has poor coverage.
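As a sketch (not from the slides), the pivotal interval is simple to compute from bootstrap draws; the exponential data, sample size, and B here are illustrative assumptions:

```r
# Sketch: basic (pivotal) 95% CI for an exponential rate (illustrative setup).
set.seed(1)
y.obs <- rexp(50, rate = 1)                     # observed data (assumed)
theta.hat <- 1/mean(y.obs)                      # point estimate (the MLE)
theta.star <- replicate(999,                    # bootstrap estimates
  1/mean(sample(y.obs, replace = TRUE)))
q <- quantile(theta.star, c(0.025, 0.975))      # theta*_{a/2}, theta*_{1-a/2}
ci.basic <- c(theta.hat - (q[2] - theta.hat),   # lower endpoint
              theta.hat - (q[1] - theta.hat))   # upper endpoint
```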


Normal interval:

  θ̂(Y) − [θ̄*(Y) − θ̂(Y)] ± z_{1−α/2} √(v̂ar_F̂(θ̂)),

  where θ̄*(Y) = (1/B) Σ_{b=1}^B θ̂(Y*_b) and

  v̂ar_F̂(θ̂) = (1/(B − 1)) Σ_{b=1}^B [θ̂(Y*_b) − θ̄*(Y)]²

  − Assumes the sampling distribution of θ̂ is symmetric and not too weird.
  Intuition: Use the bootstrap to debias θ̂ and to estimate its variance.
  + Has good coverage if it is.
  + Can be useful if var(θ̂) is hard to get analytically.
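A sketch of the normal interval under the same illustrative exponential setup (data, sample size, and B are assumptions, not from the slides):

```r
# Sketch: normal-approximation 95% CI with bootstrap debiasing.
set.seed(1)
y.obs <- rexp(50, rate = 1)                             # observed data (assumed)
theta.hat <- 1/mean(y.obs)
theta.star <- replicate(999, 1/mean(sample(y.obs, replace = TRUE)))
debiased <- theta.hat - (mean(theta.star) - theta.hat)  # subtract estimated bias
se.boot  <- sd(theta.star)                              # bootstrap standard error
ci.normal <- debiased + c(-1, 1) * qnorm(0.975) * se.boot
```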


Percentile interval:

  [θ̂*_{α/2}, θ̂*_{1−α/2}]

  + Invariant to transformation.
  − Effectively assumes that there exists a monotone transformation g(θ) s.t.

    g(θ̂) ∼ N(g(θ), σ²)
    g(θ̂*) ∼ N(g(θ̂), σ²)

  Intuition: If such a g exists, we could get a 1 − α CI by taking g^{−1}(g(θ̂) ± z_{1−α/2} σ), but g(θ̂) + z_{1−α/2} σ ≡ g(θ̂*_{1−α/2}), so we can just use the quantiles of θ̂* directly.
  − Performs poorly if θ̂ doesn’t behave that way (e.g., a mean–variance relationship).
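The percentile interval is the easiest of all to compute; a sketch under the same illustrative exponential setup (the data and B are assumptions):

```r
# Sketch: percentile 95% CI -- simply the bootstrap quantiles.
set.seed(1)
y.obs <- rexp(50, rate = 1)    # observed data (assumed)
theta.star <- replicate(999, 1/mean(sample(y.obs, replace = TRUE)))
ci.pct <- quantile(theta.star, c(0.025, 0.975))
```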



Studentized (bootstrap-t) interval:

  • A refinement of the pivotal interval.
  + Adjusts for the difference between var(θ̂* − θ̂) and var(θ̂ − θ).

  1. Estimate σ̂ ≈ √(v̂ar(θ̂)).
  2. For each θ̂*_b, estimate σ̂*_b ≈ √(v̂ar_F̂*_b(θ̂)), e.g., by running a bootstrap for every θ̂*_b.
     − Very computationally intensive, unless a formula exists for σ̂.
  3. Calculate t*_b = (θ̂*_b − θ̂)/σ̂*_b for b = 1..B, and let t*_q be its qth quantile.
  4. The CI is then [θ̂ − σ̂ t*_{1−α/2}, θ̂ − σ̂ t*_{α/2}].
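A sketch of the bootstrap-t interval for the illustrative exponential setup. Here a variance formula exists (the Fisher information of the exponential is n/θ², so se(θ̂) ≈ θ̂/√n), which avoids the double bootstrap; the data, sample size, and B are assumptions:

```r
# Sketch: studentized (bootstrap-t) 95% CI, using se(theta.hat) ~ theta.hat/sqrt(n)
# (from the exponential Fisher information) instead of a double bootstrap.
set.seed(1)
n <- 50
y.obs <- rexp(n, rate = 1)          # observed data (assumed)
theta.hat <- 1/mean(y.obs)
sigma.hat <- theta.hat/sqrt(n)      # formula-based SE of theta.hat
t.star <- replicate(999, {
  theta.b <- 1/mean(sample(y.obs, replace = TRUE))
  (theta.b - theta.hat) / (theta.b/sqrt(n))   # studentize with sigma*_b
})
q <- quantile(t.star, c(0.025, 0.975))
ci.t <- c(theta.hat - sigma.hat * q[2],       # lower endpoint
          theta.hat - sigma.hat * q[1])       # upper endpoint
```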


BCa (bias-corrected and accelerated) interval:

  • A refinement of the percentile interval.
  + Adjusts for skewness in θ̂ and in how it changes with θ.

  Loosely, assumes that a transformation g(θ) exists s.t.

    g(θ̂) ∼ N[g(θ) − b(1 + ag(θ)), (1 + ag(θ))²]
    g(θ̂*) ∼ N[g(θ̂) − b(1 + ag(θ̂)), (1 + ag(θ̂))²]

  for some a and b.

  + Works very well.
  − Can misbehave badly if α is very small.

  Then,

    q_l = Φ( b + (b + z_{α/2}) / (1 − a(b + z_{α/2})) ),
    q_u = Φ( b + (b + z_{1−α/2}) / (1 − a(b + z_{1−α/2})) ),

  and the CI is [θ̂*_{q_l}, θ̂*_{q_u}].

  − A more complicated a is needed for the parametric bootstrap.
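In practice one rarely codes BCa by hand; a sketch using the boot package (not used in the slides; the exponential data and R = 1999 are illustrative assumptions):

```r
# Sketch: BCa interval via the boot package.
library(boot)
set.seed(1)
y.obs <- rexp(50, rate = 1)               # illustrative data
rate.mle <- function(d, i) 1/mean(d[i])   # statistic(data, indices)
b <- boot(y.obs, statistic = rate.mle, R = 1999)
ci <- boot.ci(b, conf = 0.95, type = "bca")
ci$bca[4:5]                               # lower and upper endpoints
```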
