【Stats300A】L4

From Data Reduction to Risk Reduction

Jensen's inequality for convex $f$: $f(E(X)) \le E(f(X))$.

Rao-Blackwell Theorem

Assume that $T$ is a sufficient statistic, and that the loss function $L(\theta, d)$ is strictly convex in $d$. Let $\delta(X)$ be an estimator of $g(\theta)$ with finite risk $R(\theta, \delta)$, and define $\eta(t) = E[\delta(X) \mid T(X) = t]$ (well defined as an estimator because, by sufficiency, it does not depend on $\theta$). Then

$$R(\theta, \eta) < R(\theta, \delta) \quad \forall\, \theta,$$

unless $\delta = \eta$ with probability 1 (i.e. $\delta$ was already a function of $T$).
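As a concrete sketch (my own illustration, not from the lecture): take $X_1,\dots,X_n$ iid Bernoulli($p$), the crude unbiased estimator $\delta = X_1$, and the sufficient statistic $T = \sum_i X_i$. Conditioning gives $\eta(T) = E[X_1 \mid T] = T/n$, the sample mean, and a short simulation shows the variance drop:

```python
import random

# Rao-Blackwell in action (illustrative sketch; the Bernoulli setup is my
# own choice, not from the lecture).  X_1,...,X_n iid Bernoulli(p):
#   delta = X_1              unbiased for p, but crude
#   T = sum_i X_i            sufficient statistic
#   eta(T) = E[X_1|T] = T/n  the Rao-Blackwellized estimator (sample mean)
random.seed(0)
n, p, reps = 10, 0.3, 20000

deltas, etas = [], []
for _ in range(reps):
    x = [1 if random.random() < p else 0 for _ in range(n)]
    deltas.append(x[0])       # delta(X) = X_1
    etas.append(sum(x) / n)   # eta(T) = T/n

def var(v):
    m = sum(v) / len(v)
    return sum((a - m) ** 2 for a in v) / len(v)

# Theory: Var(delta) = p(1-p) = 0.21, Var(eta) = p(1-p)/n = 0.021
print(var(deltas), var(etas))
```

Both estimators have the same expectation $p$, but conditioning on the sufficient statistic cuts the variance by a factor of $n$.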

UMVU Estimators

Proof of the Rao-Blackwell theorem in the unbiased case

By the law of total expectation, $\eta(T)$ is unbiased:

$$E_\theta\{\eta(T)\} = E_\theta[E\{\delta(X) \mid T\}] = E_\theta\{\delta(X)\} = g(\theta)$$

Next we show that the variance decreases:

$$\begin{aligned}
\operatorname{Var}_\theta\{\delta\} &= E_\theta\{\delta - E_\theta\delta\}^2 \\
&= E_\theta\{\delta - E_\theta(\delta \mid T) + E_\theta(\delta \mid T) - E_\theta\delta\}^2 \\
&= E_\theta\{\delta - \eta(T)\}^2 + \operatorname{Var}_\theta\{\eta(T)\} + 2\,E_\theta\{[\delta - E_\theta(\delta \mid T)][E_\theta(\delta \mid T) - E_\theta\delta]\}
\end{aligned}$$

It remains to show that $E_\theta\{[\delta - E_\theta(\delta \mid T)][E_\theta(\delta \mid T) - E_\theta\delta]\} = 0$, which then gives $\operatorname{Var}_\theta\,\delta(X) \ge \operatorname{Var}_\theta\,\eta(T)$.

$$\begin{aligned}
&E_\theta\{[\delta - E_\theta(\delta \mid T)][E_\theta(\delta \mid T) - E_\theta(\delta)]\} \\
&= E_T\big(E_\theta\{[\delta - E_\theta(\delta \mid T)][E_\theta(\delta \mid T) - E_\theta(\delta)] \mid T\}\big) \\
&= E_T\big(E_\theta\{\delta - E_\theta(\delta \mid T) \mid T\}\,[E_\theta(\delta \mid T) - E_\theta(\delta)]\big) \\
&= E_T\big([E_\theta(\delta \mid T) - E_\theta(\delta \mid T)][E_\theta(\delta \mid T) - E_\theta(\delta)]\big) = 0
\end{aligned}$$

The key step is the first equality, which applies the law of total expectation; the second pulls the $T$-measurable factor out of the conditional expectation. This completes the proof.[1]
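The variance decomposition can also be checked numerically. In this sketch (the Bernoulli model is an assumed example, not from the notes), $\delta = X_1$, $T = \sum_i X_i$, and $\eta(T) = T/n$; since the cross term vanishes, $\operatorname{Var}(\delta)$ and $E\{\delta - \eta\}^2 + \operatorname{Var}(\eta)$ agree up to Monte Carlo error:

```python
import random

# Numerical check of the decomposition (sketch; Bernoulli model assumed):
#   Var(delta) = E[(delta - eta)^2] + Var(eta)
# with delta = X_1, T = sum(X), eta(T) = T/n for X_i iid Bernoulli(p).
random.seed(1)
n, p, reps = 10, 0.3, 50000

d, e = [], []
for _ in range(reps):
    x = [1 if random.random() < p else 0 for _ in range(n)]
    d.append(x[0])
    e.append(sum(x) / n)

def mean(v):
    return sum(v) / len(v)

var_d = mean([a * a for a in d]) - mean(d) ** 2     # Var(delta)
var_e = mean([a * a for a in e]) - mean(e) ** 2     # Var(eta)
msd   = mean([(a - b) ** 2 for a, b in zip(d, e)])  # E[(delta - eta)^2]

# Cross term vanishes in expectation, so these agree up to simulation error
print(var_d, msd + var_e)
```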


Alternatively, one can directly show $E\{(\eta - \theta)^2\} \le E\{(\delta - \theta)^2\}$ [2], using $(E(X))^2 \le E(X^2)$ applied conditionally:

$$E\{(\eta - \theta)^2\} = E\{(E[\delta \mid T] - \theta)^2\} = E\{E[(\delta - \theta) \mid T]^2\} \le E\{E[(\delta - \theta)^2 \mid T]\} = E\{(\delta - \theta)^2\}$$

A uniformly best estimator need not exist, but one can restrict the class of estimators and seek the best estimator within that class. For example, the UMRUE/UMVUE below uniformly minimize risk among unbiased estimators.

UMVU, UMRU estimator

An estimator $\eta$ is UMVU (uniformly minimum variance unbiased estimator) or UMRU (uniformly minimum risk unbiased estimator) if it is unbiased and, for any other unbiased estimator $\delta$,

$$R(\theta, \eta) \le R(\theta, \delta) \quad \forall\, \theta.$$

The UMVU estimator is unique if, for every other unbiased estimator $\delta \neq \eta$, the above inequality is strict for some $\theta$.
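To illustrate the definition (an illustration, not a proof of UMVU-ness): for $X_i$ iid $N(\mu, 1)$, the sample mean is the UMVU estimator of $\mu$, while the sample median is also unbiased (by symmetry) but has larger variance, asymptotically by a factor of $\pi/2$:

```python
import random
import statistics

# Illustration of the UMVU definition (not a proof): for X_i iid N(mu, 1) the
# sample mean is UMVU for mu.  The sample median is also unbiased (by
# symmetry) but has larger variance -- asymptotically by a factor of pi/2.
random.seed(2)
n, mu, reps = 25, 1.0, 20000

means, medians = [], []
for _ in range(reps):
    x = [random.gauss(mu, 1.0) for _ in range(n)]
    means.append(sum(x) / n)
    medians.append(statistics.median(x))

def var(v):
    m = sum(v) / len(v)
    return sum((a - m) ** 2 for a in v) / len(v)

# Theory: Var(mean) = 1/n = 0.04; Var(median) is roughly pi/(2n)
print(var(means), var(medians))
```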

admissible

  1. Mathematical Statistics, Lecture 12 (Rao-Blackwell theorem, zero-unbiased-estimator method, completeness) - 知乎 ↩︎

  2. A quick and simple proof of the Rao-Blackwell theorem - 知乎 ↩︎

