[n, x, N | ψ, σ, M]

where π_enc = 1 − f(0 | σ), as before. As we noted in Section 3.7.2, this can also be maximized to obtain the MLEs of the three parameters N, σ, and ψ. Alternatively, we can remove N from the likelihood by summation, to obtain [n | M, ψ, σ], thus formally replacing the unknown parameter N by ψ. This is essentially the objective of data augmentation: the data set now has a fixed size of M elements (some with unobserved values of x), rather than a parameter space of variable dimension. So what does the joint distribution of the observations and the augmented data look like after N is removed? The result is obtained by compounding two binomial distributions (see the results described in Section 5.1).

model {

  sigma ~ dunif(0, 10)
  psi ~ dunif(0, 1)
  sigma2 <- sigma * sigma
  for (i in 1:(nind + nz)) {
    w[i] ~ dbern(psi)                      # data augmentation indicator
    x[i] ~ dunif(0, Bx)                    # Bx = strip width, input as data
    logp[i] <- -((x[i] * x[i]) / sigma2)   # half-normal detection, log scale
    # lines below reconstructed to complete the model (panel truncated in the source):
    p[i] <- exp(logp[i])
    mu[i] <- w[i] * p[i]
    y[i] ~ dbern(mu[i])                    # y[i] = 1 if individual i was detected
  }
  N <- sum(w[1:(nind + nz)])               # realized abundance
}

Panel 7.1. WinBUGS specification of distance sampling model for the impala data, with the half-normal detection function.
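The binomial compounding invoked above — N ~ Binomial(M, ψ) thinned by detection yields n ~ Binomial(M, ψπ̄) once N is summed out — is easy to verify by simulation. The sketch below is illustrative only; the values of M, psi, and pbar are arbitrary, not the impala data.

```python
import numpy as np

rng = np.random.default_rng(1)

# arbitrary illustrative values (not the impala data)
M, psi, pbar = 200, 0.5, 0.3
trials = 100_000

# nested draws: N ~ Binomial(M, psi), then n | N ~ Binomial(N, pbar)
N = rng.binomial(M, psi, size=trials)
n_nested = rng.binomial(N, pbar)

# compounded form: n ~ Binomial(M, psi * pbar), with N summed out
n_direct = rng.binomial(M, psi * pbar, size=trials)

print(n_nested.mean(), n_direct.mean())  # both close to M * psi * pbar = 30
```

The two sets of draws have the same distribution, which is exactly why N can be removed from the likelihood by summation.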

Figure 7.3. Posterior distributions of N (left panel) and σ (right panel) for the impala data.

which produces

[n, x | M, ψ, σ] = Binomial(n | M, ψ π̄(σ)) ∏_{i=1}^{n} g(x_i | σ) / (B π̄(σ)),

which is a function only of ψ and σ (here π̄(σ) = (1/B) ∫₀^B g(x | σ) dx is the average probability of detection). This simplifies slightly, because the factors of π̄(σ) cancel, to yield the joint likelihood

[n, x | M, ψ, σ] = (M choose n) ψ^n (1 − ψ π̄(σ))^{M−n} ∏_{i=1}^{n} g(x_i | σ) / B.

This can be maximized easily using standard numerical methods.
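As a rough illustration of that numerical maximization, the Python sketch below simulates half-normal distance data and finds the joint MLE of (ψ, σ) by a grid search over the log-likelihood. All values (B, sigma_true, N_true, M) are hypothetical, not the impala data, and the half-normal is parameterized as g(x | σ) = exp(−x²/σ²) to match the WinBUGS panel.

```python
import numpy as np

rng = np.random.default_rng(7)

# --- simulate distance-sampling data (hypothetical values, not the impala data)
B = 2.0             # strip half-width
sigma_true = 1.0    # half-normal scale
N_true = 300        # true abundance within the strip
x_all = rng.uniform(0, B, N_true)                       # distances, all individuals
keep = rng.random(N_true) < np.exp(-(x_all**2) / sigma_true**2)
x = x_all[keep]                                         # observed distances
n = x.size
M = 500             # augmented-data size, chosen larger than any plausible N

def pbar(sigma, ngrid=2000):
    """Average detection probability over (0, B), by the trapezoid rule."""
    g = np.linspace(0.0, B, ngrid)
    y = np.exp(-(g**2) / sigma**2)
    return float((y[:-1] + y[1:]).sum() * (g[1] - g[0]) / 2.0 / B)

def loglik(psi, sigma):
    """log [n, x | M, psi, sigma]: binomial kernel plus distance terms."""
    p = pbar(sigma)
    ll = n * np.log(psi * p) + (M - n) * np.log(1.0 - psi * p)
    ll += np.sum(-(x**2) / sigma**2) - n * np.log(B * p)
    return ll

# crude grid search for the joint MLE of (psi, sigma)
psis = np.linspace(0.05, 0.95, 91)
sigmas = np.linspace(0.3, 2.0, 171)
ll = np.array([[loglik(p_, s_) for s_ in sigmas] for p_ in psis])
i, j = np.unravel_index(ll.argmax(), ll.shape)
psi_hat, sigma_hat = psis[i], sigmas[j]
print(sigma_hat, psi_hat, M * psi_hat)   # M * psi_hat estimates E(N)
```

In practice a proper optimizer would replace the grid search, but the grid keeps the example self-contained; the recovered sigma_hat should fall near sigma_true, and M·psi_hat near N_true.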
