The $[y_i, u_i, x_i \mid N]$ expression, the part with $N$ in it, is similar to that in Eq. (7.1.1). In this case the probability of encounter, $\pi_{\text{enc}}$, is computed as under the ordinary distance-sampling model, i.e.,

$$1 - \pi_{\text{enc}} = \Pr(y = 0) = \int_0^B \bigl(1 - p(x; \sigma)\bigr)\,(1/B)\, dx.$$

The $[u \mid x]$ component of the model is irrelevant for uncaptured individuals.
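To make the encounter-probability integral concrete, here is a minimal numerical sketch, assuming a half-normal detection function $p(x;\sigma) = \exp(-x^2/\sigma^2)$ and distances distributed uniformly on $[0, B]$; the function name and parameter values are illustrative, not part of the original text.

```python
import numpy as np

def pi_enc(sigma, B, n_grid=100_000):
    # pi_enc = (1/B) * integral_0^B exp(-x^2 / sigma^2) dx,
    # evaluated by a midpoint-rule sum on a fine grid
    dx = B / n_grid
    x = (np.arange(n_grid) + 0.5) * dx  # midpoints of the grid cells
    return np.sum(np.exp(-x**2 / sigma**2)) * dx / B
```

The probability of *not* being encountered is then `1 - pi_enc(sigma, B)`; as $\sigma$ grows relative to $B$, $\pi_{\text{enc}}$ approaches 1.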

We could do a Bayesian analysis of this model using data augmentation, with only slight modifications to Panel 7.1, or by devising an MCMC algorithm based on Gibbs sampling; the full-conditional distributions for the model parameters have convenient forms (except for $\sigma$). Classical analysis based on likelihood is somewhat more difficult, but we pursue aspects of that analysis here in order to clarify the consequences of measurement error in this particular problem. We provide a conceptual outline of the approach, without the mathematical details and some of the formal argument.
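As an illustration of the one inconvenient full conditional, the sketch below draws $\sigma$ with a random-walk Metropolis step, conditioning (for simplicity) on simulated true distances and detections; in the full algorithm the latent $x_i$ and the augmented membership indicators would be updated in the same loop. All numerical values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "complete data": true distances x_i and detections y_i,
# simulated under a half-normal detection function p(x) = exp(-x^2/sigma^2)
sigma_true, B, N = 1.0, 3.0, 500
x = rng.uniform(0.0, B, N)
y = rng.binomial(1, np.exp(-x**2 / sigma_true**2))

def log_lik(sigma):
    # Bernoulli log-likelihood of the detections given the true distances;
    # clip avoids log(0) at the numerical extremes
    p = np.clip(np.exp(-x**2 / sigma**2), 1e-12, 1 - 1e-12)
    return np.sum(y * np.log(p) + (1 - y) * np.log1p(-p))

# Metropolis step for sigma (flat prior on sigma > 0): its full conditional
# has no convenient form, so accept/reject a random-walk proposal
samples, cur = [], 1.5
for _ in range(2000):
    prop = cur + rng.normal(0.0, 0.1)
    if prop > 0 and np.log(rng.uniform()) < log_lik(prop) - log_lik(cur):
        cur = prop
    samples.append(cur)
```

With the true distances in hand the chain settles near the data-generating value of $\sigma$; the point of the measurement-error development that follows is precisely that the $x_i$ are *not* observed.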

The motivation is that the true distances $x_i$ are latent variables and, as it stands, the model (the detection function) is expressed in terms of $x$, whereas we observe $u$. We need to get $x$ out of the likelihood. Recall that the half-normal detection function is

$$p(x; \sigma) = \exp(-x^2/\sigma^2).$$
Thinking about this like any other latent-variable problem, we can compute the marginal probability obtained by integrating $x_i$ out of this conditional-on-$x$ detection function. That is, we need

$$\Pr(y_i = 1) = E_{x_i}\{\exp(-x_i^2/\sigma^2)\} = \int \exp(-x_i^2/\sigma^2)\,[x_i]\, dx_i.$$
This is conceptually straightforward, if only we had the distribution of $x_i$. Note that we do observe some information about a particular $x_i$, in the form of (one or more) imperfect measurements $u_{ij}$. This suggests that we should take this expectation over the posterior distribution of $x_i$ given the observations $u_{ij}$. We can motivate this more formally by noting that a typical strategy (Basu, 1977) in this situation is to condition on a sufficient statistic for $x_i$ (the latent variable). That is, we need to express the model for $y$ conditional on a sufficient statistic for $x_i$. The sufficient statistic for $x_i$ is the mean of the observations, $\bar{u}_i$, provided that the measurement-error variance is known. We will assume that it is known; the added complexity of dealing with an unknown measurement-error variance is not too great. So we need to derive the conditional distribution $[x_i \mid \bar{u}_i]$, which will allow us to evaluate

$$\Pr(y_i = 1 \mid \bar{u}_i) = E_{x_i \mid \bar{u}_i}\{\exp(-x_i^2/\sigma^2)\} = \int \exp(-x_i^2/\sigma^2)\,[x_i \mid \bar{u}_i]\, dx_i.$$
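A numerical sketch of this expectation, under the additional illustrative assumptions (not stated in the text above) that the $J$ measurements are $\mathrm{Normal}(x, \sigma_u^2)$ and that $x$ has a $\mathrm{Uniform}(0, B)$ prior, so that $[x \mid \bar{u}]$ is a normal density with mean $\bar{u}$ and standard deviation $\sigma_u/\sqrt{J}$ truncated to $[0, B]$:

```python
import numpy as np

def pr_y1_given_ubar(ubar, sigma, sigma_u, J, B, n_grid=20_000):
    # Marginal detection probability E_{x|ubar}[exp(-x^2/sigma^2)],
    # with [x|ubar] approximated by discrete truncated-normal weights
    x = np.linspace(0.0, B, n_grid)
    sd = sigma_u / np.sqrt(J)
    w = np.exp(-0.5 * ((x - ubar) / sd) ** 2)  # unnormalized [x | ubar]
    w /= w.sum()                               # normalize on the grid
    return float(np.sum(np.exp(-x**2 / sigma**2) * w))
```

With $\sigma_u$ near zero this recovers the plug-in value $\exp(-\bar{u}^2/\sigma^2)$; with appreciable measurement error the marginal detection probability differs noticeably, which is exactly the consequence of ignoring measurement error in the distances.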