Dogs: loglinear model for binary data
Lindley (19??) analyses data from Kalbfleisch (1985) on the Solomon-Wynne experiment on dogs, in which they learn to avoid an electric shock. A dog is put in a compartment, the lights are turned out and a barrier is raised, and 10 seconds later an electric shock is applied. The result is recorded as a success (Y = 1) if the dog jumps the barrier before the shock occurs, or a failure (Y = 0) otherwise.
Thirty dogs were each subjected to 25 such trials. A plausible model is to suppose that a dog learns from previous trials, with the probability of success depending on the number of previous shocks and the number of previous avoidances. Lindley thus uses the following model:
$$p_j = A^{x_j} B^{\,j - x_j}$$

for the probability of a shock (failure) at trial j, where $x_j$ = number of successes (avoidances) before trial j and $j - x_j$ = number of previous failures (shocks). This is equivalent to the following log-linear model:
$$\log p_j = \alpha x_j + \beta (j - x_j)$$

where $\alpha = \log A$ and $\beta = \log B$.
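As a quick illustration (not part of the original BUGS example), the following Python sketch simulates one dog's sequence of 25 trials under this model, using arbitrary values A = 0.8 and B = 0.9 chosen purely for illustration (not estimates from the data), with the same avoidance/shock bookkeeping as the BUGS code below:

	import numpy as np

	rng = np.random.default_rng(0)
	A, B = 0.8, 0.9                 # arbitrary illustrative values, not fitted estimates
	history = []                    # Y[j]: 1 = avoidance (success), 0 = shock (failure)
	avoidances, shocks = 0, 0       # counts of previous avoidances and previous shocks

	for j in range(25):
	    p_shock = A**avoidances * B**shocks   # A^(previous avoidances) * B^(previous shocks)
	    if rng.random() < p_shock:            # dog fails to jump before the shock
	        history.append(0)
	        shocks += 1
	    else:                                 # dog avoids the shock
	        history.append(1)
	        avoidances += 1

	print(history)

Because both counts start at zero, the shock probability on the first trial is 1, so every simulated dog (like every real dog) is shocked on trial 1; the probability then decays as avoidances and shocks accumulate.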
Hence we have a generalised linear model for binary data, but with a log link function rather than the canonical logit link. Note that the model is expressed in terms of the probability of failure, so the code works with y = 1 - Y, and alpha and beta are constrained to be negative so that p stays below 1 under the log link. This is trivial to implement in BUGS:
model
{
	for (i in 1 : Dogs) {
		xa[i, 1] <- 0
		xs[i, 1] <- 0
		p[i, 1] <- 0
		for (j in 2 : Trials) {
			xa[i, j] <- sum(Y[i, 1 : j - 1])      # number of previous avoidances
			xs[i, j] <- j - 1 - xa[i, j]          # number of previous shocks
			log(p[i, j]) <- alpha * xa[i, j] + beta * xs[i, j]
			y[i, j] <- 1 - Y[i, j]                # y = 1 indicates a shock (failure)
			y[i, j] ~ dbern(p[i, j])
		}
	}
	# flat priors constrained to be negative so that 0 < p < 1 under the log link
	alpha ~ dflat()T(, -0.00001)
	beta ~ dflat()T(, -0.00001)
	A <- exp(alpha)
	B <- exp(beta)
}
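As an optional sanity check outside BUGS (a sketch under stated assumptions, not part of the original example), the Python code below simulates data from the same model with arbitrary alpha and beta and then recovers them by maximum likelihood, keeping both parameters negative; the simulated data merely stand in for the real Solomon-Wynne records.

	import numpy as np
	from scipy.optimize import minimize

	rng = np.random.default_rng(1)
	n_dogs, n_trials = 30, 25
	true_alpha, true_beta = -0.25, -0.08      # arbitrary illustrative values

	def simulate(alpha, beta):
	    # Draw a Dogs x Trials matrix of outcomes (Y = 1 for an avoidance).
	    Y = np.zeros((n_dogs, n_trials), dtype=int)
	    for i in range(n_dogs):
	        xa = xs = 0
	        for j in range(n_trials):
	            p = np.exp(alpha * xa + beta * xs)   # probability of a shock
	            if rng.random() < p:
	                xs += 1                          # shock (failure), Y stays 0
	            else:
	                Y[i, j] = 1                      # avoidance (success)
	                xa += 1
	    return Y

	def neg_log_lik(theta, Y):
	    alpha, beta = theta
	    nll = 0.0
	    for i in range(Y.shape[0]):
	        xa = xs = 0
	        for j in range(Y.shape[1]):
	            p = np.exp(alpha * xa + beta * xs)   # probability of a shock
	            nll -= np.log(p) if Y[i, j] == 0 else np.log1p(-p)
	            if Y[i, j] == 0:
	                xs += 1
	            else:
	                xa += 1
	    return nll

	Y = simulate(true_alpha, true_beta)
	fit = minimize(neg_log_lik, x0=[-0.5, -0.5], args=(Y,),
	               bounds=[(None, -1e-5), (None, -1e-5)])   # keep alpha, beta negative
	print("MLE of (alpha, beta):", fit.x)

With the real data loaded in place of the simulated matrix, the same likelihood is what the BUGS model above samples from under its constrained flat priors.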
Data
Inits for chain 1
Inits for chain 2
Results