Fisher’s tea-tasting experiment. There are 8 cups of milky tea: 4 with the milk poured first and 4 with the tea poured first. The lady guesses which is which, saying "milk first" for 3 of the 4 cups that actually had milk first and for 1 of the 4 that actually had tea first. She is not told how many cups had milk poured first, so the column margins are not assumed fixed here.
Several priors are compared in Example 7.1.1. Comment and uncomment the appropriate lines in the model code to specify the desired prior.
model {
  for (i in 1:2) {
    y[i] ~ dbin(p[i], n[i])

    # Independent priors
    # a) standard uniform
    p[i] ~ dunif(0, 1)
    # b) Jeffreys
    # p[i] ~ dbeta(0.5, 0.5)
    # c) uniform priors on the logit scale - note that the equivalent
    #    p[i] ~ dbeta(0, 0) is not allowed in BUGS.
    # logit(p[i]) <- alpha[i]
    # alpha[i] ~ dflat()
  }

  # One-parameter priors
  # p[2] <- 1 - p[1]
  # a) uniform (one-parameter uninformative)
  # p[1] ~ dunif(0, 1)
  # b) Jeffreys
  # p[1] ~ dbeta(0.5, 0.5)
  # c) logit-uniform
  # logit(p[1]) <- alpha
  # alpha ~ dflat()

  # One-parameter sceptical prior
  # p[1] <- theta[pick]
  # pick ~ dcat(q[])
  # q[1] <- 0.8
  # q[2] <- 0.2
  # theta[1] <- 0.5
  # theta[2] ~ dunif(0.5, 1)

  post <- step(p[1] - p[2])

  # Dependent priors
  # a) uniform
  # p[1] ~ dbeta(a, b)
  # a <- 1
  # b <- 1
  # b) Jeffreys
  # p[1] ~ dbeta(a, b)
  # a <- 0.5
  # b <- 0.5
  # c) logit-uniform
  # a <- 0
  # b <- 0
  # logit(p[1]) <- alpha
  # alpha ~ dflat()
  # with rho = 6/(1+1+6) = 0.75 (see the prior-correlation check after the model)
  # n.corr <- 6
  # or with rho = 14/(1+1+14) = 0.875
  # n.corr <- 14
  # x ~ dbin(p[1], n.corr)
  # a.post <- a + x
  # b.post <- b + n.corr - x
  # p[2] ~ dbeta(a.post, b.post)

  # Altham's priors to mimic Fisher's exact test
  # p[1] ~ dbeta(0.00001, 1)
  # p[2] ~ dbeta(1, 0.00001)
}
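The dependent-prior construction commented out above induces prior correlation between p[1] and p[2]; with uniform marginals (a = b = 1) and n.corr = 6 it is about 6/(1+1+6) = 0.75. A minimal sketch of that check, simulating the prior alone (ignoring the data) outside BUGS in Python/NumPy; the seed and simulation size are arbitrary:

import numpy as np

rng = np.random.default_rng(1)
a, b, n_corr = 1.0, 1.0, 6                 # uniform marginals; rho = 6/(1+1+6) = 0.75
N = 200000
p1 = rng.beta(a, b, size=N)                # p[1] ~ dbeta(a, b)
x = rng.binomial(n_corr, p1)               # x ~ dbin(p[1], n.corr)
p2 = rng.beta(a + x, b + n_corr - x)       # p[2] ~ dbeta(a + x, b + n.corr - x)
print(np.corrcoef(p1, p2)[0, 1])           # approximately 0.75

Setting n_corr = 14 in the same sketch gives a prior correlation of about 14/16 = 0.875, matching the stronger-correlation scenario.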
Data: base case where she gets six out of eight cups right.
list(n=c(4,4), y=c(3,1))
Alternative scenario where she gets all of them right
list(n=c(4,4), y=c(4,0))
Initial values for independent logit-uniform priors
list(alpha=c(0,0))
Initial values for one-parameter logit-uniform priors
list(alpha = 0)
Initial values for independent probabilities (though gen.inits is sufficient)
list(p=c(0.5, 0.5))
Results under various alternative priors / data (see Table 7.1)
Independent priors...
a) Uniform prior
node mean sd MC error 2.5% median 97.5% start sample
p[1] 0.66650161 0.17778962 0.0018975653 0.28463471 0.68438667 0.94341445 1001 10000
p[2] 0.33360452 0.18010351 0.002108919 0.053647123 0.31323254 0.72196412 1001 10000
post 0.8932 0.3088588 0.0029877866 0.0 1.0 1.0 1001 10000
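These figures can also be checked without MCMC: with independent uniform priors the posteriors are conjugate, Beta(1+3, 1+1) = Beta(4, 2) for p[1] and Beta(1+1, 1+3) = Beta(2, 4) for p[2]. A sketch of the check in Python/NumPy (illustrative only, not part of the BUGS example; the sample size is arbitrary):

import numpy as np

rng = np.random.default_rng(1)
N = 1000000
p1 = rng.beta(1 + 3, 1 + 1, size=N)   # posterior for p[1]: Beta(4, 2)
p2 = rng.beta(1 + 1, 1 + 3, size=N)   # posterior for p[2]: Beta(2, 4)
print(p1.mean(), p2.mean())           # approximately 0.667 and 0.333
print((p1 > p2).mean())               # approximately 0.89, matching 'post' above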
b) Jeffreys prior
node mean sd MC error 2.5% median 97.5% start sample
p[1] 0.69740281 0.18827005 0.0020573002 0.27831656 0.7256484 0.97026229 1001 9000
p[2] 0.29954893 0.1871326 0.0021184392 0.027567156 0.2717855 0.7211079 1001 9000
post 0.92088889 0.2699121 0.0028216392 0.0 1.0 1.0 1001 9000
c) Logit-uniform (improper) prior
node mean sd MC error 2.5% median 97.5% start sample
p[1] 0.74714953 0.19415083 0.0018142211 0.29149258 0.78997564 0.99164355 1 11000
p[2] 0.24880955 0.19235911 0.0017871968 0.0082612708 0.20534833 0.70963168 1 11000
post 0.948 0.22202703 0.0019912558 0.0 1.0 1.0 1 11000
One-parameter priors...
a) Uniform
node mean sd MC error 2.5% median 97.5% start sample
p[1] 0.7002 0.1381 1.506E-4 0.4003 0.7139 0.925 1001 999000
p[2] 0.2998 0.1381 1.506E-4 0.07498 0.2861 0.5997 1001 999000
post 0.9104 0.2856 3.402E-4 0.0 1.0 1.0 1001 999000
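In the one-parameter model the likelihood is proportional to p^6(1-p)^2 (she is right on six cups and wrong on two, each with probability p[1] and 1 - p[1] respectively), so the uniform prior gives a Beta(7, 3) posterior; and since p[2] = 1 - p[1], the 'post' node is just P(p[1] > 0.5). A sketch of that check in Python/SciPy (illustrative, not part of the BUGS example):

from scipy.stats import beta

post = beta(1 + 6, 1 + 2)   # uniform prior updated with 6 correct, 2 incorrect: Beta(7, 3)
print(post.mean())          # 0.7, matching p[1] above
print(post.sf(0.5))         # P(p[1] > 0.5) = 0.910..., matching 'post' above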
b) Jeffreys
node mean sd MC error 2.5% median 97.5% start sample
p[1] 0.7225 0.1414 1.583E-4 0.4094 0.7394 0.9439 1001 999000
p[2] 0.2775 0.1414 1.583E-4 0.05608 0.2606 0.5906 1001 999000
post 0.9238 0.2653 3.234E-4 0.0 1.0 1.0 1001 999000
c) Logit-uniform
node mean sd MC error 2.5% median 97.5% start sample
p[1] 0.7501 0.1445 1.441E-4 0.4207 0.7719 0.9633 1001 999000
p[2] 0.2499 0.1445 1.441E-4 0.03667 0.2281 0.5793 1001 999000
post 0.9372 0.2427 2.817E-4 0.0 1.0 1.0 1001 999000
Dependent priors (correlation rho=0.75)...
a) Uniform marginals - more conservative
node mean sd MC error 2.5% median 97.5% start sample
p[1] 0.58266004 0.17349451 9.5315737E-4 0.23762344 0.58895528 0.89507329 1000 99001
p[2] 0.41556825 0.17350337 9.3965684E-4 0.10561779 0.4087936 0.76134837 1000 99001
post 0.81131504 0.39125816 0.0012614202 0.0 1.0 1.0 1000 99001
b) Jeffreys marginals
node mean sd MC error 2.5% median 97.5% start sample
p[1] 0.59004264 0.18108007 0.003135864 0.22698459 0.59578747 0.91531867 1 11000
p[2] 0.4070392 0.18107114 0.0031624301 0.090794004 0.39627135 0.77008152 1 11000
post 0.82309091 0.38159175 0.0038845023 0.0 1.0 1.0 1 11000
c) Logit-uniform marginals
node mean sd MC error 2.5% median 97.5% start sample
p[1] 0.59353729 0.18372906 0.002949552 0.23098724 0.5999245 0.91906869 1 11000
p[2] 0.39754323 0.18313325 0.0030759782 0.082836621 0.39027834 0.75957733 1 11000
post 0.82581818 0.37926575 0.0034908887 0.0 1.0 1.0 1 11000
Dependent priors (stronger correlation of rho=0.875)...
a) Uniform marginals - even more sceptical
node mean sd MC error 2.5% median 97.5% start sample
p[1] 0.54125337 0.16566914 0.0036355836 0.22194588 0.54222608 0.85637683 1001 10000
p[2] 0.44025826 0.16436046 0.0036480698 0.13291137 0.43601546 0.7621212 1001 10000
post 0.7519 0.43191016 0.004336118 0.0 1.0 1.0 1001 10000
b) Jeffreys marginals
node mean sd MC error 2.5% median 97.5% start sample
p[1] 0.55089547 0.17372895 0.0039213773 0.21746254 0.55151242 0.8760317 1001 10000
p[2] 0.44616234 0.17193022 0.0040303594 0.12749501 0.44292298 0.77953666 1001 10000
post 0.7543 0.43050146 0.0044090243 0.0 1.0 1.0 1001 10000
c) Logit-uniform marginals
node mean sd MC error 2.5% median 97.5% start sample
p[1] 0.55080573 0.17856533 0.0043292671 0.21199025 0.55101222 0.88647336 1001 10000
p[2] 0.44044199 0.17853148 0.0042646071 0.1160427 0.43589464 0.78923684 1001 10000
post 0.7625 0.42555111 0.0037480635 0.0 1.0 1.0 1001 10000
Sceptical prior (with q[1] = q[2] = 0.5): the prior probability of 0.5 that she has some discriminatory ability (pick = 2) is updated to a posterior probability of about 0.65...
node mean sd MC error 2.5% median 97.5% start sample
p[1] 0.6475 0.1409 1.453E-4 0.5 0.633 0.9151 1001 999000
p[2] 0.3525 0.1409 1.453E-4 0.08489 0.367 0.5 1001 999000
pick 1.649 0.4772 5.122E-4 1.0 2.0 2.0 1001 999000
post 1.0 0.0 1.001E-13 1.0 1.0 1.0 1001 999000
theta[2] 0.7352 0.1246 1.287E-4 0.5171 0.7354 0.9671 1001 999000
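This posterior can be reproduced by one-dimensional quadrature: under pick = 1 the marginal likelihood fixes p[1] = 0.5, while under pick = 2 it integrates the one-parameter likelihood against a Uniform(0.5, 1) density. A sketch in Python/SciPy, assuming the equal prior weights q = (0.5, 0.5) referred to above (illustrative, not part of the BUGS code):

from scipy.integrate import quad
from scipy.stats import binom

def lik(p):   # likelihood for y = (3, 1) out of (4, 4), with p[2] = 1 - p[1]
    return binom.pmf(3, 4, p) * binom.pmf(1, 4, 1 - p)

m1 = lik(0.5)                                    # marginal likelihood, pick = 1 (p[1] = 0.5)
m2 = quad(lambda p: lik(p) * 2.0, 0.5, 1.0)[0]   # pick = 2: p[1] ~ Uniform(0.5, 1), density 2
q1, q2 = 0.5, 0.5                                # assumed equal prior weights
print(q2 * m2 / (q1 * m1 + q2 * m2))             # approx 0.649 = P(pick = 2); 'pick' above is 1 + this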
Extra analyses discussed in the text, not in Table 7.1
Altham's priors give posterior probability 0.76 that p[1] > p[2], i.e. one minus the one-sided Fisher's exact test p-value of 0.24...
node mean sd MC error 2.5% median 97.5% start sample
p[1] 0.59968786 0.20023934 2.0484602E-4 0.19387588 0.61406273 0.93226862 1 1000000
p[2] 0.39958976 0.20008744 2.0444927E-4 0.06730821 0.38534006 0.80567467 1 1000000
post 0.757277 0.42872899 4.1943318E-4 0.0 1.0 1.0 1000001 1000000
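The one-sided Fisher p-value for the observed table (3 of 4 versus 1 of 4, both margins fixed at 4) is 17/70, roughly 0.243, so its complement 0.757 matches the posterior above. A quick sketch of that arithmetic in Python (illustrative only; any hypergeometric-tail routine gives the same number):

from math import comb

# P(3 or 4 'milk first' guesses among the 4 true milk-first cups | both margins fixed at 4)
p_fisher = (comb(4, 3) * comb(4, 1) + comb(4, 4) * comb(4, 0)) / comb(8, 4)
print(p_fisher, 1 - p_fisher)   # 0.2428... and 0.7571..., the latter matching 'post' above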
Sceptical prior with the 100% success data (y = c(4, 0)): 98% posterior probability that she has some discriminatory ability...
node mean sd MC error 2.5% median 97.5% start sample
p[1] 0.8941 0.1019 1.594E-4 0.5987 0.9243 0.9971 501 999500
p[2] 0.1059 0.1019 1.594E-4 0.002879 0.07568 0.4013 501 999500
pick 1.983 0.1304 1.417E-4 2.0 2.0 2.0 501 999500
theta[2] 0.8984 0.09167 1.456E-4 0.6544 0.9249 0.9971 501 999500
... or 93% posterior probability if the prior probability that p[1] = 0.5 is 0.8 (q[1] = 0.8, as in the model code)...
node mean sd MC error 2.5% median 97.5% start sample
p[1] 0.875 0.1307 2.126E-4 0.5 0.919 0.997 501 999500
p[2] 0.125 0.1307 2.126E-4 0.003022 0.08102 0.5 501 999500
pick 1.935 0.247 3.209E-4 1.0 2.0 2.0 501 999500
theta[2] 0.8914 0.09994 1.616E-4 0.616 0.9214 0.997 501 999500
Independent uniform priors with the 100% success data (y = c(4, 0))...
node mean sd MC error 2.5% median 97.5% start sample
p[1] 0.8333 0.1407 1.364E-4 0.4794 0.8703 0.9949 1001 999000
p[2] 0.1666 0.1407 1.367E-4 0.005026 0.1293 0.5215 1001 999000
post 0.9961 0.06263 6.302E-5 1.0 1.0 1.0 1001 999000