# practical characterisation of the correspondence between lmer and jags / bugs

When the number of groups (over which random effects vary) gets large, estimates obtained from the `lmer` function in the `lme4` `R` package will approximate estimates from a class of Bayesian models. Here I explicitly describe the correspondence between `lmer` and a generic package for Bayesian inference called `jags` (or `bugs`). The following assumes that `jags` and `R` (with the `arm` package and its dependencies) are set up to work on your system. I will use this `lmer` model,

`lmer.y <- lmer(y ~ x + (x|grp))`

and this `jags` model,

```
model{
  for(i in 1:n){
    y[i] ~ dnorm(y.hat[i], tau.y)
    y.hat[i] <- a[grp[i]] + b[grp[i]]*x[i]
  }
  tau.y <- sigma.y^-2
  sigma.y ~ dunif(0, 100)
  for(j in 1:m){
    a[j] ~ dnorm(mu.a, tau.a)
    b[j] ~ dnorm(mu.b, tau.b)
  }
  mu.a ~ dnorm(0, 0.0001)
  mu.b ~ dnorm(0, 0.0001)
  tau.a <- sigma.a^-2
  tau.b <- sigma.b^-2
  sigma.a ~ dunif(0, 100)
  sigma.b ~ dunif(0, 100)
}
```

and another file, `control.R`, for running (controlling) the `jags` model,

```
model in "model.R"
data in "datadump.R"
compile, nchains(1)
initialize
update 1000
monitor a
monitor b
monitor sigma.y
monitor sigma.a
monitor sigma.b
monitor mu.a
monitor mu.b
update 5000
coda *
exit
```

In an `R` script, set some preliminaries,

```
library(arm) # load arm package
set.seed(1) # set the random number seed for reproducibility
setwd('the directory with the jags model and control files')
```

Now simulate some grouped data,

```
m <- 100                  # number of groups
n <- m*8                  # total sample size (8 observations per group)
x <- rep(rnorm(8), n/8)   # the same 8 predictor values within every group
grp <- sort(rep(1:m, 8))  # group indicator: 1,1,...,1,2,2,...
A <- rnorm(1)             # grand intercept
a <- rnorm(m)             # group-level intercepts
b <- rnorm(m)             # group-level slopes
y <- A + a[grp] + b[grp]*x + rnorm(n)  # response with unit error sd
```

and store the data in a format that `jags` likes,

```
dat <- as.environment(list(x = x, y = y, grp = grp, n = n, m = m))
dump(
  ls(dat),
  file = 'datadump.R',
  envir = dat
)
```

Now run the `jags` model and store the resulting posterior simulations,

```
system("jags control.R")
op <- read.coda("CODAchain1.txt","CODAindex.txt")
```

Now fit the `lmer` model,

```
lmer.y <- lmer(y ~ x + (x|grp))
```

OK…phew…now we can point to where the two approaches **should** be very similar. Begin with the correspondence between the fixed-effect estimates (in `lmer`-speak) and the posterior means of the effects at the level above the group level (in Bayesian-speak),

```
> fixef(lmer.y)
(Intercept) x
0.65554871 -0.02004891
> apply(op, 2, mean)[c('mu.a','mu.b')]
mu.a mu.b
0.65790942 -0.02102375
```

which is pretty good! To get the standard errors (`lmer`) and posterior standard deviations (Bayes) we use,

```
> apply(op, 2, sd)[c('mu.a','mu.b')]
mu.a mu.b
0.09918014 0.11892642
> se.fixef(lmer.y)
(Intercept) x
0.09816011 0.11709929
```
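The variance components can be checked in the same way; this is a minimal sketch continuing the session above. One caveat: the `jags` model above treats `a[j]` and `b[j]` as independent, while `(x|grp)` also estimates their correlation, so the two fits are not exactly the same model on this point.

```r
# posterior means of the standard deviations from the jags fit
apply(op, 2, mean)[c('sigma.y', 'sigma.a', 'sigma.b')]

# the lmer counterparts: VarCorr() prints the group-level standard
# deviations (and their correlation) plus the residual standard deviation
VarCorr(lmer.y)
```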

Now for the random effects (`lmer`) and the group-level effects (Bayes) and their standard errors (note that Douglas Bates, the author of `lmer`, prefers not to refer to these `lmer` group-level uncertainty estimates as standard errors),

```
mean.bayes <- apply(op, 2, mean)  # posterior means of all monitored nodes
sd.bayes <- apply(op, 2, sd)      # posterior standard deviations
# the first 2*m columns of the coda output are a[1..m] followed by b[1..m]
ranef_bayes <- structure(mean.bayes[1:(2*m)],
                         dim = c(m, 2),
                         dimnames = list(1:m, c('a','b')),
                         names = NULL)
ranef_lmer <- ranef(lmer.y)$grp
# add the fixed effects to make the lmer and Bayes definitions comparable (see * below)
ranef_lmer <- sweep(ranef_lmer, 2, fixef(lmer.y), '+')
se.ranef_bayes <- structure(sd.bayes[1:(2*m)],
                            dim = c(m, 2),
                            dimnames = list(1:m, c('a','b')),
                            names = NULL)
se.ranef_lmer <- se.ranef(lmer.y)$grp
```

Although I’m not going to print out the 100 random effects and their standard errors for both `lmer` and Bayes, suffice it to say that they are very similar.
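A quick way to eyeball that agreement without printing all of the numbers is to plot one set of estimates against the other; a minimal sketch, continuing the session above (the first column of `ranef_lmer` holds the intercepts, the second the slopes):

```r
par(mfrow = c(1, 2))
plot(ranef_lmer[, 1], ranef_bayes[, 'a'],
     xlab = 'lmer intercepts', ylab = 'Bayes intercepts')
abline(0, 1)  # points should fall near the 1:1 line
plot(ranef_lmer[, 2], ranef_bayes[, 'b'],
     xlab = 'lmer slopes', ylab = 'Bayes slopes')
abline(0, 1)
```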

* The Bayes definition allows the group-level effects to have a non-zero mean, whereas the `lmer` definition forces the random effects to have mean zero and accommodates non-zero means through the fixed effects.

NOTE: I will be editing this…it’s just a rough draft for now!

Great post. I have always hoped for a package to convert any lmer model into a jags model, if for no other reason than to better understand the lmer model and to identify differences.