11H2. The data contained in library(MASS);data(eagles) are records of salmon pirating attempts by Bald Eagles in Washington State. See ?eagles for details. While one eagle feeds, sometimes another will swoop in and try to steal the salmon from it. Call the feeding eagle the “victim” and the thief the “pirate.” Use the available data to build a binomial GLM of successful pirating attempts.
y_i ~ Binomial(n_i, p_i)
logit(p_i) = α + β_P P_i + β_V V_i + β_A A_i
α ~ Normal(0, 1.5)
β_P, β_V, β_A ~ Normal(0, 0.5)
where y is the number of successful attempts, n is the total number of attempts, P is a dummy variable indicating whether or not the pirate had large body size, V is a dummy variable indicating whether or not the victim had large body size, and finally A is a dummy variable indicating whether or not the pirate was an adult. Fit the model above to the eagles data, using both quap and ulam. Is the quadratic approximation okay?
First, we need to make some 0/1 variables for the categorical variables P, A, and V in the data frame. Name them whatever you like, but here are my choices:
library(rethinking)
## Loading required package: cmdstanr
## This is cmdstanr version 0.8.0
## - CmdStan version: 2.36.0
## Loading required package: posterior
## Loading required package: parallel
## rethinking (Version 2.42)
library(MASS)
data(eagles)
d <- eagles
d$pirateL <- ifelse( d$P=="L" , 1 , 0 )
d$victimL <- ifelse( d$V=="L" , 1 , 0 )
d$pirateA <- ifelse( d$A=="A" , 1 , 0 )
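A quick cross-tabulation is a useful sanity check that the 0/1 coding matches the original factor levels:
table( d$P , d$pirateL )
table( d$V , d$victimL )
table( d$A , d$pirateA )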
Now to fit the model both ways. I'm going to use the standard intercept prior here, and an only slightly narrower prior for each coefficient. This allows for large effects.
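If you want to see what these priors imply on the probability scale, here is a minimal prior-predictive sketch, simulating directly from the intercept prior:
a_prior <- rnorm( 1e4 , 0 , 1.5 )
dens( inv_logit( a_prior ) , xlab="prior probability of success (baseline case)" )
The implied prior is roughly flat across most of the probability scale. Now the model itself: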
f <- alist(
y ~ dbinom( n , p ),
logit(p) <- a + bP*pirateL + bV*victimL + bA*pirateA ,
a ~ dnorm(0,1.5),
bP ~ dnorm(0,1),
bV ~ dnorm(0,1),
bA ~ dnorm(0,1) )
m1 <- quap( f , data=d )
m1_stan <- ulam( f , data=d , chains=4 , log_lik=TRUE )
## Running MCMC with 4 sequential chains, with 1 thread(s) per chain...
##
## Chain 1 finished in 0.0 seconds.
## Chain 2 finished in 0.0 seconds.
## Chain 3 finished in 0.0 seconds.
## Chain 4 finished in 0.0 seconds.
##
## All 4 chains finished successfully.
## Mean chain execution time: 0.0 seconds.
## Total execution time: 0.5 seconds.
precis(m1)
precis(m1_stan)
Those are reasonably similar. Note that the ulam model does have more extreme coefficient values for bP and bV. So likely the quadratic approximation is having a little trouble with ceiling/floor effects. Note that if you make the priors weaker, these models will not be so similar. Try Normal(0,10) for example:
f <- alist(
y ~ dbinom( n , p ),
logit(p) <- a + bP*pirateL + bV*victimL + bA*pirateA ,
a ~ dnorm(0,10),
bP ~ dnorm(0,10),
bV ~ dnorm(0,10),
bA ~ dnorm(0,10) )
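Refitting just means rerunning the same two calls with this new formula list, for example:
m1 <- quap( f , data=d )
m1_stan <- ulam( f , data=d , chains=4 , log_lik=TRUE )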
Then you should see:
precis(m1)
precis(m1_stan)
Again, with GLMs the existence of ceiling and floor effects makes quadratic approximation risky.
Now for interpreting these estimates. We'll use the ulam estimates here, because the quadratic approximation was not entirely satisfactory. Remember, your estimates will be slightly different, due to Monte Carlo error, but the effective predictions should be indistinguishable. First, the intercept a is the log-odds of a successful attempt when all of the predictors are at zero. So a small immature pirate attacking a small victim succeeds with probability:
post <- extract.samples( m1_stan )
quantile( logistic( post$a ) , probs=c(0.055,0.5,0.945) )
## 5.5% 50% 94.5%
## 0.4139684 0.5894072 0.7439756
A little under 60% of attempts by immature small pirates on small victims are expected to succeed. Note the way I did the calculation above. First the logistic transform is performed for every sample, and then the summary is made. Always summarize last. The mean of a function is not the same as the function of the mean! So what about the slope (β) estimates? These estimates just change the intercept for the log-odds. So for example, the probability that a large immature pirate succeeds against a small victim would be:
quantile( logistic( post$a + post$bP ) , probs=c(0.055,0.5,0.945) )
## 5.5% 50% 94.5%
## 0.9013936 0.9527193 0.9795026
That is to say, the large pirate is almost certain to succeed, according to the model. You can generate such predictions for all of the combinations, if you wish. And indeed, that’s how the plotting of predictions works. There are many ways to plot the posterior predictions. I’ll use a straightforward format with cases on the horizontal and probability/count on the vertical. Here’s the code for the first plot, showing proportion success on the vertical axis:
d$psuccess <- d$y / d$n
p <- link(m1_stan)
y <- sim(m1_stan)
p.mean <- apply( p , 2 , mean )
p.PI <- apply( p , 2 , PI )
y.mean <- apply( y , 2 , mean )
y.PI <- apply( y , 2 , PI )
# plot raw proportions success for each case
plot( d$psuccess , col=rangi2 ,
ylab="successful proportion" , xlab="case" , xaxt="n" ,
xlim=c(0.75,8.25) , pch=16 )
# label cases on horizontal axis
axis( 1 , at=1:8 ,
labels=c( "LAL","LAS","LIL","LIS","SAL","SAS","SIL","SIS" ) )
# display posterior predicted proportions successful
points( 1:8 , p.mean )
for ( i in 1:8 ) lines( c(i,i) , p.PI[,i] )
And here’s the second plot, showing number of successes on the vertical:
# plot raw counts success for each case
plot( d$y , col=rangi2 ,
ylab="successful attempts" , xlab="case" , xaxt="n" ,
xlim=c(0.75,8.25) , pch=16 )
# label cases on horizontal axis
axis( 1 , at=1:8 ,
labels=c( "LAL","LAS","LIL","LIS","SAL","SAS","SIL","SIS" ) )
# display posterior predicted successes
points( 1:8 , y.mean )
for ( i in 1:8 ) lines( c(i,i) , y.PI[,i] )
So what’s different? The biggest difference is probably that the left plot (proportions) makes the probabilities more comparable, because it ignores the sample size for each case on the horizontal axis. This has the advantage of showing, for example, that SIS attempts are predicted to be somewhat successful, even though only 4 attempts were observed. The right plot (counts), in contrast, makes it hard to see the differing probabilities, because sample size varies so much across cases. On the other hand, the count plot has the advantage of showing the additional uncertainty that arises from the binomial process. Inspect the SIS case again, for example. The observed proportion of SIS successes is outside and below the probability interval (left plot). However, in the count plot, the model can accommodate the SIS cases, due to the additional uncertainty arising from the binomial process. In other words, an additional level of stochasticity is factored into the count plot, so looking at the counts as well may be necessary to seriously critique the model.
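To make the SIS comparison concrete, here is a small check, assuming (as in the axis labels above) that SIS is the 8th case in the data:
d$y[8] / d$n[8] # observed proportion of successful SIS attempts
p.PI[,8] # 89% interval for the probability of success
y.PI[,8] # 89% interval for the simulated count of successes out of d$n[8] attempts
The observed proportion falls below the probability interval, but the observed count sits inside the simulated count interval, which is the point made in the paragraph above.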
The two models to compare are the one you fit before, with the three main effects, and the new one, including the interaction P × A. Here is the code to fit the models and compare them:
m2 <- ulam(
alist(
y ~ dbinom( n , p ),
logit(p) <- a + bP*pirateL + bV*victimL +
bA*pirateA + bPA*pirateL*pirateA ,
a ~ dnorm(0,1.5),
bP ~ dnorm(0,1),
bV ~ dnorm(0,1),
bA ~ dnorm(0,1),
bPA ~ dnorm(0,1)
) , data=d , chains=4 , log_lik=TRUE )
## Running MCMC with 4 sequential chains, with 1 thread(s) per chain...
##
## Chain 1 finished in 0.0 seconds.
## Chain 2 finished in 0.0 seconds.
## Chain 3 finished in 0.0 seconds.
## Chain 4 finished in 0.0 seconds.
##
## All 4 chains finished successfully.
## Mean chain execution time: 0.0 seconds.
## Total execution time: 0.5 seconds.
compare( m1_stan , m2 )
This seems to be a tie. Let’s look at the estimates for the interaction model:
precis(m2)
So when the pirate is both large and adult, the log-odds are smaller. This seems like a weird result. Shouldn’t being large and adult help? It does. But the main effects for size and age are also turned on, when an individual is both large and adult. So it’s hard to understand what the model is saying, without computing predictions case by case. However, the predictions are essentially the same as the previous model. There just isn’t enough data here to support any confident inference about an interaction.
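One way to see that the predictions barely change is to compare the posterior mean probability of success for each of the eight cases under the two models, a quick sketch using link:
p1 <- apply( link(m1_stan) , 2 , mean )
p2 <- apply( link(m2) , 2 , mean )
round( cbind( main_effects=p1 , interaction=p2 ) , 2 )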
12H2. Counts are nearly always over-dispersed relative to Poisson. So fit a gamma-Poisson (aka negative-binomial) model to predict deaths using femininity. Show that the over-dispersed model no longer shows as precise a positive association between femininity and deaths, with an 89% interval that overlaps zero. Can you explain why the association diminished in strength?
To deal with the over-dispersion seen in the previous problem, now we’ll fit a Poisson model with varying rates, a gamma-Poisson model. This code will set up the data (just as in the previous problem) and fit the gamma-Poisson model:
library(rethinking)
data(Hurricanes)
d <- Hurricanes
d$fmnnty_std <- ( d$femininity - mean(d$femininity) )/sd(d$femininity)
dat <- list( D=d$deaths , F=d$fmnnty_std)
m2 <- ulam(
alist(
D ~ dgampois( lambda , scale ),
log(lambda) <- a + bF*F,
a ~ dnorm(1,1),
bF ~ dnorm(0,1),
scale ~ dexp(1)
), data=dat , chains=4 , log_lik=TRUE )
## Running MCMC with 4 sequential chains, with 1 thread(s) per chain...
##
## Chain 1 Informational Message: The current Metropolis proposal is about to be rejected because of the following issue:
## Chain 1 Exception: neg_binomial_2_lpmf: Location parameter[1] is inf, but must be positive finite!
## Chain 1 If this warning occurs sporadically, such as for highly constrained variable types like covariance matrices, then the sampler is fine,
## Chain 1 but if this warning occurs often then your model may be either severely ill-conditioned or misspecified.
## Chain 1 finished in 0.1 seconds.
## Chain 2 finished in 0.1 seconds.
## Chain 3 finished in 0.1 seconds.
## Chain 4 finished in 0.1 seconds.
##
## All 4 chains finished successfully.
## Mean chain execution time: 0.1 seconds.
## Total execution time: 0.5 seconds.
Inspect the marginal posterior distributions of the parameters:
precis(m2)
Note that the 89% interval for bF now overlaps zero, because its standard deviation is more than five times that of the analogous parameter in model m1. For a head-to-head comparison, let’s do a graphical coeftab:
plot(coeftab(m1,m2))
Model m2 has nearly the same posterior means for the intercept and slope bF, but much more uncertainty. How does this translate into predictions? To find out, let’s redo the plotting from the previous problem, now using model m2:
# plot raw data
plot( dat$F , dat$D , pch=16 , lwd=2 ,
col=rangi2 , xlab="femininity (std)" , ylab="deaths" )
# compute model-based trend
pred_dat <- list( F=seq(from=-2,to=1.5,length.out=30) )
lambda <- link(m2,data=pred_dat)
lambda.mu <- apply(lambda,2,mean)
lambda.PI <- apply(lambda,2,PI)
# superimpose trend
lines( pred_dat$F , lambda.mu )
shade( lambda.PI , pred_dat$F )
# compute sampling distribution
deaths_sim <- sim(m2,data=pred_dat)
deaths_sim.PI <- apply(deaths_sim,2,PI)
# superimpose sampling interval as dashed lines
lines( pred_dat$F , deaths_sim.PI[1,] , lty=2 )
lines( pred_dat$F , deaths_sim.PI[2,] , lty=2 )
There is more uncertainty now about the relationship, and the prediction interval is wider. But the predictions are still terrible. Now for the conceptual part of this problem: Why does including varying rates, via the gamma distribution, result in greater uncertainty in the relationship? The gamma-Poisson model allows each hurricane to have its own unique expected death rate, sampled from a common distribution that is a function of the femininity of hurricane names. We can actually plot this distribution from the posterior distribution, for any given femininity value. I’ll produce three examples, plotting 100 randomly sampled gamma distributions of the rate of deaths for three different femininity values:
post <- extract.samples(m2)
fem <- (-1) # 1 stddev below mean
for ( i in 1:100 )
curve( dgamma2(x,exp(post$a[i]+post$bF[i]*fem),post$scale[i]) ,
from=0 , to=70 , xlab="mean deaths" , ylab="Density" ,
ylim=c(0,0.19) , col=col.alpha("black",0.2) ,
add=ifelse(i==1,FALSE,TRUE) )
mtext( concat("femininity = ",fem) )
post <- extract.samples(m2)
fem <- (0) # at the mean
for ( i in 1:100 )
curve( dgamma2(x,exp(post$a[i]+post$bF[i]*fem),post$scale[i]) ,
from=0 , to=70 , xlab="mean deaths" , ylab="Density" ,
ylim=c(0,0.19) , col=col.alpha("black",0.2) ,
add=ifelse(i==1,FALSE,TRUE) )
mtext( concat("femininity = ",fem) )
post <- extract.samples(m2)
fem <- (1) # 1 stddev above mean
for ( i in 1:100 )
curve( dgamma2(x,exp(post$a[i]+post$bF[i]*fem),post$scale[i]) ,
from=0 , to=70 , xlab="mean deaths" , ylab="Density" ,
ylim=c(0,0.19) , col=col.alpha("black",0.2) ,
add=ifelse(i==1,FALSE,TRUE) )
mtext( concat("femininity = ",fem) )
Each gray curve above is a gamma distribution of mean death rates, sampled from the posterior distribution of the model. This is an inherently confusing thing: a distribution sampled from a distribution. So let’s take it again, slowly. A gamma distribution here is defined by two parameters: a mean and a scale. The mean is controlled by the linear model and its two parameters, a and bF. The code above takes single values of a and bF from the posterior distribution and builds a single linear model. It’s exponentiated in the code, because this model uses a log link. The scale parameter appears in the code as well, as scale. One hundred gamma distributions are drawn for each given value of femininity. This visualizes the uncertainty in the posterior about the variation in death rates.
Read that again, slowly. It is a bit weird, but with a little time, it makes plenty of sense. Just as a simple parameter like an intercept has uncertainty, measured by the posterior distribution (given a model and data), a function of parameters like a gamma distribution also has uncertainty. Essentially, there are an infinite number of possible gamma distributions, and the Bayesian model has considered all of them and ranked them by their plausibility. Each plot above shows 100 such gamma distributions, sampled from the posterior distribution in proportion to their plausibilities.
So now back to the explanation of why the parameters a and bF have wider posterior distributions. Once we allow any given values of a and bF to produce many different death rates, because they feed into a gamma distribution that produces variation, then many more distinct values of a and bF can be consistent with the data. This results in wider posterior distributions. The same phenomenon will reappear when we arrive at multilevel models in Chapter 13. You might be curious how m2 compares to m1 in terms of PSIS/WAIC. If so, take a look. You’ll find that the effective number of parameters for m1 is very, very large. Adding a parameter to m2 actually makes the model less prone to overfitting. Can you explain why?
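If you want to check this yourself, and assuming m1 (the plain Poisson model from the previous problem) is still in your workspace:
compare( m1 , m2 , func=WAIC )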
13H4. Revisit the Reed frog survival data, data(reedfrogs), and add the predation and size treatment variables to the varying intercepts model. Consider models with either predictor alone, both predictors, as well as a model including their interaction. What do you infer about the causal influence of these predictor variables? Also focus on the inferred variation across tanks (the σ across tanks). Explain why it changes as it does across models with different predictors included.
This problem is very similar to 13M1, but it asks for interpretation of the posterior distribution. The same code is needed, so first rerun the models from that solution. Then we can inspect the posterior distributions of the coefficients.
Running the models for 13M1:
library(rethinking)
data(reedfrogs)
d <- reedfrogs
dat <- list(
S = d$surv,
n = d$density,
tank = 1:nrow(d),
pred = ifelse( d$pred=="no" , 0L , 1L ),
size_ = ifelse( d$size=="small" , 1L , 2L )
)
m1.1 <- ulam(
alist(
S ~ binomial( n , p ),
logit(p) <- a[tank],
a[tank] ~ normal( a_bar , sigma ),
a_bar ~ normal( 0 , 1.5 ),
sigma ~ exponential( 1 )
), data=dat , chains=4 , cores=4 , log_lik=TRUE )
## Running MCMC with 4 parallel chains, with 1 thread(s) per chain...
##
## Chain 1 finished in 0.1 seconds.
## Chain 2 finished in 0.1 seconds.
## Chain 3 finished in 0.1 seconds.
## Chain 4 finished in 0.1 seconds.
##
## All 4 chains finished successfully.
## Mean chain execution time: 0.1 seconds.
## Total execution time: 0.2 seconds.
# pred
m1.2 <- ulam(
alist(
S ~ binomial( n , p ),
logit(p) <- a[tank] + bp*pred,
a[tank] ~ normal( a_bar , sigma ),
bp ~ normal( -0.5 , 1 ),
a_bar ~ normal( 0 , 1.5 ),
sigma ~ exponential( 1 )
), data=dat , chains=4 , cores=4 , log_lik=TRUE )
## Running MCMC with 4 parallel chains, with 1 thread(s) per chain...
##
## Chain 3 Informational Message: The current Metropolis proposal is about to be rejected because of the following issue:
## Chain 3 Exception: normal_lpdf: Scale parameter is 0, but must be positive!
## Chain 1 finished in 0.1 seconds.
## Chain 2 finished in 0.1 seconds.
## Chain 3 finished in 0.1 seconds.
## Chain 4 finished in 0.1 seconds.
##
## All 4 chains finished successfully.
## Mean chain execution time: 0.1 seconds.
## Total execution time: 0.2 seconds.
# size
m1.3 <- ulam(
alist(
S ~ binomial( n , p ),
logit(p) <- a[tank] + s[size_],
a[tank] ~ normal( a_bar , sigma ),
s[size_] ~ normal( 0 , 0.5 ),
a_bar ~ normal( 0 , 1.5 ),
sigma ~ exponential( 1 )
), data=dat , chains=4 , cores=4 , log_lik=TRUE )
## Running MCMC with 4 parallel chains, with 1 thread(s) per chain...
##
## Chain 3 Informational Message: The current Metropolis proposal is about to be rejected because of the following issue:
## Chain 3 Exception: normal_lpdf: Scale parameter is 0, but must be positive!
## Chain 1 finished in 0.2 seconds.
## Chain 2 finished in 0.1 seconds.
## Chain 3 finished in 0.1 seconds.
## Chain 4 finished in 0.1 seconds.
##
## All 4 chains finished successfully.
## Mean chain execution time: 0.1 seconds.
## Total execution time: 0.3 seconds.
# pred + size
m1.4 <- ulam(
alist(
S ~ binomial( n , p ),
logit(p) <- a[tank] + bp*pred + s[size_],
a[tank] ~ normal( a_bar , sigma ),
bp ~ normal( -0.5 , 1 ),
s[size_] ~ normal( 0 , 0.5 ),
a_bar ~ normal( 0 , 1.5 ),
sigma ~ exponential( 1 )
), data=dat , chains=4 , cores=4 , log_lik=TRUE )
## Running MCMC with 4 parallel chains, with 1 thread(s) per chain...
##
## Chain 1 finished in 0.2 seconds.
## Chain 2 finished in 0.2 seconds.
## Chain 3 finished in 0.2 seconds.
## Chain 4 finished in 0.2 seconds.
##
## All 4 chains finished successfully.
## Mean chain execution time: 0.2 seconds.
## Total execution time: 0.3 seconds.
# pred + size + interaction
m1.5 <- ulam(
alist(
S ~ binomial( n , p ),
logit(p) <- a_bar + z[tank]*sigma + bp[size_]*pred + s[size_],
z[tank] ~ normal( 0 , 1 ),
bp[size_] ~ normal( -0.5 , 1 ),
s[size_] ~ normal( 0 , 0.5 ),
a_bar ~ normal( 0 , 1.5 ),
sigma ~ exponential( 1 )
), data=dat , chains=4 , cores=4 , log_lik=TRUE )
## Running MCMC with 4 parallel chains, with 1 thread(s) per chain...
##
## Chain 1 finished in 0.2 seconds.
## Chain 2 finished in 0.3 seconds.
## Chain 3 finished in 0.2 seconds.
## Chain 4 finished in 0.2 seconds.
##
## All 4 chains finished successfully.
## Mean chain execution time: 0.2 seconds.
## Total execution time: 0.4 seconds.
First, the model with only predation:
precis( m1.2 )
## 48 vector or matrix parameters hidden. Use depth=2 to show them.
Predation has a very strong negative effect on survival, which makes sense. Now consider the model that omits predation but includes size:
precis( m1.3 , 2 , pars="s" )
Not such a clear effect of size. Now let’s consider the models that include both. First the model without an interaction:
precis( m1.4 , 2 , pars=c("bp","s") )
This agrees with the previous models. Predation has a clear and large impact; size, not so much. Now the interaction model:
precis( m1.5 , 2 , pars=c("bp","s") )
The effect of predation does seem to vary by size. Let’s compute the contrast:
post <- extract.samples( m1.5 )
quantile( post$bp[,2] - post$bp[,1] , c(0.055,0.5,0.945) )
## 5.5% 50% 94.5%
## -1.6749201 -0.8508550 -0.0663939
So the contrast is reliably negative. It seems that size does matter, but only through how it modifies the effect of predation.
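The problem also asks about the inferred variation across tanks. Here is a minimal sketch to compare the posterior mean of sigma across the five models:
models <- list( m1.1=m1.1 , m1.2=m1.2 , m1.3=m1.3 , m1.4=m1.4 , m1.5=m1.5 )
round( sapply( models , function(m) mean( extract.samples(m)$sigma ) ) , 2 )
You should find that sigma shrinks once predation enters the model, because predation accounts for much of the variation among tanks that the varying intercepts would otherwise have to absorb; adding size alone changes sigma very little.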
16H5. Population dynamic models are typically very difficult to fit to empirical data. The lynx-hare example in the chapter was easy, partly because the data are unusually simple and partly because the chapter did the difficult prior selection for you. Here’s another data set that will impress upon you both how hard the task can be and how badly Lotka-Volterra fits empirical data in general. The data in data(Mites) are numbers of predator and prey mites living on fruit. Model these data using the same Lotka-Volterra ODE system from the chapter. These data are actual counts of individuals, not just their pelts. You will need to adapt the Stan code in data(Lynx_Hare_model). Note that the priors will need to be rescaled, because the outcome variables are on a different scale. Prior predictive simulation will help. Keep in mind as well that the time variable and the birth and death parameters go together. If you rescale the time dimension, that implies you must also rescale the parameters.
Let’s begin by just loading the data and plotting it:
library(rethinking)
data(Mites)
plot( Mites$day , Mites$prey )
points( Mites$day , Mites$predator , pch=16 )
The horizontal axis is day in the experiment and the vertical axis is count of individuals. The open circles are prey and the filled circles are predators. There does seem to be some kind of cycle there, maybe. We need to build the model and we need priors. The model is actually simpler than the one in the book, because there won’t be a measurement layer. We have actual mite counts here, not some error-prone proxy like pelts. Let’s start with the priors, simulating the true counts as we did in the chapter. If you play around, you can find some priors that will produce cycles. We need the simulation function from the chapter:
sim_mites <- function( n_steps , init , theta , dt=0.002 ) {
L <- rep(NA,n_steps)
H <- rep(NA,n_steps)
L[1] <- init[1]
H[1] <- init[2]
for ( i in 2:n_steps ) {
L[i] <- L[i-1] + dt*L[i-1]*( theta[3]*H[i-1] - theta[4] )
H[i] <- H[i-1] + dt*H[i-1]*( theta[1] - theta[2]*L[i-1] )
}
return( cbind(L,H) )
}
And here are some priors with code to simulate and plot:
N <- 16
theta <- matrix( NA , N , 4 )
theta[,1] <- rnorm( N , 1.5 , 1 )
theta[,2] <- rnorm( N , 0.005 , 0.1 )
theta[,3] <- rnorm( N , 0.0005 , 0.1 )
theta[,4] <- rnorm( N , 0.5 , 1 )
par(mfrow=c(4,4),cex=1.05)
par(mgp = c(1, 0.2, 0), mar = c(1, 2, 1, 1) + 0.1,
tck = -0.02, cex.axis = 0.8, bty = "l" )
for ( i in 1:N ) {
z <- sim_mites( n_steps=1e4 , init=as.numeric(Mites[1,3:2]) , theta=theta[i,] )
# preds in blue, prey in black
ymax <- max(z)
if ( ymax == Inf ) ymax <- 1.79e+300
plot( NULL , ylim=c(0,ymax) , xlim=c(0,1e4) , xaxt="n" , ylab="" , xlab="" )
lines( z[,2] , col="black" , lwd=2 )
lines( z[,1] , col=rangi2 , lwd=2 )
mtext( "time" , 1 )
}
These priors can produce cycles, but they very often produce unregulated growth of either the prey or the predator. Try repeating the code above a few times, to get a sense of the range of behavior. The subtle trick is going to be getting the time units in the data and the time units implied by these parameters onto the same scale. I’ll do this by dividing the days by 7, to turn them into fractions of a week. But you could also rescale the parameters instead. Now we need the model. It is the lynx-hare model, but without the measurement error part; if you keep the measurement error part, it will work very similarly. Here is the Stan code, stored as an R character vector (you could instead save it to a separate file, such as Mites.stan, and pass that file to Stan):
mites_model_code <- "
functions {
vector dpop_dt(real t,
vector pop_init,
array[] real theta) {
real L = pop_init[1];
real H = pop_init[2];
real bh = theta[1];
real mh = theta[2];
real ml = theta[3];
real bl = theta[4];
vector[2] dpop;
dpop[1] = (bl * H - ml) * L; // dL_dt
dpop[2] = (bh - mh * L) * H; // dH_dt
return dpop;
}
}
data {
int<lower=0> N;
array[N, 2] int<lower=0> mites;
array[N] real<lower=0> days;
}
parameters {
array[4] real<lower=0> theta;
array[2] real<lower=0> pop_init;
array[2] real<lower=0> sigma;
}
transformed parameters {
array[N, 2] real pop;
pop[1, 1] = pop_init[1];
pop[1, 2] = pop_init[2];
if (N > 1) {
array[N-1] vector[2] pop_integrated;
pop_integrated = ode_rk45(dpop_dt, to_vector(pop_init), 0, days[2:N], theta);
for (i in 2:N) {
pop[i, 1] = pop_integrated[i-1][1];
pop[i, 2] = pop_integrated[i-1][2];
}
}
}
model {
theta[1] ~ normal(1.5, 1);
theta[2] ~ normal(0.005, 0.1);
theta[3] ~ normal(0.0005, 0.1);
theta[4] ~ normal(0.5, 1);
sigma ~ exponential(1);
pop_init[1] ~ normal(mites[1, 1], 50);
pop_init[2] ~ normal(mites[1, 2], 50);
for (t in 1:N) {
for (k in 1:2) {
mites[t, k] ~ lognormal(log(pop[t, k]), sigma[k]);
}
}
}
generated quantities {
array[N, 2] real mites_pred;
for (t in 1:N) {
for (k in 1:2) {
mites_pred[t, k] = lognormal_rng(log(pop[t, k]), sigma[k]);
}
}
}
"
Now we can prepare the data and run the model:
dat_mites <- list(
N = nrow(Mites),
mites = as.matrix( Mites[,3:2] ),
days = Mites[,1]/7 )
m16H5.1 <- stan( model_code=mites_model_code , data=dat_mites , chains=4 , cores=4 , iter=2000 , control=list( adapt_delta=0.99 ) )
## Running MCMC with 4 parallel chains...
## Chain 1 Informational Message: The current Metropolis proposal is about to be rejected because of the following issue:
## Chain 1 Exception: lognormal_lpdf: Location parameter is nan, but must be finite! (in '/var/folders/4f/_h6ql5191cl7bc390kh2xd380000gn/T/Rtmp5J4if2/model-3bfc64eb21a2.stan', line 61, column 6 to column 56)
## Chain 1 If this warning occurs sporadically, such as for highly constrained variable types like covariance matrices, then the sampler is fine,
## Chain 1 but if this warning occurs often then your model may be either severely ill-conditioned or misspecified.
## Chain 1
## Chain 1 Informational Message: The current Metropolis proposal is about to be rejected because of the following issue:
## Chain 1 Exception: lognormal_lpdf: Location parameter is nan, but must be finite! (in '/var/folders/4f/_h6ql5191cl7bc390kh2xd380000gn/T/Rtmp5J4if2/model-3bfc64eb21a2.stan', line 61, column 6 to column 56)
## Chain 1 If this warning occurs sporadically, such as for highly constrained variable types like covariance matrices, then the sampler is fine,
## Chain 1 but if this warning occurs often then your model may be either severely ill-conditioned or misspecified.
## Chain 1
## Chain 1 Informational Message: The current Metropolis proposal is about to be rejected because of the following issue:
## Chain 1 Exception: lognormal_lpdf: Location parameter is nan, but must be finite! (in '/var/folders/4f/_h6ql5191cl7bc390kh2xd380000gn/T/Rtmp5J4if2/model-3bfc64eb21a2.stan', line 61, column 6 to column 56)
## Chain 1 If this warning occurs sporadically, such as for highly constrained variable types like covariance matrices, then the sampler is fine,
## Chain 1 but if this warning occurs often then your model may be either severely ill-conditioned or misspecified.
## Chain 1
## Chain 1 Informational Message: The current Metropolis proposal is about to be rejected because of the following issue:
## Chain 1 Exception: lognormal_lpdf: Location parameter is nan, but must be finite! (in '/var/folders/4f/_h6ql5191cl7bc390kh2xd380000gn/T/Rtmp5J4if2/model-3bfc64eb21a2.stan', line 61, column 6 to column 56)
## Chain 1 If this warning occurs sporadically, such as for highly constrained variable types like covariance matrices, then the sampler is fine,
## Chain 1 but if this warning occurs often then your model may be either severely ill-conditioned or misspecified.
## Chain 1
## Chain 2 Informational Message: The current Metropolis proposal is about to be rejected because of the following issue:
## Chain 2 Exception: ode_rk45: initial state[2] is inf, but must be finite! (in '/var/folders/4f/_h6ql5191cl7bc390kh2xd380000gn/T/Rtmp5J4if2/model-3bfc64eb21a2.stan', line 41, column 4 to column 81)
## Chain 2 If this warning occurs sporadically, such as for highly constrained variable types like covariance matrices, then the sampler is fine,
## Chain 2 but if this warning occurs often then your model may be either severely ill-conditioned or misspecified.
## Chain 2
## Chain 2 Informational Message: The current Metropolis proposal is about to be rejected because of the following issue:
## Chain 2 Exception: ode_rk45: initial state[2] is inf, but must be finite! (in '/var/folders/4f/_h6ql5191cl7bc390kh2xd380000gn/T/Rtmp5J4if2/model-3bfc64eb21a2.stan', line 41, column 4 to column 81)
## Chain 2 If this warning occurs sporadically, such as for highly constrained variable types like covariance matrices, then the sampler is fine,
## Chain 2 but if this warning occurs often then your model may be either severely ill-conditioned or misspecified.
## Chain 2
## Chain 2 Informational Message: The current Metropolis proposal is about to be rejected because of the following issue:
## Chain 2 Exception: ode_rk45: initial state[2] is inf, but must be finite! (in '/var/folders/4f/_h6ql5191cl7bc390kh2xd380000gn/T/Rtmp5J4if2/model-3bfc64eb21a2.stan', line 41, column 4 to column 81)
## Chain 2 If this warning occurs sporadically, such as for highly constrained variable types like covariance matrices, then the sampler is fine,
## Chain 2 but if this warning occurs often then your model may be either severely ill-conditioned or misspecified.
## Chain 2
## Chain 2 Informational Message: The current Metropolis proposal is about to be rejected because of the following issue:
## Chain 2 Exception: ode_rk45: initial state[2] is inf, but must be finite! (in '/var/folders/4f/_h6ql5191cl7bc390kh2xd380000gn/T/Rtmp5J4if2/model-3bfc64eb21a2.stan', line 41, column 4 to column 81)
## Chain 2 If this warning occurs sporadically, such as for highly constrained variable types like covariance matrices, then the sampler is fine,
## Chain 2 but if this warning occurs often then your model may be either severely ill-conditioned or misspecified.
## Chain 2
## Chain 2 Informational Message: The current Metropolis proposal is about to be rejected because of the following issue:
## Chain 2 Exception: lognormal_lpdf: Scale parameter is inf, but must be positive finite! (in '/var/folders/4f/_h6ql5191cl7bc390kh2xd380000gn/T/Rtmp5J4if2/model-3bfc64eb21a2.stan', line 61, column 6 to column 56)
## Chain 2 If this warning occurs sporadically, such as for highly constrained variable types like covariance matrices, then the sampler is fine,
## Chain 2 but if this warning occurs often then your model may be either severely ill-conditioned or misspecified.
## Chain 2
## Chain 2 Informational Message: The current Metropolis proposal is about to be rejected because of the following issue:
## Chain 2 Exception: lognormal_lpdf: Location parameter is nan, but must be finite! (in '/var/folders/4f/_h6ql5191cl7bc390kh2xd380000gn/T/Rtmp5J4if2/model-3bfc64eb21a2.stan', line 61, column 6 to column 56)
## Chain 2 If this warning occurs sporadically, such as for highly constrained variable types like covariance matrices, then the sampler is fine,
## Chain 2 but if this warning occurs often then your model may be either severely ill-conditioned or misspecified.
## Chain 2
## Chain 3 Informational Message: The current Metropolis proposal is about to be rejected because of the following issue:
## Chain 3 Exception: ode_rk45: ode parameters and data[1] is inf, but must be finite! (in '/var/folders/4f/_h6ql5191cl7bc390kh2xd380000gn/T/Rtmp5J4if2/model-3bfc64eb21a2.stan', line 41, column 4 to column 81)
## Chain 3 If this warning occurs sporadically, such as for highly constrained variable types like covariance matrices, then the sampler is fine,
## Chain 3 but if this warning occurs often then your model may be either severely ill-conditioned or misspecified.
## Chain 3
## Chain 3 Informational Message: The current Metropolis proposal is about to be rejected because of the following issue:
## Chain 3 Exception: ode_rk45: Failed to integrate to next output time (3.28571) in less than max_num_steps steps (in '/var/folders/4f/_h6ql5191cl7bc390kh2xd380000gn/T/Rtmp5J4if2/model-3bfc64eb21a2.stan', line 41, column 4 to column 81)
## Chain 3 If this warning occurs sporadically, such as for highly constrained variable types like covariance matrices, then the sampler is fine,
## Chain 3 but if this warning occurs often then your model may be either severely ill-conditioned or misspecified.
## Chain 3
## ... (the same informational messages repeat many more times during warmup in all four chains; those repeats and the per-chain iteration progress are omitted) ...
## Chain 1 finished in 57.6 seconds.
## Chain 4 finished in 61.4 seconds.
## Chain 3 finished in 59.6 seconds.
## Chain 2 finished in 1332.3 seconds.
##
## All 4 chains finished successfully.
## Mean chain execution time: 377.7 seconds.
## Total execution time: 1335.6 seconds.
## Warning: 1000 of 4000 (25.0%) transitions hit the maximum treedepth limit of 11.
## See https://mc-stan.org/misc/warnings for details.
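The treedepth warning means a quarter of the post-warmup transitions exhausted the sampler's tree-building budget; that slows mixing, although it does not by itself bias the samples. A minimal sketch of one response, assuming the same Mites.stan file and dat_mites list used in the commented stan() call just below, is to refit with a larger max_treedepth alongside the stricter adapt_delta:
library(rstan)
# Sketch only: the same model and data as the commented call below, but with a
# higher treedepth cap so long trajectories are not truncated. Not run here.
m16H5.1b <- stan( file="Mites.stan" , data=dat_mites , chains=4 , cores=4 , iter=2000 ,
    control=list( adapt_delta=0.99 , max_treedepth=13 ) )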
# If using an external text file:
# m16H5.1 <- stan( file="Mites.stan" , data=dat_mites , chains=4 , cores=4 , iter=2000 , control=list( adapt_delta=0.99 ) )
precis(m16H5.1,2)
## 140 matrix parameters hidden. Use depth=3 to show them.
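As the message says, the matrix-valued parameters are hidden at this depth. To inspect them element by element, precis accepts a depth argument (the output is long, so it is omitted here):
# show the individual elements of the matrix parameters as well (long output, not shown)
precis( m16H5.1 , depth=3 )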
Some of those parameters are on such a small scale that they look equal to zero. Let's look at the posterior predictions:
post <- extract.samples(m16H5.1)
# observed counts for the two mite populations (one per column)
mites <- dat_mites$mites
# plot the observed series: column 2 in black, column 1 in blue
plot( dat_mites$days , mites[,2] , pch=16 , ylim=c(0,3000) ,
    xlab="time (week)" , ylab="mites" )
points( dat_mites$days , mites[,1] , col=rangi2 , pch=16 )
# overlay 21 posterior trajectories for each population
for ( s in 1:21 ) {
    lines( dat_mites$days , post$pop[s,,2] , col=col.alpha("black",0.2) , lwd=2 )
    lines( dat_mites$days , post$pop[s,,1] , col=col.alpha(rangi2,0.3) , lwd=2 )
}
There are some clear problems here. Ecologists know that plain Lotka-Volterra models struggle with realistic data, and these data even come from a controlled experiment. Better models include stochasticity, so that random variation in birth and death events propagates through the dynamics. The Lotka-Volterra model is deterministic, and in small or spatially subdivided populations (like these mite experiments; go read the original paper) that assumption breaks down. The important thing is to understand which structural features of each model produce which dynamics. Real biology is too complicated to ever get the model exactly right, but models are still essential for studying it.
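To make the point about stochasticity concrete, here is a minimal simulation sketch, not from the chapter, of a discrete-time predator-prey model in which births and deaths are Poisson events rather than deterministic rates. The function name and every parameter value are invented purely for illustration.
# Sketch only: discrete-time Lotka-Volterra with demographic (Poisson) noise.
# All rates and initial values below are arbitrary choices for illustration.
sim_stochastic_LV <- function( n_steps=50 , dt=1 ,
                               H0=100 , P0=10 ,      # initial prey (H) and predator (P) counts
                               bH=0.5 , aHP=0.01 ,   # prey birth rate, predation rate
                               bP=0.005 , dP=0.4 ) { # predator conversion and death rates
    H <- numeric(n_steps) ; P <- numeric(n_steps)
    H[1] <- H0 ; P[1] <- P0
    for ( t in 2:n_steps ) {
        # Poisson numbers of events, with means given by the Lotka-Volterra rates
        births_H <- rpois( 1 , bH * H[t-1] * dt )
        deaths_H <- rpois( 1 , aHP * H[t-1] * P[t-1] * dt )
        births_P <- rpois( 1 , bP * H[t-1] * P[t-1] * dt )
        deaths_P <- rpois( 1 , dP * P[t-1] * dt )
        H[t] <- max( 0 , H[t-1] + births_H - deaths_H )
        P[t] <- max( 0 , P[t-1] + births_P - deaths_P )
    }
    cbind( H=H , P=P )
}
set.seed(11)
sim <- sim_stochastic_LV()
plot( sim[,"H"] , type="l" , ylim=range(sim) , xlab="time step" , ylab="abundance" )
lines( sim[,"P"] , col=rangi2 )
Run it a few times: the trajectories differ across runs, and with small populations one or both species can crash to zero, something the deterministic ODE can never produce.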