BCal help system: The Tutorial


Step D13: Convergence issues

When considering reproducibility, the single most important issue is whether or not the MCMC simulation has converged.  In writing BCal, we have built in extensive internal checks and sensible defaults to help ensure that the results you obtain have converged and are thus reliable.  At present, however, no automatic checking facility is foolproof, and you must take responsibility for checking convergence yourself.  We usually do this by recalibrating the same project several times and comparing the results between the runs.  If each run of the MCMC simulation has converged, the samples obtained should come from identical distributions and so (if large enough) should give rise to results that are reproducible within the tolerance of the radiocarbon dating methodology itself.
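If you export the posterior sample for a parameter from each run, one simple way to make this between-run comparison numerically is the Gelman-Rubin potential scale reduction factor.  The following is a minimal sketch in Python (not part of BCal; the file names and helper function are hypothetical), in which values close to 1 suggest that the runs agree:

    # Sketch: Gelman-Rubin R-hat across several independent BCal runs.
    # Assumes the posterior sample for one parameter has been exported
    # from each run to a plain-text column (file names are hypothetical).
    import numpy as np

    def gelman_rubin(chains):
        """Potential scale reduction factor for equal-length chains."""
        chains = np.asarray(chains)           # shape (m, n): m runs, n draws
        m, n = chains.shape
        chain_means = chains.mean(axis=1)
        W = chains.var(axis=1, ddof=1).mean()   # within-run variance
        B = n * chain_means.var(ddof=1)         # between-run variance
        var_hat = (n - 1) / n * W + B / n       # pooled variance estimate
        return np.sqrt(var_hat / W)             # R-hat

    runs = [np.loadtxt(f"theta1_run{i}.txt") for i in (1, 2, 3)]
    n = min(len(r) for r in runs)               # truncate to a common length
    rhat = gelman_rubin([r[:n] for r in runs])
    print(f"R-hat = {rhat:.3f}")                # close to 1: runs agree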

If you select "Show me" now, you will see our summary of the results from three different runs of BCal with different seeds for the pseudo-random number generator.  We can see that, in general, the HPD regions produced by each run are very similar to one another.  In fact, there is only one parameter for which the variability in the posterior estimates cannot reasonably be explained by the large errors on the radiocarbon determinations themselves (several at this site are greater than 100 years) and the variability in the calibration curve.  The rogue parameter is alpha 1; we will focus on this parameter, and on why this result does not surprise us, on the next page of the tutorial.  First, we would like to encourage you to think about the other parameters, which are the ones of greatest interest to the archaeologists at the fishpond site.

Taking a close look at the 95% HPD regions in the frame above, you will see that for all the thetas, phi 1 and beta 1/alpha 2, the three runs have produced very similar results.  Indeed, given the large errors on the determinations from this site (some of them over 100 years) and the nature of the calibration curve, we consider these results to be extremely similar.  Since they are not identical, however, it is important that we report the results from just one run (do not be tempted to do any kind of averaging or to 'pick and choose' the results you like best!) and acknowledge the level of between-run variability in any report or paper that includes them.
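If you would like to reproduce such an interval yourself from an exported posterior sample, the following minimal Python sketch (not part of BCal; the file name is hypothetical, and we assume a unimodal posterior so that the 95% HPD region is a single interval) finds the shortest interval containing 95% of the draws:

    # Sketch: estimate a 95% HPD interval from one run's posterior sample.
    # Assumes a unimodal posterior; for multimodal posteriors the true
    # HPD region may be a union of disjoint intervals.
    import numpy as np

    def hpd_interval(sample, mass=0.95):
        """Shortest interval containing the requested posterior mass."""
        s = np.sort(np.asarray(sample))
        n = len(s)
        k = int(np.ceil(mass * n))            # draws the interval must hold
        widths = s[k - 1:] - s[: n - k + 1]   # width of each candidate
        i = np.argmin(widths)                 # index of the shortest one
        return s[i], s[i + k - 1]

    sample = np.loadtxt("theta1_run1.txt")    # hypothetical exported sample
    lo, hi = hpd_interval(sample)
    print(f"95% HPD interval: [{lo:.0f}, {hi:.0f}]")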

