## Forecasting Environmental Immigration to the UK

A couple of months ago, a paper I worked on with co-authors from the ESRC Centre for Population Change was published in Population and Environment. It summarised work we did as part of the UK Government Office for Science Foresight project on Migration and Global Environmental Change. Our aim was to build expert-based forecasts of environmental immigration to the UK. We conducted a Delphi survey of nearly 30 migration experts from academia, the civil service and non-governmental organisations to obtain estimates, with uncertainty, of future levels of immigration to the UK in 2030 and 2060. We also asked them what proportion of current and future immigration is/will be made up of environmental migrants. The results were incorporated into a set of model-averaged Bayesian time series models through prior distributions on the mean and variance terms.

The plots in the journal article were somewhat butchered during the publication process. Below is the non-butchered version of the forecast of future immigration to the UK, alongside past immigration data from the Office for National Statistics.

At first, I was a bit taken aback by this plot. A few experts thought there were going to be some very high levels of future immigration, which causes the rather striking large upper tail. At a second glance, however, the central percentiles show a gentle decrease: there is only (approximately) a 30% chance of an increase in future migration from the 2010 level throughout the forecast period.

The expert-based forecast for total immigration was combined with the responses to questions on the proportion of environmental migrants, to obtain estimates of both the current level of environmental migration (which is not currently measured) and future levels.
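As a rough illustration of this combination step (with hypothetical numbers, not those elicited in the survey), posterior simulations of total immigration can be scaled by draws from a distribution representing beliefs about the environmental share:

```r
set.seed(1)
# hypothetical posterior simulations of total immigration to the UK (thousands)
total <- rnorm(1000, mean = 500, sd = 80)
# hypothetical elicited distribution for the environmental share (mean of 4%)
share <- rbeta(1000, 2, 48)
# implied level of environmental immigration, simulation by simulation
env <- total * share
# summarise the uncertainty with percentiles
quantile(env, c(0.1, 0.5, 0.9))
```

Multiplying draw by draw, rather than multiplying point estimates, carries the uncertainty in both the total and the share through to the derived quantity.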

As is the way with these things, we came across some problems in our project. The first was the definition of an environmental migrant, which is not completely nailed down in the migration literature. As a result, part of the uncertainty in the expert-based forecasts reflects not only the future level of migration but also the measure itself. The second was the elicitation of uncertainty. We used a Likert-type scale, which caused some difficulties even during the later round of the Delphi survey. If I were to do it over, I reckon this problem could be much better addressed by getting experts to visualise their forecast fans on an interactive website, perhaps a shiny app built with the fanplot package. Such an approach would result in smoother fans than those in the plots above, which were based on interpolations from expert answers at only two points of time in the future (2030 and 2060).
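The two-point interpolation behind those kinked fans can be sketched in a few lines of R (the percentile values below are made up for illustration); the rows of such a matrix could then be passed to fanplot::fan with data.type = "values":

```r
# hypothetical elicited percentiles (thousands) at the two forecast years
yrs <- c(2030, 2060)
q10 <- c(100, 90)
q50 <- c(150, 140)
q90 <- c(260, 320)

# linear interpolation between the two elicited years gives the fan edges
grid <- 2030:2060
fan.vals <- rbind(
  approx(yrs, q10, xout = grid)$y,
  approx(yrs, q50, xout = grid)$y,
  approx(yrs, q90, xout = grid)$y
)

# elicited end points and the interpolated midpoint (2045)
fan.vals[, c(1, 16, 31)]
```

Because every year between 2030 and 2060 is a straight-line blend of just two elicited points, the resulting fan edges are piecewise linear rather than smooth.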

Publication Details:

Abel, G.J., Bijak, J., Findlay, A.M., McCollum, D. and Wiśniowski, A. (2013). Forecasting environmental migration to the United Kingdom: An exploration using Bayesian models. Population and Environment, 35(2), 183–203.

Over the next 50 years, the potential impact of environmental change on human livelihoods could be considerable, with one possible consequence being increased levels of human mobility. This paper explores how uncertainty about the level of immigration to the United Kingdom as a consequence of environmental factors elsewhere may be forecast using a methodology involving Bayesian models. The conceptual understanding of forecasting is advanced in three ways. First, the analysis is believed to be the first time that the Bayesian modelling approach has been attempted in relation to environmental mobility. Second, the paper considers the expediency of this approach by comparing the responses to a Delphi survey with conventional expectations about environmental mobility in the research literature. Finally, the values and assumptions of the expert evidence provided in the Delphi survey are interrogated to illustrate the limited set of conditions under which forecasts of environmental mobility, as set out in this paper, are likely to hold.

## The tsbugs package for R

I have updated, and will continue to update, this post as I expand the tsbugs package.

My tsbugs package has gone up on CRAN. I decided not to write a vignette for the submission, as it would have involved doing some estimation in BUGS via R2WinBUGS or R2OpenBUGS and running into some problems when submitting the package. Instead I thought I would post a quick guide below. Note, the package code is on my GitHub if you would like to contribute more models to the package…

The functions in the tsbugs package aim to automate the writing of time series models to run in WinBUGS or OpenBUGS. I created these functions a while back when I was doing some work on model averaging for time series models. I found it a lot easier to build R functions to write the BUGS models than to follow the more error-inducing process of copying and pasting BUGS scripts, and then making slight alterations to create new models. It also allowed me to add arguments to specify different lag lengths, prior distributions, variance assumptions and data lengths. Below are examples for three types of time series models: autoregressive models with constant variance, stochastic volatility models and random variance shift models.

Autoregressive Models

The ar.bugs command builds a BUGS script for autoregressive (AR) models ready to use in R2OpenBUGS. For example, consider the LakeHuron data.

LH <- LakeHuron
par(mfrow=c(2,1))
plot(LH, main="Level (ft)")
plot(diff(LH), main="Differenced Level")


We can construct an AR(1) model for this data (after differencing the data to obtain a stationary mean) as such:

library("tsbugs")
ar1 <- ar.bugs(y=diff(LH), ar.order=1)
print(ar1)


The ar.bugs function allows for alternative specifications of prior distributions, forecasts and the inclusion of a mean term:

ar2 <- ar.bugs(y = diff(LH), ar.order = 2, ar.prior = "dunif(-1,1)",
               var.prior = "dgamma(0.001,0.001)", k = 10,
               mean.centre = TRUE)
print(ar2)
print(ar2)


The tsbugs objects can be used with R2OpenBUGS to easily run models from R. This is made even easier using the inits and nodes functions (also in the tsbugs package). For example:

writeLines(ar2$bug, "ar2.txt")
library("R2OpenBUGS")
ar2.bug <- bugs(data = ar2$data,
                inits = list(inits(ar2)),
                param = c(nodes(ar2, "prior")$name, "y.new"),
                model = "ar2.txt",
                n.iter = 11000, n.burnin = 1000, n.chains = 1)

Note: 1) the model is written to a .txt file (as required by R2OpenBUGS); 2) the data used are part of the tsbugs object (the ar.bugs command cleans the data and adds missing values at the end of the series for forecasted values); 3) the initial values offered by the inits function are very crude, and with more complicated data or models, users might be better off specifying their own list of initial values. The parameter traces and posterior distributions can be plotted using the coda package:

library("coda")
param.mcmc <- as.mcmc(ar2.bug$sims.matrix[, nodes(ar2, "prior")$name])
plot(param.mcmc[, 1:4])

The fanplot package can be used to plot the entire series of posterior predictive distributions. We may also plot (after deriving using the diffinv function) the posterior predictive distributions of the lake level:

library("fanplot")
k0 <- end(LH)[1]
# derive future level
ynew.mcmc <- ar2.bug$sims.list$y.new
lhnew.mcmc <- apply(ynew.mcmc, 1, diffinv, xi = tail(LH, 1))
lhnew.mcmc <- t(lhnew.mcmc[-1, ])
# plot differenced series with forecast fan
par(mfrow = c(2, 1))
plot(diff(LH), xlim = k0 + c(-50, 10), main = "Differenced Level")
fan(ynew.mcmc, start = k0 + 1, rcex = 0.5)
# plot undifferenced series with forecast fan
plot(LH, xlim = k0 + c(-50, 10), main = "Level")
fan(lhnew.mcmc, start = k0 + 1, rcex = 0.5)

Stochastic Volatility Models

The sv.bugs command builds a BUGS script for stochastic volatility (SV) models ready to use in R2OpenBUGS. For example, consider the svpdx data.

plot(svpdx$pdx, type = "l",
     main = "Return of Pound-Dollar exchange rate data
from 2nd October 1981 to 28th June 1985",
     cex.main = 0.8)


We can construct an AR(0)-SV model for this data, and also obtain posterior simulations, using the sv.bugs command:

y <- svpdx$pdx
sv0 <- sv.bugs(y, sim = TRUE)
print(sv0)

This model closely matches that presented in Meyer and Yu (2002). There are further options in the tsbugs package to incorporate different priors that do not involve transformations, such as those for psi1 above. Using R2OpenBUGS we can fit the model:

# decent initial value for variance in first period
init <- inits(sv0, warn = FALSE)
init$psi0 <- log(var(y))
# write bug
writeLines(sv0$bug, "sv0.txt")
# might take a while to compile
sv0.bug <- bugs(data = sv0$data,
                inits = list(init),
                param = c(nodes(sv0, "prior")$name, "y.sim", "h"),
                model = "sv0.txt",
                n.iter = 11000, n.burnin = 1000, n.chains = 1)

The volatility estimates can be easily extracted:

h.mcmc <- sv0.bug$sims.list$h

This allows us to directly view the estimated volatility process (or the time-dependent standard deviation) using the fanplot package:

# plot
plot(NULL, xlim = c(1, 945) + c(0, 40), ylim = c(-4, 2),
     main = "Estimated Volatility from SV Model")
# fan
fan(h.mcmc, type = "interval")

We can also plot the posterior simulations from the model:

# derive percentiles
y.mcmc <- sv0.bug$sims.list$y.sim
# plot
plot(NULL, xlim = c(1, 945) + c(0, 20), ylim = range(y),
     main = "Posterior Model Simulations and Data")
fan(y.mcmc)
lines(y)

Random Variance Shift Models

The rv.bugs command builds a BUGS script for random variance (RV) shift models, similar to that of McCulloch and Tsay (1993), ready to use in R2OpenBUGS. Consider the ew data.

r <- ts(ew[2:167] / ew[1:166] - 1, start = 1841)
y <- diff(r)
plot(y, main = "Difference in England and Wales Population Growth Rate")

We can create a BUGS script to fit a RV model to this data, including posterior simulations, using the rv.bugs command:

rv0 <- rv.bugs(y, sim = TRUE)
print(rv0)

and then run the script in R2OpenBUGS (this can take a couple of hours):

# decent initial value for variance in first period
init <- inits(rv0, warn = FALSE)
init$isig02 <- sd(y)^-2
# write bug
writeLines(rv0$bug, "rv0.txt")
# might take a while to compile
rv0.bug <- bugs(data = rv0$data,
                inits = list(init),
                param = c(nodes(rv0, "prior")$name, "y.sim", "h", "delta", "beta"),
                model = "rv0.txt",
                n.iter = 11000, n.burnin = 1000, n.chains = 1)

We can plot the posterior simulations from the model using the fanplot package:

# derive percentiles
y0 <- tsp(y)[1]
y.mcmc <- rv0.bug$sims.list$y.sim
# plot
plot(NULL, xlim = tsp(y)[1:2] + c(-5, 5), ylim = range(y),
     main = "Posterior Simulations")
fan(y.mcmc, start = y0, rlab = c(10, 50, 90), llab = TRUE)
lines(y)

We can also plot the posterior distributions of the standard deviations:

# derive sigma
h.mcmc <- rv0.bug$sims.list$h
sigma.mcmc <- sqrt(exp(h.mcmc))
# plots
plot(NULL, xlim = tsp(y)[1:2] + c(-5, 5), ylim = c(0, 0.008),
     main = "Standard Deviation")
fan(sigma.mcmc, start = y0, rlab = c(5, 50, 95), llab = c(5, 50, 95))

The posterior distributions of the probability of a variance shift and the multiplier effect of the shift in variance (delta[t] and beta[t] in the BUGS model) can also be plotted. Note, when there is no variance shift, the posterior of beta[t] is similar to the prior distribution.

# extract data
delta.mcmc <- rv0.bug$sims.list$delta
beta.mcmc <- rv0.bug$sims.list$beta
# plots
par(mfrow = c(2, 1))
plot(NULL, xlim = tsp(y)[1:2] + c(-5, 5), ylim = c(0, 1),
     main = "Probability of Variance Change Point")
fan(delta.mcmc, start = y0, ln = NULL, rlab = NULL)
plot(NULL, xlim = tsp(y)[1:2] + c(-5, 5), ylim = c(-2, 2),
     main = "Variance Multiplier")
fan(beta.mcmc, start = y0)

## The fanplot package for R

I have updated, and will continue to update, this post as I expand the fanplot package.

My fanplot package has gone up on CRAN. Below is an online version of the package vignette…

Visualising Time Series Model Results

The fanplot package can be used to display uncertainty in estimates from time series models. To illustrate, the package's th.mcmc data frame object contains posterior density distributions of the estimated volatility of daily returns $y_t$ of the Pound/Dollar exchange rate from 02/10/1981 to 28/06/1985.
These distributions are from an MCMC simulation of a stochastic volatility model given in Meyer and Yu (2002), where it is assumed:

$y_t | \theta_t = \exp\left(\frac{1}{2}\theta_t\right)u_t \qquad u_t \sim N(0, 1) \qquad t=1,\ldots,n.$

The latent volatilities $\theta_t$, which are unknown states in state-space model terminology, are assumed to follow a Markovian transition over time given by the state equations:

$\theta_t | \theta_{t-1}, \mu, \phi, \tau^2 = \mu + \phi \theta_{t-1} + v_t \qquad v_t \sim N(0, \tau^2) \qquad t=1,\ldots,n,$

with $\theta_0 \sim N(\mu, \tau^2)$.

The th.mcmc object consists of 1,000 rows corresponding to MCMC simulations and 945 columns corresponding to each $t$. A fan chart of the evolution of the distribution of $\theta_t$ can be visualised using the fanplot package via:

library("fanplot")
# empty plot
plot(NULL, main = "Percentiles", xlim = c(1, 965), ylim = c(-2.5, 1.5))
# add fan
fan(data = th.mcmc)

The fan function calculates the values of 100 equally spaced percentiles of each future distribution when the default data.type = "simulations" is set. This allows 50 fans to be plotted from the heat.colors colour palette, providing a fine level of shading. In addition, lines and labels are provided along each decile.

Prediction Intervals

When the argument type = "interval" is set, the probs argument corresponds to prediction intervals. Consequently, the fan chart comprises three different shades, running from the darkest for the 50% prediction interval to the lightest for the 95% prediction interval.

# empty plot
plot(NULL, main = "Prediction Intervals", xlim = c(-20, 965), ylim = c(-2.5, 1.5))
# add fan
fan(data = th.mcmc, type = "interval", llab = TRUE, rcex = 0.6)

Contour lines are overlaid along the upper and lower bounds of each prediction interval, as set using the ln argument. A further line is plotted along the median of $\theta_t$, which is controlled by the med.ln argument (set to TRUE by default when type = "interval").
The default labels on the right-hand side correspond to the upper and lower bounds of each plotted line. The left labels are added by setting llab = TRUE. Note, some extra room is created for the labels by setting the xlim = c(-20, 965) argument of the plotting area to a wider range than the original data (945 observations). The text size of the right labels is controlled using the rcex argument. The left labels, by default, take the same text size, although they can be separately controlled using the lcex argument.

Alternative Colours

Alternative colour schemes to the default heat.colors can be obtained by supplying a colorRampPalette to the fan.col argument. For example, a new palette running from blue to white, via grey, can be passed using:

# empty plot
plot(NULL, main = "Alternative Colour Scheme", xlim = c(-20, 965), ylim = c(-2.5, 1.5))
# add fan
fan(data = th.mcmc, rlab = seq(20, 80, 15), llab = c(10, 50, 90),
    fan.col = colorRampPalette(c("royalblue", "grey", "white")))

Alternative labels are specified using the rlab and llab arguments.

Spaghetti Plots

Spaghetti plots can be used to represent uncertainty shown by a range of possible future trajectories or past estimates. For example, using the th.mcmc object, 20 random sets of $\theta_t$ can be plotted by setting the argument style = "spaghetti":

# empty plot
plot(NULL, main = "Spaghetti Plot", xlim = c(-20, 965), ylim = range(th.mcmc))
# transparent fan with visible percentile lines
fan(th.mcmc, ln = c(5, 50, 95), llab = TRUE, alpha = 0, ln.col = "orange")
# spaghetti lines
fan(th.mcmc, style = "spaghetti", n.spag = 20)

The spaghetti lines are superimposed on a fan chart in order to illustrate some underlying probabilities. The initial fan chart is made completely transparent by setting the transparency argument alpha to 0. In order for the percentile lines to be visible, a non-transparent colour is used for the ln.col argument.

Forecast Fans

The fanplot package can also be used to illustrate probabilistic forecasts.
For example, using the auto.arima function in the forecast package, a model for the time series of net migration to the United Kingdom (contained in the ips data frame of the fanplot package) can be fitted:

# create time series
net <- ts(ips$net, start = 1975)
#fit model
library("forecast")
m <- auto.arima(net)
m
Series: net
ARIMA(1,1,2) with drift

Coefficients:
ar1      ma1      ma2   drift
-0.2301  -0.0851  -0.6734  6.7625
s.e.   0.3715   0.3620   0.1924  1.4154

sigma^2 estimated as 1231:  log likelihood=-179.3
AIC=368.6   AICc=370.54   BIC=376.66


We may then simulate 1000 values from the selected model using the simulate.Arima function, and plot the results.

mm <- matrix(NA, nrow = 1000, ncol = 5)
for(i in 1:1000)
  mm[i, ] <- simulate(m, nsim = 5)

# plot past data
plot(net, main="UK Net Migration", xlim=c(1975,2020), ylim=c(-100,300))

fan(mm, start=2013)


Users might want to connect the fan with the past data. This can be achieved by providing the last value to the anchor argument.

# plot past data
plot(net, main="UK Net Migration",
xlim=c(1975,2020), ylim=c(-100,300))

fan(mm, start=2013, anchor=net[time(net)==2012],
type="interval", probs=seq(5, 95, 5), ln=c(50, 80))


More shades for the fan are added to the plot (over the default three used for interval fans) by supplying a sequence to the probs argument. Alternative contour lines (replacing the default median, 50th, 80th and 95th percentile lines for interval fans) are added using the ln argument.

## A comparison of official population projections with Bayesian time series forecasts for England and Wales

A paper based on some work I did with colleagues in the ESRC Centre for Population Change was published in Population Trends. We fitted a range of time series models (including some volatility models) to population change data for England and Wales, calculated the posterior model probabilities and then projected from the model-averaged posterior predictive distributions. We found our volatility models were heavily supported. Our median matches very closely the Office for National Statistics mid scenario. It's a tad surprising that projections based on forecasts of a single annual growth rate give a similar forecast to the ONS cohort component projection, which is based on hundreds of future age-sex-specific fertility, mortality and net migration rates. The ONS do not provide any form of probabilistic uncertainty; instead they give expert-based high and low scenarios, which roughly calibrate to our 50% prediction interval in 2033. I ran all the models in BUGS and did the fan chart plots in R.
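The single-growth-rate idea can be sketched as follows (a toy constant-parameter version with made-up values, rather than the model-averaged BUGS models used in the paper):

```r
set.seed(1)
pop0 <- 54e6    # hypothetical jump-off population for England and Wales
h    <- 26      # forecast horizon, roughly 2007 to 2033
nsim <- 1000

# simulate one annual growth rate per year: r_t ~ N(mu, sigma^2)
mu    <- 0.003
sigma <- 0.004
r <- matrix(rnorm(nsim * h, mu, sigma), nrow = nsim)

# cumulate the simulated growth rates into population trajectories
pop <- pop0 * t(apply(1 + r, 1, cumprod))

# 50% prediction interval for the population total at the end of the horizon
quantile(pop[, h], c(0.25, 0.75))
```

Even this crude sketch shows why a single well-modelled growth rate can track a full cohort component projection: the total is driven by the cumulated net effect of all the age-sex-specific rates.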

Publication Details:

Abel, G.J., Bijak, J. and Raymer J. (2010). A comparison of official population projections with Bayesian time series forecasts for England and Wales. Population Trends, 141, 95–114.

We compare official population projections with Bayesian time series forecasts for England and Wales. The Bayesian approach allows the integration of uncertainty in the data, models and model parameters in a coherent and consistent manner. Bayesian methodology for time-series forecasting is introduced, including autoregressive (AR) and stochastic volatility (SV) models. These models are then fitted to a historical time series of data from 1841 to 2007 and used to predict future population totals to 2033. These results are compared to the most recent projections produced by the Office for National Statistics. Sensitivity analyses are then performed to test the effect of changes in the prior uncertainty for a single parameter. Finally, in-sample forecasts are compared with actual population and previous official projections. The article ends with some conclusions and recommendations for future work.