Between some of the games I had a go at replicating a plot from liberation.fr on the connections between Euro 2016 players and their countries of birth, using the circlize package in R. As with the previous post, the colours are based on the home shirt of each team and the data were scraped from Wikipedia. The values in parentheses represent the total number of players born in the respective country, which dictates the ordering around the circle. It is interesting to see just how many players represent countries they were not born in. Only Romania has a 23-man squad made up entirely of players born in the country, with no other Romanian-born players representing other nations.
This weekend I was having fun in France watching some Euro 2016 matches, visiting friends and avoiding Russian hooligans. Before my flight over I scraped some tables on the tournament’s Wikipedia page with my newly acquired rvest skills, with the idea of building up a bilateral database of Euro 2016 squads and their players’ clubs.
On the flight I managed to come up with some maps showing these connections. First off, I used ggplot2 to plot lines connecting the location of every player’s club team to their national squad’s base in France. The paths of the lines were calculated using the gcIntermediate function in the geosphere package. Each line’s colour is based on the national team’s jersey, which I obtained via R using the amazing extract_colours function in the rPlotter package.
I have had a few emails recently regarding plots from my new working paper on global migration flows, which has received some media coverage here, here and here. The plots were created using Zuguang Gu’s excellent circlize package and are a modified version of those discussed in an earlier blog post. In particular, I have made four changes:
I have added arrow heads to better indicate the direction of flows, following the example in Scientific American.
I have reorganized the sectors on the outside of the circle so that in each sector the outflows are plotted first (largest to smallest) followed by the inflows (again, in size order). I prefer this new layout (previously the inflows were plotted first) as it allows the time sequencing of migration events (a migrant has to leave before they can arrive) to match up with the natural tendency for most to read from left to right.
I have cut out the white spaces that detached the chords from the outer sector. To my eye, this alteration helps indicate the direction of the flow and gives a cleaner look.
I have kept the smallest flows in the plot, but plotted their chords last, so that the focus is maintained on the largest flows. Previously smaller flows were dropped according to an arbitrary cut off, which meant that the sector pieces on the outside of the circle no longer represented the total of the inflows and outflows.
Combined, these four modifications have helped me when presenting the results at recent conferences, reducing the time I need to spend explaining the plots and avoiding some of the confusion that occasionally occurred with the direction of the migration flows.
If you would like to replicate one of these plots, you can do so using estimates of the minimum migrant transition flows for the 2010-15 period and the demo R script in my migest package;
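For reference, the demo can be run along the lines below; the demo name is my best recollection and is worth checking against the package index with demo(package = "migest");

```r
# uncomment to update or install the required packages
# install.packages(c("migest", "circlize"))
library(migest)
# run the global migration flow chord-diagram demo
# (demo name assumed; list available demos with demo(package = "migest"))
demo(cfplot_reg, package = "migest", ask = FALSE)
```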
The code in the demo script uses the chordDiagram function, based on a recent update to the circlize package (0.3.7). Most likely you will need to either update or install the package (uncomment the install.packages lines in the code above).
If you want to view the R script in detail to see which arguments I used, then take a look at the demo file on GitHub here. I provide some comments (in the script, below the function) to explain each of the argument values.
Save and view a PDF version of the plot (which looks much better than what comes up in my non-square RStudio plot pane) using:
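Something along these lines works for me (the file name and the square dimensions are just placeholders);

```r
# copy the current plot to a square PDF, which suits the circular
# layout much better than a non-square on-screen plot pane
dev.copy2pdf(file = "cfplot.pdf", width = 10, height = 10)
```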
A few weeks ago a new version of the Wittgenstein Centre Data Explorer was launched. The data explorer is intended to disseminate the results of a recent global population projection exercise which uniquely incorporates level of education (as well as age and sex) and the scientific input of more than 500 population experts around the world. Included are the projected populations used in the 5th assessment report of the Intergovernmental Panel on Climate Change (IPCC).
Over the past year or so I have been working (on and off) with the data lab team to create a shiny app, on which the data explorer is based. All the code and data are available on my GitHub page. Below are notes to summarise some of the lessons I learnt:
1. Large data
We had a pretty large amount of data to display (31 indicators based on up to 7 scenarios x 26 time periods x 223 geographical areas x 21 age groups x 2 sexes x 7 education categories)… so somewhere over 8 million rows for some indicators. Further complexity was added by the fact that some indicators were by definition not available for some dimensions of the data; for example, population median age is not available by age group. The size and complexity meant that data manipulations were a big issue. Using read.csv to load the data didn’t really cut the mustard, taking over 2 minutes when running on the server. The fantastic saves package and the ultra.fast = TRUE argument in its loads function came to the rescue, alongside some pre-formatting to avoid as much joining and reshaping of the data on the server as possible. This cut load times to a couple of seconds at most, and allowed the app to work with the indicator variables on the fly as demanded by the user selections. Once the data were in, the more than awesome dplyr functions finished the data manipulation jobs in style. I am sure there is some smarter way to get everything running a little bit quicker than it does now, but I am pretty happy with the present speed, given the initial waiting times.
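The load step looked roughly like the sketch below; the file and variable names are made up for illustration, and the exact call is worth checking against the saves package documentation;

```r
library(saves)
# one-off pre-formatting step (run once, offline): save each column of
# the big indicator table to its own binary file inside an .RDatas archive
# saves(pop, file = "pop.RDatas")

# in the shiny server: load only the columns needed for the current user
# selection, with the ultra fast (no safety checks) option switched on
d <- loads("pop.RDatas",
           variables = c("scenario", "year", "age", "pop"),
           ultra.fast = TRUE)
```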
2. googleVis and gvisMerge
It’s a demographic data explorer, which means population pyramids have to pop up somewhere. We needed pyramids that illustrate population sizes by education level, on top of the standard age and sex breakdown. Static versions of the education pyramids in the explorer have previously been used by my colleagues to illustrate past and future populations. For the explorer I created some interactive versions, for side-by-side comparisons over time and between countries, which also have some tooltip features. These took a little while to develop. I played with ggvis but couldn’t get my bar charts to go horizontal. I also took a look at some other functions for interactive pyramids but couldn’t figure out a way to overlay the educational dimension. I found a solution by creating sex-specific stacked bar charts with gvisBarChart in the googleVis package and then using gvisMerge to bring them together in one plot. As with the data tables, they take a second or so to render, so I added a withProgress bar to try and keep the user entertained.
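Stripped of the styling used in the explorer, the basic trick looks something like this (the populations are made-up numbers; negating the male counts pushes their bars out to the left of their half of the pyramid);

```r
library(googleVis)
# hypothetical populations (millions) by age group and education level
m <- data.frame(age = c("0-14", "15-64", "65+"),
                primary = c(-4.1, -14.0, -1.2),   # negated for left side
                secondary = c(-0.9, -9.5, -0.8))
f <- data.frame(age = c("0-14", "15-64", "65+"),
                primary = c(3.9, 13.6, 1.6),
                secondary = c(0.8, 9.8, 1.1))
# one stacked horizontal bar chart per sex
males <- gvisBarChart(m, xvar = "age", yvar = c("primary", "secondary"),
                      options = list(isStacked = TRUE, legend = "none"))
females <- gvisBarChart(f, xvar = "age", yvar = c("primary", "secondary"),
                        options = list(isStacked = TRUE))
# merge the two halves side by side into a single html chart
plot(gvisMerge(males, females, horizontal = TRUE))
```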
3. The shiny user community
I asked questions using the shiny tag on Stack Overflow and on the shiny Google group a number of times. A big thank you to everyone who helped me out. Browsing through other questions and answers was also super helpful. I found this question on organising large shiny code particularly useful. Making small changes during the review process became a lot easier once I broke the code up across multiple .R files with sensible names.
4. Navbar Pages
When I started building the shiny app I was using a single layout with a sidebar and tabbed pages to display data and graphics (using tabsetPanel()), adding extra tabs as we developed new features (data selection, an assumption database, population pyramids, plots of population size, maps, FAQs, etc.). As these grew, the switch to the new Navbar layout helped clean up the appearance and provide a better user experience, where you can move between data, graphics and background information using the bar at the top of the page.
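In skeleton form the switch amounts to something like the following (the panel names and contents here are illustrative, not the explorer's actual code);

```r
library(shiny)
# a top-level navigation bar replacing the old sidebar + tabsetPanel layout
ui <- navbarPage("Data Explorer",
  tabPanel("Data",
           sidebarLayout(sidebarPanel("selection inputs go here"),
                         mainPanel("data tables go here"))),
  tabPanel("Graphics", "pyramids, plots and maps"),
  tabPanel("About", "FAQs and background information")
)
server <- function(input, output) {}
# shinyApp(ui, server)
```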
5. Shading and link buttons
I added some shading and buttons to help navigate through the data selection and between different tabs. For the shading I used cssmatic.com to generate the colour of a fluidRow background. The code generated there was copied and pasted into a tags$style element for my defined row myRow1, as such;
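In outline it looked something like this, with the generated CSS pasted into a tags$style element and the class attached to the row (the colours and inputs below are placeholders, not the explorer's actual values);

```r
library(shiny)
ui <- fluidPage(
  # CSS generated on cssmatic.com, pasted in as-is (placeholder colours)
  tags$style(HTML(".myRow1 {background-color: #e8eef2;
                            border-radius: 6px; padding: 10px;}")),
  # the class attribute attaches the shading to this row
  fluidRow(class = "myRow1",
           column(6, selectInput("ind", "Indicator",
                                 c("Population", "Median Age"))),
           column(6, selectInput("area", "Area", c("World", "Europe"))))
)
server <- function(input, output) {}
# shinyApp(ui, server)
```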
I added some buttons to help novice users switch between tabs once they had selected or viewed their data. This was a little tougher to implement than the shading, and in the end I needed a little help. I used bootsnipp.com to add some icons and define the style of the navigation buttons (using the tags$style element again).
Any comments or suggestions for improving the website are welcome.
I have been having a go in R at visualising player movements for the World Cup. I wanted to use similar plots to those used to visualise international migration flows in the recent Science paper that I co-authored. In the end I came up with two plots. The first, and more complex one, is based on a non-square matrix of the league systems of players’ clubs by their national teams.
You can zoom in and out if you click on the image.
Colours are based on the shirt of each team in the 2014 World Cup. Lines represent the connections between the country in which players play their club football (at the line’s base) and their national teams (at the arrow head). Line thickness represents the number of players. It’s a little cluttered, but nicely shows how many players in the English, Italian, Spanish and French leagues are involved in the World Cup. It also highlights some countries where almost all the players are at clubs abroad, for example most of the players in the African squads.
Whilst the first plot gave a lot of detail, I wanted to visualise the broader interactions, so I aggregated the league systems and national squads by regional confederation. This gives a square matrix:
league      AFC  CONCACAF  CONMEBOL  CAF  UEFA
AFC          49         2         1    3     1
CONCACAF      0        13         0    0     0
CONMEBOL      2         0        54   11     0
CAF           0         0         0   36     0
UEFA         41        99        37   86   296
The resulting plot looks like this:
This type of aggregation works really well to show how few European national players play elsewhere (only Zvjezdan Misimovic in all the European World Cup squads). It also provides a way to compare the share of non-European players plying their trade in the European leagues to those in more local leagues within their confederation.
I scraped the data from the provisional squads on Wikipedia, and then created the images with the circlize package. All the code to reproduce the plots, plus the scraping of the Wikipedia squad pages, is on my GitHub.
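As a rough sketch of the second, aggregated plot, the square matrix above can be passed straight to chordDiagram (the published version adds shirt-based colours and further styling on top of this);

```r
library(circlize)
confed <- c("AFC", "CONCACAF", "CONMEBOL", "CAF", "UEFA")
# rows: confederation of the league system where players play club
# football; columns: confederation of their national squad
m <- matrix(c(49,  2,  1,  3,   1,
               0, 13,  0,  0,   0,
               2,  0, 54, 11,   0,
               0,  0,  0, 36,   0,
              41, 99, 37, 86, 296),
            nrow = 5, byrow = TRUE,
            dimnames = list(league = confed, squad = confed))
# directional chords with arrows pointing from league to national squad
chordDiagram(m, directional = 1, direction.type = "arrows")
```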
I have added a demo file to the latest version of the fanplot package. It has lots of examples of different plotting styles to represent uncertainty in time series data. In the updated package I have added functionality to plot fan charts based on irregular time series objects from the zoo package, plus the use of alternative colour palettes from the RColorBrewer and colorspace packages. All plots are based on the th.mcmc object, the estimated posterior distributions of the volatility in daily returns of the Pound/Dollar exchange rate from 02/10/1981 to 28/06/1985. To run the demo file from your R console (ensure the fanplot, zoo, tsbugs, RColorBrewer and colorspace packages are all installed beforehand);
# if you want plots in separate graphic devices
# do not run this first line...
par(mfrow = c(10,2))
# run demo
demo(sv_fan, package = "fanplot", ask = FALSE)
The demo script should output this set of plots:
If you wish, click on the image above to take a closer look in your browser. In R, you can save a PDF version of all the plots on one graphics device (which looks much better than what comes up in my R graphics device):
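One way to do this (the file name and page size below are arbitrary choices) is to open a tall pdf device before running the demo;

```r
library(fanplot)
# a tall page comfortably fits the 10 x 2 grid of example plots
pdf("sv_fan.pdf", width = 9, height = 36)
par(mfrow = c(10, 2))
demo(sv_fan, package = "fanplot", ask = FALSE)
dev.off()
```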
I have updated, and will continue to update, this post as I expand the fanplot package.
I managed to catch David Spiegelhalter’s Tails You Win on BBC iPlayer last week. I missed it the first time round, only for my parents to tell me on my last visit home about a statistician jumping out of a plane on TV. It was a great watch. Towards the end I spotted some fan charts used by the Bank of England to illustrate uncertainty in their forecasts, similar to this one:
They discussed how, even in the tails of their GDP predictive distribution, they missed the financial crisis by a long shot. This got me googling, trying to find out how they made the plots, something that (also) completely passed me by when I put together my fanplot package for R. As far as I could tell they did them in Excel, although (appropriately) I am not completely certain. There are also MATLAB files that can create fan charts. Anyhow, I thought I would have a go at replicating a Bank of England fan chart in R….
Split-Normal (Two-Piece Normal) Distribution.
The Bank of England produces fan charts of forecasts for CPI and GDP in their quarterly Inflation Reports. They also provide data, in the form of mode, uncertainty and skewness parameters of the split-normal distribution that underlies their fan charts (the Bank of England predominantly refers to the equivalent, re-parametrised, two-piece normal distribution). The probability density of the split-normal distribution is given by Julio (2007) as
f(x) = √2 / (√π (σ₁ + σ₂)) · exp( −(x − μ)² / (2σ₁²) )   for −∞ < x ≤ μ
f(x) = √2 / (√π (σ₁ + σ₂)) · exp( −(x − μ)² / (2σ₂²) )   for μ < x < ∞
where μ represents the mode parameter, and the two standard deviations σ₁ and σ₂ can be derived given the overall uncertainty parameter σ and skewness parameter γ as;
σ₁ = σ / √(1 + γ)
σ₂ = σ / √(1 − γ)
As no split normal distribution existed in R, I added routines for a density, distribution and quantile function, plus a random generator, to a new version (2.1) of the fanplot package. I used the formula in Julio (2007) to code each of the three functions, and checked the results against those from the fan chart MATLAB code.
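A quick sanity check of the new functions: with a zero skew parameter the split normal collapses to an ordinary normal distribution, and the quantile and distribution functions should invert each other for skewed cases;

```r
library(fanplot)
# zero skew: quantiles match qnorm exactly
stopifnot(all.equal(
  qsplitnorm(c(0.1, 0.5, 0.9), mode = 2, sd = 0.5, skew = 0),
  qnorm(c(0.1, 0.5, 0.9), mean = 2, sd = 0.5)))
# non-zero skew: psplitnorm(qsplitnorm(p)) recovers p
p <- c(0.05, 0.5, 0.95)
q <- qsplitnorm(p, mode = 2, sd = 0.5, skew = 0.3)
stopifnot(all.equal(psplitnorm(q, mode = 2, sd = 0.5, skew = 0.3), p))
```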
Fan Chart Plots for CPI.
Once I had the qsplitnorm function working properly, producing the fan chart plot in R was pretty straightforward. I added two data objects to the fanplot package to help readers reproduce my plots below. The first, cpi, is a time series object with past values of the CPI index. The second, boe, is a data frame with historical details on the split-normal parameters for CPI inflation forecasts by the Bank of England between Q1 2004 and Q4 2013.
The first column, time0, refers to the base year of the forecast, the second, time, indexes future projections, whilst the remaining three columns provide values for the corresponding projected mode (μ), uncertainty (σ) and skew (γ) parameters:
Users can replicate past Bank of England fan charts for a particular period after creating a matrix object that contains values of the split-normal quantile function for a set of user-defined probabilities. For example, in the code below, the subset of the Bank of England’s future parameters of CPI published in Q1 2013 is first selected. Then a vector of probabilities is created for the percentiles that we would ultimately like to plot shaded fans for. Finally, in a for loop, the qsplitnorm function calculates the quantiles of each time-specific (i) split-normal distribution corresponding to the probabilities in p.
# select relevant data
y0 <- 2013
boe0 <- subset(boe, time0 == y0)
k <- nrow(boe0)
# guess work to set percentiles the BOE are plotting
p <- seq(0.05, 0.95, 0.05)
p <- c(0.01, p, 0.99)
# quantiles of split-normal distribution for each probability
# (row) at each future time point (column)
cpival <- matrix(NA, nrow = length(p), ncol = k)
for (i in 1:k)
  cpival[, i] <- qsplitnorm(p, mode = boe0$mode[i],
                            sd = boe0$uncertainty[i],
                            skew = boe0$skew[i])
The new object cpival contains the values evaluated from the qsplitnorm function in 21 rows and 13 columns, where rows represent the probabilities used in the calculation (p) and columns represent successive time periods.
The object cpival can then be used to add a fan chart to the active R graphics device. In the code below, the area of the plot is set up when plotting the past CPI data, contained in the time series object cpi. The xlim arguments are set to ensure space on the right-hand side of the plotting area for the fan. Following the Bank of England style for plotting fan charts, the background for future values is set to a gray colour, y-axis labels are plotted on the right-hand side, a horizontal line is added for the CPI target and a vertical line for the two-year-ahead point.
# past data
plot(cpi, type = "l", col = "tomato", lwd = 2,
     xlim = c(y0 - 5, y0 + 3), ylim = c(-2, 7),
     xaxt = "n", yaxt = "n", ylab = "")
rect(y0 - 0.25, par("usr")[3] - 1, y0 + 3, par("usr")[4] + 1,
     border = "gray90", col = "gray90")
# add fan
fan(data = cpival, data.type = "values", probs = p,
    start = y0, frequency = 4,
    anchor = cpi[time(cpi) == y0 - 0.25],
    fan.col = colorRampPalette(c("tomato", "gray90")),
    ln = NULL, rlab = NULL)
# boe aesthetics
axis(2, at = -2:7, las = 2, tcl = 0.5, labels = FALSE)
axis(4, at = -2:7, las = 2, tcl = 0.5)
axis(1, at = 2008:2016, tcl = 0.5)
axis(1, at = seq(2008, 2016, 0.25), labels = FALSE, tcl = 0.2)
abline(h = 2) #boe cpi target
abline(v = y0 + 1.75, lty = 2) #2 year line
The fan chart itself is output by the fan function, where arguments are set to ensure a close resemblance of the R plot to that produced by the Bank of England. The first three arguments in the fan function called in the above code provide the cpival data to be plotted, indicate that the data are a set of calculated values (as opposed to simulations) and provide the probabilities that correspond to each row of the cpival object. The next two arguments define the start time and frequency of the data. These operate in a similar fashion to those used when defining time series in R with the ts function. The anchor argument is set to the value of CPI before the start of the fan chart. This allows a join between the value of the Q4 2012 observation and the fan chart. The fan.col argument is set to a colour palette for shades between tomato and gray90. The final two arguments are set to NULL to suppress the plotting of contour lines at the boundary of each shaded fan and their labels, as per the Bank of England style.
Default Fan Chart Plot.
By default, the fan function treats objects passed to the data argument as simulations from sequential distributions, rather than user-created values corresponding to probabilities provided in the probs argument (as above). An alternative plot below, based on simulated data and the default style settings in the fan function, produces a fan chart with a greater array of coloured fans, with labels and contour lines alongside selected percentiles of the future distribution. To illustrate, we can simulate 10,000 values from the future split-normal distributions using the Q1 2013 parameters in the boe0 data frame and the rsplitnorm function;
# simulate future values
cpisim <- matrix(NA, nrow = 10000, ncol = k)
for (i in 1:k)
  cpisim[, i] <- rsplitnorm(n = 10000, mode = boe0$mode[i],
                            sd = boe0$uncertainty[i],
                            skew = boe0$skew[i])
The fan chart based on the simulations in cpisim can then be added to the plot;
# truncate cpi series
cpi0 <- ts(cpi[time(cpi) < 2013], start = start(cpi),
           frequency = frequency(cpi))
# past data
plot(cpi0, type = "l", lwd = 2,
     xlim = c(y0 - 5, y0 + 3.25), ylim = c(-2, 7))
# add fan
fan(data = cpisim, start = y0, frequency = 4)
The fan function calculates the values of 100 equally spaced percentiles of each future distribution when the default data.type = "simulations" is set. This allows 50 fans to be plotted from the heat.colors colour palette, providing a finer level of shading in the representation of future distributions. In addition, lines and labels are provided along each decile. The fan chart does not connect to the last observation, as anchor = NULL by default.