Tuesday, 29 July 2014

Hit and run. Think Bayes!

At the R in Insurance conference Arthur Charpentier gave a great keynote talk on Bayesian modelling in R. Bayes' theorem on conditional probabilities is strikingly simple, yet incredibly thought-provoking. Here is an example from Daniel Kahneman to test your intuition. But first I have to start with Bayes' theorem.

Bayes' theorem

Bayes' theorem states that given two events \(D\) and \(H\), the probability of \(D\) and \(H\) happening at the same time is the same as the probability of \(D\) occurring, given \(H\), weighted by the probability that \(H\) occurs; or the other way round. As a formula it can be written as:
\[
P(H \cap D) = P(H|D) \, P(D) = P(D|H) \, P(H)
\]
Or if I rearrange it:
\[
P(H|D) = \dfrac{P(D|H) \, P(H)}{P(D)}
\]
Imagine \(H\) is short for hypothesis and \(D\) is short for data, or evidence. Then Bayes' theorem states that the probability of a hypothesis given data is the same as the likelihood that we observe the data given the hypothesis, weighted by the prior belief of the hypothesis, normalised by the probability that we observe the data regardless of the hypothesis.

The tricky bit in real life is often to figure out what the hypothesis and data are.

Hit and run accident

This example is taken from Daniel Kahneman's book Thinking, fast and slow [1].
A cab was involved in a hit and run accident at night. Two cab companies, the Green and the Blue, operate in the city. 85% of the cabs in the city are Green and 15% are Blue. A witness identified the cab as Blue. The court tested the reliability of the witness under the same circumstances that existed on the night of the accident and concluded that the witness correctly identified each one of the two colours 80% of the time and failed 20% of the time.

What is the probability that the cab involved in the accident was Blue rather than Green knowing that this witness identified it as Blue?

What is the data here and what is the hypothesis? Intuitively you may think that the proportion of Blue and Green cabs is the data at hand and the witness's statement that a Blue cab was involved in the accident is the hypothesis. However, after some thought I found the following assignment much more helpful, as then \(P(H|D)\) matches the question above:

\(H =\) Accident caused by Blue cab. \(D =\) Witness said the cab was Blue.

With this it is straightforward to read off the probabilities \(P(H)=15\%\) and \(P(D|H)=80\%\). But what is \(P(D)\)? Well, when would the witness say that the cab was Blue? Either the cab was Blue and the witness is right, or the cab was actually Green and the witness is wrong. Thus, following the law of total probability:
$$\begin{align}
P(D) & = P(D|H) \, P(H) + P(D | \bar{H}) \, P(\bar{H})\\
& = 0.8 \cdot 0.15 + 0.2 \cdot 0.85 = 0.29
\end{align}$$
Plugging this into Bayes' theorem gives \(P(H|D) = \dfrac{0.8 \cdot 0.15}{0.29} \approx 41\%\). Thus, even if the witness states that the cab involved in the accident was Blue, the probability of this being true is only about \(41\%\).

An alternative way to think about this problem is via a Bayesian network. The colour of the cab influences the statement of the witness. In R I can specify such a network using the gRain package [2], which I discussed in an earlier post. Here I provide the distribution of the cabs and the conditional distribution of the witness as inputs. After compiling the network, I can read off the probability that a Blue cab was involved, given that the witness said so.

R code
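The original script is not reproduced here; below is a minimal sketch of the network described above, using the standard gRain interface (the details of the original code may differ):

library(gRain)

# Prior distribution of cabs in the city: 15% Blue, 85% Green
cab <- cptable(~cab, values=c(15, 85), levels=c("Blue", "Green"))
# The witness identifies the colour correctly 80% of the time
witness <- cptable(~witness|cab, values=c(80, 20, 20, 80),
                   levels=c("Blue", "Green"))
# Compile the network and query P(cab = Blue | witness said Blue)
plist <- compileCPT(list(cab, witness))
net <- grain(plist)
querygrain(setEvidence(net, nodes="witness", states="Blue"), nodes="cab")

The query should return a probability of about 41% for a Blue cab, in line with the calculation above.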

Tuesday, 22 July 2014

Notes from the 2nd R in Insurance Conference

The 2nd R in Insurance conference took place last Monday, 14 July, at Cass Business School London.

This one-day conference focused once more on applications in insurance and actuarial science that use R. Topics covered included reserving, pricing, loss modelling, the use of R in a production environment and more.

In the first plenary session, Montserrat Guillen (Riskcenter, University of Barcelona) and Leo Guelman (Royal Bank of Canada, RBC Insurance) spoke about the rise of uplift models. These predictive models are used to improve the targeting of policyholders in marketing campaigns, through the use of experimental data. The presenters illustrated the use of their uplift package (available on CRAN), which they developed for such applications.


Thereafter, the programme consisted of a combination of contributed presentations and lightning talks, as well as a panel discussion on R at the interface of practitioner / academic interaction. The panel, drawn from academia and practice, discussed the efforts made in bridging cultural and communication divides through the use of R, as well as the challenges of developing collaborative business models that respond to market needs and the incentives of academic researchers.


In the closing plenary, Arthur Charpentier (Professor of Actuarial Science at UQAM, Canada) gave a non-Bayesian's account of Bayesian modelling in R. While many are sympathetic to the Bayesian paradigm, it is easy access to computational tools that makes its wider application a realistic prospect. The presenter demonstrated how Bayesian methods can be used to offer alternative analyses of standard actuarial problems.


The audience of the conference included both practitioners (70%) and academics (30%) who are active or interested in the applications of R in insurance. It was a truly international event with speakers and delegates from many different countries, including the USA, Canada, Belgium, the Netherlands, Switzerland, Germany, Ireland, Argentina, France, Spain and, of course, the UK. The coffee breaks and conference dinner at Ironmongers Hall offered great networking opportunities.


All conference presentations are available on request.

Finally, we are grateful to our sponsors Mango Solutions, CYBAEA, PwC and RStudio. This conference would not have been possible without their generous support.

R in Insurance 2015


We are delighted to announce next year's event already. Following two years in London at Cass Business School, the conference will travel across the Channel to Amsterdam on 29 June 2015.


We are looking forward to seeing you there. Further details will be published on www.rininsurance.com.

Tuesday, 15 July 2014

Simple user interface in R to get login details

Occasionally I have to connect to services from R that ask for login details, such as databases. I don't like to store my login details in the R source code file; instead I would prefer to enter my login details when I execute the code.


Fortunately, I found some old code in a post by Barry Rowlingson that does just that. It uses the tcltk package in R to create a little window in which the user can enter her details, without showing the password. The tcltk package is part of base R, which means the code will run on any operating system. Nice!
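Rowlingson's code itself is not reproduced here, but the following minimal sketch follows the same idea; the function name and layout are my own:

library(tcltk)

getLoginDetails <- function(){
  # Small window with two entry fields; the password entry masks its input
  tt <- tktoplevel()
  tkwm.title(tt, "Login details")
  user <- tclVar("")
  passwd <- tclVar("")
  tkgrid(tklabel(tt, text="User name:"),
         tkentry(tt, textvariable=user))
  tkgrid(tklabel(tt, text="Password:"),
         tkentry(tt, textvariable=passwd, show="*"))
  tkgrid(tkbutton(tt, text="OK", command=function() tkdestroy(tt)))
  tkwait.window(tt)  # wait until the window is closed
  list(user=tclvalue(user), passwd=tclvalue(passwd))
}

# Usage: credentials <- getLoginDetails()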



Session Info

R version 3.1.1 (2014-07-10)
Platform: x86_64-apple-darwin13.1.0 (64-bit)

locale:
[1] en_GB.UTF-8/en_GB.UTF-8/en_GB.UTF-8/C/en_GB.UTF-8/en_GB.UTF-8

attached base packages:
[1] tcltk stats graphics grDevices utils datasets methods base

Tuesday, 8 July 2014

googleVis 0.5.3 released

Recently we released googleVis 0.5.3 on CRAN. The package provides an interface between R and Google Charts, allowing you to create interactive web charts from R.

[Screen shot of some of the Google Charts]

Although this is mainly a maintenance release, I'd like to point out two changes:
  • Default chart width is set to 'automatic' instead of 500 pixels.
  • Intervals for column roles have to end with the suffix ".i", with "i" being an integer. Several interval columns are allowed; see the roles demo and vignette for more details, and the short sketch at the end of this post.
Those changes were required to fix the following issues:
  • The order of y-variables in core charts wasn't maintained. Thanks to John Taveras for reporting this bug.
  • Width and height of googleVis charts were only accepted in pixels, although the Google Charts API uses standard HTML units (for example, '100px', '80em', '60', 'automatic'). If no units are specified the number is assumed to be pixels. This has been fixed. Thanks to Paul Murrell for reporting this issue.
New to googleVis? Review the demo on CRAN.
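To illustrate the interval naming rule mentioned above, here is a small sketch with made-up data; the columns val.interval.1 and val.interval.2 follow the ".i" suffix convention (see the roles demo and vignette for the definitive examples):

library(googleVis)

# Made-up series with a lower and an upper interval column
dat <- data.frame(x=1:10,
                  val=sin(1:10),
                  val.interval.1=sin(1:10) - 0.3,
                  val.interval.2=sin(1:10) + 0.3)
lc <- gvisLineChart(dat, xvar="x",
                    yvar=c("val", "val.interval.1", "val.interval.2"),
                    options=list(width="80em"))
plot(lc)  # opens the interactive chart in a browser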

Tuesday, 1 July 2014

Last chance to register for the R in Insurance conference


The registration for the 2nd R in Insurance conference at Cass Business School London will close this Friday, 4 July.

The programme includes talks from international practitioners and leading academics, see below. For more details and registration visit: http://www.rininsurance.com.

Still unsure? Review some impressions and presentations from last year's conference.

On behalf of the committee and sponsors, Mango Solutions, Cybaea, RStudio and PwC, we look forward to seeing you in London on 14 July!

Tuesday, 24 June 2014

Generating and visualising multivariate random numbers in R

This post will present the wonderful pairs.panels function of the psych package [1] that I discovered recently to visualise multivariate random numbers.

Here is a little example with a Gaussian copula and normal and log-normal marginal distributions. I use pairs.panels to illustrate the steps along the way.

I start with standardised multivariate normal random numbers:
library(psych)
library(MASS)
Sig <- matrix(c(1, -0.7, -.5,
                -0.7, 1, 0.6,
                -0.5, 0.6, 1), 
                nrow=3)
X <- mvrnorm(1000, mu=rep(0,3), Sigma = Sig, 
             empirical = TRUE)  
pairs.panels(X)

Next, I map the random numbers into the interval [0, 1] using the distribution function pnorm, so I end up with multivariate uniform random numbers:
U <- pnorm(X)
pairs.panels(U)

Finally, I transform the uniform numbers into the desired marginals:
Z <- cbind(
  A=qlnorm(U[,1], meanlog=-2.5, sdlog=0.25),
  B=qnorm(U[,2], mean=1.70, sd=0.1),
  C=qnorm(U[,3], mean=0.63,sd=0.08)
  )
pairs.panels(Z)

Those steps can actually be shortened with functions from the copula package [2].

Unfortunately, I struggled to install the copula package on my local Mac running Mavericks. No CRAN binaries are currently available and gfortran is playing up on my system, so I couldn't install the package from source. Thus, I quickly fired up an Ubuntu virtual machine on Amazon's EC2 cloud and installed R. Within 15 minutes I was back in business - that is actually pretty amazing.
library(copula)
myCop <- normalCopula(param=c(-0.7, -0.5, 0.6), dim=3, dispstr="un")
myMvd <- mvdc(copula=myCop, margins=c("lnorm", "norm", "norm"),
              paramMargins=list(list(meanlog=-2.5, sdlog=0.25),
                                list(mean=1.70, sd=0.1), 
                                list(mean=0.63,sd=0.08)) )
Z2 <- rmvdc(myMvd, 1000)
colnames(Z2) <- c("A", "B", "C")
pairs.panels(Z2)

References

[1] Revelle, W. (2014). psych: Procedures for Personality and Psychological Research. Northwestern University, Evanston, Illinois, USA. http://CRAN.R-project.org/package=psych. R package version 1.4.5.

[2] Hofert, M., Kojadinovic, I., Maechler, M. and Yan, J. (2014). copula: Multivariate Dependence with Copulas. http://CRAN.R-project.org/package=copula. R package version 0.999-10.

Session Info Local

R version 3.1.0 (2014-04-10)
Platform: x86_64-apple-darwin13.1.0 (64-bit)

locale:
[1] en_GB.UTF-8/en_GB.UTF-8/en_GB.UTF-8/C/en_GB.UTF-8/en_GB.UTF-8

attached base packages:
[1] stats graphics  grDevices utils datasets methods base     

other attached packages:
[1] psych_1.4.5 MASS_7.3-31

loaded via a namespace (and not attached):
[1] tools_3.1.0

Session Info EC2

R version 3.0.2 (2013-09-25)
Platform: x86_64-pc-linux-gnu (64-bit)

locale:
 [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
 [3] LC_TIME=en_US.UTF-8        LC_COLLATE=en_US.UTF-8    
 [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
 [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                 
 [9] LC_ADDRESS=C               LC_TELEPHONE=C            
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base     

other attached packages:
[1] psych_1.4.5 copula_0.999-10

loaded via a namespace (and not attached):
[1] ADGofTest_0.3     grid_3.0.2        gsl_1.9-10        lattice_0.20-24  
[5] Matrix_1.1-2      mvtnorm_0.9-99992 pspline_1.0-16    stabledist_0.6-6 
[9] stats4_3.0.2

Tuesday, 17 June 2014

Who will win the World Cup and which prediction model?

The World Cup finally kicked off last Thursday and I have seen some fantastic games already. The Netherlands are perhaps the strongest side so far, following their 5-1 victory over Spain.

To me the question is not only which country will win the World Cup, but also which prediction model will come closest to the actual results. Here I present three forecasters: FiveThirtyEight, a polling aggregation website; Groll & Schauberger, two academics from Munich; and finally Lloyd's of London, the insurance market.

The team around Nate Silver at FiveThirtyEight has used ESPN's Soccer Power Index to predict the outcomes of games and continues to update their model. Brazil is their clear favourite.

Andreas Groll & Gunther Schauberger from LMU Munich developed a model which, like the approach from FiveThirtyEight, aims to estimate the probability of a team winning the World Cup. But unlike FiveThirtyEight, they see Germany taking the trophy home.

Lloyd's chose a different approach for predicting the World Cup final. The insurance market looked at the risk aspect of the teams and ranked them by their insured value. Arguably, the better the team, the higher its insured value. As a result, Lloyd's predicts Germany to win the World Cup.

Quick reminder: what's the difference between insurance and gambling? Gambling introduces risk where none exists; insurance mitigates risk where risk exists.

Source: Research carried out by Cebr for Lloyd’s, June 2014.

Here are the three rankings combined in one table. None of the three predictions puts the Netherlands into the top 5, despite the fact that they played Spain in the final four years ago. Interestingly, as of this morning the bookmakers' favourites are Brazil, Argentina, Germany and the Netherlands, with implied probabilities of about 25%, 20%, 18% and 7% respectively.



R Code
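The original script is not shown here; as a small illustration of the last point above, the sketch below converts decimal bookmaker odds into implied probabilities. The odds are made-up numbers, chosen only to roughly match the figures quoted in the text:

# Illustrative decimal odds for the bookmakers' favourites (not actual quotes)
odds <- c(Brazil=4.0, Argentina=5.0, Germany=5.5, Netherlands=14.0)
# The implied probability is the reciprocal of the decimal odds;
# across all teams these add up to more than 1, reflecting the bookmaker's margin
round(1/odds, 2)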