Saturday, August 22, 2015

Go for Monte-Carlo

A few months ago, I already looked at the Julia, Dart, Rust and Scala programming languages to see how practical they could be for a simple Monte-Carlo option pricing.

I forgot the Go language. I had tried it one or two years ago and, at the time, did not enjoy it much. Looking at the Go 1.5 benchmarks on the computer language shootout, I was surprised that it now seemed so close to Java performance, while having a GC that guarantees pauses of less than 10ms and consuming much less memory.

I am in general a bit skeptical about those benchmarks; some can be rigged. A few years ago, I tried my hand at the thread ring test and found that it actually performed fastest on a single thread, while it is supposed to measure the language's threading performance. Yesterday I looked at one of the Go sources (I think it was for pidigits) and saw that it just called a C library (GMP) to compute with big integers. It is no surprise, then, that Go would be faster than Java on that test.

So what about my small Monte-Carlo test?
Well it turns out that Go is quite fast on it:
Multipl.  Rust   Go
1         0.005  0.007
10        0.03   0.03
100       0.21   0.29
1000      2.01   2.88


It is faster than Java/Scala and not too far off Rust, except if one uses FastMath in Scala: in that case the longest test is slightly faster with Java (but not the other ones).
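
For reference, here is a rough sketch in Go of this kind of Monte-Carlo pricer. It is not my actual benchmark (which draws uniform numbers, maps them through an inverse cumulative normal and supports a more general payoff), just a minimal illustration of what is being timed:

package main

import (
	"fmt"
	"math"
	"math/rand"
)

// priceCall estimates a European call price of maturity t and strike k on an
// asset with spot s0 and volatility vol, assuming zero rates, by Monte-Carlo
// sampling of the terminal Black-Scholes distribution.
func priceCall(s0, k, vol, t float64, paths int, r *rand.Rand) float64 {
	drift := -0.5 * vol * vol * t
	stdDev := vol * math.Sqrt(t)
	sum := 0.0
	for i := 0; i < paths; i++ {
		s := s0 * math.Exp(drift+stdDev*r.NormFloat64())
		if s > k {
			sum += s - k
		}
	}
	return sum / float64(paths)
}

func main() {
	r := rand.New(rand.NewSource(1))
	fmt.Printf("Monte-Carlo price: %.4f\n", priceCall(100, 100, 0.2, 1.0, 1000000, r))
}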

There are some issues with the Go language: there is no operator overloading, which can make matrix/vector algebra more tedious, and there are no generics/templates. The latter is somewhat mitigated by the automatic interface implementation. And fortunately for the former, complex numbers are a standard type. Still, automatic differentiation would be painful.
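
To illustrate the interface point, here is a minimal example (the names are mine): any type with the right method satisfies the interface implicitly, and complex arithmetic is available out of the box.

package main

import "fmt"

// Payoff is satisfied by any type that has a Value method: no "implements"
// declaration is needed, which partly compensates for the lack of generics.
type Payoff interface {
	Value(spot float64) float64
}

type VanillaCall struct{ Strike float64 }

func (c VanillaCall) Value(spot float64) float64 {
	if spot > c.Strike {
		return spot - c.Strike
	}
	return 0
}

func main() {
	var p Payoff = VanillaCall{Strike: 100} // implicit interface implementation
	fmt.Println(p.Value(120))
	z := complex(1, 2) * complex(3, -1) // complex128 is a built-in type
	fmt.Println(real(z), imag(z))
}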

Still, it was extremely quick to grasp and to write code in, because the language is so simple, especially when compared to Rust. But then, contrary to Rust, there is not as much safety provided by the language. Rust is quite impressive on this side (but unfortunately that implies less readable code). I'd say that Go could become a serious alternative to Java.

I also found an interesting minor performance issue with the default Go Rand.Float64: the library converts an Int63 to a double precision number this way:

// Float64 returns, as a float64, a pseudo-random number in [0.0,1.0).
func (r *Rand) Float64() float64 {
	f := float64(r.Int63()) / (1 << 63)
	if f == 1 {
		f = 0
	}
	return f
}

I was interested in having a number in (0,1) and not [0,1), so I just used the conversion pattern from the MersenneTwister 64 code:

f := (float64(r.Int63() >> 11) + 0.5) * (1.0 / 4503599627370496.0) // 4503599627370496 = 2^52
 
The reasoning behind this latter code is that the mantissa is 52 bits, so this is the most accuracy we can have between 0 and 1. There is no need to go further, and it also avoids the issue around 1. It is also straightforward to see that it preserves the uniform property, while it is not so clear to me that r.Int63()/2^63 preserves uniformity: double precision accuracy is higher around 0 (as the exponent part can be used there) and lower around 1, so there are going to be many more identical results near 1 than near 0.
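
Wrapped into a small self-contained helper (the function name is mine), this is what the pattern looks like in practice:

package main

import (
	"fmt"
	"math/rand"
)

// float64Open01 returns a pseudo-random number in the open interval (0,1):
// it keeps the 52 most significant bits of the 63-bit integer and shifts the
// result by half a unit, so neither 0 nor 1 can be returned and no branch is needed.
func float64Open01(r *rand.Rand) float64 {
	return (float64(r.Int63()>>11) + 0.5) * (1.0 / 4503599627370496.0) // 1/2^52
}

func main() {
	r := rand.New(rand.NewSource(42))
	for i := 0; i < 3; i++ {
		fmt.Println(float64Open01(r))
	}
}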

It turns out that the if check adds a 5% performance penalty on this test, likely because of processor caching issues. I was surprised by that, since there are many other ifs afterwards in the code, for the inverse cumulative function and for the payoff.

Saturday, July 25, 2015

Bumping Correlations

In his book "Monte Carlo Methods in Finance", P. Jäckel explains a simple way to clean up a correlation matrix. When a given correlation matrix is not positive semi-definite, the idea is to do a singular value decomposition (SVD), replace the negative eigenvalues by 0, and renormalize the corresponding eigenvectors accordingly.
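
In formulas, my reading of the procedure is the following (the notation is mine, not Jäckel's): decompose \( C = S \Lambda S^T \), truncate the eigenvalues \( \lambda_i' = \max(\lambda_i, 0) \), form \( B = S \sqrt{\Lambda'} \), and rescale so that the result has a unit diagonal, \( \hat{C}_{ij} = (B B^T)_{ij} / \sqrt{(B B^T)_{ii} (B B^T)_{jj}} \).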

Among the cited applications are "stress testing and scenario analysis for market risk" and "comparative pricing in order to ascertain the extent of correlation exposure for multi-asset derivatives", for which "in many of these cases we end up with a matrix that is no longer positive semi-definite".

It turns out that if one bumps an invalid correlation matrix (the input), which is then cleaned up automatically, the effect can be a very different bump. Depending on how familiar you are with the SVD, this may be more or less obvious from the procedure.

As a simple illustration I take the matrix representing 3 assets A, B, C with rho_ab = -0.6, rho_ac = rho_bc = -0.5.

   1.00000  -0.60000  -0.50000
  -0.60000   1.00000  -0.50000
  -0.50000  -0.50000   1.00000

For those rho_ac and rho_bc, the correlation matrix is not positive definite unless rho_ab is in the range (-0.5, 1). One way to verify this is to note that, since the 2x2 leading minor 1 - rho_ab^2 is already positive, positive definiteness is here equivalent to a positive determinant. The determinant is 1 - 2*0.25 - rho_ab^2 + 2*0.25*rho_ab = 0.5 + 0.5*rho_ab - rho_ab^2, which is positive exactly when rho_ab is in (-0.5, 1).
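
As a quick sanity check, here is a small self-contained Go snippet (the function name is mine) that evaluates this determinant for a few values of rho_ab, using the standard closed form for a 3x3 correlation matrix:

package main

import "fmt"

// det3 returns the determinant of the 3x3 correlation matrix
// [[1, rhoAB, rhoAC], [rhoAB, 1, rhoBC], [rhoAC, rhoBC, 1]].
func det3(rhoAB, rhoAC, rhoBC float64) float64 {
	return 1 + 2*rhoAB*rhoAC*rhoBC - rhoAB*rhoAB - rhoAC*rhoAC - rhoBC*rhoBC
}

func main() {
	for _, rhoAB := range []float64{-0.6, -0.5, 0.0, 0.5, 1.0} {
		fmt.Printf("rho_ab = %5.2f  det = %6.3f\n", rhoAB, det3(rhoAB, -0.5, -0.5))
	}
}

With rho_ab = -0.6, the determinant is -0.16, confirming that the input matrix above is not positive semi-definite.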

After applying P. Jäckel's procedure, we end up with:

   1.00000  -0.56299  -0.46745
  -0.56299   1.00000  -0.46745
  -0.46745  -0.46745   1.00000

If we now bump rho_bc by 1% (absolute), we end up after cleanup with:

   1.00000  -0.56637  -0.47045
  -0.56637   1.00000  -0.46081
  -0.47045  -0.46081   1.00000

It turns out that rho_bc has changed by only 0.66%, while rho_ac has changed by -0.30% and rho_ab by -0.34%. So our initial bump of (0, 0, 1) has been translated into a bump of (-0.34, -0.30, 0.66). In other words, this does not work for computing sensitivities.

One can instead optimize to obtain the nearest correlation matrix in some norm, typically minimizing \( \|C - X\|_F \) over the positive semi-definite matrices \( X \) with unit diagonal. Jäckel proposes a hypersphere decomposition based optimization, using the SVD solution as initial guess. Higham proposed a specific algorithm just for that purpose. It turns out that, on this example, they converge to the same solution (if we use the same norm). Out of curiosity, I tried it to see whether this would lead to some improvement. The first matrix becomes

   1.00000  -0.56435  -0.46672
  -0.56435   1.00000  -0.46672
  -0.46672  -0.46672   1.00000

And the bumped one becomes

   1.00000  -0.56766  -0.46984
  -0.56766   1.00000  -0.46002
  -0.46984  -0.46002   1.00000

We find the same issue again: rho_bc has changed by only 0.67%, rho_ac by -0.31% and rho_ab by -0.33%. We also see that the SVD-based correlation matrix and the true nearest correlation matrix are quite close, as noticed by P. Jäckel.

Of course, one should apply the bump directly to the cleaned-up matrix, in which case it will actually work as expected, unless the bump produces another non positive definite matrix; then we would again have correlation leaking a bit everywhere, and it is not entirely clear what kind of meaning the risk figures would have.

Monday, July 13, 2015

Andreasen Huge extrapolation

There are not many arbitrage free extrapolation schemes. The Benaim et al. (BDK) extrapolation is one of the few that claims to be arbitrage free. However, despite the paper's title, it is not truly arbitrage free: the density might be positive, but the forward is not preserved by the implied density. It can also lead to wings that do not obey Lee's moment condition.
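
As a reminder (stated from memory, so take the exact form with a grain of salt), Lee's moment formula bounds the growth of the wings: the total implied variance can grow at most linearly in the log-strike, \( \limsup_{K \to \infty} \sigma_{BS}^2(K)\, T / \ln K \le 2 \), with the actual slope tied to the number of finite moments of the underlying distribution.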

On a Wilmott forum, P. Caspers proposed the following counter-example based on extrapolating SABR: \( \alpha=15\%, \beta=80\%, \nu=50\%, \rho=-48\%, f=3\%, T=20.0 \). He cut this smile at 2.5% and 6% and used the BDK extrapolation scheme with \( \mu=\nu=1 \).

A truly arbitrage free extrapolation can be obtained through the Andreasen-Huge volatility interpolation, making sure the grid is wide enough to allow extrapolation. Their method is basically a one-step finite difference implicit Euler scheme applied to a local volatility parameterization that has as many parameters as prices. The method is presented with a piecewise constant local volatility, but actually used with a piecewise linear local volatility in their example.
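
To make the idea concrete, here is a minimal sketch in Go of such a one-step implicit Euler scheme (all names are mine, and the local volatility is a fixed toy function in absolute terms rather than the calibrated piecewise parameterization of the paper, so this only illustrates the mechanics, not the calibration):

package main

import (
	"fmt"
	"math"
)

// oneStepImplicit performs a single implicit Euler step of the forward PDE
// dC/dT = 0.5 * vol(K)^2 * d2C/dK2 on a uniform strike grid, starting from
// the intrinsic value max(F-K, 0) at T=0, and returns call prices at T.
// In Andreasen-Huge, vol would be a piecewise local volatility calibrated so
// that the resulting prices match the market quotes; here it is simply given.
func oneStepImplicit(forward, t float64, strikes []float64, vol func(k float64) float64) []float64 {
	n := len(strikes)
	h := strikes[1] - strikes[0]
	lower := make([]float64, n)
	diag := make([]float64, n)
	upper := make([]float64, n)
	rhs := make([]float64, n)
	for j := 0; j < n; j++ {
		rhs[j] = math.Max(forward-strikes[j], 0) // initial condition: intrinsic value
	}
	diag[0], diag[n-1] = 1, 1 // boundary rows: keep the intrinsic value
	for j := 1; j < n-1; j++ {
		a := 0.5 * vol(strikes[j]) * vol(strikes[j]) * t / (h * h)
		lower[j], diag[j], upper[j] = -a, 1+2*a, -a
	}
	return solveTridiagonal(lower, diag, upper, rhs)
}

// solveTridiagonal solves a tridiagonal linear system with the Thomas algorithm.
func solveTridiagonal(lower, diag, upper, rhs []float64) []float64 {
	n := len(diag)
	c := make([]float64, n)
	d := make([]float64, n)
	c[0], d[0] = upper[0]/diag[0], rhs[0]/diag[0]
	for j := 1; j < n; j++ {
		m := diag[j] - lower[j]*c[j-1]
		c[j] = upper[j] / m
		d[j] = (rhs[j] - lower[j]*d[j-1]) / m
	}
	x := make([]float64, n)
	x[n-1] = d[n-1]
	for j := n - 2; j >= 0; j-- {
		x[j] = d[j] - c[j]*x[j+1]
	}
	return x
}

func main() {
	forward := 0.03
	strikes := make([]float64, 201)
	for j := range strikes {
		strikes[j] = float64(j) * 0.001 // wide grid, up to 20%, to allow extrapolation
	}
	toyVol := func(k float64) float64 { return 0.003 } // flat absolute volatility
	prices := oneStepImplicit(forward, 20.0, strikes, toyVol)
	fmt.Printf("price at K=3%%: %.6f\n", prices[30])
}

The density is then obtained by differentiating the resulting prices twice with respect to the strike.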
Figure: Smile

Figure: Density with piecewise linear local volatility
There is still a tiny oscillation that makes the density negative, but one understands why typical extrapolations fail on this example: the change in density must be very steep.
Note that moving the left extrapolation point even closer to the forward might fix the BDK negative density, but we are already very close, and one can wonder whether going closer is really a good idea, since we would effectively use a somewhat arbitrary extrapolation in most of the interpolation zone.

It turns out that we can also use a cubic spline local volatility with linear extrapolation; the density then looks like this:
Figure: Density with cubic spline local volatility
Interestingly, the right side of the density is much better captured.
The wiggle persists, although it is smaller. This is likely because I am using a cubic spline on top of the finite difference prices (in order to have a C2 density); using a better C2 convexity-preserving interpolation would likely remove this artefact.

Those figures also show why relying on extrapolation alone to fix SABR is not necessarily a good idea: even a truly arbitrage free extrapolation can produce a not very meaningful density. The proper solution is to use Hagan's arbitrage free SABR PDE, which would be nearly as fast in this case.