Wednesday, September 30, 2015

Crank-Nicolson and Rannacher Issues with Touch options

I just stumbled upon a particularly illustrative case where the Crank-Nicolson finite difference scheme behaves badly, and where the Rannacher smoothing (starting the scheme with two backward Euler steps) is less than ideal: double one touch and double no touch options.

It is particularly evident when the option is certain to be hit, for example when the barriers are narrow: the delta should then be around zero, and so should the gamma. Let's consider a double one touch option with spot=100, upBarrier=101, downBarrier=99.9, vol=20%, T=1 month and a payout of 50K.
Crank-Nicolson shows big spikes in the delta near the boundary
Rannacher shows spikes in the delta as well
The Crank-Nicolson spikes are so high that the price itself is actually off.

The Rannacher smoothing reduces the spikes by 100x, but they are still quite large, and would be larger had we placed the spot closer to the boundary. The gamma is worse. Note that we applied the smoothing only at maturity. In reality, as the barrier is continuous, the smoothing should really be applied at each step, but then the scheme would not be so different from a simple backward Euler.

In contrast, with a proper second order finite difference scheme, there is no spike.
Delta with the TR-BDF2 finite difference method - the scale goes from -0.00008 to 0.00008.
Delta with the Lawson-Morris finite difference scheme - the scale goes from -0.00005 to 0.00005
Both TR-BDF2 and Lawson-Morris (based on a local Richardson extrapolation of backward Euler) have a very low delta error; similarly, their gamma is very clean. This is reminiscent of the behavior on American options, but the effect is magnified here.
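For reference, here is the TR-BDF2 step in its usual form, with $\gamma = 2-\sqrt{2}$ and writing the space-discretized problem as $\partial_t U = \mathcal{L} U$ (notation mine):
$$U^{n+\gamma} = U^n + \frac{\gamma\,\Delta t}{2}\left(\mathcal{L}U^{n} + \mathcal{L}U^{n+\gamma}\right),$$
$$U^{n+1} = \frac{1}{\gamma(2-\gamma)}U^{n+\gamma} - \frac{(1-\gamma)^2}{\gamma(2-\gamma)}U^{n} + \frac{1-\gamma}{2-\gamma}\,\Delta t\,\mathcal{L}U^{n+1}.$$
With $\gamma = 2-\sqrt{2}$ both stages lead to the same linear system matrix $I - (1 - \tfrac{1}{\sqrt{2}})\Delta t\,\mathcal{L}$, so one step costs about as much as Crank-Nicolson while the scheme is L-stable, which is what damps the oscillations.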

Wednesday, September 02, 2015


I was wondering how to generate a nice cloud-like texture with a simple program. I first thought about using Brownian motion, but of course if one uses it raw, with one pixel representing one step in time, it is just going to look like a very noisy and grainy picture like this:
Normal noise

There is however a nice continuous representation of Brownian motion: the Paley-Wiener representation.
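In its usual statement, for $t \in [0, 2\pi]$ and i.i.d. standard normal variables $\xi_0, \xi_1, \dots$, it reads
$$W_t = \xi_0 \frac{t}{\sqrt{2\pi}} + \frac{2}{\sqrt{\pi}} \sum_{n=1}^{\infty} \xi_n \frac{\sin(nt/2)}{n},$$
and truncating the series after a few dozen terms gives a smooth approximate Brownian path.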

This can produce an interesting smooth pattern, but it is just 1D. In the following picture, I apply it to each row (the column index being time), and then to each column (the row index being time). Of course this produces a symmetric picture, especially as I reused the same random numbers.
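A rough Go sketch of that construction (my own reconstruction, not the original code: I simply sum the row pass and the column pass, reusing the same draws, and rescale to gray levels):

package main

import (
  "image"
  "image/color"
  "image/png"
  "math"
  "math/rand"
  "os"
)

// paleyWienerPath evaluates the truncated Paley-Wiener series at n
// equidistant points of [0, 2*pi]; xi[0] drives the linear term.
func paleyWienerPath(n int, xi []float64) []float64 {
  w := make([]float64, n)
  for i := range w {
    t := 2 * math.Pi * float64(i) / float64(n-1)
    v := xi[0] * t / math.Sqrt(2*math.Pi)
    for k := 1; k < len(xi); k++ {
      v += 2 / math.Sqrt(math.Pi) * xi[k] * math.Sin(float64(k)*t/2) / float64(k)
    }
    w[i] = v
  }
  return w
}

func main() {
  const size, terms = 512, 64
  rng := rand.New(rand.NewSource(42))
  xi := make([]float64, terms)
  for i := range xi {
    xi[i] = rng.NormFloat64()
  }
  w := paleyWienerPath(size, xi) // same draws for rows and columns => symmetric picture

  // rescale w[i]+w[j] to [0, 255] gray levels
  lo, hi := w[0], w[0]
  for _, v := range w {
    if v < lo {
      lo = v
    }
    if v > hi {
      hi = v
    }
  }
  img := image.NewGray(image.Rect(0, 0, size, size))
  for i := 0; i < size; i++ {
    for j := 0; j < size; j++ {
      g := (w[i] + w[j] - 2*lo) / (2 * (hi - lo))
      img.SetGray(j, i, color.Gray{Y: uint8(255 * g)})
    }
  }
  f, _ := os.Create("brownian.png")
  defer f.Close()
  png.Encode(f, img)
}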
If I use new random numbers for the columns, it is still symmetric, but destructive rather than constructive.

It turns out that spatial processes are more complex than I first imagined. It is not as simple as using an N-dimensional Brownian motion, as that would produce a picture very similar to the 1-dimensional one. But this paper has a nice overview of spatial processes. Interestingly, they even suggest generating a Gaussian process using a precision matrix (the inverse of the covariance matrix). I had never thought about doing such a thing and I am not sure what the advantage of such a scheme is.
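For what it is worth, the mechanics are simple: if $Q = L L^\top$ is the Cholesky factorization of the precision matrix and $z \sim N(0, I)$, then the vector $x$ obtained by solving $L^\top x = z$ satisfies
$$\operatorname{Cov}(x) = L^{-\top} L^{-1} = (L L^\top)^{-1} = Q^{-1},$$
so $x \sim N(0, Q^{-1})$ without ever forming the covariance matrix. The usual argument in that literature is that the precision matrix of a Markov random field is sparse, which makes the factorization cheap.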

There is a standard graphics technique to generate nice textures, originating from Ken Perlin's work for Disney; it is simply called Perlin noise. It turns out that several web pages in the top Google results confuse simple Perlin noise with the fractal sum of noise that Ken Perlin also helped popularize (see his slides: standard Perlin noise, fractal noise). Those pages also believe that the latter is simpler and faster. But there are two issues with the fractal sum of noise: the first is that it relies on an existing noise function - you need to build one first (it can be done with a random number generator and an interpolator); the second is that it ends up being more complex to program, and likely more costly to evaluate as well, see for example the code needed here. The fractal sum of noise is really a complementary technique.

The insight of Perlin noise is not to generate random color values that would be mapped to shades of grey as in my examples, but to generate random gradients, and to interpolate between those gradients in a smooth manner. In computer graphics, the cosine function is often used to give a little bit of non-linearity in the colors. A good approximation, usually used as a replacement in this context, is 3x^2 - 2x^3. It's not much more complicated than that; this web page explains it in great detail. It can be programmed in a few lines of code.

very procedural and non-optimized Go code for Perlin noise
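Independently of the code linked above, a bare-bones 2D sketch of the idea (my own, with my own names: a grid of random unit gradients, the 3x^2 - 2x^3 fade, and bilinear blending) could look like this:

package main

import (
  "math"
  "math/rand"
)

// fade is the smoothstep 3x^2 - 2x^3 used instead of a cosine.
func fade(x float64) float64 { return x * x * (3 - 2*x) }

func lerp(a, b, t float64) float64 { return a + t*(b-a) }

// PerlinNoise holds a lattice of random unit gradients.
type PerlinNoise struct {
  gx, gy [][]float64
}

func NewPerlinNoise(nx, ny int, rng *rand.Rand) *PerlinNoise {
  p := &PerlinNoise{gx: make([][]float64, nx+1), gy: make([][]float64, ny+1)}
  for i := 0; i <= nx; i++ {
    p.gx[i] = make([]float64, ny+1)
    p.gy[i] = make([]float64, ny+1)
    for j := 0; j <= ny; j++ {
      theta := 2 * math.Pi * rng.Float64()
      p.gx[i][j] = math.Cos(theta)
      p.gy[i][j] = math.Sin(theta)
    }
  }
  return p
}

// At evaluates the noise at (x, y), with x in [0, nx) and y in [0, ny).
func (p *PerlinNoise) At(x, y float64) float64 {
  i, j := int(x), int(y)
  fx, fy := x-float64(i), y-float64(j)
  // dot product of each corner gradient with the offset from that corner to (x, y)
  dot := func(di, dj int) float64 {
    return p.gx[i+di][j+dj]*(fx-float64(di)) + p.gy[i+di][j+dj]*(fy-float64(dj))
  }
  u, v := fade(fx), fade(fy)
  return lerp(lerp(dot(0, 0), dot(1, 0), u), lerp(dot(0, 1), dot(1, 1), u), v)
}

Summing several octaves of At at increasing frequencies and decreasing amplitudes then gives the fractal sum of noise mentioned above.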

Saturday, August 22, 2015

Go for Monte-Carlo

A few months ago, I already looked at the Julia, Dart, Rust and Scala programming languages to see how practical they could be for simple Monte-Carlo option pricing.

I forgot the Go language. I had tried it 1 or 2 years ago and, at that time, did not enjoy it much. Looking at Go 1.5 benchmarks on the computer language shootout, I was surprised to see that it is now so close to Java performance, while having a GC that guarantees pauses of less than 10ms and consuming much less memory.

I am in general a bit skeptical about those benchmarks; some can be rigged. A few years ago, I tried my hand at the thread ring test, and found that it actually performed fastest on a single thread, while it is supposed to measure the language's threading performance. I looked yesterday at one of the Go sources (I think it was for pidigits) and saw that it just calls a C library (gmp) to compute with big integers. It's no surprise then that Go would be faster than Java on this test.

So what about my small Monte-Carlo test?
Well it turns out that Go is quite fast on it:
Multiplier  Rust   Go
1           0.005  0.007
10          0.03   0.03
100         0.21   0.29
1000        2.01   2.88

It is faster than Java/Scala and not too far off Rust, except if one uses FastMath in Scala: then the longest test is slightly faster with Java (but not the other ones).
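For context, here is the kind of loop being timed, as a rough sketch only (mine, not the actual benchmark code: the real test goes through an inverse cumulative normal as mentioned further down, whereas this sketch uses NormFloat64 for brevity, and all names are mine):

package main

import (
  "fmt"
  "math"
  "math/rand"
  "time"
)

// monteCarloCall prices a European call under Black-Scholes by sampling
// the terminal asset price directly (one step, exact log-normal law).
func monteCarloCall(spot, strike, vol, r, tte float64, paths int, rng *rand.Rand) float64 {
  drift := (r - 0.5*vol*vol) * tte
  stdDev := vol * math.Sqrt(tte)
  df := math.Exp(-r * tte)
  sum := 0.0
  for i := 0; i < paths; i++ {
    z := rng.NormFloat64()
    s := spot * math.Exp(drift+stdDev*z)
    if s > strike {
      sum += (s - strike) * df
    }
  }
  return sum / float64(paths)
}

func main() {
  rng := rand.New(rand.NewSource(1))
  start := time.Now()
  price := monteCarloCall(100, 100, 0.2, 0.01, 1.0, 1000000, rng)
  fmt.Printf("price=%.4f elapsed=%v\n", price, time.Since(start))
}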

There are some issues with the Go language: there is no operator overloading, which can make matrix/vector algebra more tedious, and there are no generics/templates. The latter is somewhat mitigated by the automatic interface implementation. And fortunately for the former, complex numbers are a standard type. Still, automatic differentiation would be painful.

Still, it was extremely quick to grasp the language and write code, because it is so simple, especially when compared to Rust. But then, contrary to Rust, there is not as much safety provided by the language. Rust is quite impressive on that front (but unfortunately that implies less readable code). I'd say that Go could become a serious alternative to Java.

I also found an interesting minor performance issue with the default Go Rand.Float64: the library converts an Int63 to a double precision number this way:

func (r *Rand) Float64() float64 {
  f := float64(r.Int63()) / (1 << 63)
  if f == 1 {
    f = 0
  }
  return f
}
I was interested in having a number in (0,1) and not [0,1), so I just used the conversion pattern from MersenneTwister 64 code:

f := (float64(r.Int63() >> 11) + 0.5) * (1.0/4503599627370496.0)
The reasoning behind this latter code is that the mantissa is 52 bits, and this is the most accuracy we can have between 0 and 1. There is no need to go further, and this also avoids the issue around 1. It is also straightforward that it preserves the uniform property, while it is not so clear to me that r.Int63()/2^63 preserves uniformity: double precision accuracy is higher around 0 (where the exponent can be used) and lower around 1, so there are going to be many more identical results near 1 than near 0.
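Concretely, r.Int63() >> 11 is an integer k uniform on {0, ..., 2^52 - 1}, and 4503599627370496 is 2^52, so the expression returns (k + 0.5) * 2^-52: the midpoints of a regular grid of width 2^-52, the smallest value being 2^-53 and the largest 1 - 2^-53, all exactly representable and all distinct.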

It turns out that the if check adds a 5% performance penalty on this test, likely because of processor caching issues. I was surprised by that, since there are many other ifs afterwards in the code, for the inverse cumulative function and for the payoff.