Thursday, April 30, 2015

Matching Hagan PDE SABR with the one-step Andreasen-Huge SABR

Nearly two years ago, I looked at the arbitrage-free SABR of Andreasen-Huge in comparison with the arbitrage-free PDE of Hagan and showed how close the two ideas are: Andreasen-Huge rely on the normal Dupire forward PDE with a slightly simpler local volatility (no time-dependent exponential term), while Hagan works directly on the Fokker-Planck PDE (which can be thought of as the Dupire forward PDE for the density) and uses an expansion of the same order as the original SABR formula (which leads to an additional exponential term in the local volatility).

One clever idea from Andreasen-Huge is the use of a single time step. It turns out that the idea is not completely new: Daniel Duffy sent me some old papers from Shishkin on fitted schemes (here is one). They describe very much the same thing, except that Shishkin's concern is a good handling of a discontinuity in the initial condition, so he associates the step function with the cumulative density in order to fit the diffusion parameter, while Andreasen-Huge work directly with the call prices, as that is what they solve for.

One drawback of the Andreasen-Huge one-step method is its inability to match the standard SABR smile: the parameters do not have exactly the same meaning. It turns out that by simply scaling the local volatility by a constant factor, Andreasen-Huge matches the Hagan PDE volatilities quite closely.

[Figure: Implied Black volatilities]

[Figure: Probability density]

While this is interesting in itself, it is still not so simple to back out this factor without solving for it (and then the method loses its appeal).
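
Solving for the factor is not difficult in itself; it is the need to do so that defeats the purpose of a fast one-step method. Below is a minimal sketch of how one could back it out in Python. The functions hagan_pde_smile and andreasen_huge_smile, and the lvol_scale parameter, are hypothetical placeholders for the two PDE pricers, not code from this post.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def fit_lvol_scale(strikes, sabr_params, forward, expiry,
                       hagan_pde_smile, andreasen_huge_smile):
        # Reference smile from the Hagan arbitrage-free PDE (hypothetical pricer).
        target = hagan_pde_smile(strikes, sabr_params, forward, expiry)

        def objective(factor):
            # One-step Andreasen-Huge smile with the local vol scaled by 'factor'
            # (the lvol_scale keyword is an assumed interface).
            vols = andreasen_huge_smile(strikes, sabr_params, forward, expiry,
                                        lvol_scale=factor)
            return np.sum((vols - target) ** 2)

        # Assumed search interval: the factor is presumably of order one.
        return minimize_scalar(objective, bounds=(0.8, 1.2), method="bounded").x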

Saturday, April 18, 2015

Modern Programming Language for Monte-Carlo

A few recent programming languages sparked my interest:
  • Julia: because of its wide coverage of mathematical functions, and the great attention paid to the quality of the implementations. It also has some interesting web interfaces.
  • Dart: because it's a language focused purely on building apps for the web, and has a supposedly good VM.
  • Rust: it's the latest fad. It has interesting concepts around concurrency and a focus on being low level while remaining simpler than C.
I decided to see how well suited they would be to a simple Monte-Carlo simulation of a forward start option under the Black model. I am no expert at all in any of these languages, so this is a beginner's test. I compared the runtimes for executing 16K simulations times a multiplier.

Multipl. Scala  Julia  JuliaA  Dart  Python  Rust
1        0.03   0.08    0.09   0.03     0.4  0.004
10       0.07   0.02    0.06   0.11     3.9  0.04
100      0.51   0.21    0.40   0.88          0.23
1000     4.11   2.07    4.17   8.04          2.01
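
For reference, here is a minimal Python sketch of the kind of simulation being benchmarked: a forward-start call under the Black model, priced with 16K paths. The payoff convention max(F(T2) - k*F(T1), 0), the parameter values and the use of numpy's built-in normal generator are my assumptions for illustration; the actual implementations use a hand-coded AS241 inverse cumulative normal and explicit loops rather than numpy vectorization.

    import numpy as np

    def forward_start_call_mc(forward, k, t1, t2, vol, df, n_paths, seed=42):
        # Under the Black model the forward is a driftless lognormal.
        rng = np.random.RandomState(seed)  # Mersenne Twister
        z1 = rng.standard_normal(n_paths)
        z2 = rng.standard_normal(n_paths)
        f1 = forward * np.exp(-0.5 * vol**2 * t1 + vol * np.sqrt(t1) * z1)
        f2 = f1 * np.exp(-0.5 * vol**2 * (t2 - t1) + vol * np.sqrt(t2 - t1) * z2)
        payoff = np.maximum(f2 - k * f1, 0.0)  # strike fixed at k * F(T1)
        return df * payoff.mean()

    print(forward_start_call_mc(forward=100.0, k=1.0, t1=1.0, t2=2.0,
                                vol=0.2, df=0.99, n_paths=16 * 1024))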


About performance

I am quite impressed by Dart's performance versus Scala (or versus Java, which performs the same as Scala here), given that Dart is much less strict about types and that its focus is not at all on this kind of numerical work.

Julia's performance is great, provided one is careful about types. Julia is very finicky about conversions and optimizations; fortunately @time helps spot the issues (an inefficient conversion will often lead to a copy and thus high allocation). JuliaA is my first attempt, with an implicit, badly performing conversion of MersenneTwister to AbstractRNG. Julia is also slower at first, as the JIT cost shows up on the first run, very much like in Java (although it appears to be even worse).

Rust is the most impressive. I had to build with cargo build --release to produce a properly optimized binary; otherwise the performance is up to 7x worse.

About the languages

My Python code is not vectorized, just like the other implementations. While the code looks relatively clean, it is where I made the most errors, compared to Julia or Scala. Python's numpy/scipy stack isn't always great either: scipy's norm.ppf is very slow, slower than my hand-coded Python implementation of AS241.
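
A quick way to see that overhead (my own sketch, not code from the post): time norm.ppf called once per sample, as a non-vectorized loop does, against a single vectorized call. The post compares it against a hand-coded AS241 instead.

    import time
    import numpy as np
    from scipy.stats import norm

    u = np.random.RandomState(0).uniform(size=20000)

    start = time.time()
    loop_result = [norm.ppf(x) for x in u]  # one scalar call per sample
    loop_time = time.time() - start

    start = time.time()
    vec_result = norm.ppf(u)                # a single vectorized call
    vec_time = time.time() - start

    print("scalar loop: %.3fs, vectorized: %.3fs" % (loop_time, vec_time))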

Dart does not have fixed-size arrays: everything is a list. It also does not have a strict 64-bit int: integers can be arbitrarily large. The dev environment is OK but not great.

Julia is a bit funny: not very OO (no object methods) but more functional, although many OO concepts are there (type inheritance, type constructors). It was relatively straightforward to write, although I do not find the type conversion issues intuitive (a possible copy on conversion).

Rust took me the most time to write, as it has quite new concepts around mutable variables and "pointer" scopes. I relied on an existing MersenneTwister64 implementation that worked with the latest Rust. It was a bit disappointing to see that a dSFMT git project did not compile with the latest Rust, likely because Rust is still a bit too young. This does not sound very positive, but I found it the most interesting language to learn.

I was familiar with Scala before this exercise. I used a non-functional approach, with while loops, in order to make sure I had maximum performance. This is something I find a bit annoying in Scala: I always wonder whether, for performance, I need to write a while instead of a for, when the classic for makes much more sense (that, and the fact that the classic for leads to some annoying delegation in runtime errors and while debugging).

I relied on the default RNG for Dart, but on a Mersenne Twister for Scala, Julia, Python and Rust. All implementations use a hand-coded AS241 for the inverse cumulative normal.

Update 

Using FastMath.exp instead of Math.exp leads to slightly better performance for Scala:

Multipl. Scala
1        0.06
10       0.05
100      0.39
1000     2.66

I did not expect this to still be true in 2015 with the Java 8 Oracle JVM.
