OpenGamma: a first glance

OpenGamma released a public version of their Risk Management software at the end of April[0] and I've had a few weeks to read the docs, look at the code, run the tests, and basically play around with it. What I haven't done yet is generate a risk report.

Here are my first thoughts.

Disclosure: I am a London-based contractor who has maintained Risk systems for several years. My aim here is to help other London companies evaluate and use open source software for risk. I have no relationship with OpenGamma other than an interest in using their software.

On the whole I am impressed. It looks to be a nice clean set of code which I'd be happy to work with long term. What is less obvious is whether this commercial company will succeed in creating an open source community. They have nailed their colours to the mast: they will sell you useful "optional" components to save developer time and make things easier to implement. For this to work, the open source system must be usable without those commercial components.

So far I've been getting a feel for the system. There are a number of Core Concepts[1] which I am fairly comfortable with, and I've been seeing how these are represented in code. I see most of this as a big software integration project. For each of the main data structures (portfolio trees, securities, positions, etc.) there is the concept of a "Master", which could be implemented by OpenGamma as a crude web UI plus database, or alternatively fed from some other existing system - a "Source".[2] (I assume the latter will be the case almost everywhere.) This sounds straightforward, but of course the plumbing can get quite complicated for a large organisation trading in many product types.
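The Master/Source split can be sketched in a few lines of Java. To be clear, the interface and class names below are my own illustration, not OpenGamma's actual API: the point is just that downstream code reads through a narrow "Source" view, which a database-backed Master or an adapter over an existing in-house system can equally well implement.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical read-only view: the only thing downstream code depends on.
interface SecuritySource {
    String get(String identifier);
}

// A toy "Master" backed by an in-memory map; in practice this would be a
// database-backed master, or an adapter over an existing trading system.
class InMemorySecurityMaster implements SecuritySource {
    private final Map<String, String> store = new HashMap<>();

    void add(String identifier, String description) {
        store.put(identifier, description);
    }

    @Override
    public String get(String identifier) {
        return store.get(identifier);
    }
}

public class SourceSketch {
    public static void main(String[] args) {
        InMemorySecurityMaster master = new InMemorySecurityMaster();
        master.add("VOD.L", "Vodafone Group ordinary shares");
        // Consumers see only the Source interface, so swapping the backing
        // Master for an external feed does not disturb them.
        SecuritySource source = master;
        System.out.println(source.get("VOD.L"));
    }
}
```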

The fly in the ointment is that, as an individual, I cannot conjure up good sample data. I really need commercial trading partners for that. The worst example of this is in the provision of Market Data Snapshots and Historical Data. I am asking around my contacts to see if they have old data I can use, preferably with matching risk reports.

My first suggestion to OG would be to ship a pack of dummy data so that we can do end-to-end testing rather than just unit testing.

There is a fairly extensive analytics library with, of course, enough greeks to hold an Olympics. I'm not really sure how we test this. In the past, when I've dealt with valuation libraries like this, we had an extensive set of tests which the Risk Managers had approved. Yes, there are a lot of unit tests, but I don't want to be the person who signs off on them.
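To give a flavour of the kind of cross-check I'd want a Risk Manager to approve, here is a self-contained sketch (my own code, not OpenGamma's) that verifies an analytic Black-Scholes call delta against a finite-difference bump of the price. The normal CDF uses the standard Zelen-Severo polynomial approximation.

```java
public class GreekCheck {
    // Standard normal CDF via the Zelen-Severo polynomial approximation
    // (accurate to roughly 1e-7).
    static double cdf(double x) {
        if (x < 0) return 1.0 - cdf(-x);
        double k = 1.0 / (1.0 + 0.2316419 * x);
        double poly = k * (0.319381530 + k * (-0.356563782
                + k * (1.781477937 + k * (-1.821255978 + k * 1.330274429))));
        double pdf = Math.exp(-0.5 * x * x) / Math.sqrt(2.0 * Math.PI);
        return 1.0 - pdf * poly;
    }

    // Black-Scholes European call price.
    static double callPrice(double s, double k, double r, double vol, double t) {
        double d1 = (Math.log(s / k) + (r + 0.5 * vol * vol) * t) / (vol * Math.sqrt(t));
        double d2 = d1 - vol * Math.sqrt(t);
        return s * cdf(d1) - k * Math.exp(-r * t) * cdf(d2);
    }

    // Analytic delta = N(d1).
    static double callDelta(double s, double k, double r, double vol, double t) {
        double d1 = (Math.log(s / k) + (r + 0.5 * vol * vol) * t) / (vol * Math.sqrt(t));
        return cdf(d1);
    }

    public static void main(String[] args) {
        double s = 100, k = 100, r = 0.05, vol = 0.2, t = 1.0;
        double analytic = callDelta(s, k, r, vol, t);
        // Cross-check: central finite difference of the price.
        double h = 0.01;
        double bumped = (callPrice(s + h, k, r, vol, t)
                       - callPrice(s - h, k, r, vol, t)) / (2 * h);
        System.out.printf("analytic=%.6f bumped=%.6f%n", analytic, bumped);
        if (Math.abs(analytic - bumped) > 1e-4) {
            throw new AssertionError("delta mismatch");
        }
    }
}
```

An approved test pack would pin numbers like these against risk reports the business has already signed off, rather than against the library's own output.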

Another developer recently asked what he could do to help. My suggestion is to come up with more unit tests!

The minimum requirements are pretty scary - a 64-bit OS, and the main program requires at least a 4 GB heap. But is that so surprising? I see that you can spread compute nodes out onto remote boxes, but I am not sure I understand the technology yet.

They seem to be using Apache ActiveMQ for JMS messaging, and something called Fudge for encoding objects for transfer. I'm not terribly happy about yet another serialization/deserialization format, but I guess they must need it.
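I haven't dug into Fudge's actual wire format or API, but the general idea of a self-describing binary message - field names travelling with their values, so the receiver needs no pre-shared schema - can be illustrated in plain Java. This is purely illustrative and bears no resemblance to Fudge's real encoding.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;

public class MessageSketch {
    // Encode named double fields as: count, then (name, value) pairs.
    static byte[] encode(Map<String, Double> fields) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeInt(fields.size());
        for (Map.Entry<String, Double> e : fields.entrySet()) {
            out.writeUTF(e.getKey());     // field name travels with the data
            out.writeDouble(e.getValue());
        }
        return bytes.toByteArray();
    }

    // Decode without any external schema: the message describes itself.
    static Map<String, Double> decode(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        Map<String, Double> fields = new LinkedHashMap<>();
        int n = in.readInt();
        for (int i = 0; i < n; i++) {
            fields.put(in.readUTF(), in.readDouble());
        }
        return fields;
    }

    public static void main(String[] args) throws IOException {
        Map<String, Double> msg = new LinkedHashMap<>();
        msg.put("bid", 101.25);
        msg.put("ask", 101.75);
        byte[] wire = encode(msg);        // what would cross the JMS broker
        System.out.println(decode(wire)); // round-trips to the same fields
    }
}
```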

As for other code components: Spring for config files and (I guess) Web MVC. They make heavy use of Joda Beans, which I suppose must link up to Fudge somehow. Hibernate for ORM, and they are supporting PostgreSQL since they needed an open source database (by default it uses in-memory Hypersonic). PostgreSQL may be a stumbling block for some organisations - but perhaps those won't be the ones considering an open source Risk system.

It has been suggested to me that the unit test coverage is quite good.

This wasn't quite what I meant, but it is good to see. :-)