This is the first article in Big Data Profiles, a series profiling individuals and teams who have successfully delivered large-volume, data-driven systems.
Martin Thompson is a high-performance and low-latency specialist, and one of the creators of the Disruptor. While at LMAX and Betfair, Martin built and tested systems capable of handling hundreds of thousands of transactions per second with response times in the microseconds. The business domains of these systems (betting, trading) meant that complete continuation of service under extreme load was crucial.
Every transaction counts
When dealing with financial systems, even a single lost transaction can create bad publicity, put the viability of the whole system in question, and cause customers to seek legal recourse through regulators. Repeating failed transactions at a later point may not be possible, as the business opportunity has already been missed. The ability to explain and prove system behaviour (for instance, why a series of transactions was carried out in a particular order) and performant reporting on historical data (stretching into the distant past) are concerns that have to be baked in from the very start. Verifying that specific invariants are maintained is a central part of the functional testing.
Pipelines
One of Martin’s teams had great success using build pipelines (as described in Dave Farley and Jez Humble’s Continuous Delivery book) - a series of stages (build and unit tests, integration tests and acceptance tests, tests of cross-functional requirements, exploratory tests) through which the software travels and is exercised under increasingly production-like configurations and environments. The pipeline was geared towards providing feedback to the team as quickly as possible; for instance, certain acceptance tests were moved into earlier test stages and acted as canaries to catch critical regressions (that were only exhibited during complex interaction between components) as soon as they were introduced.
Dogfooding
The team also institutionalised ‘dogfooding’ by holding an internal competition after every iteration using the latest production version. The friendly contest between users pushed the system in novel and unexpected ways and uncovered problems (such as bottlenecks and exploits) that were not caught even during the exploratory testing stages.
Martin has a word of warning: teams that undertake building high-throughput/low-latency systems need to have the appropriate architectural skills/experience (in order to, for instance, pick an appropriate database technology or avoid obvious performance bottlenecks), as well as the ability to write code that doesn’t go “against the grain” of the underlying hardware. Applying the YAGNI principle to such decisions may box the team in and could lead to expensive rework or embarrassing failure.
The Rails app that I’m working on has a dependency on ImageMagick. In fact, if ImageMagick is either missing or broken in the app’s environment, the app is beyond repair and Rails shouldn’t even start.
To achieve this, I have added an initializer under config/initializers/dependency_tests.rb:
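A minimal sketch of what the initializer looks like, assuming the app verifies ImageMagick by shelling out to its identify binary (the test body here is an approximation, not the exact original):

```ruby
# config/initializers/dependency_tests.rb
require 'minitest/unit'

class DependencyTests < MiniTest::Unit::TestCase
  # Assumption: shelling out to ImageMagick's `identify` binary is a
  # good enough proxy for "ImageMagick is present and working".
  def test_imagemagick_is_present_and_working
    output = begin
      `identify -version`
    rescue Errno::ENOENT
      nil
    end
    assert !output.nil? && $?.success?, 'ImageMagick (identify) could not be run'
    assert_match(/ImageMagick/, output, 'identify did not report ImageMagick')
  end
end

# Run the suite right now, at boot. In Ruby 1.9.3's bundled MiniTest,
# Unit#run returns a non-zero failure count when tests fail.
failures = MiniTest::Unit.new.run([])
raise 'Dependency tests failed; refusing to start Rails' if failures && failures != 0
```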
Note: this particular example is specific to Ruby 1.9.3 (in the way that minitest is used) but can easily be adapted to other versions of Ruby.
If the test fails, its error messages are reported on the console. This approach has already saved me quite a few minutes of troubleshooting (most recently yesterday, when I was bringing everything up to scratch after the Mountain Lion upgrade).
On one of my current projects, the system receives a web request from a consumer, makes further requests to upstream services and collates a response. For the sake of illustration, let’s say that one of these upstream systems was IMDB and its response looked like this:
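(The element names here are illustrative; I’ve carried the same names through the builder examples below.)

```xml
<film>
  <title>Pulp Fiction</title>
  <director>Quentin Tarantino</director>
  <actors>
    <actor>John Travolta</actor>
    <actor>Samuel L. Jackson</actor>
  </actors>
</film>
```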
Let’s assume that we want to verify that our system handles the above XML correctly. Because we want to avoid external dependencies in our integration tests, we fake IMDB out and build the fake’s response inline within the test:
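Something along these lines (a sketch: fakeImdb and the builder method names are illustrative assumptions):

```java
// Inside the integration test: tell the fake what to serve up.
String response =
        film()
                .withTitle("Pulp Fiction")
                .withDirector("Quentin Tarantino")
                .withActors("John Travolta", "Samuel L. Jackson")
                .build();
fakeImdb.respondsWith(response);
```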
(here film() is just a method that invokes new FilmBuilder())
We need some code which satisfies this API and produces the appropriate XML.
The Java Version
The original (Java) implementation looks like this:
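A sketch of that implementation (the method names follow the test snippet above; the exact original may differ):

```java
import java.util.ArrayList;
import java.util.List;

public class FilmBuilder {

    private String title;
    private String director;
    private final List<String> actors = new ArrayList<String>();

    public FilmBuilder withTitle(String title) {
        this.title = title;
        return this;
    }

    public FilmBuilder withDirector(String director) {
        this.director = director;
        return this;
    }

    public FilmBuilder withActors(String... names) {
        for (String name : names) {
            actors.add(name);
        }
        return this;
    }

    public String build() {
        // Plain string concatenation: nothing stops you from
        // forgetting a closing tag somewhere.
        StringBuilder xml = new StringBuilder();
        xml.append("<film>");
        xml.append("<title>").append(title).append("</title>");
        if (director != null) {
            xml.append("<director>").append(director).append("</director>");
        }
        xml.append("<actors>");
        for (String actor : actors) {
            xml.append("<actor>").append(actor).append("</actor>");
        }
        xml.append("</actors>");
        xml.append("</film>");
        return xml.toString();
    }
}
```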
This satisfies the requirements but is very meat and two veg (and ain’t really much of a looker). Perhaps we can improve on it.
The Scala Version
The Scala port of the FilmBuilder looks like this:
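A sketch of the port (the field handling is an assumption; the XML-literal techniques are the ones discussed below):

```scala
import scala.xml.{NodeSeq, Unparsed}

class FilmBuilder {

  private var title: String = ""
  private var director: Option[String] = None
  private var actors: Seq[String] = Seq.empty

  def withTitle(t: String): FilmBuilder = { title = t; this }
  def withDirector(d: String): FilmBuilder = { director = Some(d); this }
  def withActors(names: String*): FilmBuilder = { actors = names; this }

  def build: String = {
    val xml =
      <film>
        <title>{ title }</title>
        { // Conditionals work inline; handy for optional tags.
          if (director.isDefined) <director>{ director.get }</director>
          else NodeSeq.Empty }
        <actors>
          { // A Seq of nodes gets flattened inline;
            // Unparsed stops the actor names being escaped.
            actors.map(name => <actor>{ Unparsed(name) }</actor>) }
        </actors>
      </film>
    xml.toString
  }
}
```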
Interesting features and gotchas to mention:
Your XML needs to be valid markup. Your IDE will be an angry red until you put that closing tag in. This is a nice contrast to the Java solution’s string-building approach, which is very susceptible to accidentally leaving a dangling tag open somewhere.
You can use conditionals in your template. This is particularly useful when you have optional tags in your XML.
Collections get flattened correctly inline (which is pretty awesome). This means that you only really have to generate an enumeration of XML nodes and let Scala do the rest.
Inline strings get helpfully escaped. This does mean, though, that if you have an inline string <tag>...</tag>, it will in fact be rendered as &lt;tag&gt;...&lt;/tag&gt;, which may not be exactly what you wanted. To prevent this from happening, you need to wrap your string in an Unparsed (I’ve done this above for the actor tag).
I like how you can recognise the XML structure in the Scala version. This isn’t really possible (with all that noise) in the Java variant.