Fantasecond response time
Here’s a fascinating in-depth study of one second of market data for a single stock:
HFT [High Frequency Trading] Breaks Speed-of-Light Barrier, Sets Trading Speed World Record.
Adds a new unit of time measurement to the lexicon: fantasecond.
On September 15, 2011, beginning at 12:48:54.600, there was a time warp in the trading of Yahoo! (YHOO) stock. HFT has reached speeds faster than the speed-of-light, allowing time travel into the future. Up to 190 milliseconds into the future, or 0.19 fantaseconds is the record so far. It all happened in just over one second of trading, the evidence buried under an avalanche of about 19,000 quotes and 3,000 individual trade executions. The facts of the matter are indisputable. Based on official UQDF/UTDF exchange timestamps, there is unmistakable proof that YHOO trades were executed on quotes that didn’t exist until 190 milliseconds later!

Millions of traders depend on the accuracy of exchange timestamps – especially after bad timestamps were found to be a key factor in the disastrous market crash known as the flash crash of May 2010. …
The lowly timestamp. Not rocket science. Not crucial like price, right? What’s to get wrong? An exchange with unlimited resources couldn’t mess up like that, could it? (Assuming the analysis is correct.)
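To see what’s at stake, the invariant the analysis rests on fits in a few lines. Here’s a minimal sketch in C, assuming a stripped-down record layout – the field names, prices, and the idea of a trade carrying a reference to “its” quote are all made up for illustration; real UQDF/UTDF messages are far richer:

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical records, made up for illustration; real UQDF/UTDF
 * messages carry far more. Timestamps: milliseconds since midnight. */
typedef struct { uint64_t ts_ms; double bid, ask; } quote_t;
typedef struct { uint64_t ts_ms; double price;    } trade_t;

/* The causality invariant: a trade executed against a quote cannot
 * be timestamped earlier than the quote itself. */
static int violates_causality(const trade_t *t, const quote_t *q)
{
    return t->ts_ms < q->ts_ms;
}

int main(void)
{
    /* The reported anomaly, schematically: a trade stamped
     * 12:48:54.600 matched against a quote stamped 190 ms later.
     * (Prices here are placeholders.) */
    quote_t q = { 46134790, 13.28, 13.29 };  /* 12:48:54.790 */
    trade_t t = { 46134600, 13.29 };         /* 12:48:54.600 */

    if (violates_causality(&t, &q))
        printf("trade precedes its quote by %llu ms\n",
               (unsigned long long)(q.ts_ms - t.ts_ms));
    return 0;
}
```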
As a software builder, I wonder: how did it happen?
It probably wasn’t buggy code in the simple sense: unit tests are good, but more of them wouldn’t have caught this one.
Was it sheer load? It wasn’t the highest overall traffic they experienced:
The chart below shows quote message rates for UQDF and multicast line #6 which is the line that carries YHOO quotes. … Although traffic from YHOO was a significant percentage of all traffic on UQDF, it was not high enough to indicate any problems. Note the much higher surge on the right side of the chart; there weren’t any known problems at that time in YHOO.
So the system would pass a generic stress test, and yet still allow this.
Maybe the system was stressed in some unexpected way: a load whose profile was never predicted, characterized, or mitigated, and which the system design didn’t account for.
A multi-tasking operating system is likely part of the picture. Maybe another task started up and tied up crucial resources, even just CPU time – maybe not even a lot of it, or for very long. A multi-tasking OS is geared to never say “enough.” Another task in the CPU’s ready queue? No problem. Out of RAM? Swap some to disk: user programs can’t even tell!
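This is easy to reproduce on any desktop. A minimal sketch, assuming Linux and pthreads (the thread count and iteration count are arbitrary): oversubscribe the CPU with hog threads, and watch the worst-case gap in an otherwise tight loop grow from microseconds to whole milliseconds:

```c
/* Sketch: CPU contention showing up as latency in a timed loop.
 * Linux/pthreads; illustrative only. Build: gcc -O2 demo.c -lpthread */
#include <pthread.h>
#include <stdio.h>
#include <stdint.h>
#include <time.h>

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

/* A competing task: the scheduler happily admits it to the ready queue. */
static void *hog(void *arg)
{
    (void)arg;
    for (volatile uint64_t i = 0; ; i++) ;  /* burn CPU forever */
    return NULL;
}

int main(void)
{
    /* Oversubscribe: more hogs than cores, so our loop must share. */
    for (int i = 0; i < 8; i++) {
        pthread_t t;
        pthread_create(&t, NULL, hog, NULL);
    }

    /* The "time-critical" loop: track the worst gap between iterations. */
    uint64_t worst = 0, prev = now_ns();
    for (int i = 0; i < 10 * 1000 * 1000; i++) {
        uint64_t t = now_ns();
        if (t - prev > worst) worst = t - prev;
        prev = t;
    }
    printf("worst inter-iteration gap: %.3f ms\n", worst / 1e6);
    return 0;
}
```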
Information hiding gets us off the “honors system” with memory usage. But in the time domain, you’re still very much on the honors system: no one can make you complete your task in a given time. In fact, any innocuous-looking call can have any amount of code hiding deep inside it.
You can make a user wait for a tenth of a second or more and still seem really responsive. (Think “garbage collection.”) But a real-time data feed won’t wait. It’s a different ball game.
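In code, the honors system looks like this: nothing can force a call to return on time, so the best a feed handler can do is notice, after the fact, that it didn’t. A minimal sketch – process_message() and the 1 ms budget are hypothetical:

```c
/* Sketch: detecting blown deadlines in a feed-handler loop.
 * process_message() and the budget are hypothetical. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

#define BUDGET_NS 1000000ull  /* 1 ms per message, for illustration */

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

/* Stand-in for real work: any innocuous-looking call could hide an
 * allocation, a page fault, a lock, a GC pause... */
static void process_message(int msg) { (void)msg; }

int main(void)
{
    for (int msg = 0; msg < 1000; msg++) {
        uint64_t start = now_ns();
        process_message(msg);
        uint64_t took = now_ns() - start;
        /* Nothing made the call finish in time; all we can do is
         * notice that it didn't. The feed kept coming meanwhile. */
        if (took > BUDGET_NS)
            fprintf(stderr, "msg %d overran: %.3f ms\n", msg, took / 1e6);
    }
    return 0;
}
```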
Maybe we’re just seeing a hardware limit being hit. Maybe the best machines money can buy just can’t keep up with certain kinds of volume. There’s no guarantee they can.
Interesting stuff. Hats off to the author for such a detailed analysis.
Via ZeroHedge.