Downsides of Collaboration

Here’s an outstanding video on how collaboration can not only kill creativity, but dupe our very perceptions. Steve Wozniak:

Most inventors and engineers I have met are like me: they’re shy and they live in their heads. They’re almost like artists. In fact, the very best of them are artists. And artists work best alone where they can control an invention’s design without a lot of other people designing it for marketing or some other committee. I don’t believe anything revolutionary has ever been invented by committee. If you’re that rare engineer who is an inventor and also an artist, I’m going to give you some advice that might be hard to take. That advice is: work alone. You’re going to be best able to design revolutionary products and features if you’re working on your own. Not on a committee. Not on a team.

And it gets better from here.

You’ll undoubtedly apply it to the situations closest to your heart. It resonates with me and the software I write: of course we can’t just make up our own requirements, but the final product needs to come from you.

I also hear a call to courage. Don’t be arrogant, but stand your ground. Use your best judgment. Don’t be dulled (or let your project be dulled) by the strongest personalities in the room.

“I am a terrible programmer”

That provocative title from Dan Shipper:

Part of me is thinking: in some ways, you were a terrible programmer. The other part is thinking: well … it’s worked perfectly for the last 20 months and I’ve never had to touch it.

Like most things in life, the answer to what makes a good coder lies somewhere between the guy who wants to get it out fast and the guy who wants to make it beautiful.

His postscript is also a profound truism in software:

P.S. In my defense, the find_art.js file that Scott was referencing was supposed to be a prototype. They weren’t sure if people would actually use the feature and wanted to test it. It ended up being so popular that they left it in!

Software Runs the World: How Scared Should We Be That So Much of It Is So Bad?

From The Atlantic:

The underlying problem here is that most software is not very good. Writing good software is hard. There are thousands of opportunities to make mistakes. More importantly, it’s difficult if not impossible to anticipate all the situations that a software program will be faced with, especially when–as was the case for both UBS and Knight–it is interacting with other software programs that are not under your control. It’s difficult to test software properly if you don’t know all the use cases that it’s going to have to support.

Though there are a number of angles to this piece (and you should read it all), here’s another nugget:

This is one problem that regulation probably can’t solve directly. How can you write a rule saying that companies have to write good software?

Fantasecond response time

Here’s a fascinating in-depth study of one second of market data for a single stock:

HFT [High Frequency Trading] Breaks Speed-of-Light Barrier, Sets Trading Speed World Record.
Adds a new unit of time measurement to the lexicon: fantasecond.

On September 15, 2011, beginning at 12:48:54.600, there was a time warp in the trading of Yahoo! (YHOO) stock. HFT has reached speeds faster than the speed-of-light, allowing time travel into the future. Up to 190 milliseconds into the future, or 0.19 fantaseconds is the record so far. It all happened in just over one second of trading, the evidence buried under an avalanche of about 19,000 quotes and 3,000 individual trade executions. The facts of the matter are indisputable. Based on official UQDF/UTDF exchange timestamps, there is unmistakable proof that YHOO trades were executed on quotes that didn’t exist until 190 milliseconds later!

Millions of traders depend on the accuracy of exchange timestamps — especially after bad timestamps were found to be a key factor in the disastrous market crash known as the flash crash of May 2010. …

The lowly time-stamp. Not rocket science. Not crucial like price, right? What’s to get wrong? An exchange with unlimited resources couldn’t mess up like that, could it? (Assuming the analysis is correct.)
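The anomaly itself is easy to state in code. Here is a minimal sketch of the check the analysis performed, using hypothetical, simplified record types (real UQDF/UTDF messages carry far more fields, and `find_time_warps` is an illustrative name, not anything from the article):

```python
from dataclasses import dataclass

@dataclass
class Quote:
    ts_ms: int      # exchange timestamp, in milliseconds
    price: float

@dataclass
class Trade:
    ts_ms: int      # exchange timestamp, in milliseconds
    quote: Quote    # the quote this trade executed against

def find_time_warps(trades):
    """Return trades whose execution timestamp precedes the timestamp
    of the quote they executed against -- the 'fantasecond' anomaly."""
    return [t for t in trades if t.ts_ms < t.quote.ts_ms]

# A trade stamped 190 ms *before* its own quote, as in the YHOO data:
q = Quote(ts_ms=1000, price=14.25)
anomalies = find_time_warps([Trade(ts_ms=810, quote=q)])
print(len(anomalies))  # 1
```

If timestamps were accurate, that list would always be empty; 190 ms of "time travel" means they weren’t.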

As a software builder, I wonder: how did it happen?

It probably wasn’t buggy code: unit tests are good, but more of them probably wouldn’t have caught this one.

Was it sheer load? It wasn’t the highest overall traffic they experienced:

The chart below shows quote message rates for UQDF and multicast line #6 which is the line that carries YHOO quotes. … Although traffic from YHOO was a significant percentage of all traffic on UQDF, it was not high enough to indicate any problems. Note the much higher surge on the right side of the chart; there weren’t any known problems at that time in YHOO.

So the system would pass a generic stress test, yet still allow this.

Maybe the system was stressed in some unexpected way: a stress profile that wasn’t predicted, characterized, or mitigated, and that the system design didn’t account for.
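A microburst is one such profile. This toy calculation (the numbers are assumed for illustration, loosely inspired by the ~19,000 quotes in that one second) shows how a per-second stress test can look fine while the feed handler sees ten times the load at millisecond scale:

```python
# 19,000 messages in one second, but packed into the first 100 ms
# instead of spread evenly across the second.
TOTAL = 19_000
per_ms = [0] * 1000                 # per-millisecond message counts
for i in range(TOTAL):
    per_ms[i % 100] += 1            # all traffic lands in ms 0..99

avg_per_ms = sum(per_ms) / 1000     # what a per-second test sees
peak_per_ms = max(per_ms)           # what the handler actually sees

print(avg_per_ms, peak_per_ms)      # 19.0 190
```

A stress test that drives 19 messages per millisecond, uniformly, never exercises the 190-per-millisecond burst that the real feed can deliver.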

A multi-tasking operating system is likely part of the picture. Maybe another task was started that tied up crucial resources, even just CPU. Maybe not even a lot of resources, or not for very long. A multi-tasking operating system is geared to never say “enough.” Another task in the CPU’s ready queue? No problem. Out of RAM? Swap some to disk: user programs can’t even tell!

Information hiding gets us off the “honors system” with memory usage. But in the time domain, you’re still very much on the honors system: no one can make you complete your task in a given time. In fact, any innocuous-looking call can have any amount of code hiding deep inside it.
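A hypothetical sketch of that honors system: the `log` function below is an invented stand-in for any innocuous-looking call, and the 50 ms sleep simulates a flush buried deep inside it. From the call site, every call looks identical.

```python
import time

_buf = []

def log(msg):
    """Looks cheap at the call site, but occasionally does hidden work --
    a stand-in for any innocuous-looking call with code buried inside."""
    _buf.append(msg)
    if len(_buf) >= 1000:
        time.sleep(0.05)   # simulate a 50 ms flush hiding in the call
        _buf.clear()

worst = 0.0
for i in range(3000):
    start = time.perf_counter()
    log(f"tick {i}")
    worst = max(worst, time.perf_counter() - start)

print(worst > 0.04)   # True: most calls take microseconds, a few take ~50 ms
```

Nothing in the signature of `log` warns you about the slow path; only measuring tail latency reveals it.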

You can make a user wait for a tenth of a second or more and still seem really responsive. (Think “garbage collection.”) But a real-time data feed won’t wait. It’s a different ball-game.
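Those pauses are measurable. In CPython, for example, `gc.callbacks` fires at the start and stop of every collection, so a sketch like this makes the hidden time cost visible (CPython’s pauses are usually tiny; the point is that they happen at times you don’t choose):

```python
import gc
import time

pauses = []
_start = [0.0]

def _gc_timer(phase, info):
    # Called by the collector with phase "start" or "stop".
    if phase == "start":
        _start[0] = time.perf_counter()
    else:
        pauses.append(time.perf_counter() - _start[0])

gc.callbacks.append(_gc_timer)
garbage = [[i] * 50 for i in range(200_000)]   # churn to trigger collections
del garbage
gc.collect()
gc.callbacks.remove(_gc_timer)

print(len(pauses) > 0)  # True: at least one timed collection occurred
```

An interactive user never notices pauses like these; a feed handler that must stamp and publish every message on time has no such slack.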

Maybe we’re just seeing a hardware limitation hit. Maybe the best machines money can buy just can’t keep up with certain kinds of volume. There’s no guarantee they can.

Interesting stuff. Hats off to the author for such a detailed analysis.

Via ZeroHedge.