Monthly Archives: November 2007

Who Manages the Managed Language?

Managed languages like Java and C# try to help us by tracking and freeing our memory for us. Sounds good, but I don’t like it.

Maybe my problem is philosophical. To me, it seems you now have a butler quietly following you around cleaning up after you. When you’re done with something, you drop it and move on (like a kid dropping a toy on the floor when he loses interest): the butler sees it and puts it away. You and the butler don’t talk.

It breeds a reliance that I don’t think is good.

I don’t think there’s any substitute for the programmer managing his/her own resources, including memory. You bring a “managed language” into the picture to make your life easier, but now you’re troubleshooting the very subtle ways you interact with it.

Case in point:

DARPA Grand Challenge team member Bryan Cattle describes a nasty memory problem that cost them a shot at the $2 million prize.

Actually, most of our code is written in garbage-collected C#, so it wasn’t a memory leak per se, but it wasn’t until two weeks later that we discovered the true problem.

It was the closest thing to a memory leak that you can have in a “managed” language. C# manages your memory for you by watching the objects you create. When your code no longer maintains any reference to the object, it automatically gets flagged for deletion without the programmer needing to manually free the memory, as they would need to do in C or C++.
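If you haven’t lived in C#, here’s roughly what that means in practice. This is a minimal sketch of my own, not the team’s code; the Obstacle class is invented for illustration:

    // A minimal sketch of my own, not the team's code.
    class Obstacle
    {
        // Pretend each obstacle carries a chunk of sensor data.
        byte[] sensorData = new byte[1024 * 1024];
    }

    class Program
    {
        static void Main()
        {
            Obstacle obstacle = new Obstacle();   // allocated on the managed heap
            // ... use it for a while ...
            obstacle = null;                      // last reference gone; no free() or delete here

            // Some time later the garbage collector notices the object is
            // unreachable and reclaims it. You don't say when, and you don't say how.
        }
    }

Drop the last reference and, sometime later, the collector quietly reclaims the object. Nobody calls free(). That’s the butler at work.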

Resource problems are ugly. Eventually the system thrashes and dies:

We kept noticing that the computer would begin to bog down after extended periods of driving. This problem was pernicious because it only showed up after 40 minutes to an hour of driving around and collecting obstacles. The computer performance would just gradually slow down until the car just simply stopped responding, usually with the gas pedal down, and would just drive off into the bush until we pulled the plug.

The money quote, emphasis mine:

We looked through the code on paper, literally line by line, and just couldn’t for the life of us imagine what the problem was. It couldn’t be the list of obstacles: right there was the line where the old obstacles got deleted.
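I don’t know the exact shape of their bug, but here’s a hedged sketch of how that sort of thing happens in C#. The “delete” line really does run; the old objects survive anyway because something else, say an event subscription, still holds a reference to them. SensorFeed, Tracker, and Obstacle are all made up for illustration:

    // Hypothetical sketch: SensorFeed, Tracker, and Obstacle are invented, not the team's code.
    using System;
    using System.Collections.Generic;

    class SensorFeed
    {
        public event EventHandler Updated;    // the feed holds a reference to every subscriber
    }

    class Obstacle
    {
        byte[] sensorData = new byte[1024 * 1024];

        public Obstacle(SensorFeed feed)
        {
            feed.Updated += OnSensorUpdate;   // handy, but it ties the obstacle's lifetime to the feed's
        }

        void OnSensorUpdate(object sender, EventArgs e) { /* refresh position */ }
    }

    class Tracker
    {
        List<Obstacle> obstacles = new List<Obstacle>();

        public void Update(SensorFeed feed)
        {
            obstacles.Clear();                 // "right there" -- the old obstacles get deleted
            obstacles.Add(new Obstacle(feed)); // and replaced with fresh ones

            // But every obstacle ever created is still reachable through the
            // feed's event, so the garbage collector never touches them.
        }
    }

    class Program
    {
        static void Main()
        {
            SensorFeed feed = new SensorFeed();
            Tracker tracker = new Tracker();
            for (int i = 0; i < 1000; i++)
                tracker.Update(feed);          // the list never holds more than one item,
                                               // yet memory climbs with every pass
        }
    }

The list looks spotless on paper; the heap tells a different story. The butler won’t touch anything you can still, in principle, reach.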

Murphy’s law torpedoes the work-around:

Because we didn’t know why this problem kept appearing at 40 minutes, we decided to set a timer. After 40 minutes, we would stop the car and reboot the computer to restore the performance.

On race day, we set the timer and off she went for a brilliant 9.8 mile drive. Unfortunately, our system was seeing and cataloging every bit of tumbleweed and scrub that it could find along the side of the road. Seeing far more obstacles than we’d ever seen in our controlled tests, the list blew up faster than expected and the computers died only 28 minutes in, ending our run.

Memory leaks can be deadly subtle even without a managed language in the mix. Maybe in the final analysis the managed language wasn’t technically to blame. But it clouded the picture, and their reliance on it seemed to be the core of the problem.

I’ve spoken with a few people who manage large Java projects. They usually wind up restarting the virtual machine(s) on a regular basis, because the memory footprint seems to grow without bound. Or they try to assert more and more control over garbage collection, usually in the name of performance.

Maybe I’m not being fair. After all, how many potential bugs have managed languages averted? There’s no way to measure. Then again, there’s no way to count the bugs like the one described above, either.

Via Slashdot.

P.S.: I work with managed languages whenever my clients request it, and realize they offer more than just memory management.

Not even Google

From the Google Code blog, emphasis mine:

In 2005 we launched Google Code to provide a home for our developer and open source programs. Two years, dozens of new products and new programs, and one major redesign later, Google Code is bigger and more dynamic than ever.

Two years operating and they’ve redesigned it once already.

I don’t point this out to embarrass Google but to show that redesigns are necessary from time to time. No one is omniscient, not even Google. As they better understand their mission, direction, operations, or issues, they find that what they had designed is no longer sufficient. And the old design isn’t something to shoehorn new needs into, or to simply endure; it’s something to fix. Redesigned if need be.

The decision to redesign can be agonizing; the time required painful; the expense daunting. It’s not a decision to be taken lightly. It’s tempting to blame yourself, saying if only I’d seen a little further into the future. But we’re finite beings: we don’t stand a chance. There’s no shame in that.

In fact, I think one predictor of a project’s success is how willing people are to dive in and fix things instead of trying to live with real problems.

It might take all the courage you can muster to bring it up, but you owe it to yourself and your project to give your honest assessment.

Lunch 2.0 @ Google Chicago

Hats off to Google Chicago for hosting today’s Lunch 2.0.

Having no idea what to expect (except, um, food), I had to check it out.

Free lunch and “no time-share pitch to sit through!”

I learned (among other things) that Google has a decent-sized presence in Chicago, a lot of it engineering. If I understand it correctly, Google’s Summer of Code comes out of Chicago. (I also took away a nice Google lunch cooler. Thanks!)

It’s always nice rubbing shoulders with the tech community, too: learning what’s going on, talking shop. (I’m guessing 100 people were there.)
