In my professional life, I'm only afraid of a few things. John's keyboard, which is coated with that fine patina of brown gunk. A call in the middle of the night saying the app server won't start. And the dreaded request: 'Russ, see if you can speed this code up.' It's usually code I've never seen. It's probably important; otherwise nobody would be worried about how it performs. And speed it up to what? Still, there are some basics you can fall back on when someone asks you to optimize some code. Here goes.

1) Start with working code and good tests or you are doomed

Before you can make something work fast, you need to make it work. And you can't know that it is working if you don't have good unit tests to tell you so. There is no point in making broken code run faster: a program that simply prints out '42' will run in no time flat but is unlikely to produce the output you are looking for. And once you start changing things, how will you know that your optimizations haven't sped things up by breaking them horribly? You need a good set of tests.
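What does a good-enough test look like? Here's a minimal sketch in JUnit. RowCodec is a made-up stand-in for whatever code you're about to optimize; the point is that the test pins down the one thing your optimization must not change: the answer.

    import static org.junit.Assert.assertArrayEquals;
    import java.io.IOException;
    import org.junit.Test;

    public class RowCodecTest {
        // RowCodec is hypothetical: something that turns a row of
        // fields into bytes and back again.
        @Test
        public void roundTripPreservesTheRow() throws IOException {
            String[] row = { "42", "Smith", "1999-12-31" };
            assertArrayEquals(row, RowCodec.decode(RowCodec.encode(row)));
        }
    }

Run that before every change and after every change. It's not glamorous, but it's the difference between optimizing and vandalizing.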

2) Rest assured, you don't know where the problem is

Back when I was still doing a lot of Java, I was assigned the task of speeding up some code that managed query results. Very large query results. A quarter of a million rows, each with more than a hundred columns, was not out of the question. The code that I inherited was actually the second or third generation of the thing. The original had kept all the results in memory, a strategy that worked fine as long as the number of users was approximately two. Any more and smoke started to billow up from the server room. The version that I started with at least stayed up, since it stored the results in a random access file and kept an index of the file in memory. The problem with the new version was that it was very slow.
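If that design is hard to picture, the shape of it was roughly this. A sketch from memory with invented names, not the actual code:

    import java.io.*;
    import java.util.*;

    public class ResultStore {
        // Rows live on disk in a random access file; only their
        // byte offsets stay in memory as the index.
        private final RandomAccessFile file;
        private final List<Long> offsets = new ArrayList<Long>();

        ResultStore(File f) throws IOException {
            file = new RandomAccessFile(f, "rw");
        }

        void append(byte[] rowBytes) throws IOException {
            offsets.add(file.length());      // remember where this row starts
            file.seek(file.length());
            file.writeInt(rowBytes.length);  // length prefix, then the row
            file.write(rowBytes);
        }

        byte[] read(int rowNumber) throws IOException {
            file.seek(offsets.get(rowNumber));
            byte[] bytes = new byte[file.readInt()];
            file.readFully(bytes);
            return bytes;
        }
    }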

I looked over the code and immediately found the performance problem: they were using Java serialization to turn the rows into bytes before writing them to disk. Ah, I thought, serialization is dead slow. It only took me the better part of the afternoon to come up with a way of turning those rows into bytes without the overhead of Java serialization. But removing the serialization didn't really make much of a difference. A few percent, when I was looking for something like 75%.
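The idea behind my change was simple enough: instead of handing each row to an ObjectOutputStream, write the fields out directly. Something along these lines, assuming for illustration that a row is just an array of strings (this is the hypothetical RowCodec the test above assumed):

    import java.io.*;

    public class RowCodec {
        // Hand-rolled encoding: a field count, then each field via
        // writeUTF. This skips ObjectOutputStream's class descriptors
        // and per-object bookkeeping, which is where much of Java
        // serialization's overhead lives.
        static byte[] encode(String[] row) throws IOException {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(buf);
            out.writeInt(row.length);
            for (String field : row) {
                out.writeUTF(field);
            }
            out.flush();
            return buf.toByteArray();
        }

        static String[] decode(byte[] bytes) throws IOException {
            DataInputStream in = new DataInputStream(
                    new ByteArrayInputStream(bytes));
            String[] row = new String[in.readInt()];
            for (int i = 0; i < row.length; i++) {
                row[i] = in.readUTF();
            }
            return row;
        }
    }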

Although my change made only a tiny difference in the speed of the code, I did gain something. I was reminded that when optimizing, you need to profile first and code later. I was so darned sure it was the serialization that I didn't bother to profile the code. Well OK, so I took my change out and profiled the code. Surprise! The culprit turned out to be in the way we were reading and writing the data to the file. Our IO was completely unbuffered: we were laboriously reading and writing all that data one byte at a time. Compared to that huge IO overhead, the serialization was a drop in the bucket.
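The fix, once you know where to look, is almost embarrassingly small. A sketch with made-up file names; the whole trick is the BufferedOutputStream wrapper, which turns all those one-byte writes into a few big ones (BufferedInputStream does the same favor for the reading side):

    import java.io.*;

    public class BufferDemo {
        public static void main(String[] args) throws IOException {
            // Before: every write() call goes straight to the OS.
            // Writing a byte at a time means one system call per byte.
            OutputStream slow = new FileOutputStream("slow.dat");

            // After: writes pile up in a 64K in-memory buffer and
            // reach the disk in big chunks. Same API, a tiny fraction
            // of the system calls.
            OutputStream fast = new BufferedOutputStream(
                    new FileOutputStream("fast.dat"), 64 * 1024);

            slow.close();
            fast.close();
        }
    }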

My experience is not unusual: your best guess as to what is making your program a pig is usually wrong. The only way to know is to run your code with a profiler and see which bits are the slow ones.

3) There is always a long pole in the tent, but it is not always the same pole

As you optimize your code, you need to rerun your profiling after each change. You are looking to answer two questions. First, did my change actually help? Second, if the change did speed things up, is there now a new bottleneck? Some part of your program is always going to be the limiting factor; otherwise your code would be infinitely fast. As you optimize things, it is quite likely that the part you sped up will fade into the background and some other section of the code will become the new bottleneck.

Going back to my results manager project: it turned out that after I put in the buffered IO, which made a huge difference, the slowest part of the code was (yes!) the serialization. Remove the huge cost of unbuffered IO and the Java serialization became the problem. Back into the code went my original change.
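Putting the two fixes together, the writing path ended up looking roughly like this. Again, a sketch with invented names rather than the real code:

    import java.io.*;

    public class ResultWriter {
        // Both fixes at once: hand-rolled row encoding layered on a
        // buffered stream, so neither serialization nor raw IO is
        // left as the obvious bottleneck.
        static void writeRows(String[][] rows, File file) throws IOException {
            DataOutputStream out = new DataOutputStream(
                    new BufferedOutputStream(new FileOutputStream(file)));
            try {
                for (String[] row : rows) {
                    out.writeInt(row.length);
                    for (String field : row) {
                        out.writeUTF(field);
                    }
                }
            } finally {
                out.close();
            }
        }
    }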

4) If it doesn't help, take it out

I said that you should reprofile your code after each change. The idea is to see if the change helped and, if so, to identify the new problem area. But what if the change didn't help? Take It Out! I don't care if your idea is so brilliantly efficient that it can't possibly not speed things up. If Mother Nature doesn't agree, Take It Out. Look at it this way: you are starting with code that works, and your job is to make it faster. And oh yes, keep it working. Despite your unit tests, every change you make risks screwing things up. If your optimization isn't an optimization, then Take It Out. How fast is broken?

5) You need to know when to stop

The trouble with optimization is that there is no end to it. You can always think of something you can do to make the thing run a little faster, lighter, cheaper. The deeper trouble is that making programs run faster frequently means making them uglier. There are exceptions: sometimes the original is slow because it is badly thought out, or it takes up too much space because it was hacked together from the start. But much of the time optimization is the process of taking nice code and making it longer, more complex, harder to read. In short, less nice. Really, we should stop using the term optimization. We should call it what it really is: screwing up the code so that it performs better, or maybe justifiable mutilation. You have to know when to stop, when the mutilation-to-performance ratio is just too high.

That's really it: when optimizing you need to know what you are starting with, what you need to change, whether your changes are working, and when to stop.

– Russ