I was reading another article about cloud computing today. Almost all articles and posts seem to focus on how easy it is to add resources to your environment when you need more power.

Before you start to explain to me why this is true: yes, I do agree. It is very easy to add resources to an existing environment. When you use vSphere, Hyper-V or XenServer, you just add another host to your cluster or datacenter and you have more power available to your machines. You can give virtual machines more CPU power and/or memory, and so on. In the end your applications (which are what really matters) get a better chance at CPU time on a shared environment.

My problem with this approach is simple: Aren’t we doing things the wrong way around?

Shouldn’t we be troubleshooting the problem?

We simply accept extending the environment with extra resources instead of doing some real troubleshooting and pointing the finger at the real cause.

If you find out that your application is slow, first find out why it is slow instead of adding resources. I remember a problem with a document management system that wasn’t performing. “It isn’t suitable for virtualization” was the first thing the DBA said. After we gave them a physical server, they said: “The way you install Windows automatically is incorrect; there’s something in it that slows down the server.” You’ve already guessed it: we installed the server by hand and the problem still existed.

After some real troubleshooting the DBA found out that one index was missing from the database. After the index was created, everything ran as smoothly as you would expect.
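To make this concrete, here is a minimal sketch of what one missing index can do, using Python’s built-in sqlite3 module. The documents table and owner column are made up for illustration; they are not the actual schema of that document management system.

```python
# Minimal sketch: the cost of a missing index, using the standard-library
# sqlite3 module. Table and column names are hypothetical.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE documents (id INTEGER PRIMARY KEY, owner TEXT, title TEXT)")
cur.executemany(
    "INSERT INTO documents (owner, title) VALUES (?, ?)",
    ((f"user{i % 1000}", f"doc {i}") for i in range(200_000)),
)
conn.commit()

def timed_lookup() -> float:
    """Time one lookup by owner, in seconds."""
    start = time.perf_counter()
    cur.execute("SELECT COUNT(*) FROM documents WHERE owner = ?", ("user42",))
    cur.fetchone()
    return time.perf_counter() - start

print(f"without index: {timed_lookup():.4f}s")  # full table scan

# The one-line fix the DBA eventually found: create the missing index.
cur.execute("CREATE INDEX idx_documents_owner ON documents (owner)")
print(f"with index:    {timed_lookup():.4f}s")  # index seek instead of a scan
```

No extra host, no extra memory: just one CREATE INDEX.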

A couple of weeks ago I was working on another

It seems to me that this has been happening since the beginning of the x86 platform. When we first started with computers, somebody famously claimed that “640K is enough”. Those days are long gone. Even with 640MB of internal memory you can’t get very far, unless you’re using something other than Windows.

Now, I’m not a developer by profession, although I know how to program in a couple of languages, but it seems to me that optimization of the application isn’t very high on the list of priorities. How many developers can honestly say that they rigorously optimize their application before they ship it?
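Measuring where the time goes is not hard, though. As a sketch, Python ships a profiler in the standard library; slow_report below is a made-up stand-in for whatever code path your users complain about:

```python
# Minimal profiling sketch using the standard-library cProfile module.
# slow_report() is a hypothetical example of an unoptimized code path.
import cProfile
import pstats

def slow_report() -> str:
    # Deliberately naive: repeated string concatenation in a loop.
    out = ""
    for i in range(20_000):
        out += str(i)
    return out

profiler = cProfile.Profile()
profiler.enable()
slow_report()
profiler.disable()

# Print the ten most expensive calls: this tells you where the time
# actually goes, before anyone reaches for more hardware.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```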

Here are some pointers for troubleshooting. They’re high level, so they’re valid for all applications and platforms.

  • Make sure that you (and your application users) are talking about one and the same thing
  • Know how your environment performs. Make baselines (see the sketch after this list). Document.
  • Start troubleshooting with all parties. If it is a client/server app, get them all: network, storage, virtualization, DBA, and end-user
  • Talk to the application developer/vendor. Is the application performing as it should?
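On the second pointer: a baseline doesn’t have to be fancy. As a sketch, something like the following, using only the Python standard library, already gives you numbers to compare against later; the URL is a placeholder for whatever your users actually call:

```python
# Minimal baseline sketch using only the standard library.
# The URL is a hypothetical placeholder; point it at your own application.
import csv
import time
import urllib.request
from datetime import datetime, timezone

URL = "http://example.com/"

def measure(url: str) -> float:
    """Return the wall-clock time of one request, in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

# Append timestamped measurements; run this on a schedule and keep the file.
with open("baseline.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for _ in range(5):
        writer.writerow([datetime.now(timezone.utc).isoformat(), f"{measure(URL):.3f}"])
        time.sleep(1)
```

Run it on a schedule, keep the CSV, and when someone says “it’s slow today” you have something to compare against.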

When you have done all this and still see no performance improvement, then you can (perhaps) add some extra resources to your machine or application.

To the developers and vendors amongst us I ask: please optimize your application before you ship it. In the whole Green IT discussion I think this is a valid point. The less memory and fewer resources an application needs, the less computing power you have to provide, and the more trees you save.

As I was finishing this post, Erik pointed me to this article talking about cores and even more cores. The author of that article also concludes that more isn’t always better.