Friday, October 28


Well, That Should Help

Just doing a spot of benchmarking.

Mew is the current server; I'm comparing it with Kei. The Lovely Angels are equivalent in brainpower,* so it doesn't matter which one I test.

                             Mew         Kei               % Improvement
Compress MySQL Backup        25m15.65s   13m9.937s         91.8%
Uncompress MySQL Backup      4m27.88s    2m41.47s          65.7%
Compress Trackback Log**     53.87s      26.15s            106%
Uncompress Trackback Log**   3.65s       2.62s             39.3%
Python Loop Test**           5.9s        4.72s / 3.092s    25.5% / 90.8%***

The problem with this is hyperthreading. Hyperthreading splits each CPU in half, but Linux doesn't know about this, so just how reflective of reality these results are is somewhat up in the air. The best approach is to run the test many times and pick the lowest number. Or to shut down every other application... Which the Munuvians may not appreciate.
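The run-it-many-times-and-take-the-minimum trick is easy to script. A minimal sketch in Python (using the modern `time.perf_counter` timer, and a stand-in loop, since the post doesn't show the actual loop test):

```python
import time

def best_of(n, fn, *args):
    """Run fn n times and return the minimum elapsed wall-clock time.

    The minimum is the run least disturbed by other load on the box,
    which is why it's the number worth reporting.
    """
    best = float("inf")
    for _ in range(n):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

def loop_test(n=100000):
    # Stand-in CPU-bound loop; the post's actual Python loop test isn't shown.
    total = 0
    for i in range(n):
        total += i * i
    return total

print("best of 10: %.4fs" % best_of(10, loop_test))
```

The minimum, rather than the mean, is the right statistic here: background load can only ever make a run slower, never faster.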

* In this incarnation. Management makes no representations, etc, etc.
** Best of ten trials.
*** Note to self: RPM distributions tend not to be well-optimised. For anything you'll be using a lot - particularly languages - compile your own. It's just a ./configure; make; make install anyway.
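Fleshed out, that footnote's recipe might look like this (the version number and install prefix are illustrative, not from the post):

```shell
# Build a language runtime from source instead of using the distro RPM.
# Version and prefix are illustrative: substitute whatever you actually use.
set -e
PYVER=2.4.2
PREFIX=/usr/local

tar xzf Python-$PYVER.tgz
cd Python-$PYVER
./configure --prefix=$PREFIX   # picks up the local compiler and its flags
make
make install                   # or 'make altinstall' to leave the system python alone
```

Building with the local compiler is what buys the optimisation; the RPM was built for the lowest common denominator CPU.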

Posted by: Pixy Misa at 01:46 AM | Comments (5) | Add Comment | Trackbacks (Suck)
Post contains 166 words, total size 2 kb.

1 Hyperthreading doesn't quite split the CPU in half, though that's something of the idea of what's going on. The problem is that when hyperthreading is enabled, neither of the two halves has quite the abilities of a full single CPU. So to use hyperthreading correctly in multi-processor systems, the kernel really needs to know about it. That's how it is with my workstation, which has two Xeons in it. Win2K would treat them as if they were 4 processors, and would schedule them without paying attention to what they were. So if it had two CPU-bound jobs it might toss both of them onto the same CPU. WinXP's scheduler is hyperthreading-aware and handles it properly. And I thought there was a version of the Linux kernel which handled it properly; I remember reading about it more than two years ago when I got my workstation. However, it may be something that has to be enabled, and it might require a recompile of the kernel. I'm afraid I can't give you any pointers on where you might need to look to find out more.

Posted by: Steven Den Beste at Friday, October 28 2005 03:32 AM (CJBEv)

2 Yep. I was simplifying like mad. The issue arises when you're benchmarking programs on a busy system. On a straightforward multi-processor or multi-core system, the cores are all the same no matter what is running. So the report of the amount of CPU time taken is reasonably accurate (unless your application is bound on memory bandwidth). With hyperthreading, though, the performance of a given "CPU" varies dramatically depending on whether the second thread is busy. So a report of the number of CPU seconds taken to run a given program can likewise vary dramatically. In 10 runs the variance was in fact only on the order of 10%, and there was a fair amount of idle time, so I don't think my results were too badly screwed up.

Which means that between the new hardware and the new compilers, the new servers are each significantly faster than the current one. And there are two of them. And we are getting CPU-bound. With the rise of dynamic web sites and the ever-present crappy code, you need a lot of CPU power to run a large set of web sites. Fortunately, CPU power is readily available and cheap.

Posted by: Pixy Misa at Friday, October 28 2005 03:45 AM (RbYVY)

3 So if I run two copies of the Python script at the same time, Linux is smart enough (i.e. hyperthreading-aware) to assign them to different physical CPUs, and each executes in about 6 seconds. If I run four at the same time, it can't do that, and they take 11 seconds to run. (I just verified this.) The problem is that the system reports that it took 6 seconds of CPU time in one instance and 11 seconds in the other, making standard benchmarking tools unreliable.
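The two-copies-versus-four experiment above can be reproduced with a sketch like this (the worker function and loop size are arbitrary stand-ins, since the post's actual script isn't shown):

```python
import time
from multiprocessing import Pool

def burn(n):
    # CPU-bound busywork standing in for one copy of the benchmark script.
    total = 0
    for i in range(n):
        total += i * i
    return total

def run_copies(copies, n=200000):
    """Run `copies` CPU-bound jobs in parallel; return wall-clock seconds."""
    start = time.perf_counter()
    with Pool(copies) as pool:
        pool.map(burn, [n] * copies)
    return time.perf_counter() - start

if __name__ == "__main__":
    # On a hyperthreaded dual-CPU box, 1 and 2 copies should take roughly the
    # same wall time, while 4 copies take noticeably longer per job.
    for copies in (1, 2, 4):
        print("%d copies: %.3fs" % (copies, run_copies(copies)))
```

Wall-clock time per job is the honest measure here; the per-process CPU-seconds figure is exactly the number hyperthreading distorts.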

Posted by: Pixy Misa at Friday, October 28 2005 04:19 AM (RbYVY)

4 OK; I wasn't meaning to nitpick. The reason I posted was to say that the Linux kernel actually is hyper-threading aware. Apparently you already knew that, so "never mind!"

Posted by: Steven Den Beste at Friday, October 28 2005 11:41 AM (CJBEv)

5 No, you made a perfectly valid point, I just hadn't the time to explain it sufficiently in my original post.

Posted by: Pixy Misa at Friday, October 28 2005 12:32 PM (QriEg)


Powered by Minx 1.1.6c-pink.