Monday, March 21



Running my little Python benchmark again:

[Benchmark results table: AMD 3.0GHz · Intel 2.93GHz · AMD 2.6GHz · Intel 3.3GHz · Psyco]

After a little work to eliminate as many of the variables as possible, this is what I get. These scores are from my little Python benchmark, run on Fedora 13 under OpenVZ on my development machine, a 3GHz AMD Phenom II, and the main production server, a 2.93GHz dual Xeon 5670.

One tricky factor is that the Xeon 5670 can actually run at up to 3.33GHz when lightly loaded. I can't see directly what clock speed each core is running at, but by comparing results between busy and quiet times, and taking the best of ten scores for each test when the CPU was lightly loaded, I'm pretty sure I caught a snapshot of it running at top speed; the difference is about 7%. Since Intel's newer Xeons have Turbo Boost too, I've left the headline numbers as averages measured on a moderately busy system.
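The best-of-ten approach is easy to script. Here's a sketch with a stand-in CPU-bound workload (my actual benchmark does something different), using `timeit.repeat`:

```python
import timeit

def workload():
    # stand-in CPU-bound loop; the real benchmark is a different workload
    total = 0
    for i in range(100000):
        total += i * i
    return total

# run it ten times and keep the fastest time, which is the run most
# likely to have caught the CPU at its top turbo clock
best = min(timeit.repeat(workload, repeat=10, number=1))
print("best of ten: %.4fs" % best)
```

Taking the minimum rather than the mean filters out runs slowed by other load on the box, which is exactly what you want when trying to observe the turbo clock.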

When it comes to new server hardware, I'm projecting these scores to the Opteron 4180, a 2.6GHz $200 chip, and the Xeon E3-1245, a $280 3.3GHz chip. The Opteron clock speed is slower and the Xeon E3 somewhat faster than my test systems, making the difference much more significant. On the other hand, the Opteron has six cores vs. the Xeon E3's four. On the third hand, the Xeon has hyperthreading, which gives a small but measurable boost as well. All that means that the throughput is likely to be pretty much the same between the two chips.
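Back of the envelope, using clock × cores as a crude throughput proxy and an assumed ~20% gain from hyperthreading (that figure is a guess; the real benefit varies a lot by workload):

```python
# crude throughput proxy: clock speed times core count
opteron_4180 = 2.6 * 6        # six real cores
xeon_e3_1245 = 3.3 * 4 * 1.2  # four cores, assuming HT adds ~20%

# ~15.6 vs ~15.8 -- near enough a wash
print(opteron_4180, xeon_e3_1245)
```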

And the Xeon E3 has a downside in that you can't put more than 16GB of RAM on it: it only supports unbuffered memory, and only four modules. The Opteron 4180 supports both unbuffered and registered memory, and up to six modules of the latter, so it can easily take 48GB. (More is possible, but requires more expensive high-density DIMMs.)

Also, the Xeon E3 got side-swiped by the Great Sandy Bridge Chipset Disaster, and isn't actually available.

So the new low-end Intel chips will be measurably faster than the current low-end AMD server chips, about 45%, in response times if not overall throughput.
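The 45% is the clock ratio compounded with Intel's per-clock advantage in my scores. Illustratively, if the measured per-clock gap were around 14% (a stand-in figure, not a quoted result), the arithmetic comes out like this:

```python
clock_ratio = 3.3 / 2.6  # Xeon E3-1245 vs Opteron 4180 clock speeds
per_clock = 1.14         # assumed Intel per-clock advantage (illustrative)

# ~1.45, i.e. about 45% faster per thread
print(clock_ratio * per_clock)
```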

On the other hand, there's that 16GB limit. Memory is dirt cheap and you want to put as much of it in a server as you can, and being able to put three times as much in the AMD system is pretty significant. (Oh, and the Opteron is a dual-socket CPU, so you can easily scale to 96GB and a dozen cores if you want.)

The Psyco numbers are from my dev environment, and point out once again what a nifty bit of work Psyco is, and that it should have been rolled into the Python core years ago.
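Enabling Psyco really is as simple as it sounds: two lines at the top of the program, here with a graceful fallback for systems where it isn't installed (Psyco only works on 32-bit Python 2):

```python
try:
    import psyco
    psyco.full()  # JIT-compile every function as it's called
except ImportError:
    pass  # no Psyco; code still runs, just unaccelerated

def fib(n):
    # naively recursive workload of the sort Psyco speeds up dramatically
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(25))
```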

Posted by: Pixy Misa at 10:20 PM