Sunday, December 17
I was going to run some little benchmarks to compare the speed of my web servers (Akane, Nabiki and Ranma) and my development servers at home (Naga and Martina).
But the results weren't very interesting: they all came out pretty much the same.
Akane and Nabiki are Opteron 170s: dual-core 64-bit 2.0GHz.
Ranma is an Athlon XP 3000+: single-core 32-bit 2.16GHz.
Martina is an Athlon XP 2800+: single-core 32-bit 2.08GHz.
Naga is an Athlon 64 3200+: single-core 64-bit 2.0GHz.
Naga is the only one running a 64-bit kernel, and hence a 64-bit Python.
I also checked up on Namo, the little Celeron box I got to run The Jawa Report when we were getting DDoSed. Kei and Yuri seem to be down right now... Uh, which is bad. They should be alive until the end of the month.
After making my benchmark self-timing, so that I can run it on Windows, I can add:
Lina: Pentium 4 2.6GHz
Amelia: Core Duo 1.66GHz
Haruhi: Core 2 Duo 2.4GHz
I'm not going to bother running tests on Sylphiel and Kyon, my Linux virtual machines, because the times vary by +/- 20% due to the clock problems I mentioned earlier.
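Making a benchmark self-timing just means averaging wall time over a few runs inside the script itself. As a sketch (not necessarily how mine does it), `timeit.default_timer` picks a sensible clock on both Windows and Unix, so the same script works everywhere; the helper and workload names here are my own:

```python
from timeit import default_timer as timer  # best available clock per platform

def time_call(func, runs=3):
    # Average wall time over several runs, like a run() helper would.
    t0 = timer()
    for _ in range(runs):
        func()
    return (timer() - t0) / runs

# Hypothetical stand-in workload: a tight summing loop.
elapsed = time_call(lambda: sum(range(100000)))
print('%3.3f seconds' % elapsed)
```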
|Machine|CPU|Clock|Python|Loop (s)|String (s)|Scan (s)|Total (s)|
|Lina|Pentium 4|2.6GHz|2.5 (Win)|2.038|5.058|0.875|7.971|
|Haruhi|Core 2 Duo|2.4GHz|2.5 (Win)|0.644|1.933|0.477|3.053|
|Amelia|Core Duo|1.66GHz|2.5 (Win)|1.243|3.158|1.033|5.434|
You can see why all the servers for the New Site are going to be Core 2 Duos. I'll probably migrate mu.nu to Core 2 Duo servers some time in '07 as well - the monthly charge is the same. I just have to wait until SoftLayer are offering their double-memory deal again.
And here's the reason why I'm going to stick with the 32-bit kernel for the application servers:
|Machine|CPU|Clock|Python|Loop (s)|String (s)|Scan (s)|Total (s)|
|Haruhi|Core 2 Duo|2.4GHz|2.5 (Win)+Psyco|0.012|0.273|0.554|0.839|
Psyco is a JIT compiler for Python. While it doesn't always improve performance, when it does, the advantage can be huge, and you can tell it to compile only specific functions if you need to.
Psyco produces 32-bit code. It doesn't work at all on 64-bit Python, and if you run a 32-bit Python on a 64-bit kernel, the overhead of the compatibility layer makes Psyco slower than the standard interpreter.
import time

def loop():
    # Tight counting loop.
    d = 0
    for i in xrange(10000000):
        d += 1

def str():  # note: shadows the builtin str, but nothing here uses it
    # String concatenation.
    for i in xrange(100):
        e = ''
        for j in xrange(100000):
            e += '.'

def scan():
    # Repeated scans of a growing string.
    d = 0
    for i in xrange(10):
        e = ''
        for j in xrange(10000):
            d += 1
            e += '.'
            f = e.find(',')

def run(p, label):
    # Average wall time over n runs.
    n = 3
    t0 = time.clock()
    for i in xrange(n):
        p()
    t = (time.clock() - t0) / n
    pp(t, label)
    return t

def pp(t, label):
    print '%s: %3.3f' % (label, t)

t = 0
t += run(loop, 'Loop')
t += run(str, 'String')
t += run(scan, 'Scan')
pp(t, 'Total')
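Hooking Psyco into a script like this is a one-liner per function. A minimal sketch: `psyco.bind` and `psyco.full` are Psyco's real entry points, but the import guard is my addition so the same script still runs on interpreters without Psyco (where the import simply fails and you get plain interpretation):

```python
try:
    import psyco              # only available for 32-bit Python 2
except ImportError:
    psyco = None              # fall back to the standard interpreter

def loop():
    # Same tight-loop workload as the benchmark's Loop test,
    # returning the count so the result can be checked.
    d = 0
    for i in range(10000000):
        d += 1
    return d

if psyco is not None:
    psyco.bind(loop)          # JIT-compile just this one function
```

With `psyco.full()` instead, Psyco compiles everything it can, at the cost of more memory.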