CAN I BE OF ASSISTANCE?

Friday, March 25

Geek

Going, Going, Gone!

It took nearly 24 hours all told - not including the backups, which took 48 hours themselves - but I'm on Windows 7 now, and it's working fine.

One critical point: If you have a Realtek network controller (either a card or built into your motherboard), download the Windows 7 driver for it from the manufacturer's site before upgrading, because your network will be seriously dysfunctional afterwards.  The driver that ships with Windows 7 delivers only slightly better average speeds than dial-up - even on your local network - and frequently stops working entirely for several seconds at a time.

Posted by: Pixy Misa at 12:30 PM | No Comments | Add Comment | Trackbacks (Suck)
Post contains 104 words, total size 1 kb.

Geek

It Keeps Going, And Going, And Going

Thirteen hours into my Windows 7 upgrade now.

Still going.

The progress indicator has, thankfully, moved from where it was six hours ago, and is now at 2,099,020 of 2,777,119.

Posted by: Pixy Misa at 02:07 AM | No Comments | Add Comment | Trackbacks (Suck)
Post contains 37 words, total size 1 kb.

Thursday, March 24

Geek

Please Wait...

Transferring files, settings, and programs (608,859 of 2,777,119 transferred)

This is the first time I've ever upgraded a Windows system.  Usually I'll hang onto them until they're old enough to need replacing or the operating system gets corrupted and dies.*

Nagi is a quad-core machine with 8GB of RAM, and until AMD's new Bulldozer chips arrive later this year there's no upgrade that's worth bothering with.  Not that I can reasonably afford, anyway.

So after carefully backing up 2.2TB of miscellaneous stuffs, I kicked off the upgrade at about 2 o'clock this afternoon.  It's just gone 9 o'clock now, and the status is exactly as I gave above.

It's not a quick process, not when you start with a 2.5-year-old Vista system with 748 applications installed.

And it's telling me that The Sims 2 may not work afterwards. :(

It also warns about my IDE controller, but I don't think that's even in use.

* Which has happened to me twice, both times due to memory problems of one sort or another.

Posted by: Pixy Misa at 08:02 PM | No Comments | Add Comment | Trackbacks (Suck)
Post contains 173 words, total size 1 kb.

Monday, March 21

Geek

Numbers

Running my little Python benchmark again:


            AMD 3.0GHz   Intel 2.93GHz   AMD 2.6GHz   Intel 3.3GHz   Psyco
Loop        0.613        0.690           0.707        0.613          0.013
String      1.103        0.987           1.273        0.876          0.180
Scan        0.540        0.453           0.623        0.402          0.547
Call        1.383        1.140           1.596        1.012          0.100
Mean        3.639        3.270           4.199        2.903          0.840
Score       275          306             238          344            1190
Mark        1000         1113            867          1253           4332


After a little work to eliminate as many of the variables as possible, this is what I get. These scores are from my little Python benchmark, run on Fedora 13 under OpenVZ on my development machine, a 3GHz AMD Phenom II, and the main production server, a 2.93GHz dual Xeon 5670.

One tricky factor is that the Xeon 5670 can actually run at up to 3.33GHz when lightly loaded. I can't see directly what clock speed each core is running at, but by comparing results between busy and quiet times, and taking the best of ten scores for each test when the CPU was lightly loaded, I'm pretty sure I got a snapshot of it running at top speed, and the difference is about 7%. Intel's newer Xeons also have turbo boost, so I've left the numbers unchanged as averages measured on a moderately busy system.
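The benchmark itself isn't shown here, so as a rough sketch, the four tests named in the table (Loop, String, Scan, Call) could be reconstructed like this - the function bodies and iteration counts are my guesses, not the real benchmark:

```python
import timeit

# Hypothetical reconstruction of the four micro-benchmarks named in
# the table above; the real benchmark code isn't shown in the post.

def loop_test():
    # Tight integer loop.
    total = 0
    for i in range(100000):
        total += i
    return total

def string_test():
    # String building and joining.
    return "-".join(str(i) for i in range(10000))

def scan_test():
    # Scanning a large string for a substring.
    text = "the quick brown fox " * 1000
    return text.count("fox")

def call_test(n=100000):
    # Function-call overhead.
    def f(x):
        return x + 1
    total = 0
    for _ in range(n):
        total = f(total)
    return total

for name, fn in [("Loop", loop_test), ("String", string_test),
                 ("Scan", scan_test), ("Call", call_test)]:
    print("%-8s %.3f" % (name, timeit.timeit(fn, number=10)))
```

Run with `timeit` and averaged over several repeats, something shaped like this would produce the per-test seconds in the table.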

When it comes to new server hardware, I'm projecting these scores to the Opteron 4180, a 2.6GHz $200 chip, and the Xeon E3-1245, a $280 3.3GHz chip. The Opteron clock speed is slower and the Xeon E3 somewhat faster than my test systems, making the difference much more significant. On the other hand, the Opteron has six cores vs. the Xeon E3's four. On the third hand, the Xeon has hyperthreading, which gives a small but measurable boost as well. All that means that the throughput is likely to be pretty much the same between the two chips.

And the Xeon E3 has a downside in that you can't put more than 16GB of RAM on it: It only supports unbuffered memory, and only four modules. The Opteron 4180 supports both unbuffered and registered memory, and up to six modules of the latter, so it can easily take 48GB. (More is possible, but requires more expensive high-density DIMMs.)

Also, the Xeon E3 got side-swiped by the Great Sandy Bridge Chipset Disaster, and isn't actually available.

So the new low-end Intel chips will be measurably faster (about 45%) than the current low-end AMD server chips, in response times if not overall throughput.

On the other hand, there's that 16GB limit. Memory is dirt cheap and you want to put as much of it in a server as you can, and being able to put three times as much in the AMD system is pretty significant. (Oh, and the Opteron is a dual-socket CPU, so you can easily scale to 96GB and a dozen cores if you want.)

The Psyco numbers are from my dev environment, and point out once again what a nifty bit of work Psyco is, and that it should have been rolled into the Python core years ago.

Posted by: Pixy Misa at 10:20 PM | No Comments | Add Comment | Trackbacks (Suck)
Post contains 459 words, total size 6 kb.

Wednesday, March 16

Geek

All Grist For The Bayesian Mill

I'm busy working on the new (and much needed) spam filter for mu.nu and mee.nu.

The old filter was based on heuristics and blacklists and a couple of security-by-obscurity tricks (a honeypot, a secret question).

The new filter is purely Bayesian.

It's more than a simple text analyser, though.  Some of the things I'm doing:
  • Contextual analysis: A comment about designer shoes might be fine on a fashion blog, but on a politics blog it's almost certainly spam.
  • Language analysis: A comment in Chinese may or may not be spam, but a comment in Chinese replying to a post in French almost certainly is.
  • Geographic analysis: Are you in a spam hotspot?  Are you in the same part of the world as the blogger?
  • Content analysis: Is the comment full of crappy Microsoft markup?
  • Metadata analysis: You can put a name, URL, and email address on your comments.  The system treats those specifically as names, URLs, and email addresses, not just more comment text.
  • Trend analysis: How many comments have you posted in the last ten minutes?  How many total?  How about under that name vs. that IP?  What's the average spam score for comments from that IP?
The problem is, some of these produce tokens that I can add to my big spam token table, while others produce numbers.  So I need to work out some heuristics and weights by which to modify the Bayesian score with

SMACK

The key understanding here is that Bayesian analysis makes that problem go away.  You don't feed the Bayesian score into a calculation along with a bunch of numbers generated by other heuristics.  That just makes more work and reduces the reliability of the core mechanism.

What you do is you simplify the numbers in some way (rounding, logarithms, square roots), turn them into tokens, and throw them into the pool.  You want to simplify the numbers so that there's a good chance of a match; for example, a five-digit ratio of content:markup isn't going to get many hits, but one or two digits will.
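As a sketch of that simplification step - token names and bucket sizes here are illustrative, not the real ones:

```python
import math

# Turn raw numbers into coarse tokens that can join the Bayesian pool.
# Both token formats below are made up for illustration.

def markup_ratio_token(content_chars, markup_chars):
    # Round a content:markup ratio to one digit so matches are likely;
    # a five-digit ratio would almost never repeat across comments.
    ratio = markup_chars / max(content_chars, 1)
    return "markup_ratio:%.1f" % ratio

def comment_rate_token(comments_last_10min):
    # Log-bucket a posting rate: counts of 0, 1-2, 3-6, 7-14, ...
    # collapse into a shared token.
    bucket = int(math.log2(comments_last_10min + 1))
    return "rate10:%d" % bucket
```

So a comment that's 30% markup and an IP that posted five comments in ten minutes would contribute the tokens `markup_ratio:0.3` and `rate10:2` alongside the ordinary word tokens.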

So what we do is we parse, compute, and calculate all these different tokens for a given post, and then we look for the most interesting ones in our database - the ones that, based on our training data, vary the most from the neutral point.

Then we just take the scores for each of those interesting elements, positive or negative, and throw them at Bayes' formula.

And out pops the probability that the comment is spam.  (Not just an arbitrary score, but an actual, well-calibrated probability.)
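The combining step can be sketched in a few lines - this is the standard naive-Bayes merge of per-token spam probabilities, assuming the tokens are independent:

```python
# Merge the spam probabilities of the "interesting" tokens into one
# posterior via Bayes' rule (naive independence assumption).

def combine(probs):
    prod_spam = 1.0
    prod_ham = 1.0
    for p in probs:
        prod_spam *= p
        prod_ham *= 1.0 - p
    return prod_spam / (prod_spam + prod_ham)

# Two strong spam signals reinforce each other:
print(combine([0.9, 0.9]))    # ~0.988
# Two conflicting signals of equal strength cancel out:
print(combine([0.99, 0.01]))  # 0.5
```

That cancellation behaviour is what makes the output a genuine probability rather than an ad-hoc score.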

And then, based on that, we go and update the scores in the database for every token we pulled from the comment.  So if it works out that a comment is spam using one set of criteria, it can train itself to recognise spam using the other identifiable criteria in the comment - based on how distinct those criteria are from non-spam.

Automatically.  Which means I don't have to come back and tweak weights or add items to blacklists; it works it all out from context.
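The self-training loop described above might look like this - the count-table layout and the clamping constants are my own illustration:

```python
# After classifying a comment, update the spam/ham counts for every
# token it contained; future lookups then reflect the new evidence.

def train(counts, tokens, is_spam):
    for t in set(tokens):
        spam, ham = counts.get(t, (0, 0))
        counts[t] = (spam + 1, ham) if is_spam else (spam, ham + 1)

def token_prob(counts, token, spam_total, ham_total):
    # Per-token spam probability, clamped away from 0 and 1 so a
    # single sighting can never make a token absolutely certain.
    spam, ham = counts.get(token, (0, 0))
    s = spam / max(spam_total, 1)
    h = ham / max(ham_total, 1)
    p = s / (s + h) if (s + h) else 0.5
    return min(max(p, 0.01), 0.99)
```

A token that has only ever appeared in spam scores 0.99, an unseen token scores a neutral 0.5, and everything else falls in between - no manual weight-tweaking required.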

The framework is done; I need to write some database code now, load up some tables (like the GeoIP data), and then start training and testing it.  If that goes well, I should have it in place early next week.

I have a ton (4 gigabytes) of known spam to train against, but I need to identify a similar amount of known good comments, and that alone is going to take me a day or two.

I looked at just using a service like Akismet.  That, all by itself, would cost me more than all the other expenses for keeping the system running put together.  Just filtering what's been filtered by the current edition of the spam filter would have cost upwards of $50,000.

A week or two of fiddly coding and training looks like it should pay for itself very quickly.

Posted by: Pixy Misa at 04:16 PM | Comments (15) | Add Comment | Trackbacks (Suck)
Post contains 659 words, total size 4 kb.

Friday, March 11

Geek

Huffwin's Law

As an online discussion grows longer, the probability of someone citing the Huffington Post approaches one.

Depending on local statute, you may be allowed to shoot the offender.  In Texas, this is actually mandatory.

Posted by: Pixy Misa at 05:44 PM | Comments (1) | Add Comment | Trackbacks (Suck)
Post contains 36 words, total size 1 kb.

Tuesday, March 08

Geek

Hiccups

Sorry about the hiccups earlier - incoming DDoS from Turkey (again) and I accidentally screwed up the networking while blocking it.

Posted by: Pixy Misa at 07:32 PM | Comments (10) | Add Comment | Trackbacks (Suck)
Post contains 22 words, total size 1 kb.

Saturday, March 05

Geek

Looking At Lupa

I'm doing some testing on Lupa:

Calling a LuaJIT function from Python: 363ns
Calling a Python function from LuaJIT: 447ns
Calling a LuaJIT function from Psyco: 253ns
Calling a Psyco function from LuaJIT: 730ns
Calling a Python function from Python: 177ns
Calling a Psyco function from Psyco: 3ns (!)

I also tested some sample code that calls a Lua function from Python and passes it a Python function as a parameter; that takes about 1.8µs in Python and 2.1µs in Psyco (jumping into and out of the JIT clearly has some overhead).

The worst case, unfortunately, is likely to be the most common one - calling back to Python/Psyco (specifically the Minx API) to get data for the Lua script.  Lupa has some nice wrappers for using data structures rather than functions, so I'm going to see how they go.

That said, the worst case is 730 nanoseconds.
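For comparison, the pure-Python baseline (the 177ns figure above) can be measured like this; the Lupa cross-runtime numbers would be timed the same way, just with `f` coming from a `LuaRuntime`. Exact figures will of course vary by machine and interpreter:

```python
import timeit

def f(x):
    return x + 1

# Per-call overhead of a plain Python-to-Python call: time a large
# batch of calls and divide through.
n = 1_000_000
total = timeit.timeit("f(1)", globals={"f": f}, number=n)
print("%.0f ns per call" % (total / n * 1e9))
```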

The one hiccup is that creating a Lupa LuaRuntime instance leaks about 30kB, and crashes Python after 13,000 to 15,000 instances - even if I force garbage collection.  I've posted that to the Lupa mailing list, and will follow up and see if I can help find the problem and fix it.

That can be solved using a worker pool on the web server, with worker processes being retired after (say) 100 requests.  The overhead on the server would be quite small, it would make for much better scalability, and would keep potentially buggy libraries or library use under control.  (A careless PIL call can use a huge amount of memory.)
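A minimal sketch of that retire-after-N-requests scheme - in production the workers would be OS processes so the leaked memory is genuinely reclaimed, but the bookkeeping looks the same (class and names are made up):

```python
# Recycle each worker after max_requests so any memory it leaked
# (a LuaRuntime, a careless PIL call) is thrown away with it.

class RecyclingWorker:
    def __init__(self, factory, max_requests=100):
        self.factory = factory
        self.max_requests = max_requests
        self.worker = factory()
        self.served = 0

    def handle(self, request):
        if self.served >= self.max_requests:
            # Retire the old worker; its leaks go with it.
            self.worker = self.factory()
            self.served = 0
        self.served += 1
        return self.worker(request)
```

With a leak of ~30kB per LuaRuntime and workers retired every 100 requests, the steady-state overhead stays bounded instead of growing until the process crashes.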

Update: The author has fixed the problem and released a new version of Lupa (0.19) - on the weekend.  It now works flawlessly.

Posted by: Pixy Misa at 10:04 PM | No Comments | Add Comment | Trackbacks (Suck)
Post contains 285 words, total size 2 kb.

Friday, March 04

Geek

Extra Crunchy

I just realised that with Lupa and the new internal Minx API, I can compile templates down to machine code.
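The idea, roughly: translate each template into Lua source once, and let LuaJIT compile that down to machine code. A hypothetical sketch of the first half - the `[name]` placeholder syntax and generated code are illustrative, not Minx's actual template language:

```python
import re

def lua_quote(s):
    # Minimal Lua string-literal quoting for the static chunks.
    return '"' + s.replace("\\", "\\\\").replace('"', '\\"') + '"'

def template_to_lua(tmpl):
    # Compile a template with [name] placeholders into the source of a
    # Lua function; LuaJIT would then compile that to machine code.
    chunks = []
    pos = 0
    for m in re.finditer(r"\[(\w+)\]", tmpl):
        if m.start() > pos:
            chunks.append(lua_quote(tmpl[pos:m.start()]))
        chunks.append("ctx.%s" % m.group(1))
        pos = m.end()
    if pos < len(tmpl):
        chunks.append(lua_quote(tmpl[pos:]))
    body = " .. ".join(chunks) if chunks else '""'
    return "function(ctx) return %s end" % body

print(template_to_lua("Hello, [name]!"))
# function(ctx) return "Hello, " .. ctx.name .. "!" end
```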

Posted by: Pixy Misa at 06:16 PM | No Comments | Add Comment | Trackbacks (Suck)
Post contains 22 words, total size 1 kb.
