Okay, looks like I'm going to have to either write some more music or wallow in guilt. Sony Creative Software is going download only, so they're clearing out their stock of loops on physical media at 75% off.
I thought that the delivery charges to Australia would be prohibitive, but I guess CDs and DVDs in cardboard sleeves don't cost much to ship, because it's a flat $30 for FedEx Priority shipping.
So I went through the sale catalog, ticked just about everything I had ever wanted to buy but couldn't quite justify previously, and ordered the whole lot. Whee!
Should land here late next week, which is perfect.
I'm busy working on the new (and much needed) spam filter for mu.nu and mee.nu.
The old filter was based on heuristics and blacklists and a couple of security-by-obscurity tricks (a honeypot, a secret question).
The new filter is purely Bayesian.
It's more than a simple text analyser, though. Some of the things I'm doing:
Contextual analysis: A comment about designer shoes might be fine on a fashion blog, but on a politics blog it's almost certainly spam.
Language analysis: A comment in Chinese may or may not be spam, but a comment in Chinese replying to a post in French almost certainly is.
Geographics analysis: Are you in a spam hotspot? Are you in the same part of the world as the blogger?
Content analysis: Is the comment full of crappy Microsoft markup?
Metadata analysis: You can put a name, URL, and email address on your comments. The system treats those specifically as names, URLs, and email addresses, not just more comment text.
Trend analysis: How many comments have you posted in the last ten minutes? How many total? How about under that name vs. that IP? What's the average spam score for comments from that IP?
The problem is, some of these produce tokens that I can add to my big spam token table, while others produce numbers. So at first it looked like I'd need to work out some heuristics and weights by which to modify the Bayesian score.
The key understanding here is that Bayesian analysis makes that problem go away. You don't feed the Bayesian score into a calculation along with a bunch of numbers generated by other heuristics. That just makes more work and reduces the reliability of the core mechanism.
What you do is you simplify the numbers in some way (rounding, logarithms, square roots), turn them into tokens, and throw them into the pool. You want to simplify the numbers so that there's a good chance of a match; for example, a five-digit ratio of content:markup isn't going to get many hits, but one or two digits will.
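As a sketch of that idea (the token names and bucketing choices here are mine, not the actual Minx code): round a ratio to one significant figure, or take the order of magnitude of a count, and emit the result as an ordinary token string that goes into the pool like any word.

```python
import math

def ratio_token(content_len, markup_len):
    # One significant figure of the content:markup ratio, so that
    # similar comments collapse onto the same token and get matches.
    if markup_len == 0:
        return "ratio:inf"
    rounded = float("%.1g" % (content_len / markup_len))
    return "ratio:%g" % rounded

def count_token(n):
    # Bucket a raw count (comments in the last ten minutes, say)
    # by order of magnitude rather than exact value.
    bucket = int(math.log10(n)) if n > 0 else -1
    return "count:10^%d" % bucket
```

A comment with 100 characters of text and 50 of markup yields the token `ratio:2`, which will match every other comment in that neighbourhood, whereas a five-digit ratio almost never would.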
So what we do is we parse, compute, and calculate all these different tokens for a given post, and then we look for the most interesting ones in our database - the ones that, based on our training data, vary the most from the neutral point.
Then we just take the scores for each of those interesting elements, positive or negative, and throw them at Bayes' formula.
And out pops the probability that the comment is spam. (Not just an arbitrary score, but an actual, very realistic, probability.)
And then, based on that, we go and update the scores in the database for every token we pulled from the comment. So if it works out that a comment is spam using one set of criteria, it can train itself to recognise spam using the other identifiable criteria in the comment - based on how distinct those criteria are from non-spam.
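That combination step can be sketched like this (a Graham-style naive Bayes combination over per-token spam probabilities; the function names and the cutoff of fifteen tokens are my assumptions, not necessarily what Minx uses):

```python
def most_interesting(probs, n=15):
    # Keep the n token scores that deviate most from neutral (0.5),
    # i.e. the tokens our training data has the strongest opinion on.
    return sorted(probs, key=lambda p: abs(p - 0.5), reverse=True)[:n]

def combine(probs):
    # Bayes' formula over (assumed independent) token probabilities:
    #   P(spam) = prod(p) / (prod(p) + prod(1 - p))
    num = den = 1.0
    for p in probs:
        num *= p
        den *= (1.0 - p)
    return num / (num + den)
```

Two mildly spammy tokens at 0.9 each already combine to about 0.99, which is why a handful of interesting tokens is enough to classify most comments decisively.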
Automatically. Which means I don't have to come back and tweak weights or add items to blacklists; it works it all out from context.
The framework is done; I need to write some database code now, load up some tables (like the GeoIP data), and then start training and testing it. If that goes well, I should have it in place early next week.
I have a ton (4 gigabytes) of known spam to train against, but I need to identify a similar amount of known good comments, and that alone is going to take me a day or two.
I looked at just using a service like Akismet. That, all by itself, would cost me more than all the other expenses for keeping the system running put together. Just filtering what's been filtered by the current edition of the spam filter would have cost upwards of $50,000.
A week or two of fiddly coding and training looks like it should pay for itself very quickly.
The annoying thing I've noticed recently is that the spambots (or, heaven forbid, some idiot doing this manually) have taken to copy-pasting text at random from within the blog post itself to use as comment text before dumping their garbage URL in the Web field.
The filter doesn't knock them down quite far enough to keep them from ever showing up.
Posted by: Will at Thursday, March 17 2011 02:12 AM (ZYwON)
Yeah. The advantage of a Bayesian solution is that (with training) it learns what rules work and what don't, and narrows in on just the ones that work. So any time I have a potentially neat idea for filtering spam, I don't have to spend long hours testing it and calculating optimal cutoff levels, I just chuck it into the mix and let the system train itself. Makes it much easier to keep up with their new tricks.
I'm going to add some more behavioural analysis (because spam generators don't behave like humans in the way they connect to the server, and that's useful data) and possibly add Markovian analysis to the text analyser. But both of those should be relatively simple, because I'll just throw them into the Bayesian pool again.
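One simple form the Markovian text analysis could take (this is my sketch of the idea, not a description of the Minx implementation) is emitting word-pair tokens, so the filter scores transitions between words rather than individual words alone, and they drop into the Bayesian pool like everything else:

```python
def bigram_tokens(text):
    # First-order Markov (word-pair) tokens: "cheap shoes" scores
    # differently from "cheap" and "shoes" seen separately.
    words = text.lower().split()
    return ["bi:%s_%s" % (a, b) for a, b in zip(words, words[1:])]
```

The performance concern mentioned below is real: bigrams roughly double the token count per comment and square the potential vocabulary in the token table.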
Posted by: Pixy Misa at Thursday, March 17 2011 03:23 AM (PiXy!)
Okay, just threw in a context-free behavioural analysis module. Only problem is I don't currently have any training data for that, so I'll have to patch the live server to collect the data.
Posted by: Pixy Misa at Thursday, March 17 2011 03:47 AM (PiXy!)
One of the biggest ways a spambot is different is that it's going to be a lot faster than a human. Transaction timestamps should be a huge clue.
Yes, I have a token based on a function of the posting rate.
I've also added improved link parsing, link count, link:text ratio, markup:text ratio, and language vs. location checks.
The only thing remaining is the Markovian analysis, which I'll leave for now because that could significantly impact performance.
So, time to build myself a test and training framework!
Posted by: Pixy Misa at Thursday, March 17 2011 11:07 AM (PiXy!)
Unrelated, what problem does SetPageHeight() in util.js solve? I ask because it drives me crazy on Wonderduck's site. With Safari and Chrome, it seems to run before all of the pictures are loaded, calculating a maximum height for the page that is well before the end of each post (presumably because he's not putting height and width attributes on the IMG tags). I have to use Firefox to see all those Rio pictures...
Posted by: J Greely at Thursday, March 17 2011 11:52 AM (fpXGN)
I'm going to either fix or remove SetPageHeight(). The system is set up to support a three-column layout with banner and footer, without forcing a fixed content ordering. As of 2008, the only way to make that work cross-browser was by manually recalculating the page height. Ghastly, and also buggy.
I'll be re-testing with the current browser range shortly - probably next week - and fixing some of the CSS oddities like that.
Posted by: Pixy Misa at Thursday, March 17 2011 03:08 PM (PiXy!)
Oh, is that what's happening? I've had that problem with Wonderduck's site for a long time now.
Yeah, sorry. It will happen on any page using the default 1.1 templates if you load up enough images, don't use size specifications (and frankly, who does?), and don't have a lengthy sidebar.
It's not supposed to do that, but it does. Not sure if it was a bug at the time it was deployed, but it happens across multiple browsers now, so it needs to get fixed.
Posted by: Pixy Misa at Thursday, March 17 2011 03:15 PM (PiXy!)
At least, I think it does. It's definitely the culprit on Wonderduck's blog, anyway.
It's a race condition, and if you wait for all the images to load and then refresh, it will show up fine.
Posted by: Pixy Misa at Thursday, March 17 2011 03:16 PM (PiXy!)
I made it a rule to supply a height attribute on all pictures that I post, and thus my blog is immune to the height problem. BTW, Firefox breaks too at certain points; Old Brickmuppet's travel posts have to be reloaded in Firefox.
Posted by: Pete Zaitcev at Thursday, March 17 2011 06:12 PM (9KseV)
I'm sorry The Pond is such a pain. If it's any consolation, friends, it does the same thing to me when I try to read my own blog.
Posted by: Wonderduck at Friday, March 18 2011 02:49 PM (W8Men)
Posted by: Pixy Misa at Friday, March 18 2011 03:48 PM (PiXy!)
I think one easy fix would be to update the upload code so that the suggested <img> tag always includes a height= attribute. The uploader knows the dimensions of the image.
Posted by: Pete Zaitcev at Friday, April 01 2011 05:18 AM (9KseV)
Pete's idea is an interesting one. In the file upload frame, when it generates the cut-and-paste code for using an image, it could include size parameters instead of just the filename.
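For the common case of PNG uploads, the dimensions can be read straight out of the file header without any imaging library; this is a sketch (function names are mine, and a real version would also need to handle JPEG and GIF):

```python
import struct

def png_dimensions(data):
    # PNG layout: 8-byte signature, 4-byte chunk length, "IHDR",
    # then width and height as big-endian 32-bit integers.
    if data[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    return struct.unpack(">II", data[16:24])

def img_tag(filename, data):
    # Cut-and-paste code with size attributes included, so the
    # browser can lay out the page before the image loads.
    w, h = png_dimensions(data)
    return '<img src="%s" width="%d" height="%d" />' % (filename, w, h)
```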
Place I work for finally gave up - we've got all of Turkey blocked from the server. Too many different attacks - hack attempts, spam attempts, take-down attempts, etc. We've got one site that does tours of Israel. They targeted that one a LOT. We got tired. Turkey is no longer welcome. Luckily, 99% of our sites are locally oriented (in the US) so except for a few possible tourists visiting Turkey and wanting to visit one of our sites, we aren't really blocking anyone who would have any legitimate interest.
Posted by: Kathy Kinsley at Wednesday, March 09 2011 09:10 AM (bBCeM)
I haven't had any trouble from Turkey, but on more than one occasion I've given serious thought to blocking all of Russia and Ukraine.
No, really. I was sitting here, reading my email, when there was a horrible crash from the other end of the living room. One of the glass doors of the entertainment unit had spontaneously disintegrated. The rubble is all over the floor, still ticking and popping.
The glass is - was - curved, so I suspect it's been under internal stress the entire time and just suddenly gave way. I have my air conditioner on, and it's in the line of the air stream, so perhaps the temperature differential added to that.
Somewhat unsettling, having furniture explode without warning like that.
Calling a LuaJIT function from Python: 363ns
Calling a Python function from LuaJIT: 447ns
Calling a LuaJIT function from Psyco: 253ns
Calling a Psyco function from LuaJIT: 730ns
Calling a Python function from Python: 177ns
Calling a Psyco function from Psyco: 3ns (!)
I also tested some sample code that calls a Lua function from Python and passes it a Python function as a parameter; that takes about 1.8µs in Python and 2.1µs in Psyco (jumping into and out of the JIT clearly has some overhead).
The worst case, unfortunately, is likely to be the most common one - calling back to Python/Psyco (specifically the Minx API) to get data for the Lua script. Lupa has some nice wrappers for using data structures rather than functions, so I'm going to see how they go.
That said, the worst case is 730 nanoseconds.
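Per-call figures like these come from micro-benchmarks: time a huge number of calls and divide. A minimal sketch of that technique using the standard library (this measures a plain Python-to-Python call; the cross-runtime cases are timed the same way, just with the callee living on the other side of the Lupa bridge):

```python
import timeit

def noop():
    # Empty callee: the measured time is essentially pure call overhead.
    return None

n = 1_000_000
seconds = timeit.timeit(noop, number=n)
print("%.0f ns per call" % (seconds / n * 1e9))
```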
The one hiccup is that creating a Lupa LuaRuntime instance leaks about 30kB, and crashes Python after 13,000 to 15,000 instances - even if I force garbage collection. I've posted that to the Lupa mailing list, and will follow up and see if I can help find the problem and fix it.
That can be solved using a worker pool on the web server, with worker processes being retired after (say) 100 requests. The overhead on the server would be quite small, it would make for much better scalability, and would keep potentially buggy libraries or library use under control. (A careless PIL call can use a huge amount of memory.)
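The Python standard library supports exactly that pattern out of the box; this sketch uses `multiprocessing.Pool` with `maxtasksperchild` (the request handler here is a placeholder, not the Minx code):

```python
from multiprocessing import Pool

def handle_request(req):
    # Stand-in for real request handling, which might leak memory
    # through a buggy library (like the LuaRuntime issue above).
    return req * 2

if __name__ == "__main__":
    # maxtasksperchild retires each worker after 100 tasks and forks
    # a fresh one, capping whatever any single process can leak.
    with Pool(processes=4, maxtasksperchild=100) as pool:
        print(pool.map(handle_request, range(10)))
```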
Update: The author has fixed the problem and released a new version of Lupa (0.19) - on the weekend. It now works flawlessly.