Brickmuppet, they're just now coming out of a long, dry, summer. That area is semi-arid anyway. Rain is a good thing; it makes things grow, and it fills the rivers (and the reservoirs).
Yeah, rain is good. (Hell, rain is good where I live, even though water is not scarce here.)
Posted by: Steven Den Beste at Saturday, March 19 2011 06:33 AM (+rSRq)
Okay, looks like I'm going to have to either write some more music or wallow in guilt. Sony Creative Software is going download only, so they're clearing out their stock of loops on physical media at 75% off.
I thought that the delivery charges to Australia would be prohibitive, but I guess CDs and DVDs in cardboard sleeves don't cost much to ship, because it's a flat $30 for FedEx Priority shipping.
So I went through the sale catalog, ticked just about everything I had ever wanted to buy but couldn't quite justify previously, and ordered the whole lot. Whee!
Should land here late next week, which is perfect.
Posted by: Pixy Misa at 04:31 AM | No Comments | Add Comment | Trackbacks (Suck)
Post contains 113 words, total size 1 kb.
Wednesday, March 16
All Grist For The Bayesian Mill
I'm busy working on the new (and much needed) spam filter for mu.nu and mee.nu.
The old filter was based on heuristics and blacklists and a couple of security-by-obscurity tricks (a honeypot, a secret question).
The new filter is purely Bayesian.
It's more than a simple text analyser, though. Some of the things I'm doing:
Contextual analysis: A comment about designer shoes might be fine on a fashion blog, but on a politics blog it's almost certainly spam.
Language analysis: A comment in Chinese may or may not be spam, but a comment in Chinese replying to a post in French almost certainly is.
Geographic analysis: Are you in a spam hotspot? Are you in the same part of the world as the blogger?
Content analysis: Is the comment full of crappy Microsoft markup?
Metadata analysis: You can put a name, URL, and email address on your comments. The system treats those specifically as names, URLs, and email addresses, not just more comment text.
Trend analysis: How many comments have you posted in the last ten minutes? How many total? How about under that name vs. that IP? What's the average spam score for comments from that IP?
The problem is, some of these produce tokens that I can add to my big spam token table, while others produce numbers. So I need to work out some heuristics and weights by which to modify the Bayesian score with
SMACK
The key understanding here is that Bayesian analysis makes that problem go away. You don't feed the Bayesian score into a calculation along with a bunch of numbers generated by other heuristics. That just makes more work and reduces the reliability of the core mechanism.
What you do is you simplify the numbers in some way (rounding, logarithms, square roots), turn them into tokens, and throw them into the pool. You want to simplify the numbers so that there's a good chance of a match; for example, a five-digit ratio of content:markup isn't going to get many hits, but one or two digits will.
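To make that concrete, here's a minimal sketch in Python of the bucketing idea - the feature names and bucket sizes are illustrative, not the actual Minx filter code:

    import math

    def number_tokens(markup_chars, text_chars, posts_last_10_min):
        # Turn raw numbers into coarse, repeatable tokens.
        # (Feature names and buckets are invented for this sketch.)
        tokens = []

        # Markup:text ratio, rounded to one decimal place so similar
        # comments produce the same token.
        if text_chars:
            tokens.append("markup_ratio:%.1f" % (float(markup_chars) / text_chars))
        else:
            tokens.append("markup_ratio:all-markup")

        # Posting rate, bucketed on a log scale so 17 and 23 posts in
        # ten minutes land in the same bucket.
        if posts_last_10_min > 0:
            tokens.append("post_rate:%d" % int(math.log(posts_last_10_min, 2)))
        else:
            tokens.append("post_rate:0")

        return tokens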
So what we do is we parse, compute, and calculate all these different tokens for a given post, and then we look for the most interesting ones in our database - the ones that, based on our training data, vary the most from the neutral point.
Then we just take the scores for each of those interesting elements, positive or negative, and throw them at Bayes' formula.
And out pops the probability that the comment is spam. (Not just an arbitrary score, but an actual, very realistic, probability.)
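The combining step is just the textbook naive-Bayes formula over the N most interesting tokens; again a sketch, not the production code:

    def spam_probability(token_probs):
        # token_probs: trained spam probabilities of the N most "interesting"
        # tokens, i.e. the ones furthest from the neutral 0.5.
        prod_spam = 1.0
        prod_ham = 1.0
        for p in token_probs:
            prod_spam *= p
            prod_ham *= 1.0 - p
        return prod_spam / (prod_spam + prod_ham)

    # spam_probability([0.9, 0.8, 0.3]) -> about 0.94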
And then, based on that, we go and update the scores in the database for every token we pulled from the comment. So if it works out that a comment is spam using one set of criteria, it can train itself to recognise spam using the other identifiable criteria in the comment - based on how distinct those criteria are from non-spam.
Automatically. Which means I don't have to come back and tweak weights or add items to blacklists; it works it all out from context.
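The retraining step amounts to bumping per-token spam or ham counts after each confident classification; a rough sketch with invented table and column names:

    def train(tokens, is_spam, db):
        # Fold every token from a classified comment back into the counts.
        # The probability for a token is then spam_count / (spam_count + ham_count),
        # with smoothing for rarely-seen tokens.
        column = "spam_count" if is_spam else "ham_count"
        for token in tokens:
            db.execute(
                "UPDATE spam_tokens SET %s = %s + 1 WHERE token = ?" % (column, column),
                (token,))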
The framework is done; I need to write some database code now, load up some tables (like the GeoIP data), and then start training and testing it. If that goes well, I should have it in place early next week.
I have a ton (4 gigabytes) of known spam to train against, but I need to identify a similar amount of known good comments, and that alone is going to take me a day or two.
I looked at just using a service like Akismet. That, all by itself, would cost me more than all the other expenses for keeping the system running put together. Just filtering what's been filtered by the current edition of the spam filter would have cost upwards of $50,000.
A week or two of fiddly coding and training looks like it should pay for itself very quickly.
1
The annoying thing I've noticed recently is that the spambots (or, heaven forbid, some idiot doing this manually) have taken to copy-pasting stuff randomly from within the blog/post to use as text before dumping their garbage URL in the Web field.
The filter doesn't knock them down quite far enough to stop them showing up.
Posted by: Will at Thursday, March 17 2011 02:12 AM (ZYwON)
2
Yeah. The advantage of a Bayesian solution is that (with training) it learns which rules work and which don't, and narrows in on just the ones that work. So any time I have a potentially neat idea for filtering spam, I don't have to spend long hours testing it and calculating optimal cutoff levels; I just chuck it into the mix and let the system train itself. Makes it much easier to keep up with their new tricks.
I'm going to add some more behavioural analysis (because spam generators don't behave like humans in the way they connect to the server, and that's useful data) and possibly add Markovian analysis to the text analyser. But both of those should be relatively simple, because I'll just throw them into the Bayesian pool again.
Posted by: Pixy Misa at Thursday, March 17 2011 03:23 AM (PiXy!)
3
Okay, just threw in a context-free behavioural analysis module. Only problem is I don't currently have any training data for that, so I'll have to patch the live server to collect the data.
Posted by: Pixy Misa at Thursday, March 17 2011 03:47 AM (PiXy!)
4
One of the biggest ways a spambot is different is that it's going to be a lot faster than a human. Transaction timestamps should be a huge clue.
Posted by: Steven Den Beste at Thursday, March 17 2011 05:11 AM (+rSRq)
5
Yes, I have a token based on a function of the posting rate.
I've also added improved link parsing, link count, link:text ratio, markup:text ratio, and language vs. location checks.
The only thing remaining is the Markovian analysis, which I'll leave for now because that could significantly impact performance.
So, time to build myself a test and training framework!
Posted by: Pixy Misa at Thursday, March 17 2011 11:07 AM (PiXy!)
6
Unrelated, what problem does SetPageHeight() in util.js solve? I ask because it drives me crazy on Wonderduck's site. With Safari and Chrome, it seems to run before all of the pictures are loaded, calculating a maximum height for the page that falls well short of the end of each post (presumably because he's not putting height and width attributes on the IMG tags). I have to use Firefox to see all those Rio pictures...
-j
Posted by: J Greely at Thursday, March 17 2011 11:52 AM (fpXGN)
7
I'm going to either fix or remove SetPageHeight(). The system is set up to support a three-column layout with banner and footer, without forcing a fixed content ordering. As of 2008, the only way to make that work cross-browser was by manually recalculating the page height. Ghastly, and also buggy.
I'll be re-testing with the current browser range shortly - probably next week - and fixing some of the CSS oddities like that.
Posted by: Pixy Misa at Thursday, March 17 2011 03:08 PM (PiXy!)
8
Oh, is that what's happening? I've had that problem with Wonderduck's site for a long time now.
Posted by: Steven Den Beste at Thursday, March 17 2011 03:09 PM (+rSRq)
9
Yeah, sorry. It will happen on any page using the default 1.1 templates if you load up enough images - if you don't use size specifications (and frankly, who does?) and you don't have a lengthy sidebar.
It's not supposed to do that, but it does. Not sure if it was a bug at the time it was deployed, but it happens across multiple browsers now, so it needs to get fixed.
Posted by: Pixy Misa at Thursday, March 17 2011 03:15 PM (PiXy!)
10
At least, I think it does. It's definitely the culprit on Wonderduck's blog, anyway.
It's a race condition, and if you wait for all the images to load and then refresh, it will show up fine.
Posted by: Pixy Misa at Thursday, March 17 2011 03:16 PM (PiXy!)
11
I made it a rule to supply the height attribute on all pictures that I post, and thus my blog is immune to the height problem. BTW, Firefox breaks too at certain points. Old Brickmuppet's travel posts have to be reloaded in Firefox.
Posted by: Pete Zaitcev at Thursday, March 17 2011 06:12 PM (9KseV)
12
I'm sorry The Pond is such a pain. If it's any consolation, friends, it does the same thing to me when I try to read my own blog.
Posted by: Wonderduck at Friday, March 18 2011 02:49 PM (W8Men)
13
The Pond is awesome. This is entirely the fault of a conflict between my CSS and Javascript and recent browsers.
Posted by: Pixy Misa at Friday, March 18 2011 03:48 PM (PiXy!)
14
I think one easy fix would be to update the upload code so that the suggested <img> tag always includes the height= attribute. The uploader knows the dimensions of the image.
Posted by: Pete Zaitcev at Friday, April 01 2011 05:18 AM (9KseV)
15
Pete's idea is an interesting one. In the file upload frame, when it generates the cut-and-paste code for using an image, it could include size parameters instead of just the filename.
Posted by: Steven Den Beste at Friday, April 01 2011 10:26 AM (+rSRq)
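For what it's worth, a sketch of what that might look like server-side, assuming PIL is on hand (the helper name here is made up for illustration):

    from PIL import Image

    def img_snippet(path, url):
        # Illustrative helper: read the real dimensions and bake them into
        # the suggested cut-and-paste markup.
        width, height = Image.open(path).size
        return '<img src="%s" width="%d" height="%d" alt="" />' % (url, width, height)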
Posted by: Old Grouch at Wednesday, March 09 2011 06:24 AM (WJKI0)
3
So it's basically just a spambot that went crazy.
Posted by: Pete Zaitcev at Wednesday, March 09 2011 06:50 AM (9KseV)
4
Don't think so. It's not trying to post anything, it's just opening huge numbers of HTTP connections from lots of different IP addresses, almost all of them in Turkey. And always to the same URL.
Posted by: Pixy Misa at Wednesday, March 09 2011 06:56 AM (PiXy!)
5
Place I work for finally gave up - we've got all of Turkey blocked from the server. Too many different attacks - hack attempts, spam attempts, take-down attempts, etc. We've got one site that does tours of Israel. They targeted that one a LOT. We got tired. Turkey is no longer welcome. Luckily, 99% of our sites are locally oriented (in the US) so except for a few possible tourists visiting Turkey and wanting to visit one of our sites, we aren't really blocking anyone who would have any legitimate interest.
Posted by: Kathy Kinsley at Wednesday, March 09 2011 09:10 AM (bBCeM)
6
I haven't had any trouble from Turkey, but on more than one occasion I've given serious thought to blocking all of Russia and Ukraine.
Posted by: Steven Den Beste at Wednesday, March 09 2011 04:08 PM (+rSRq)
7
Nothing good ever came out of Russia except for Red Hat programmers. ;p
Posted by: Avatar_exADV at Thursday, March 10 2011 06:48 PM (mRjOr)
Posted by: Wonderduck at Sunday, March 13 2011 05:35 AM (W8Men)
10
I use Kaspersky for anti-virus, another good thing out of Russia. But we really have had more trouble from Turkey than all the other countries combined.
Posted by: Kathy Kinsley at Monday, March 14 2011 08:03 AM (3oJmk)
Posted by: Steven Den Beste at Tuesday, March 08 2011 03:03 AM (+rSRq)
2
Eggs and minced meat (often mixed together - I think this is a standard insectivore food substitute), and squished fruit and cooked grains to add a bit of variety. And, when available, ants.
Posted by: Pixy Misa at Tuesday, March 08 2011 08:33 AM (PiXy!)
No, really. I was sitting here, reading my email, when there was a horrible crash from the other end of the living room. One of the glass doors of the entertainment unit had spontaneously disintegrated. The rubble is all over the floor, still ticking and popping.
The glass is - was - curved, so I suspect it's been under internal stress the entire time and just suddenly gave way. I have my air conditioner on, and it's in the line of the air stream, so perhaps the temperature differential added to that.
Somewhat unsettling, having furniture explode without warning like that.
1
My mother gave us a set of stressed-glass plates for our wedding. Those things are like bombs.
Posted by: Pete Zaitcev at Tuesday, March 08 2011 01:09 AM (9KseV)
2
Did the penguin on the top of your television set survive?
Posted by: dkallen99 at Tuesday, March 08 2011 01:14 AM (1PFDl)
3
Pete, I hope you didn't have them stored in close proximity to each other, or in one of those fancy glass-fronted display cabinets. The potential for chain-reaction detonation sounds alarming.
Posted by: Mitch H. at Tuesday, March 08 2011 07:20 AM (jwKxK)
4
I have a Lenore playset on top of the TV, and since they're all dead already they came through it fine.
Posted by: Pixy Misa at Wednesday, March 09 2011 10:31 AM (PiXy!)
Calling a LuaJIT function from Python: 363ns
Calling a Python function from LuaJIT: 447ns
Calling a LuaJIT function from Psyco: 253ns
Calling a Psyco function from LuaJIT: 730ns
Calling a Python function from Python: 177ns
Calling a Psyco function from Psyco: 3ns (!)
I also tested some sample code that calls a Lua function from Python and passes it a Python function as a parameter; that takes about 1.8µs in Python and 2.1µs in Psyco (jumping into and out of the JIT clearly has some overhead).
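For reference, this is the sort of round trip being timed, using Lupa's standard API (a toy sketch, not the actual benchmark harness):

    from lupa import LuaRuntime

    lua = LuaRuntime()

    # Lua function called from Python.
    double = lua.eval('function(x) return x * 2 end')
    print(double(21))                        # 42

    # Python callable passed into Lua and called from there.
    apply_twice = lua.eval('function(f, x) return f(f(x)) end')
    print(apply_twice(lambda x: x + 1, 40))  # 42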
The worst case, unfortunately, is likely to be the most common one - calling back to Python/Psyco (specifically the Minx API) to get data for the Lua script. Lupa has some nice wrappers for using data structures rather than functions, so I'm going to see how they go.
That said, the worst case is 730 nanoseconds.
The one hiccup is that creating a Lupa LuaRuntime instance leaks about 30kB, and crashes Python after 13,000 to 15,000 instances - even if I force garbage collection. I've posted that to the Lupa mailing list, and will follow up and see if I can help find the problem and fix it.
That can be solved using a worker pool on the web server, with worker processes being retired after (say) 100 requests. The overhead on the server would be quite small, it would make for much better scalability, and would keep potentially buggy libraries or library use under control. (A careless PIL call can use a huge amount of memory.)
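That's not how the frontend is actually wired up, but as an illustration of the retire-after-N-requests pattern, Python's multiprocessing module supports it out of the box:

    from multiprocessing import Pool

    def handle_request(path):
        # Stand-in for rendering a page (templates, Lua, PIL, whatever).
        return "rendered %s" % path

    if __name__ == '__main__':
        # maxtasksperchild retires each worker after 100 tasks and forks a
        # fresh one, so a slow leak like the LuaRuntime one can never build up.
        pool = Pool(processes=4, maxtasksperchild=100)
        print(pool.map(handle_request, ['/', '/about', '/archives']))
        pool.close()
        pool.join()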
Update: The author has fixed the problem and released a new version of Lupa (0.19) - on the weekend. It now works flawlessly.
Posted by: Pixy Misa at 10:04 PM | No Comments | Add Comment | Trackbacks (Suck)
Post contains 285 words, total size 2 kb.
Friday, March 04
Extra Crunchy
I just realised that with Lupa and the new internal Minx API, I can compile templates down to machine code.
Posted by: Pixy Misa at 06:16 PM | No Comments | Add Comment | Trackbacks (Suck)
Post contains 22 words, total size 1 kb.