
Sunday, May 24


So I Was Wondering

Note to self: Implement auto-save, dammit.

I already knew that LMDB supported multiple values per key.  Reading the docs last week, I noticed that values within a key were stored in sorted order.  This evening, I was wondering if it were possible to seek to a particular value within a key.

It is.

This is neat.  It means you can use LMDB as an engine to implement a two-dimensional database like Cassandra, or a data structure server like Redis, with the elements of lists, sets, and hashes individually addressable.

Plus it has the advantage that, unlike Redis, it can live on disk and use B-tree indexes rather than just a hash table.  (Though of course Redis has the advantage of predictable performance - living in memory and accessed directly, it's very consistent.)
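You can model the dupsort seek semantics in a few lines of pure Python.  In the py-lmdb binding the actual call is Cursor.set_range_dup(key, value), which positions the cursor at the first value within the key that is >= the value you give it; everything else below is my own toy illustration, not LMDB code:

```python
from bisect import bisect_left

class DupSortModel:
    """Toy model of an LMDB dupsort database: each key holds
    a sorted list of distinct values."""
    def __init__(self):
        self.data = {}

    def put(self, key, value):
        values = self.data.setdefault(key, [])
        i = bisect_left(values, value)
        if i == len(values) or values[i] != value:  # duplicates are unique
            values.insert(i, value)

    def set_range_dup(self, key, value):
        """Seek to the first value >= `value` within `key`,
        mimicking Cursor.set_range_dup in py-lmdb."""
        values = self.data.get(key, [])
        i = bisect_left(values, value)
        return values[i] if i < len(values) else None

db = DupSortModel()
for v in [b"cherry", b"apple", b"banana"]:
    db.put(b"fruit", v)

db.set_range_dup(b"fruit", b"b")   # -> b"banana"
```

Because the values are kept sorted, the seek is a binary search rather than a scan - which is exactly what makes the "elements individually addressable" trick cheap.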

The other big advantage of LMDB (for me, anyway) is that it natively supports multiple processes - not just multiple threads, but independent processes - handling transactions and locking automatically.  I love Python, but it has what is known as the Global Interpreter Lock - while you can have many threads, only one of them can be executing Python code at any time.  The other threads can be handling I/O, or calling C libraries that don't access shared data, but can't actually be running your code at the same time.

That puts a limit on the performance of any single Python application, and breaking out into multiple processes means you need to find a way to share data between those processes, which is a lot more fiddly than it is with threads.

LMDB don't care.  Thread, process, all the same, just works.


It does have limitations - it's a single-writer/multiple-reader design, so it will only scale so far unless you implement some sort of sharding scheme on top of it.  But I've clocked it at 100,000 transactions per second, and a million bulk writes per second, so it's not bad at all.

Admittedly that was with the write safety settings disabled, so a server crash could have corrupted my database.  But my main use for LMDB is as a smart distributed data structure cache, so if one node dies it can just be flushed and restarted.  In practical use, as a robust database, the numbers are somewhat lower (though with a smart RAID controller you should still be able to do pretty well).
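The usual way past a single-writer limit is to shard: run several independent LMDB environments and route each key to one of them, so writes to different shards can proceed in parallel.  The routing is trivial - this sketch is my own illustration, using a stable hash rather than Python's per-process-randomised hash():

```python
import hashlib

def shard_for(key: bytes, num_shards: int) -> int:
    """Route a key to a shard with a stable hash, so every process
    agrees on the mapping.  (Python's built-in hash() is randomised
    per process, so it can't be used for this.)"""
    digest = hashlib.sha1(key).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

# Each shard would be its own LMDB environment with its own writer.
shard_for(b"user:1234", 8)
```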

It also supports a rather nice hot backup facility.  The backup format is either a working LMDB database, ready to go without needing a restore step, or a cdbmake-format dump (which is plain text if you're using JSON for keys and values).  It can back up around 1GB per second - if you have the I/O bandwidth - and runs only about 20% slower if the database is in heavy use at the time.


Posted by: Pixy Misa at 01:08 AM | Comments (1) | Add Comment | Trackbacks (Suck)
Post contains 472 words, total size 3 kb.

Friday, May 22


A Few Of My Favourite Things

Once I got past the segfaults, anyway.  You should be using these things.

MongoDB 3.0 (not earlier versions, though)
Elasticsearch (though their approach to security is remarkably ass-backwards)
LZ4 (and its friend, LZ4_HC)
LMDB and its magical set_range_dup


Ai Shinozaki

Posted by: Pixy Misa at 05:10 PM | Comments (10) | Add Comment | Trackbacks (Suck)
Post contains 65 words, total size 2 kb.

Wednesday, May 13


Some Pig

So, I'm tinkering with what will become Minx 1.2, and testing various stuff, and I'm pretty happy with the performance.

Then I run the numbers, and realise that I'm flooding a 10GbE connection with HTTP requests using a $15 cloud server.

I think we can count that part of the problem space as solved.


Posted by: Pixy Misa at 05:30 PM | Comments (4) | Add Comment | Trackbacks (Suck)
Post contains 56 words, total size 1 kb.


Hard Things

There are only two hard things in Computer Science: cache invalidation, naming things, and off-by-one errors.

Posted by: Pixy Misa at 11:27 AM | Comments (2) | Add Comment | Trackbacks (Suck)
Post contains 18 words, total size 1 kb.

Tuesday, May 12


That'll Do

I was getting about 1000 random record reads per second.
I needed to achieve 10,000 reads per second to make things work.
I wanted to reach 100,000 reads per second to make things run nicely.
I'm currently at 1,000,000.*

That'll do.


* Best test run so far was ~1.6 million records per second, with some special-case optimisations.**  Without optimisations, around 300k. Per thread.

** Since you asked, the problem was with unpacking large cached records into native objects.  A common case in templates is that you only want to access one or two fields in a record - perhaps just the user's name - but unless the record is already a native object you need to load the external representation and parse it to find the field you need.  The solution was to keep an immutable version of the object in the process, sign it with SHA-256, and sign the matching cache entry.  Then, when we need to access the record, we can read the binary data from the cache, compare the signatures, and if they match, we're safe to continue using the existing native structure.  If they don't match, we take the payload, decrypt it (if encryption is enabled), check that the decrypted payload matches the signature (if not, something is badly wrong), uncompress the payload (if compression is enabled), parse it (MsgPack or JSON), instantiate a new object, freeze it, and put it back into the native object cache.  This can take as long as 20 microseconds.
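The scheme in that footnote boils down to: keep a parsed copy of each record alongside a digest of its serialised form, and only re-parse when the digest of the cached bytes changes.  A stripped-down sketch - the class and method names are mine, and the real thing adds optional encryption and compression and uses MsgPack rather than plain JSON:

```python
import hashlib
import json

class SignedObjectCache:
    """Keep a parsed copy of each record, keyed by the SHA-256 of
    its serialised form; re-parse only when the bytes change."""
    def __init__(self):
        self._native = {}   # record id -> (digest, parsed object)

    def get(self, record_id, payload: bytes):
        digest = hashlib.sha256(payload).digest()
        cached = self._native.get(record_id)
        if cached and cached[0] == digest:
            return cached[1]            # bytes unchanged: reuse parsed copy
        obj = json.loads(payload)       # slow path: parse and remember
        self._native[record_id] = (digest, obj)
        return obj

cache = SignedObjectCache()
rec = cache.get("user:1", b'{"name": "Pixy"}')
same = cache.get("user:1", b'{"name": "Pixy"}')   # digest matches: no re-parse
```

The fast path is a hash of the payload plus a dictionary lookup, which is why a hit can come back in well under a microsecond while a full miss costs tens of microseconds.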

Posted by: Pixy Misa at 05:53 PM | Comments (1) | Add Comment | Trackbacks (Suck)
Post contains 253 words, total size 2 kb.

Wednesday, May 06


Our Team Of Highly Trained Ninja Moths Hard At Work

Yesterday this sweater had full-length sleeves.


Posted by: Pixy Misa at 02:52 PM | Comments (2) | Add Comment | Trackbacks (Suck)
Post contains 16 words, total size 1 kb.


After Texas Shooting: If Free Speech Is Provocative, Should There Be Limits?


Posted by: Pixy Misa at 02:44 PM | Comments (1) | Add Comment | Trackbacks (Suck)
Post contains 13 words, total size 1 kb.

Friday, May 01


Je Suis Protein World


Posted by: Pixy Misa at 12:55 PM | Comments (5) | Add Comment | Trackbacks (Suck)
Post contains 4 words, total size 1 kb.

Sunday, April 26


For Those Who Don't Care So Much About Benchmarking High-Level Languages Down To The Nanosecond...

More slightly creepy music videos.

Posted by: Pixy Misa at 07:33 PM | Comments (1) | Add Comment | Trackbacks (Suck)
Post contains 20 words, total size 1 kb.


Needs For Speeds

Testing various libraries and patterns on Python 2.7.9 and PyPy 2.5.1.

Test         Python   PyPy    Gain
Loop          0.27    0.017   1488%
Strlist       0.217   0.056    288%
Scan          0.293   0.003   9667%
Lambda        0.093   0.002   4550%
Pystache      0.213   0.047    353%
Markdown      0.05    0.082    -39%
ToJSON        0.03    0.028      7%
FromJSON      0.047   0.028     68%
ToMsgPack     0.023   0.012     92%
FromMsgPack   0.02    0.013     54%
ToSnappy      0.027   0.032    -16%
FromSnappy    0.027   0.024     13%
ToBunch       0.18    0.016   1025%
FromBunch     0.187   0.016   1069%
CacheSet      0.067   0.046     46%
CacheGet      0.037   0.069    -46%
CacheMiss     0.017   0.015     13%
CacheFast     0.09    0.067     34%
CachePack     0.527   0.162    225%
PixyMarks    13.16   40.60     209%

  • The benchmark script runs all the tests once to warm things up, then runs them three times and takes the mean.  The PixyMark score is simply the inverse of the geometric mean of the individual test times.  The warm-up matters for PyPy, because it takes some time for the JIT compiler to engage.

    Tests were run on a virtual machine on what I believe to be a Xeon E3 1230, though it might be a 1225 v2 or v3.

  • The Python Markdown library is very slow. The best alternative appears to be Hoep, which is a wrapper for the Hoedown library, which is a fork of the Sundown library, which is a fork of the unfortunately named Upskirt library.   (The author of which is not a native English speaker, and probably had not previously run into the SJW crowd.)

    Hoep is slower for some reason in PyPy than CPython, but still plenty fast.

  • cPickle is an order of magnitude slower than a good JSON or MsgPack codec.

  • The built-in JSON module in CPython is the slowest Python JSON codec. The built-in JSON module in PyPy appears to be the fastest.  For CPython I used uJSON, which seems to be the best option if you're not using PyPy.

  • CPython is very good at appending to strings. PyPy, IronPython (Python for .Net) and Jython (Python for Java) are uniformly terrible at this. This is due to a clever memory allocation optimisation that is tied closely to CPython's garbage collection mechanism, and isn't available in the other implementations.

    I removed the test from my benchmark because for large strings it's so slow that it overwhelms everything else.  Instead, append to a list and join it when you're done, or something along those lines.

  • I generally see about a 6x speedup from PyPy.  In these benchmarks I've been focusing on getting the best possible speed for various functions, using C libraries wherever possible.  A C library called from Python runs at exactly the same speed as a C library called from PyPy, so this has inherently reduced the relative benefits of PyPy.  PyPy is still about 3x faster, though; in other words, migrating to PyPy effectively turns a five-year-old mid-range CPU into 8GHz next-gen unobtainium.  

  • That's if you're very careful about selecting your libraries, though.  There's an alternate Snappy compression library available.  It's about the same speed under CPython, but 30x slower under PyPy due to inefficiencies in PyPy's ctypes binding.

  • uWSGI is pretty neat.  The cache tests are run using uWSGI's cache2 module; it's the fastest caching mechanism I've seen for Python so far.  Faster than the native caching decorators I've tested - and it's shared across multiple processes.  (It can also be shared across multiple servers, but that is certain to be slower, unless you have some seriously fancy networking hardware.)

    One note, though: The uWSGI cache2 Python API is not binary-safe.  You need to JSON-encode or Base64-encode your values, or something along those lines.

  • The Bleach package - a handy HTML sanitiser - is so slow that it's useless for web output - you have to sanitise on input, which means that you either lose the original text or have to store both.  Unless, that is, you have a caching mechanism with a sub-microsecond latency.

  • The Bunch package on the other hand - which lets you use object notation on Python dictionaries, so you can say customer.address rather than customer['address'] - is really fast.  I've been using it a lot recently and knew it was fast, but 1.6us to wrap a 30-element dictionary under PyPy is a pretty solid result.

  • As an aside, if you can retrieve, uncompress, unpack, and wrap a record with 30 fields in 8us, it's worth thinking about caching database records.  Except then you have to worry about cache invalidation.  Except - if you're using MongoDB, you can tail the oplog to automatically invalidate cached records.  And if you're using uWSGI, you can trivially fork that off as a worker process.

    Which means that if you have, say, a blogging platform with a template engine that frequently needs to look up related records (like the author or category for a post), this becomes easy, fast, and almost perfectly consistent.
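For the curious, the PixyMark-style aggregation described in the first bullet - the inverse of the geometric mean of the per-test times - is only a few lines.  This is my own reconstruction from that description, not the actual benchmark script:

```python
import math

def pixymark(times):
    """Inverse of the geometric mean of per-test times:
    halving every time doubles the score."""
    log_mean = sum(math.log(t) for t in times) / len(times)
    return 1.0 / math.exp(log_mean)

pixymark([0.25, 4.0])   # geometric mean is 1.0, so the score is 1.0
```

The geometric mean is the right choice here because it weights proportional improvements equally - a 2x speedup on a fast test counts the same as a 2x speedup on a slow one.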

Posted by: Pixy Misa at 01:28 PM | No Comments | Add Comment | Trackbacks (Suck)
Post contains 1403 words, total size 15 kb.
