This accidentally fell out of her pocket when I bumped into her. Took me four goes.

Sunday, May 31

Cool

Sony Gets Steamed

Sony Creative Software are having a little sale on their loop libraries this weekend.

And when I say "little sale", I mean "buy one, get three free".

I spent a bit.  More than a bit, really.  But I cleared out my entire wishlist.

/animeclips/SonySteam.gif

I hadn't bought any loop libraries since 2011, when Sony moved to electronic delivery and sold off their old stock of CD-ROMs at a 75% discount, so now I'm all caught up.

Update: Oops.  Horncraft for R&B is the subtitle for Crimson, Blue, and Fabulous, which I already had.  So I only saved $1108.50 rather than $1148.50.

Posted by: Pixy Misa at 04:53 PM | Comments (5) | Add Comment | Trackbacks (Suck)
Post contains 103 words, total size 1 kb.

Wednesday, May 27

Geek

Unicode

Glitchr explains Unicode.

Ben Frederickson explains Unicode.

Ai Shinozaki explains Unicode.

/images/AiUnicodeWat.jpg?size=720x&q=95


Posted by: Pixy Misa at 09:47 AM | Comments (6) | Add Comment | Trackbacks (Suck)
Post contains 12 words, total size 1 kb.

Monday, May 25

Cool

The Modders Are Restless

Cities: Skylines, with a little tweaking.


Also, this, with very little tweaking:

/images/CountryTown.jpg

Posted by: Pixy Misa at 08:36 PM | No Comments | Add Comment | Trackbacks (Suck)
Post contains 15 words, total size 1 kb.

Sunday, May 24

Geek

So I Was Wondering

Note to self: Implement auto-save, dammit.

I already knew that LMDB supported multiple values per key.  Reading the docs last week, I noticed that values within a key were stored in sorted order.  This evening, I was wondering if it were possible to seek to a particular value within a key.

It is.
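
A rough sketch with the py-lmdb binding (the sub-database name and values here are made up): open a named database with dupsort=True, and the cursor's set_range_dup will jump straight to the first value greater than or equal to the one you ask for, within a single key.

    import lmdb

    env = lmdb.open('/tmp/demo-lmdb', max_dbs=2, map_size=2**30)
    scores = env.open_db(b'scores', dupsort=True)    # multiple values per key, kept sorted

    with env.begin(write=True, db=scores) as txn:
        for v in (b'0003', b'0010', b'0042', b'0099'):
            txn.put(b'pixy', v)                      # four values under one key

    with env.begin(db=scores) as txn:
        cur = txn.cursor()
        if cur.set_range_dup(b'pixy', b'0040'):      # seek within the key, not just to it
            print(cur.value())                       # b'0042'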

This is neat.  It means you can use LMDB as an engine to implement a two-dimensional database like Cassandra, or a data structure server like Redis, with the elements of lists, sets, and hashes individually addressable.

Plus, unlike Redis, it can live on disk and use B-tree indexes rather than just a hash table.  (Though of course Redis has the advantage of predictable performance - living in memory and accessed directly, it's very consistent.)

The other big advantage of LMDB (for me, anyway) is that it natively supports multiple processes - not just multiple threads, but independent processes - handling transactions and locking automatically.  I love Python, but it has what is known as the Global Interpreter Lock - while you can have many threads, only one of them can be executing Python code at any time.  The other threads can be handling I/O, or calling C libraries that don't access shared data, but can't actually be running your code at the same time.

That puts a limit on the performance of any single Python application, and breaking out into multiple processes means you need to find a way to share data between those processes, which is a lot more fiddly than it is with threads.

LMDB don't care.  Thread, process, all the same, just works.
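
For example (a minimal sketch; the path and key names are invented), several completely separate processes can hammer the same environment and LMDB sorts out the locking itself:

    import lmdb
    from multiprocessing import Process

    PATH = '/tmp/demo-lmdb-mp'

    def worker(n):
        env = lmdb.open(PATH, map_size=2**30)        # each process opens its own handle
        with env.begin(write=True) as txn:           # write transactions serialise automatically
            txn.put(('worker-%d' % n).encode(), b'done')
        env.close()

    if __name__ == '__main__':
        lmdb.open(PATH, map_size=2**30).close()      # make sure the environment exists first
        procs = [Process(target=worker, args=(i,)) for i in range(4)]
        for p in procs: p.start()
        for p in procs: p.join()

        with lmdb.open(PATH, map_size=2**30).begin() as txn:
            print(sorted(k for k, v in txn.cursor()))   # one key per worker process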

Neat.

It does have limitations - it's a single-writer/multiple-reader design, so it will only scale so far unless you implement some sort of sharding scheme on top of it.  But I've clocked it at 100,000 transactions per second, and a million bulk writes per second, so it's not bad at all.

Admittedly that was with the write safety settings disabled, so a server crash could have corrupted my database.  But my main use for LMDB is as a smart distributed data structure cache, so if one node dies it can just be flushed and restarted.  In practical use, as a robust database, the numbers are somewhat lower (though with a smart RAID controller you should still be able to do pretty well).
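
In py-lmdb terms, the difference between the two modes is just a couple of flags at open time (a sketch; the paths are made up):

    import lmdb

    # Throwaway cache node: big numbers, but a crash can lose or corrupt recent writes.
    fast_env = lmdb.open('/tmp/demo-cache', map_size=2**30,
                         sync=False,          # don't fsync data pages on commit
                         metasync=False,      # don't fsync the meta page either
                         writemap=True,       # write through a writable mmap
                         map_async=True)      # let the OS flush whenever it likes

    # Robust database: the defaults, a full fsync on every commit.
    safe_env = lmdb.open('/tmp/demo-durable', map_size=2**30)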

It also supports a rather nice hot backup facility.  The backup format is either a working LMDB database, ready to go without a restore step, or a cdbmake-format backup (which is plain text if you're using JSON for keys and values).  It can back up around 1GB per second - if you have the I/O bandwidth - and runs only about 20% slower if the database is in heavy use at the time.
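
The live-copy flavour is nearly a one-liner in py-lmdb (a sketch; the directory names are invented, and the destination has to exist and be empty):

    import os, lmdb

    env = lmdb.open('/tmp/demo-lmdb', map_size=2**30)

    os.makedirs('/tmp/demo-lmdb-backup', exist_ok=True)
    env.copy('/tmp/demo-lmdb-backup', compact=True)   # ready-to-use environment, no restore step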

/images/AiSailor.jpg?size=720x&q=95

Posted by: Pixy Misa at 01:08 AM | No Comments | Add Comment | Trackbacks (Suck)
Post contains 472 words, total size 3 kb.

Friday, May 22

Geek

A Few Of My Favourite Things

Once I got past the segfaults, anyway.  You should be using these things.

MongoDB 3.0 (not earlier versions, though)
Elasticsearch (though their approach to security is remarkably ass-backwards)
LZ4 (and its friend, LZ4_HC)
LMDB and its magical set_range_dup

/images/AiPurple.jpg?size=720x&q=95

Ai Shinozaki

Posted by: Pixy Misa at 05:10 PM | Comments (330) | Add Comment | Trackbacks (Suck)
Post contains 65 words, total size 2 kb.

Wednesday, May 13

Geek

Some Pig

So, I'm tinkering with what will become Minx 1.2, and testing various stuff, and I'm pretty happy with the performance.

Then I run the numbers, and realise that I'm flooding a 10GbE connection with HTTP requests using a $15 cloud server.

I think we can count that part of the problem space as solved.

/images/AiHeadband.jpg?size=720x&q=95

Posted by: Pixy Misa at 05:30 PM | Comments (4) | Add Comment | Trackbacks (Suck)
Post contains 56 words, total size 1 kb.

Geek

Hard Things

There are only two hard things in Computer Science: cache invalidation, naming things, and off-by-one errors.

Posted by: Pixy Misa at 11:27 AM | Comments (2) | Add Comment | Trackbacks (Suck)
Post contains 18 words, total size 1 kb.

Tuesday, May 12

Geek

That'll Do

I was getting about 1000 random record reads per second.
I needed to achieve 10,000 reads per second to make things work.
I wanted to reach 100,000 reads per second to make things run nicely.
I'm currently at 1,000,000.*

That'll do.

/images/AiGreeenStripes.jpg?size=720x&q=95

* Best test run so far was ~1.6 million records per second, with some special-case optimisations.**  Without optimisations, around 300k. Per thread.

** Since you asked, the problem was with unpacking large cached records into native objects.  A common case in templates is that you only want to access one or two fields in a record - perhaps just the user's name - but unless the record is already a native object you need to load the external representation and parse it to find the field you need.  The solution was to keep an immutable version of the object in the process, sign it with SHA-256, and sign the matching cache entry.  Then, when we need to access the record, we can read the binary data from the cache, compare the signatures, and if they match, we're safe to continue using the existing native structure.  If they don't match, we take the payload, decrypt it (if encryption is enabled), check that the decrypted payload matches the signature (if not, something is badly wrong), uncompress the payload (if compression is enabled), parse it (MsgPack or JSON), instantiate a new object, freeze it, and put it back into the native object cache.  This can take as long as 20 microseconds.
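
In outline, the fast path looks something like this (a sketch only - native_cache, external_cache and load_record are names invented for illustration, and encryption and compression are left out):

    import hashlib, json

    native_cache = {}     # record id -> (sha256 digest, frozen native object)
    external_cache = {}   # record id -> (sha256 digest, serialised payload bytes)

    def load_record(record_id):
        digest, payload = external_cache[record_id]
        cached = native_cache.get(record_id)
        if cached and cached[0] == digest:
            return cached[1]                           # signatures match: reuse the native object
        # Slow path: decrypt/uncompress would go here, then verify, parse, freeze, re-cache.
        if hashlib.sha256(payload).hexdigest() != digest:
            raise ValueError('cache entry does not match its signature')
        obj = json.loads(payload.decode('utf-8'))      # MsgPack or JSON in the real thing
        frozen = tuple(sorted(obj.items()))            # stand-in for a proper immutable record
        native_cache[record_id] = (digest, frozen)
        return frozen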

Posted by: Pixy Misa at 05:53 PM | Comments (1) | Add Comment | Trackbacks (Suck)
Post contains 253 words, total size 2 kb.

Wednesday, May 06

Art

Our Team Of Highly Trained Ninja Moths Hard At Work

Yesterday this sweater had full-length sleeves.

/images/MothingInProgress.jpg?size=720x&q=95

Posted by: Pixy Misa at 02:52 PM | Comments (2) | Add Comment | Trackbacks (Suck)
Post contains 16 words, total size 1 kb.

Rant

After Texas Shooting: If Free Speech Is Provocative, Should There Be Limits?

No.

Posted by: Pixy Misa at 02:44 PM | Comments (44) | Add Comment | Trackbacks (Suck)
Post contains 13 words, total size 1 kb.
