Tuesday, May 12
That'll Do
I was getting about 1,000 random record reads per second.
I needed to achieve 10,000 reads per second to make things work.
I wanted to reach 100,000 reads per second to make things run nicely.
I'm currently at 1,000,000.*
That'll do.
* Best test run so far was ~1.6 million records per second, with some special-case optimisations.** Without optimisations, around 300k. Per thread.
** Since you asked, the problem was with unpacking large cached records into native objects. A common case in templates is that you only want to access one or two fields in a record - perhaps just the user's name - but unless the record is already a native object you need to load the external representation and parse it to find the field you need. The solution was to keep an immutable version of the object in the process, hash it with SHA-256, and store the same hash on the matching cache entry. Then, when we need to access the record, we can read the binary data from the cache, compare the hashes, and if they match, we're safe to continue using the existing native structure. If they don't match, we take the payload, decrypt it (if encryption is enabled), check that the decrypted payload matches the hash (if not, something is badly wrong), uncompress the payload (if compression is enabled), parse it (MsgPack or JSON), instantiate a new object, freeze it, and put it back into the native object cache. This can take as long as 20 microseconds.
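
For the curious, the read path looks roughly like this. This is a minimal Python sketch of the technique described above, not Minx's actual code: the `native_cache`/`record_cache` stores and the `get_record`/`freeze` helpers are illustrative names of my own, JSON stands in for MsgPack to keep it stdlib-only, and the optional decrypt/decompress steps are reduced to a comment.

```python
import hashlib
import json
from types import MappingProxyType

# Hypothetical in-process stores; the names are illustrative, not Minx's API.
native_cache = {}   # key -> (sha256 hex digest, frozen native object)
record_cache = {}   # key -> (sha256 hex digest, packed payload bytes)


def freeze(obj):
    """Recursively wrap a parsed record so it can't be mutated (sketch only)."""
    if isinstance(obj, dict):
        return MappingProxyType({k: freeze(v) for k, v in obj.items()})
    if isinstance(obj, list):
        return tuple(freeze(v) for v in obj)
    return obj


def get_record(key):
    digest, payload = record_cache[key]

    # Fast path: if the hash stored with our native object matches the hash
    # on the cache entry, the existing native structure is still current.
    hit = native_cache.get(key)
    if hit is not None and hit[0] == digest:
        return hit[1]

    # Slow path: (decrypt and decompress here, if enabled, then) verify the
    # payload against its hash, parse it, freeze it, and re-cache it.
    if hashlib.sha256(payload).hexdigest() != digest:
        raise ValueError("payload does not match its hash; something is badly wrong")
    obj = freeze(json.loads(payload))  # MsgPack or JSON in practice
    native_cache[key] = (digest, obj)
    return obj
```

The point of the hash comparison is that the fast path never touches the payload at all: one dictionary lookup and one string compare, versus up to 20 microseconds of decrypt/decompress/parse on a miss.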
Posted by: Pixy Misa at 05:53 PM | Comments (1)
Post contains 253 words, total size 2 kb.