Posted by: The Brickmuppet at Thursday, June 04 2015 02:24 AM (ohzj1)
3
I think it's clever that they made the set and a lot of the props out of things that the hamster thought were tasty, like using pieces of pasta (presumably cooked) for the train, or making the building out of cookies.
Posted by: Steven Den Beste at Thursday, June 04 2015 03:09 AM (+rSRq)
4
So what you're saying is you enjoyed watching an enormous ham chewing the scenery?
Posted by: Pixy Misa at Thursday, June 04 2015 06:21 PM (PiXy!)
5
Have I mentioned lately how much I enjoy watching Brian Blessed's work?
Posted by: Steven Den Beste at Saturday, June 06 2015 12:00 PM (+rSRq)
6
Pixy, have you upgraded the 'tab icon' for mee.nu? 'Cause mine is animated!
Posted by: The Brickmuppet at Monday, June 08 2015 11:45 AM (ohzj1)
7
Well, it's "fixed" now...or at least not animated.
Posted by: The Brickmuppet at Monday, June 08 2015 12:50 PM (ohzj1)
Sony Creative Software are having a little sale on their loop libraries this weekend.
And when I say "little sale", I mean "buy one, get three free".
I spent a bit. More than a bit, really. But I cleared out my entire wishlist.
I hadn't bought any loop libraries since 2011, when Sony moved to electronic delivery and sold off their old stock of CD-ROMs at a 75% discount, so now I'm all caught up.
Update: Oops. Horncraft for R&B is the subtitle for Crimson, Blue, and Fabulous, which I already had. So I only saved $1108.50 rather than $1148.50.
1
What, exactly, was it that you just bought? I don't know what a "loop library" is.
Posted by: Steven Den Beste at Monday, June 01 2015 12:11 AM (+rSRq)
2
They're content libraries for making music. Rather than just having a synthesized piano sound, or a set of recorded piano notes, loops are live recordings of musicians cut into little pieces (typically 1, 2, or 4 bars) and timed and edited so that they repeat perfectly.
Plus each file is tagged with its key and tempo, and the software can shift the pitch and tempo to match that of the song. Though that's not perfect with the software I have; sometimes it works great, but sometimes you can tell. I try to use loops that match the key I'm working in and are pretty close to the tempo so there's minimal change.
So rather than having to sequence everything using a MIDI keyboard or a software sequencer, you can string together these pieces of rhythm and melody to produce music, so that even if you're a no-good talentless hack like me you can do stuff like this or this.
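If you're curious what the matching step works out to, here's a rough sketch of the arithmetic (mine, nothing to do with Sony's actual software; the names are made up): stretch time by the tempo ratio, shift pitch by the interval between keys.

    NOTES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

    def match_loop(loop_key, loop_bpm, song_key, song_bpm):
        # Playback-rate factor: < 1 means the loop is slowed down to the song's tempo.
        stretch = float(song_bpm) / loop_bpm
        # Semitone distance from the loop's key to the song's key, picking the
        # shorter direction so we never shift by more than a tritone.
        semitones = (NOTES.index(song_key) - NOTES.index(loop_key)) % 12
        if semitones > 6:
            semitones -= 12
        pitch_ratio = 2 ** (semitones / 12.0)    # equal-temperament frequency ratio
        return stretch, semitones, pitch_ratio

    print(match_loop('A', 120, 'C', 100))        # (0.833..., 3, 1.189...)

The bigger the stretch or shift, the more likely you are to hear it, which is why I stick to loops that are already close.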
Posted by: Pixy Misa at Monday, June 01 2015 01:24 AM (PiXy!)
3
That is, the recordings are cut into little pieces. Musicians tend to give unsatisfactory performances when diced.
Posted by: Pixy Misa at Monday, June 01 2015 01:25 AM (PiXy!)
I've been working on and off on a new version of Minx which is much easier to work with; the problem has been that it's significantly slower than the old version. Not slow, as such, but slower. But I recently came up with a way to make it about 300x faster (seriously; it's amazingly quick now), so it's full steam ahead on that project now.
Not only does it have lots of new features, but the speed means I can replace my $200/month physical servers with $40/month cloud servers and still run faster than before.
Posted by: Pixy Misa at Sunday, May 31 2015 01:17 PM (PiXy!)
Posted by: Pixy Misa at 08:36 PM
Sunday, May 24
So I Was Wondering
Note to self: Implement auto-save, dammit.
I already knew that LMDB supported multiple values per key. Reading the docs last week, I noticed that values within a key were stored in sorted order. This evening, I was wondering if it were possible to seek to a particular value within a key.
It is.
This is neat. It means you can use LMDB as an engine to implement a two-dimensional database like Cassandra, or a data structure server like Redis, with the elements of lists, sets, and hashes individually addressable.
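Roughly, from Python (a toy sketch with made-up keys, not mee.nu code; I'm assuming the lmdb binding's dupsort support and its set_range_dup cursor method):

    import lmdb

    env = lmdb.open('/tmp/demo-lmdb', max_dbs=2, map_size=2 ** 30)
    db = env.open_db(b'sets', dupsort=True)       # sorted duplicate values per key

    with env.begin(db=db, write=True) as txn:
        for member in (b'apple', b'banana', b'cherry'):
            txn.put(b'fruit', member)             # key 'fruit' now holds a sorted "set"

    with env.begin(db=db) as txn:
        cur = txn.cursor()
        # Seek to the first value >= b'b' *within* the key - an individually
        # addressable element, Redis-sorted-set style.
        if cur.set_range_dup(b'fruit', b'b'):
            print(cur.value())                    # b'banana'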
Plus the advantage that unlike Redis, it can live on disk and have B-tree indexes rather than just a hash table. (Though of course Redis has the advantage of predictable performance - living in memory and accessed directly, it's very consistent.)
The other big advantage of LMDB (for me, anyway) is that it natively supports multiple processes - not just multiple threads, but independent processes - handling transactions and locking automatically. I love Python, but it has what is known as the Global Interpreter Lock - while you can have many threads, only one of them can be executing Python code at any time. The other threads can be handling I/O, or calling C libraries that don't access shared data, but can't actually be running your code at the same time.
That puts a limit on the performance of any single Python application, and breaking out into multiple processes means you need to find a way to share data between those processes, which is a lot more fiddly than it is with threads.
LMDB don't care. Thread, process, all the same, just works.
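Toy example of what I mean (made-up names, not Minx code): four worker processes write to the same environment, and LMDB sorts out the locking by itself.

    import lmdb
    from multiprocessing import Process

    PATH = '/tmp/demo-lmdb-multi'

    def worker(n):
        env = lmdb.open(PATH, map_size=2 ** 30)   # each process opens the same files
        with env.begin(write=True) as txn:
            txn.put(('proc-%d' % n).encode(), b'hello')
        env.close()

    if __name__ == '__main__':
        procs = [Process(target=worker, args=(i,)) for i in range(4)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        with lmdb.open(PATH).begin() as txn:
            print(sorted(k for k, _ in txn.cursor()))   # keys from all four processes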
Neat.
It does have limitations - it's a single-writer/multiple-reader design, so it will only scale so far unless you implement some sort of sharding scheme on top of it. But I've clocked it at 100,000 transactions per second, and a million bulk writes per second, so it's not bad at all.
Admittedly that was with the write safety settings disabled, so a server crash could have corrupted my database. But my main use for LMDB is as a smart distributed data structure cache, so if one node dies it can just be flushed and restarted. In practical use, as a robust database, the numbers are somewhat lower (though with a smart RAID controller you should still be able to do pretty well).
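For reference, the write safety settings in question are just flags on the environment. This is a sketch of the two configurations, not my actual setup, and I'm assuming the Python binding's flag names:

    import lmdb

    # Cache node: fast, but a crash can lose or corrupt recent writes -
    # acceptable when the node can simply be flushed and restarted.
    cache_env = lmdb.open('/tmp/cache-lmdb', map_size=2 ** 30,
                          sync=False, metasync=False, writemap=True)

    # Durable database: the defaults (sync=True) flush on every commit,
    # which is where the lower "robust" numbers come from.
    durable_env = lmdb.open('/tmp/durable-lmdb', map_size=2 ** 30)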
It also supports a rather nice hot backup facility, where the backup format is either a working LMDB database ready to go (without needing to restore) or a cdbmake format backup (which is plain text if you're using JSON for keys and values), and it can back up around 1GB per second - if you have the I/O bandwidth - and only about 20% slower if the database is in heavy use at the time.
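The backup itself is about a one-liner from the Python binding (a sketch, with made-up paths): Environment.copy() takes a consistent snapshot while the database is live.

    import os
    import lmdb

    env = lmdb.open('/srv/data/live-lmdb', map_size=2 ** 30)
    os.makedirs('/srv/backups/2015-05-24')
    # The copy is itself a working LMDB database - no restore step needed.
    env.copy('/srv/backups/2015-05-24')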
Posted by: Pixy Misa at 01:08 AM
Friday, May 22
A Few Of My Favourite Things
Once I got past the segfaults, anyway. You should be using these things.
1
I really am over the hill; I don't recognize any of those things.
Posted by: Steven Den Beste at Friday, May 22 2015 05:18 PM (+rSRq)
2
A lot of them are specific to Python, so you wouldn't have run into them. And even within the Python ecosystem, some of them are very specific. (Hoep is a high-performance Markdown parser, but there are a dozen other Markdown parsers just for Python. Hoep is the most efficient of all the ones I tested, and achieves that without sacrificing any features or compatibility.)
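Usage is about as minimal as it gets - something like this, if memory serves (treat the exact names as an assumption; you can also pass extension and render flags to the constructor):

    import hoep

    md = hoep.Hoep()
    print(md.render('Some **Markdown** with a [link](http://mee.nu/).'))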
Nginx is a front-end web server/proxy; it's acting as the front end to mee.nu right now, and it's now the second most popular web server after Apache.
Posted by: Pixy Misa at Friday, May 22 2015 10:48 PM (PiXy!)
Posted by: Kurt Duncan at Saturday, May 23 2015 02:05 AM (c/F3T)
4
Bleach is what you use to get your clothes whiter!
Posted by: Wonderduck at Saturday, May 23 2015 03:27 AM (jGQR+)
5
No, silly, Bleach is a major league anime series!
Posted by: Steven Den Beste at Saturday, May 23 2015 03:43 AM (+rSRq)
6
This blog post has been rated Unacceptable by the Bureau Of Obvious Blatant Service. Please supply sufficient padding to stimulate positive reactions from the audience.
-j
Posted by: J Greely at Saturday, May 23 2015 07:07 AM (fpXGN)
I was getting about 1000 random record reads per second.
I needed to achieve 10,000 reads per second to make things work.
I wanted to reach 100,000 reads per second to make things run nicely.
I'm currently at 1,000,000.*
That'll do.
* Best test run so far was ~1.6 million records per second, with some special-case optimisations.** Without optimisations, around 300k. Per thread.
** Since you asked, the problem was with unpacking large cached records into native objects. A common case in templates is that you only want to access one or two fields in a record - perhaps just the user's name - but unless the record is already a native object you need to load the external representation and parse it to find the field you need. The solution was to keep an immutable version of the object in the process, sign it with SHA-256, and sign the matching cache entry. Then, when we need to access the record, we can read the binary data from the cache, compare the signatures, and if they match, we're safe to continue using the existing native structure. If they don't match, we take the payload, decrypt it (if encryption is enabled), check that the decrypted payload matches the signature (if not, something is badly wrong), uncompress the payload (if compression is enabled), parse it (MsgPack or JSON), instantiate a new object, freeze it, and put it back into the native object cache. This can take as long as 20 microseconds.
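In outline it looks something like this - a stripped-down sketch with made-up names, skipping the freeze, encryption, and compression steps and using MsgPack only:

    import hashlib
    import msgpack

    _native = {}    # key -> (signature, parsed object) held in this process

    def put_record(key, obj, cache_put):
        payload = msgpack.packb(obj)
        digest = hashlib.sha256(payload).digest()
        cache_put(key, digest + payload)          # cache entry carries its own signature
        _native[key] = (digest, obj)

    def get_record(key, cache_get):
        entry = cache_get(key)
        digest, payload = entry[:32], entry[32:]
        hit = _native.get(key)
        if hit and hit[0] == digest:
            return hit[1]                         # signatures match: reuse the parsed object
        obj = msgpack.unpackb(payload)            # otherwise parse and keep the new copy
        _native[key] = (digest, obj)
        return obj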
(also, there's surprisingly little overlap between Kawaii Sexy Love's collection and the AsiaDreaming link I posted earlier; pity the site's no longer being updated)
-j
Posted by: J Greely at Wednesday, May 06 2015 07:25 PM (ZlYZd)