They're content libraries for making music. Rather than just having a synthesized piano sound, or a set of recorded piano notes, loops are live recordings of musicians cut into little pieces (typically 1, 2, or 4 bars) and timed and edited so that they repeat perfectly.
Plus each file is tagged with its key and tempo, and the software can shift the pitch and tempo to match that of the song. Though that's not perfect with the software I have; sometimes it works great, but sometimes you can tell. I try to use loops that match the key I'm working in and are pretty close to the tempo so there's minimal change.
So rather than having to sequence everything using a MIDI keyboard or a software sequencer, you can string together these pieces of rhythm and melody to produce music, so that even if you're a no-good talentless hack like me you can do stuff like this or this.
Posted by: Pixy Misa at Monday, June 01 2015 01:24 AM (PiXy!)
That is, the recordings are cut into little pieces. Musicians tend to give unsatisfactory performances when diced.
Posted by: Pixy Misa at Monday, June 01 2015 01:25 AM (PiXy!)
I've been working on and off on a new version of Minx which is much easier to work with; the problem has been that it's significantly slower than the old version. Not slow, as such, but slower. But I recently came up with a way to make it about 300x faster (seriously; it's amazingly quick now), so it's full steam ahead on that project now.
Not only does it have lots of new features, but the speed means I can replace my $200/month physical servers with $40/month cloud servers and still run faster than before.
Posted by: Pixy Misa at Sunday, May 31 2015 01:17 PM (PiXy!)
I already knew that LMDB supported multiple values per key. Reading the docs last week, I noticed that values within a key were stored in sorted order. This evening, I was wondering whether it was possible to seek to a particular value within a key - and it turns out you can.
This is neat. It means you can use LMDB as an engine to implement a two-dimensional database like Cassandra, or a data structure server like Redis, with the elements of lists, sets, and hashes individually addressable.
Plus it has the advantage that, unlike Redis, it can live on disk and has B-tree indexes rather than just a hash table. (Though of course Redis has the advantage of predictable performance - living in memory and accessed directly, it's very consistent.)
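For the curious, here's roughly how that looks with the py-lmdb binding - just a sketch, with made-up paths, database names, and keys, using a dupsort sub-database and set_range_dup to seek within a key:

    import lmdb

    env = lmdb.open('/tmp/demo-lmdb', max_dbs=2, map_size=2**30)
    # dupsort=True lets a single key hold many values, kept in sorted order
    sets_db = env.open_db(b'sets', dupsort=True)

    with env.begin(write=True, db=sets_db) as txn:
        for member in (b'apple', b'banana', b'cherry', b'damson'):
            txn.put(b'fruit', member)   # all four values live under one key

    with env.begin(db=sets_db) as txn:
        cur = txn.cursor()
        # Seek straight to the first value >= b'banana' under key b'fruit'
        if cur.set_range_dup(b'fruit', b'banana'):
            # Iterate that value and the rest of the key's values in order
            for value in cur.iternext_dup():
                print(value)            # b'banana', b'cherry', b'damson'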
The other big advantage of LMDB (for me, anyway) is that it natively supports multiple processes - not just multiple threads, but independent processes - handling transactions and locking automatically. I love Python, but it has what is known as the Global Interpreter Lock - while you can have many threads, only one of them can be executing Python code at any time. The other threads can be handling I/O, or calling C libraries that don't access shared data, but can't actually be running your code at the same time.
That puts a limit on the performance of any single Python application, and breaking out into multiple processes means you need to find a way to share data between those processes, which is a lot more fiddly than it is with threads.
LMDB don't care. Thread, process, all the same, just works.
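Concretely, something like this just works (a sketch with invented paths and keys) - four separate processes hammering the same environment, with no explicit locking anywhere in the application:

    import lmdb
    from multiprocessing import Process

    DB_PATH = '/tmp/shared-lmdb'

    def worker(worker_id):
        # Each process opens the environment for itself (don't share an
        # already-open environment across a fork); LMDB's locking does
        # the coordination between them.
        env = lmdb.open(DB_PATH, map_size=2**30)
        for i in range(1000):
            # Write transactions from different processes serialise automatically.
            with env.begin(write=True) as txn:
                txn.put(f'worker-{worker_id}-{i}'.encode(), b'some value')
        env.close()

    if __name__ == '__main__':
        procs = [Process(target=worker, args=(n,)) for n in range(4)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()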
It does have limitations - it's a single-writer/multiple-reader design, so it will only scale so far unless you implement some sort of sharding scheme on top of it. But I've clocked it at 100,000 transactions per second, and a million bulk writes per second, so it's not bad at all.
Admittedly that was with the write safety settings disabled, so a server crash could have corrupted my database. But my main use for LMDB is as a smart distributed data structure cache, so if one node dies it can just be flushed and restarted. In practical use, as a robust database, the numbers are somewhat lower (though with a smart RAID controller you should still be able to do pretty well).
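The settings in question are the environment's sync flags. Roughly speaking (paths invented), the trade-off looks like this in py-lmdb:

    import lmdb

    # Fast but unsafe: fine for a cache that can be flushed and rebuilt
    # after a crash, since commits aren't forced to disk.
    cache_env = lmdb.open('/tmp/cache-lmdb', sync=False, metasync=False, writemap=True)

    # Durable: every commit is flushed to disk before it returns.
    data_env = lmdb.open('/tmp/data-lmdb', sync=True, metasync=True)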
It also supports a rather nice hot backup facility: the backup format is either a working LMDB database, ready to go without needing a restore, or a cdbmake-format backup (which is plain text if you're using JSON for keys and values). It can back up around 1GB per second - if you have the I/O bandwidth - and runs only about 20% slower if the database is in heavy use at the time.
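The working-database flavour of that backup is a live copy of the environment; in py-lmdb it's essentially a one-liner (paths here are made up, and the destination needs to be an empty directory):

    import lmdb

    env = lmdb.open('/data/live-lmdb', map_size=2**34)

    # Copies the open environment while readers and writers carry on;
    # the result is itself a ready-to-use LMDB database. compact=True
    # rewrites the B-tree without free pages, trading speed for size.
    env.copy('/backups/lmdb-snapshot', compact=True)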
A lot of them are specific to Python, so you wouldn't have run into them. And even within the Python ecosystem, some of them are very specific. (Hoep is a high-performance Markdown parser, but there are a dozen other Markdown parsers just for Python. Hoep is the most efficient of all the ones I tested, and achieves that without sacrificing any features or compatibility.)
Nginx is a very popular front-end webserver/proxy; it's acting as the front-end to mee.nu right now. It's now the second most popular web server after Apache.
Posted by: Pixy Misa at Friday, May 22 2015 10:48 PM (PiXy!)
I was getting about 1000 random record reads per second.
I needed to achieve 10,000 reads per second to make things work.
I wanted to reach 100,000 reads per second to make things run nicely.
I'm currently at 1,000,000.*
* Best test run so far was ~1.6 million records per second, with some special-case optimisations.** Without optimisations, around 300k. Per thread.
** Since you asked, the problem was with unpacking large cached records into native objects. A common case in templates is that you only want to access one or two fields in a record - perhaps just the user's name - but unless the record is already a native object you need to load the external representation and parse it to find the field you need. The solution was to keep an immutable version of the object in the process, sign it with SHA-256, and sign the matching cache entry. Then, when we need to access the record, we can read the binary data from the cache, compare the signatures, and if they match, we're safe to continue using the existing native structure. If they don't match, we take the payload, decrypt it (if encryption is enabled), check that the decrypted payload matches the signature (if not, something is badly wrong), uncompress the payload (if compression is enabled), parse it (MsgPack or JSON), instantiate a new object, freeze it, and put it back into the native object cache. This can take as long as 20 microseconds.
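In outline, the idea looks something like this (a simplified sketch, not the actual Minx code - the helper names and the plain-JSON parsing are just for illustration):

    import hashlib
    import json

    native_cache = {}   # record id -> (sha256 digest of cached bytes, parsed object)

    def get_record(record_id, cache_get):
        """cache_get(record_id) returns the raw payload bytes from the shared cache."""
        payload = cache_get(record_id)
        digest = hashlib.sha256(payload).digest()

        cached = native_cache.get(record_id)
        if cached is not None and cached[0] == digest:
            # Signatures match: the native object is still current, skip parsing.
            return cached[1]

        # First sight of this record, or the cache entry changed: parse it
        # and keep the result (the real thing also decrypts, verifies, and
        # uncompresses here, and uses MsgPack rather than plain JSON).
        obj = json.loads(payload)
        native_cache[record_id] = (digest, obj)
        return obj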
It's a rather obvious rejoinder, but anyone who wants to gripe about how Gellar "provoked" anyone should be reminded, then, that Piss Christ was equally provocative, and such griping just serves to remind Christians that perhaps they should be a little more muscular in their complaints.
Posted by: RickC at Wednesday, May 06 2015 10:45 PM (0a7VZ)