Is this how time normally passes? Really slowly, in the right order?

Friday, February 12

Geek

3 Across

5 letters, priority queue.

Posted by: Pixy Misa at 10:48 PM | No Comments | Add Comment | Trackbacks (Suck)
Post contains 6 words, total size 1 kb.

Rant

Die, Atom, Die!

Atom feeds are evil and must be killed.

Posted by: Pixy Misa at 09:14 PM | Comments (3) | Add Comment | Trackbacks (Suck)
Post contains 11 words, total size 1 kb.

Rant

That Is Not The Way To Do It!

I've mentioned LINQ, Microsoft's attempt to make accessing datasources more idiomatic.  And I've mentioned IronPython, Python running on Microsoft's .NET platform.  Since LINQ is part of the .NET library, IronPython can use LINQ.  So I went looking for examples, and found this:
songs = ThenBy(OrderByDesc(  
          Select(content.Elements(xhtml.ns +'tr'), ScrapeSong),   
          lambda s: s.added), lambda s: s.artist)
That is not only not the right way to do it, it actually leaves me wondering if it is possible to construct a more wrong way to do it, short of INTERCAL.
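
For contrast, here is one way the same query might read in plain Python (a sketch only, assuming the same ScrapeSong, content, and xhtml objects as the snippet above): sort by the secondary key first, then stable-sort by the primary key.
songs = sorted((ScrapeSong(row) for row in content.Elements(xhtml.ns + 'tr')),
               key=lambda s: s.artist)
songs.sort(key=lambda s: s.added, reverse=True)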

Posted by: Pixy Misa at 05:27 AM | Comments (2) | Add Comment | Trackbacks (Suck)
Post contains 92 words, total size 1 kb.

Geek

All My Servers Are No Longer The Same Speed

I was planning to spend the weekend working with MongoDB, but those plans evaporated when it crashed and destroyed my test database. So instead I dug out my toy Python benchmark and ran it on Eineus. And just for fun, did the same in Psyco and Cython and Jython. Results are... Mixed. Yeah, that's a good word, particularly since the IronPython benchmark is still running.
System    CPU            Clock    Python              Loop    String    Scan    Total
Eineus    Phenom II 945  3.0GHz   2.6.4/32            0.950   1.483     0.437   2.870
Eineus    Phenom II 945  3.0GHz   2.6.4/Psyco         0.013   0.180     0.477   0.670
Eineus    Phenom II 945  3.0GHz   2.6.4/Cython        0.000   84.750    0.490   85.240
Eineus    Phenom II 945  3.0GHz   2.5.1/Jython        0.682   499.936   0.758   501.376
Nagi      Phenom 9750    2.4GHz*  2.6/IronPython/32   0.544   3502.652  1.541   3504.739
Nagi      Phenom 9750    2.4GHz*  2.6/IronPython/64   0.916   5399.020  1.264   5401.202
Potemayo  Core 2 Duo     1.8GHz   2.6/IronPython/32   0.559   1.632     1.567   3.759
Miyabi    Phenom II 945  3.0GHz   2.6.4/64            0.637   1.003     0.530   2.170
Akane     Opteron        2.0GHz   2.5                 1.887   2.733     0.880   5.500

* Normalised to 3.0GHz for ease of comparison.

I'll paste in the IronPython results if it ever finishes. (Update: Done now.)

Some notes:

64-bit Python is now a good bit faster than 32-bit for many cases. It's actually a bit slower in string scanning; I don't know why.

A 3GHz Phenom II running Python 2.6 is 2x faster than a 2GHz Opteron running Python 2.5 from 3 years ago. Someone's been doing some good work, either the Python people or AMD or the GNU compiler team.

CPython (the standard Python) has some really neat string optimisations that I depend on in Minx. These flow through nicely to Psyco, but are conspicuously absent from Cython, Jython, and IronPython, which are 60, 400, and a couple of thousand times slower for heavy string concatenation (as I said, that benchmark hasn't finished yet...). It's certainly possible to avoid that idiom, and instead, for example, create a list of substrings and then join them all in one operation.
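
A minimal sketch of the two idioms in question (not the actual benchmark code): repeated concatenation, which is fast on CPython thanks to its in-place optimisation but roughly quadratic elsewhere, versus building a list of substrings and joining once.
def concat_naive(parts):
    s = ''
    for p in parts:
        s += p  # relies on CPython's string concatenation optimisation
    return s

def concat_join(parts):
    chunks = []
    for p in parts:
        chunks.append(p)
    return ''.join(chunks)  # one pass, fast on every implementation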

Apart from that, Jython seems to perform fairly well; certainly, if you need to run heavily multi-threaded Python code and can avoid doing millions of concatenations of large strings, Jython could be a winner. The Python interpreter can only run one thread at a time, though other threads can be handling I/O or library functions. The Jython runtime is fully multithreaded, so if you have a multi-threaded application and more than two CPUs - which I do - then Jython can provide an overall performance boost even if the single thread performance declines somewhat. (And it's actually faster on one of the tests, so depending on your code you might win both ways.)
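
As a rough illustration of that point (the worker function here is hypothetical, not part of the benchmark): on CPython these threads take turns under the GIL, while on Jython they can genuinely run on separate cores.
import threading

def worker(n):
    total = 0
    for i in xrange(n):
        total += i * i

threads = [threading.Thread(target=worker, args=(5000000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()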

As for IronPython, well, the string concatenation results are just terrible. Looping is comparable with Python or Jython (but far behind Psyco or Cython), and string scanning is the slowest of the lot, though only by a factor of 2, not 2000. It should be fast enough for most tasks as long as you really avoid concatenating large strings. I wonder what list performance is like - I'll have to add a test for that.
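
Something like this might serve as the list test (hypothetical; the existing benchmark code isn't shown here): time a few million appends and index lookups.
import time

def list_test(n=5000000):
    t0 = time.time()
    items = []
    for i in range(n):
        items.append(i)   # growth behaviour of the list type
    total = 0
    for i in range(n):
        total += items[i]  # indexed access
    return time.time() - t0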

Bumped and updated: Tested this just now on Potemayo with a StringBuilder implementation for the strcat part of the benchmark, which delivered about 2000x faster performance (!) on that part of the benchmark and 1000x on average; that would make a .NET deployment of Minx pretty feasible.  It would be nice if all these people deploying immutable string libraries could also deploy the Python concatenation performance trick, because StringBuilder is not a general-purpose string library, just a faster way to concatenate and modify a string-like object.
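
For reference, the StringBuilder approach looks roughly like this under IronPython (a sketch, not the actual benchmark change; System.Text.StringBuilder is the standard .NET class):
from System.Text import StringBuilder

def strcat_builder(parts):
    sb = StringBuilder()
    for p in parts:
        sb.Append(p)  # appends without copying the whole string each time
    return sb.ToString()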

Posted by: Pixy Misa at 05:09 AM | Comments (1) | Add Comment | Trackbacks (Suck)
Post contains 527 words, total size 5 kb.

Geek

Objectification

Just pootling about benchmarking stuffs again, which is what I do when I'm too lazy to actually code something but want to appear productive.  This time I'm seeing how many little objects Python can create per second.  Here's the code:
import time

# int subclass that ignores attribute writes and returns None for
# missing attributes - a minimal "object wrapper" around an int
class iwb(int):
    def __setattr__(self, k, v):
        pass
    def __getattr__(self, k, default=None):
        return None

def perf(count):
    t0 = time.time()
    for i in range(count):
        j = iwb(i)  # create one small object per iteration
    print '%s per second' % (count / (time.time() - t0))

perf(10000000)
And here are the numbers:

Python 2.6.4/32: 3044540.34221 per second
Psyco 2.6.4/32: 5378755.4303 per second
Jython 2.5.1: 795102.161706 per second  (significantly slower, but better than I'd expected)
Cython 2.6.4/32: 3830629.40698 per second (faster than Python, slower than Psyco)
Python 2.6.4/64: 3617701.37013 per second (again nearly 20% faster than the 32-bit version)
IronPython 2.6: 1931247.06236 per second (On my laptop, a 1.8GHz Core 2 Duo, compared to the 3GHz Phenom II for the preceding results, so not too bad.  I need to see if it works under Mono.)

The reason I'm fiddling with this is that I'm looking at an optimisation/simplification for Minx, but it would involve creating about 10,000 objects to load the current main page of my blog.  It's part of some syntactical trickery I'm trying to work out that creates virtual methods on objects in trees... without the objects being visibly objects, which involves some hackery when you put them in the tree in the first place.

Posted by: Pixy Misa at 03:16 AM | No Comments | Add Comment | Trackbacks (Suck)
Post contains 236 words, total size 2 kb.

Geek

Syntax Matters

Pixy's First Rule of Computer Language Design: Syntax matters.

You can have all the semantic brilliance in the world, but if your syntax is counter-intuitive, no-one will use it.  See for example the widespread success of Smalltalk and Lisp in the industry.

... crickets ...

Right.

So, how does this apply to databases, the topic to which I am dedicating an extended rant?  Well, the problem with databases in the modern world is essentially twofold:
  1. SQL.
  2. Everything else.
SQL's syntax, semantics, and implementations generally leave something to be desired, but it does get the job done, the job being selecting and modifying sets of data.

Everything else's syntax, semantics and implementations generally leave something to be desired too; it's impossible to design the perfect language (though if someone created interoperable implementations of Python and Ruby atop the Go compiler, they'd be getting close) because there are syntactic and semantic conflicts that cannot be resolved; you have to choose on which side of the conflict you wish to fall, and live with that decision.*

But the real clash comes when you want to get your data and do something with it, like, for example, display it on the screen.  You write a SQL query (which is a program in its own right), send it to the database server, get a bunch of text back, which you then interpret into structures suited to your programming language, process those structures, and finally, display it on the screen.

That is, frankly, a godawful mess, slow, insecure, painful to code, and nonetheless how most of the websites in the world actually work.

The frustrating thing is that this problem was solved completely in the early 80s.**

In Progress, to display a list of customers with balances over $100, you can say:
for each customer where balance>100:
  display customer.
end.
Okay, that's nothing special; in SQL it's even easier:
select * from customer where balance>100;
But in Progress, you can do this:
def var reminder as logical.

for each customer where balance>100:
    display customer.
    update reminder.
    if reminder then do:
        send_reminder(id).
        reminder_date = today.
    end.
end.
That is, you can loop through the customers, prompt the user whether to send a reminder letter to each, send out the letters as instructed, and mark it on the account.  It even initiates the transactions for you and provides automatic commit and undo semantics.  No need to write queries in one language and application code in another, your data is simply there for you to work with.  Your user could be on a Wyse 60, your application on a Dell 386 running Xenix, and your database on an AS/400, and it would all just work.

It was a thing of beauty.  Until your r-code (Progress's bytecode) exceeded 64k and you couldn't compile it any more.***

Nothing else compares.  Microsoft's LINQ, two full decades later, is a poor and partial reinvention of Progress.

Exactly why Progress is languishing in near-irrelevance today is a topic of a whole month of rants, and beside the point, which is that somebody already got it right.

I can't use Progress for my applications because the language itself is abjectly inefficient and limited compared to a modern dynamic language like Python or Ruby.****  So the question is, what can I do with a modern dynamic language like Python or Ruby to recapture some of that functionality, some of that breezy syntactic and semantic awesomeness?

Well, neither Python nor Ruby has an interactive mode that remotely compares with Progress, so I'm going to ignore that part for the moment and concentrate on the data syntax and semantics.  And I'm going to mostly use Python, because I'm more familiar with it and because they're similar enough that it doesn't matter that much.

Python has these things called generators.  Generators are program structures that pretend to be data structures.  In Python, if you have a list
lollipops = ['cherry','huckleberry','lime']
You can iterate through it with:
for pop in lollipops:
    print pop
(As an aside, there's no end statement, which is one of Python's greatest strengths-and/or-weaknesses.)  Which will print, as you might expect,
cherry
huckleberry
lime
So far, it's elegant but limited.  The clever trick is that if I write this piece of code:
def lollipops():
    flavours = ['cherry','huckleberry','lime']
    for flavour in flavours:
        yield flavour
And then write:
for pop in lollipops():
    print pop
I will again get
cherry
huckleberry
lime
We have a program (a subprogram) here acting almost exactly like a data structure.  So rather than being limited to the data structures that we have, or being forced to write code that manipulates the data structures and calls functions to get the bits and pieces that we want, we can just use the language's own for loop.  The only distinction is that calling a subprogram requires dereferencing it with () (otherwise you get Python's version of a function pointer - very useful, but not what we want right now).  But we can do better still by creating a class, like so:
class candy():
    def __init__(self):
        self.flavours = ['cherry', 'huckleberry', 'lime']
 
    def __iter__(self):
        for flavour in self.flavours:
            yield flavour

lollipops = candy()
for pop in lollipops:
    print pop

cherry
huckleberry
lime
And now we have code that acts exactly like a data structure.  (Well, for the requirements of this little example.)
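
Which is where this is heading: with a generator you can get something very close to Progress's "for each customer where balance > 100".  (The Customer class and the data here are purely illustrative, not a real Minx or Pita API.)
class Customer(object):
    def __init__(self, name, balance):
        self.name = name
        self.balance = balance

customers = [Customer('Ayako', 250), Customer('Brett', 40), Customer('Chiyo', 180)]

def over_balance(customers, limit=100):
    for customer in customers:
        if customer.balance > limit:
            yield customer

for customer in over_balance(customers):
    print customer.name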

At which point I shall leave off for the moment, to resume at some time other than 3AM.

* Some of the syntactic conflicts arise from the restrictions of the ASCII character set, which simply doesn't have some of the symbols we need, so we substitute and overload the ones we do.  Of course, today we have Unicode and can display every symbol we could ask for - but you can't type in Unicode.

** And probably before, but certainly in the 80s.

*** Something which, as far as I know, they still haven't entirely resolved.

**** And because it's overpriced and the licensing model is insane.

Posted by: Pixy Misa at 02:31 AM | Comments (1) | Add Comment | Trackbacks (Suck)
Post contains 967 words, total size 8 kb.

Thursday, February 11

Geek

Enough With The Curly Brackets Already!

There's a quote (which I am unable to find) to the effect that Algol 60 was better not only than all the languages that preceded it, but all the languages that followed it as well.  Which rather steps on the punchline here, but at mee.nu we're all about the stepping on of punchlines.

P.S.  Enough with the curly brackets!

P.P.S.  Please don't remind me that it could have been worse.

Posted by: Pixy Misa at 02:17 AM | Comments (1) | Add Comment | Trackbacks (Suck)
Post contains 76 words, total size 1 kb.

Wednesday, February 10

Anime

Important Public Service Announcement

Baka.

Posted by: Pixy Misa at 09:01 PM | No Comments | Add Comment | Trackbacks (Suck)
Post contains 5 words, total size 1 kb.

Rant

SecondPersonPronounTube

YouTube has zotted my account, for a "Community Guidelines warning sanction" over the opening credits for Popotan.

They sent me an email saying they'd suspended the video, although the reason is about as vague as it could possibly be - that text I just quoted is all they gave me.  But no problem, I'll take it down....

My account is also suspended, something they neglected to mention.

Meh.  Guess I can post anime ops and eds here if I really need to.

Still, it's small potatoes compared to some suspensions.

Posted by: Pixy Misa at 05:10 PM | No Comments | Add Comment | Trackbacks (Suck)
Post contains 91 words, total size 1 kb.

Geek

View From A Pita

Quick list of the planned view types for Pita.  Not all of these will be supported in initial releases, though only two (Braid and Chord) involve known unknowns.  (A view is what SQL would call a table; a derived view in Pita is still called a view.)

Type   Storage  Records          Indexing    Recovery          Distribution
Basic  Memory   Variable         Hash        Snapshot          Broker
Space  Memory   Variable         Atree       Snapshot, Log     Broker
Cloud  Memory   Variable         Atree       Replication       Replication, Sharding
Scene  Memory   Variable         Any memory  Snapshot, Log     Broker
Batch  Memory   Variable         Atree       Snapshot          Broker
Array  Memory   Fixed            Array       Snapshot          Broker
Graph  Memory   Variable         Graph       Snapshot, Log     Broker
Queue  Memory   Variable         None        Snapshot, Log     Broker
Stack  Memory   Variable         None        Snapshot, Log     Broker
Deque  Memory   Variable         None        Snapshot, Log     Broker
Cache  Memory   Variable         Hash        None              Local
Image  Memory   Variable         Any memory  Snapshot          Broker
Store  Disk     Variable         Btree       Versioning        Broker
Shelf  Disk     Variable         Btree       Versioning        Broker
Fixed  Disk     Fixed            Btree       Log               Broker
Table  Disk     Variable, Fixed  Btree       Versioning, Log   Broker
Share  Disk     Variable, Fixed  Btree       Versioning, Log   Caching
Shard  Disk     Variable, Fixed  Btree       Replication       Replication, Sharding
Index  Disk     Variable         Btree       None              Broker
Chord  Disk     Variable         Btree       Versioning        Caching
Merge  Virtual  Variable, Fixed  Virtual     Virtual           Broker
Braid  Virtual  Variable, Fixed  Any memory  Virtual           Broker
Slice  Virtual  Variable, Fixed  Any memory  Virtual           Broker
Remap  Virtual  Variable, Fixed  Virtual     Virtual           Broker
Magic  Virtual  Variable, Fixed  Virtual     Virtual           Broker


Storage - Memory, Disk, or Virtual

The immediate storage mechanism.  Memory views are accessed in memory (and must fit in memory), but are persisted to disk.  Disk views are read from and written to disk.  Virtual views are representations of other views or of programmatic data.

Records - Fixed, Variable

Pita supports three main modes of storing record data: fixed-length random access on disk, variable-length sequential access on disk, or any format in memory.  This is to avoid the complications and overhead of dynamically managing space on disk (it does handle that for Btree indexes, though).

Indexing - Hash, Atree, Btree, Qtree, Rtree, Array, Graph, Virtual

Hashes are simple one-to-one mappings of key to data, unordered, and are available for in-memory or on-disk data.  Atrees (actually AVL trees), Qtrees (quadtrees), Rtrees, Arrays, and Graphs are available indexing methods for in-memory data.  Btrees are available for on-disk data.

Recovery - Log, Snapshot, Versioning, Replication

Crash recovery is handled by a variety of mechanisms depending on the view type.  Log recovery involves replaying a log of recent operations to make sure the data is consistent and up-to-date.  Snapshot recovery simply reloads a snapshot of data from disk.  Versioning records every change to the data as a new version, and the only recovery needed is to ensure the indexes are up-to-date.  Replication recovery is done by reading the data back from a replica.

Distribution - Broker, Caching, Local, Replication, Sharding

Broker distribution means that in a multi-server environment, client requests are directed to the dataserver that owns the desired view.  Caching is broker distribution plus a local cache of recently accessed rows; caching is suitable for frequently-accessed data where consistency is not critical.  Local distribution means that the view is available only on the local dataserver, that is, the server to which the client is directly connected.  Replication means that the data is replicated, and can be accessed from any replica node in the mesh.  Sharding means that the data is split up across nodes in the mesh, and a request may need to be sent to multiple nodes to be completed.  Sharding is intended at this point to be simple rather than optimal.

Notes

I now have libraries for all the required index structures, so that part is relatively easy. 

One problem, as I mentioned, is that the snapshot semantics don't work on Windows.  So on Windows you won't get Image views, and backups (and probably index creation) of memory views will be blocking operations.

Image views can be read-only or read-write, and single- or multi-user as required.

Shelf views will initially be read-only (that's simplest, for obvious reasons), but can be made read-write by creating an append log for the shelf and adding a shelf value to the version field in the indexes.  Images can be read-only or read-write as you prefer.  Taking either disk or memory snapshots is effectively instantaneous and uses almost no resources (CPU, memory, or disk); however, maintaining an Image or a writeable Shelf will gradually use an increasing amount of space as the original and the snapshot diverge.

Snapshots are used internally by the system for backups, recovery, and online index building, but can also be created by the user for manual point-in-time recovery, for reporting and data extracts (where it is required that the data be consistent to a point in time), and for testing - you can easily snapshot your data, run any operations you like on the snapshot, and then just drop the snapshot when you are done.  You can have multiple snapshots open at once, and you can take a snapshot of a snapshot, though of course this can multiply the resource requirements.  A single snapshot operation can be used to simultaneously snapshot all the memory-based views on that dataserver, even across multiple databases.

Also, particularly with CPython (regular Python) deployments, the dataserver is likely to be a single process.  Multi-threaded, but a single process, so it will be limited by the Python global interpreter lock, and a single dataserver can only effectively use a single core.  However, a snapshot is automatically assigned to a new process, so backups can run on a second core, and so can your reports, data extracts, and testing.

At any time when working with an Image view, you have the option of dropping it, committing it as the live version of the original view (losing any changes that happened to the original in the meantime), or committing it as a new, persistent view.  You can also simply snapshot one memory view into a new persistent view; this is also an online operation that takes almost no time or resources, and your new view is immediately recoverable by using the existing snapshot and log of the base view plus an ongoing log on the new view.

A database can contain any assortment of table types, and you don't need to know a table's type when using it, except when it comes to specific features like the expected performance of a range or spatial lookup, or reverting a record to a previous version (which of course is only available on versioned tables).

The storage and recovery mechanisms are intended to allow consistent, recoverable backups of entire databases simply by copying the files on disk using standard operating system commands.  This would require an index rebuild, but it's simple, reliable, and requires no locking of any structures.

The two partial exceptions are Batches and Arrays.  Batches are designed to be programmatically checkpointed (using the snapshot mechanism), and are for cases where it is better for the database to recover to a known point than to the latest point.  So, when you run a batch process that stores information in its own memory, it can use a Batch view to store external data.  In the event of a crash, the server will restore the Batch to its last checkpoint and you can simply re-run the program.

Arrays are intended to be high performance, and support atomic matrix and sub-matrix manipulations that can modify thousands or millions of values at once.  For this reason, they are not logged, but as with Batches you can issue checkpoint operations as required.

I'll post some more details on the planned special view types - like Index, Braid, and Chord - soon.

P.S.  Yes, I'm aware that all the view types have five-letter names.  And yes, I am mildly OCD.

Update: Come to think of it, since Pita doesn't support cross-table transactions, I could hand off each database or even each table to a separate process and scale to multiple cores that way.  The potential problem there is that Merges and Braids would require interprocess traffic rather than running in a single shared memory space.  Hmm, and I couldn't maintain a single Lua instance per connection via the embedded API.  Okay, on third thoughts, that won't work on a per-table basis.  Per database is probably fine though.

Posted by: Pixy Misa at 08:41 AM | No Comments | Add Comment | Trackbacks (Suck)
Post contains 1381 words, total size 14 kb.
