A cricket bat!
Twelve years, and four psychiatrists!
Four?
I kept biting them!
Why?
They said you weren't real.

Saturday, February 13

Geek

Jsyn

Announcing Jsyn, the JSON syndication format for blogs and everything else.

Specification for version sqrt(-1):

1. Put a bunch of stuff in a data structure.
2. JSON-encode it.
3. Make it available somehow.

...

What, you need more of a spec?  But I've registered a domain and everything!

Seriously, freedom from so-called "simple" syndication, coming soon!  Part of the Pita* project.

Update: Spec version -1:

A Jsyn feed object will have precisely three first-level sub-elements:

1. A feed element, an object containing the feed properties (required).
2. A schema element, an object containing advisory schema information (optional).
3. An items element, an array of objects representing the data items (required, but may be empty).

Example:

{"feed": {"source": "http://ai.mee.nu/feed.jsyn"},
 "schema": {"source": "http://jsyn.net/schemas/blog.jsyn", "version": 1.0},
 "items": []}

A client may use a local copy of the schema so long as the version matches that specified in the schema object.  The server must increment the version when updating the schema.  The server may revert to an older schema with a lower version number; the client must not continue to use the local copy of the schema in this case.
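Since the entire spec is "build a structure and JSON-encode it", a minimal sketch in Python is almost insultingly short.  (The check_jsyn helper below is purely illustrative - there's no published Jsyn library yet.)

import json

# A minimal Jsyn feed as a plain Python data structure.
feed = {"feed": {"source": "http://ai.mee.nu/feed.jsyn"},
        "schema": {"source": "http://jsyn.net/schemas/blog.jsyn", "version": 1.0},
        "items": []}

def check_jsyn(obj):
    # Enforce the three first-level sub-elements described above.
    if not isinstance(obj.get("feed"), dict):
        raise ValueError("feed is required and must be an object")
    if "schema" in obj and not isinstance(obj["schema"], dict):
        raise ValueError("schema, if present, must be an object")
    if not isinstance(obj.get("items"), list):
        raise ValueError("items is required and must be an array")

check_jsyn(feed)
print json.dumps(feed)  # step 2 of the spec: JSON-encode it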

* Which is part of the Minx project**, which is part of the make-Pixy-rich-or-drive-him-insane-either-is-fine project.

** I've subdivided Minx into three parts, like Gaul, only with less garlic: Minx, the bliki; Meta, the template, formatting, and scripting engine; and Pita, the database engine/abstraction layer.  In addition, there's Miko, the planned desktop client.

Posted by: Pixy Misa at 01:17 PM | No Comments | Add Comment | Trackbacks (Suck)
Post contains 240 words, total size 2 kb.

Geek

Syntactistaticality

Now where the heck was I, before being buried under an avalanche of poorly-considered Atom feeds and Chinese replica watch spam?

Ah, right.

We can't do a full Progress-style where clause in Python, unfortunately.  Or not without more trickery than I intend to apply; someone did make a working goto - never mind that, a working comefrom - but I'm not inclined to go to that sort of length.

So.  I want the first 20 posts in a given folder of a given blog, sorted by date order (descending, of course).  Pythonically.  No SQL.  Let's see:
db = Pita.Connect(host, user, passwd, database)  # 'pass' is reserved in Python, hence 'passwd'
posts = db.views.posts
I've connected to the Pita server and have a view open.  Now:
for post in posts(folder=f,order='date-',limit=20):
    ...
That's not bad.  With Python's named parameters, you can use any field in a flat record structure, so:
authors = db.views.authors
a = authors.find(name='Pixy Misa')
for post in posts(author=a.id, tag='databases',order='score-',limit=20):
    ...
If we have a nested structure, though, it doesn't work.  Python doesn't let you say:
for post in posts(author.name = 'Pixy Misa')
even if we have the code to automatically resolve the relation.  It's not valid syntax.  So that's one place where it breaks down.  Another place is ranges; we can say
for author in authors(country = 'Australia')
to get a list of authors who live in Australia, but we can't say
for author in authors('Andorra' < country < 'Azerbaijan')
even though that is a valid Python expression.  It will get evaluated, and we'll just pass either True or False to authors() (or throw an exception), and it just won't work.

Now, the design of Pita is that it's primarily a document database with advisory schemas.  It's not schemaless like many of the key-value stores, and it's not fixed-schema like most traditional relational databases.  Each view has a schema, which specifies what fields should be there, and if they are, what type they should be.  Fields can be missing, in which case the schema may specify a default value.  And you can stick in whatever additional data you want, so long as the schema doesn't specifically conflict with it.

What this means is that we can know that country is a string, and if we do an equality comparison between a string and a list, we mean that we want to know if the string is in the list.  So we can also do this:
for author in authors(country = ['Australia', 'New Zealand', 'Canada'])
to get authors from any of those countries.

By returning a generator or iterator, we can efficiently replace this:
for post in posts(blog=b,tag='databases',order='score-',limit=20)
with the more Pythonic
for post in posts(blog=b,tag='databases',order='score-')[:20]
Slicing (as it is called) is very general in Python and very useful, so adapting it to database selects will come naturally.
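Under the hood, that just means the view has to be lazy: a slice sets a limit, and nothing hits the database until iteration starts.  A hypothetical sketch (not Pita's actual implementation):

# LazyView is hypothetical - a sketch of how slicing could become a
# limit under the hood.
class LazyView(object):
    def __init__(self, select, criteria=None, limit=None):
        self.select = select            # callable that runs the real query
        self.criteria = criteria or {}
        self.limit = limit

    def __call__(self, **criteria):     # posts(blog=b, tag='databases')
        merged = dict(self.criteria, **criteria)
        return LazyView(self.select, merged, self.limit)

    def __getitem__(self, s):           # posts(...)[:20] becomes limit=20
        if isinstance(s, slice):
            return LazyView(self.select, self.criteria, s.stop)
        raise TypeError('views only support slices')

    def __iter__(self):
        # nothing touches the database until someone iterates
        return self.select(self.criteria, self.limit)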

But what about range searches?  There's no obvious Pythonic syntax for this - at least, not one that works.  Here are a few possibilities:
for house in houses(price = '<100000'):
    ...
for house in houses(price = ['>50000','<100000']):
    ...
We know price is of type money, so we look at that string, and the leading < means it's a range match.  Goody!  That doesn't work so well - or at all, for that matter - for strings, because we could perfectly well be looking for those exact strings.

We could have an explicit range function:
for house in houses.price.range(50000,100000):
    ...
That's not too bad either; it's pretty clear syntactically and semantically, and it requires no parsing.  Doesn't let you differentiate between > and >= though - and you can't do a range match on more than one field.  (You can't effectively use a binary tree for such a search anyway.)  We can still slide in our other parameters like so:
for house in houses.price.range(50000,100000,suburb='Wondabyne'):
    ...
But (again due to the strictures of Python), they must come last.

Since we're building a generator, it should be possible to do this LINQish trick:
for house in houses(suburb='Wondabyne').price.range(50000,100000):
    ...
The first operation produces a view that knows to search on the suburb field for Wondabyne.  This derived view has the exact same attributes as the original view, and price is one of those attributes, so we can use the range selector on price just like before.

We should be able to keep doing that sort of thing, until we get something like:
for house in houses.suburb('Wondabyne').bedrooms.range(3,5).bathrooms.min(2).price.max(150000).order('price+'):
    ...
But it's not terribly dynamic.  So, next stop, dynamism.
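For the impatient, here's a toy, purely in-memory mockup of that chaining - a sketch only; the real views will push these criteria down to indexes rather than filter Python lists:

class Field(object):
    def __init__(self, view, name):
        self.view, self.name = view, name
    def __call__(self, value):                  # houses.suburb('Wondabyne')
        return self.view._where(lambda r: r.get(self.name) == value)
    def range(self, lo, hi):                    # bedrooms.range(3, 5)
        return self.view._where(lambda r: lo <= r.get(self.name) <= hi)
    def min(self, lo):
        return self.view._where(lambda r: r.get(self.name) >= lo)
    def max(self, hi):
        return self.view._where(lambda r: r.get(self.name) <= hi)

class View(object):
    def __init__(self, rows):
        self.rows = rows
    def _where(self, test):
        return View([r for r in self.rows if test(r)])
    def __call__(self, **criteria):             # equality, or list membership
        v = self
        for k, val in criteria.items():
            if isinstance(val, list):
                v = v._where(lambda r, k=k, val=val: r.get(k) in val)
            else:
                v = v._where(lambda r, k=k, val=val: r.get(k) == val)
        return v
    def __getattr__(self, name):                # any unknown attribute is a field
        return Field(self, name)
    def order(self, spec):                      # 'price+' ascending, 'price-' descending
        key, desc = spec[:-1], spec.endswith('-')
        return View(sorted(self.rows, key=lambda r: r.get(key), reverse=desc))
    def __iter__(self):
        return iter(self.rows)

houses = View([{'suburb': 'Wondabyne', 'bedrooms': 4, 'bathrooms': 2, 'price': 120000}])
for house in houses.suburb('Wondabyne').bedrooms.range(3, 5).bathrooms.min(2).price.max(150000).order('price+'):
    print house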

Posted by: Pixy Misa at 03:13 AM | No Comments | Add Comment | Trackbacks (Suck)
Post contains 701 words, total size 5 kb.

Friday, February 12

Geek

3 Across

5 letters, priority queue.

Posted by: Pixy Misa at 10:48 PM | No Comments | Add Comment | Trackbacks (Suck)
Post contains 6 words, total size 1 kb.

Geek

All My Servers Are No Longer The Same Speed

I was planning to spend the weekend working with MongoDB, but those plans evaporated when it crashed and destroyed my test database. So instead I dug out my toy Python benchmark and ran it on Eineus. And just for fun, did the same in Psyco and Cython and Jython. Results are... Mixed. Yeah, that's a good word, particularly since the IronPython benchmark is still running.
System   | CPU           | Clock   | Python            | Loop  | String   | Scan  | Total
---------+---------------+---------+-------------------+-------+----------+-------+---------
Eineus   | Phenom II 945 | 3.0GHz  | 2.6.4/32          | 0.950 | 1.483    | 0.437 | 2.870
Eineus   | Phenom II 945 | 3.0GHz  | 2.6.4/Psyco       | 0.013 | 0.180    | 0.477 | 0.670
Eineus   | Phenom II 945 | 3.0GHz  | 2.6.4/Cython      | 0.000 | 84.750   | 0.490 | 85.240
Eineus   | Phenom II 945 | 3.0GHz  | 2.5.1/Jython      | 0.682 | 499.936  | 0.758 | 501.376
Nagi     | Phenom 9750   | 2.4GHz* | 2.6/IronPython/32 | 0.544 | 3502.652 | 1.541 | 3504.739
Nagi     | Phenom 9750   | 2.4GHz* | 2.6/IronPython/64 | 0.916 | 5399.020 | 1.264 | 5401.202
Potemayo | Core 2 Duo    | 1.8GHz  | 2.6/IronPython/32 | 0.559 | 1.632    | 1.567 | 3.759
Miyabi   | Phenom II 945 | 3.0GHz  | 2.6.4/64          | 0.637 | 1.003    | 0.530 | 2.170
Akane    | Opteron       | 2.0GHz  | 2.5               | 1.887 | 2.733    | 0.880 | 5.500

All times in seconds; lower is better.

* Normalised to 3.0GHz for ease of comparison.

I'll paste in the IronPython results if it ever finishes. (Update: Done now.)
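For reference, the three tests are roughly this shape - simplified for illustration, not the actual benchmark code:

def loop_test(n):                     # "Loop": a tight arithmetic loop
    x = 0
    for i in xrange(n):
        x += 1
    return x

def string_test(n):                   # "String": repeated concatenation
    s = ''
    for i in xrange(n):
        s += 'x'
    return s

def scan_test(haystack, needle, n):   # "Scan": substring searching
    hits = 0
    for i in xrange(n):
        hits += haystack.count(needle)
    return hits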

Some notes:

64-bit Python is now a good bit faster than 32-bit in many cases. It's actually a bit slower in string scanning; I don't know why.

A 3GHz Phenom II running Python 2.6 is 2x faster than a 2GHz Opteron running Python 2.5 from 3 years ago. Someone's been doing some good work, either the Python people or AMD or the GNU compiler team.

CPython (the standard Python) has some really neat string optimisations that I depend on in Minx. These flow through nicely to Psyco, but are conspicuously absent from Cython, Jython, and IronPython, which are 60, 400, and a couple of thousand times slower for heavy string concatenation (as I said, that benchmark hadn't finished yet when I first posted this). It's certainly possible to avoid that idiom and instead, for example, create a list of substrings and then join them all in one operation.
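The two idioms side by side:

chunks = ['spam'] * 100000

# the idiom that hurts everywhere but CPython (and Psyco):
s = ''
for chunk in chunks:
    s += chunk

# the portable workaround: build a list, join once at the end
s2 = ''.join(chunks)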

Apart from that, Jython seems to perform fairly well; certainly, if you need to run heavily multi-threaded Python code and can avoid doing millions of concatenations of large strings, Jython could be a winner. The Python interpreter can only run one thread at a time, though other threads can be handling I/O or library functions. The Jython runtime is fully multithreaded, so if you have a multi-threaded application and more than two CPUs - which I do - then Jython can provide an overall performance boost even if the single thread performance declines somewhat. (And it's actually faster on one of the tests, so depending on your code you might win both ways.)

As for IronPython, well, the string concatenation results are just terrible. Looping is comparable with Python or Jython (but far behind Psyco or Cython), and string scanning is the slowest of the lot, though only by a factor of 2, not 2000. It should be fast enough for most tasks as long as you really avoid concatenating large strings. I wonder what list performance is like - I'll have to add a test for that.

Bumped and updated: I tested this just now on Potemayo with a StringBuilder implementation for the strcat part of the benchmark.  It delivered about 2000x faster performance (!) on that part of the benchmark and 1000x on average, which would make a .Net deployment of Minx pretty feasible.  It would be nice if all these people deploying immutable string libraries could also deploy the Python concatenation performance trick, because StringBuilder is not a general-purpose string library, just a faster way to concatenate and modify a string-like object.
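For IronPython, the substitution looks roughly like this - a sketch, not the exact benchmark code:

# IronPython only: .NET's StringBuilder amortises the copying.
from System.Text import StringBuilder

def strcat(count):
    sb = StringBuilder()
    for i in xrange(count):
        sb.Append('spam and eggs ')
    return sb.ToString()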

Posted by: Pixy Misa at 05:09 AM | Comments (1) | Add Comment | Trackbacks (Suck)
Post contains 527 words, total size 5 kb.

Geek

Objectification

Just pootling about benchmarking stuffs again, which is what I do when I'm too lazy to actually code something but want to appear productive.  This time I'm seeing how many little objects Python can create per second.  Here's the code:
import time

# An int subclass with attribute access stubbed out - a stand-in for the
# "invisible object" wrappers discussed below.
class iwb(int):
    def __setattr__(self, k, v):
        pass               # silently swallow attribute writes
    def __getattr__(self, k):
        return None        # any unknown attribute reads as None

def perf(count):
    t0 = time.time()
    for i in range(count):
        j = iwb(i)         # one small object per iteration
    print '%s per second' % (count / (time.time() - t0))

perf(10000000)
And here are the numbers:

Python 2.6.4/32: 3044540.34221 per second
Psyco 2.6.4/32: 5378755.4303 per second
Jython 2.5.1: 795102.161706 per second  (significantly slower, but better than I'd expected)
Cython 2.6.4/32: 3830629.40698 per second (faster than Python, slower than Psyco)
Python 2.6.4/64: 3617701.37013 per second (again nearly 20% faster than the 32-bit version)
IronPython 2.6: 1931247.06236 per second (On my laptop, a 1.8GHz Core 2 Duo, compared to the 3GHz Phenom II for the preceding results, so not too bad.  I need to see if it works under Mono.)

The reason I'm fiddling with this is that I'm looking at an optimisation/simplification for Minx, but it would involve creating about 10,000 objects to load the current main page of my blog.  It's part of some syntactical trickery I'm trying to work out that creates virtual methods on objects in trees... without the objects being visibly objects, which involves some hackery when you put them in the tree in the first place.
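To give the flavour of the trickery - this is a hypothetical sketch, not actual Minx code - imagine values that behave as themselves but secretly carry tree context:

class node(int):
    def __new__(cls, value, children=None):
        self = int.__new__(cls, value)
        self._children = children or {}   # tree context hidden on the value
        return self
    def __getattr__(self, name):
        try:
            return self._children[name]   # node.left, node.right, etc.
        except KeyError:
            raise AttributeError(name)

root = node(1, {'left': node(2), 'right': node(3)})
print root + root.left + root.right  # 6 - arithmetic still sees plain ints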

Posted by: Pixy Misa at 03:16 AM | No Comments | Add Comment | Trackbacks (Suck)
Post contains 236 words, total size 2 kb.

Geek

Syntax Matters

Pixy's First Rule of Computer Language Design: Syntax matters.

You can have all the semantic brilliance in the world, but if your syntax is counter-intuitive, no-one will use it.  See for example the widespread success of Smalltalk and Lisp in the industry.

... crickets ...

Right.

So, how does this apply to databases, the topic to which I am dedicating an extended rant?  Well, the problem with databases in the modern world is essentially twofold:
  1. SQL.
  2. Everything else.
SQL's syntax, semantics, and implementations generally leave something to be desired, but it does get the job done, the job being selecting and modifying sets of data.

Everything else's syntax, semantics and implementations generally leave something to be desired too; it's impossible to design the perfect language (though if someone created interoperable implementations of Python and Ruby atop the Go compiler, they'd be getting close) because there are syntactic and semantic conflicts that cannot be resolved; you have to choose on which side of the conflict you wish to fall, and live with that decision.*

But the real clash comes when you want to get your data and do something with it, like, for example, display it on the screen.  You write a SQL query (which is a program in its own right), send it to the database server, get a bunch of text back, which you then interpret into structures suited to your programming language, process those structures, and finally, display it on the screen.

That is, frankly, a godawful mess, slow, insecure, painful to code, and nonetheless how most of the websites in the world actually work.

The frustrating thing is that this problem was solved completely in the early 80s.**

In Progress, to display a list of customers with balances over $100, you can say:
for each customer where balance>100:
  display customer.
end.
Okay, that's nothing special; in SQL it's even easier:
select * from customer where balance>100;
But in Progress, you can do this:
def var reminder as logical.

for each customer where balance>100:
    display customer.
    update reminder.
    if reminder then do:
        send_reminder(id).
        reminder_date = today.
    end.
end.
That is, you can loop through the customers, prompt the user whether to send a reminder letter to each, send out the letters as instructed, and mark it on the account.  It even initiates the transactions for you and provides automatic commit and undo semantics.  No need to write queries in one language and application code in another, your data is simply there for you to work with.  Your user could be on a Wyse 60, your application on a Dell 386 running Xenix, and your database on an AS/400, and it would all just work.

It was a thing of beauty.  Until your r-code (Progress's bytecode) exceeded 64k and you couldn't compile it any more.***

Nothing else compares.  Microsoft's LINQ, two full decades later, is a poor and partial reinvention of Progress.

Exactly why Progress is languishing in near-irrelevance today is a topic for a whole month of rants, and beside the point, which is that somebody already got it right.

I can't use Progress for my applications because the language itself is abjectly inefficient and limited compared to a modern dynamic language like Python or Ruby.****  So the question is, what can I do with a modern dynamic language like Python or Ruby to recapture some of that functionality, some of that breezy syntactic and semantic awesomeness?

Well, neither Python nor Ruby has an interactive mode that remotely compares with Progress, so I'm going to ignore that part for the moment and concentrate on the data syntax and semantics.  And I'm going to mostly use Python, because I'm more familiar with it and because they're similar enough that it doesn't matter that much.

Python has these things called generators.  Generators are program structures that pretend to be data structures.  In Python, if you have a list
lollipops = ['cherry','huckleberry','lime']
you can iterate through it with:
for pop in lollipops:
    print pop
(As an aside, there's no end statement, which is one of Python's greatest strengths-and/or-weaknesses.)  Which will print, as you might expect,
cherry
huckleberry
lime
So far, it's elegant but limited.  The clever trick is that if I write this piece of code:
def lollipops():
    flavours = ['cherry','huckleberry','lime']
    for flavour in flavours:
        yield flavour
And then write:
for pop in lollipops():
    print pop
I will again get
cherry
huckleberry
lime
We have a program (a subprogram) here acting almost exactly like a data structure.  So rather than being limited to the data structures that we have, or being forced to write code that manipulates the data structures and calls functions to get the bits and pieces that we want, we can just use the language's own for loop.  The only distinction is that calling a subprogram requires dereferencing it with () (otherwise you get Python's version of a function pointer - very useful, but not what we want right now).  But we can do better still by creating a class, like so:
class candy():
    def __init__(self):
        self.flavours = ['cherry', 'huckleberry', 'lime']
 
    def __iter__(self):
        for flavour in self.flavours:
            yield flavour

lollipops = candy()
for pop in lollipops:
    print pop

cherry
huckleberry
lime
And now we have code that acts exactly like a data structure.  (Well, for the requirements of this little example.)
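Push the trick one step further - give the class a __call__ that filters as it yields - and a database-style select becomes an ordinary for loop.  (A toy sketch, scanning a list rather than an index:)

class table(object):
    def __init__(self, rows):
        self.rows = rows

    def __iter__(self):
        for row in self.rows:
            yield row

    def __call__(self, **criteria):
        # yield only the rows matching every keyword criterion
        for row in self.rows:
            if all(row.get(k) == v for k, v in criteria.items()):
                yield row

authors = table([{'name': 'Pixy Misa', 'country': 'Australia'},
                 {'name': 'Somebody Else', 'country': 'Andorra'}])
for author in authors(country='Australia'):
    print author['name']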

At which point I shall leave off for the moment, to resume at some time other than 3AM.

* Some of the syntactic conflicts arise from the restrictions of the ASCII character set, which simply doesn't have some of the symbols we need, so we substitute and overload the ones we do.  Of course, today we have Unicode and can display every symbol we could ask for - but you can't type in Unicode.

** And probably before, but certainly in the 80s.

*** Something which, as far as I know, they still haven't entirely resolved.

**** And because it's overpriced and the licensing model is insane.

Posted by: Pixy Misa at 02:31 AM | Comments (1) | Add Comment | Trackbacks (Suck)
Post contains 967 words, total size 8 kb.

Thursday, February 11

Geek

Enough With The Curly Brackets Already!

There's a quote (which I am unable to find) to the effect that Algol 60 was better not only than all the languages that preceded it, but all the languages that followed it as well.  Which rather steps on the punchline here, but at mee.nu we're all about the stepping on of punchlines.

P.S.  Enough with the curly brackets!

P.P.S.  Please don't remind me that it could have been worse.

Posted by: Pixy Misa at 02:17 AM | Comments (1) | Add Comment | Trackbacks (Suck)
Post contains 76 words, total size 1 kb.

Wednesday, February 10

Geek

View From A Pita

Quick list of the planned view types for Pita.  Not all of these will be supported in initial releases, though only two (Braid and Chord) involve known unknowns.  (A view is what SQL would call a table; a derived view in Pita is still called a view.)

Type  | Storage | Records         | Indexing   | Recovery        | Distribution
------+---------+-----------------+------------+-----------------+-----------------------
Basic | Memory  | Variable        | Hash       | Snapshot        | Broker
Space | Memory  | Variable        | Atree      | Snapshot, Log   | Broker
Cloud | Memory  | Variable        | Atree      | Replication     | Replication, Sharding
Scene | Memory  | Variable        | Any memory | Snapshot, Log   | Broker
Batch | Memory  | Variable        | Atree      | Snapshot        | Broker
Array | Memory  | Fixed           | Array      | Snapshot        | Broker
Graph | Memory  | Variable        | Graph      | Snapshot, Log   | Broker
Queue | Memory  | Variable        | None       | Snapshot, Log   | Broker
Stack | Memory  | Variable        | None       | Snapshot, Log   | Broker
Deque | Memory  | Variable        | None       | Snapshot, Log   | Broker
Cache | Memory  | Variable        | Hash       | None            | Local
Image | Memory  | Variable        | Any memory | Snapshot        | Broker
Store | Disk    | Variable        | Btree      | Versioning      | Broker
Shelf | Disk    | Variable        | Btree      | Versioning      | Broker
Fixed | Disk    | Fixed           | Btree      | Log             | Broker
Table | Disk    | Variable, Fixed | Btree      | Versioning, Log | Broker
Share | Disk    | Variable, Fixed | Btree      | Versioning, Log | Caching
Shard | Disk    | Variable, Fixed | Btree      | Replication     | Replication, Sharding
Index | Disk    | Variable        | Btree      | None            | Broker
Chord | Disk    | Variable        | Btree      | Versioning      | Caching
Merge | Virtual | Variable, Fixed | Virtual    | Virtual         | Broker
Braid | Virtual | Variable, Fixed | Any memory | Virtual         | Broker
Slice | Virtual | Variable, Fixed | Any memory | Virtual         | Broker
Remap | Virtual | Variable, Fixed | Virtual    | Virtual         | Broker
Magic | Virtual | Variable, Fixed | Virtual    | Virtual         | Broker


Storage - Memory, Disk, or Virtual

The immediate storage mechanism.  Memory views are accessed in memory (and must fit in memory), but are persisted to disk.  Disk views are read and written from disk.  Virtual views are representations of other views or of programmatic data.

Records - Fixed, Variable

Pita supports three main modes of storing record data: fixed-length random access on disk, variable-length sequential access on disk, or any format in memory.  This is to avoid the complications and overhead of dynamically managing space on disk (it does handle that for Btree indexes, though).

Indexing - Hash, Atree, Btree, Qtree, Rtree, Array, Graph, Virtual

Hashes are simple one-to-one mappings of key to data, unordered, and are available for in-memory or on-disk data.  Atrees (actually AVL trees), Qtrees (quadtrees), Rtrees (R-trees), Arrays, and Graphs are available indexing methods for in-memory data.  Btrees are available for on-disk data.

Recovery - Log, Snapshot, Versioning, Replication

Crash recovery is handled by a variety of mechanisms depending on the view type.  Log recovery involves replaying a log of recent operations to make sure the data is consistent and up-to-date.  Snapshot recovery simply reloads a snapshot of data from disk.  Versioning records every change to the data as a new version, and the only recovery needed is to ensure the indexes are up-to-date.  Replication recovery is done by reading the data back from a replica.

Distribution - Broker, Caching, Local, Replication, Sharding

Broker distribution means that in a multi-server environment, client requests are directed to the dataserver that owns the desired view.  Caching is broker distribution plus a local cache of recently accessed rows; caching is suitable for frequently-accessed data where consistency is not critical.  Local distribution means that the view is available only on the local dataserver, that is, the server to which the client is directly connected.  Replication means that the data is replicated, and can be accessed from any replica node in the mesh.  Sharding means that the data is split up across nodes in the mesh, and a request may need to be sent to multiple nodes to be completed.  Sharding is intended at this point to be simple rather than optimal.

Notes

I now have libraries for all the required index structures, so that part is relatively easy. 

One problem, as I mentioned, is that the snapshot semantics don't work on Windows.  So on Windows you won't get Image views, and backups (and probably index creation) of memory views will be blocking operations.

Image views can be read-only or read-write, and single- or multi-user as required.

Shelf views will initially be read-only (that's simplest, for obvious reasons), but can be made read-write by creating an append log for the shelf and adding a shelf value to the version field in the indexes.  Taking either disk or memory snapshots is effectively instantaneous and uses almost no resources (CPU, memory, or disk); however, maintaining an Image or a writeable Shelf will gradually use an increasing amount of space as the original and the snapshot diverge.

Snapshots are used internally by the system for backups, recovery, and online index building, but can also be created by the user for manual point-in-time recovery, for reporting and data extracts (where it is required that the data be consistent to a point in time), and for testing - you can easily snapshot your data, run any operations you like on the snapshot, and then just drop the snapshot when you are done.  You can have multiple snapshots open at once, and you can take a snapshot of a snapshot, though of course this can multiply the resource requirements.  A single snapshot operation can be used to simultaneously snapshot all the memory-based views on that dataserver, even across multiple databases.

Also, particularly with CPython (regular Python) deployments, the dataserver is likely to be a single process.  Multi-threaded, but a single process, so the Python global interpreter lock means a single dataserver can only make effective use of a single core.  However, a snapshot is automatically assigned to a new process, so backups can run on a second core, and so can your reports, data extracts, and testing.
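(For the curious: the mechanism here is Unix's fork() and copy-on-write - which is also why the snapshot semantics don't work on Windows.  A bare-bones sketch of the idea, not Pita's actual code:)

import os, json

def snapshot_to_disk(views, path):
    # fork() gives the child a copy-on-write image of every in-memory
    # view; the parent keeps serving requests while the child writes.
    pid = os.fork()                 # no fork() on Windows, hence the caveat
    if pid == 0:
        with open(path, 'w') as f:
            json.dump(views, f)
        os._exit(0)                 # child exits without running cleanup
    return pid                      # parent can os.waitpid(pid, 0) later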

At any time when working with an Image view, you have the option of dropping it, committing it as the live version of the original view (losing any changes that happened to the original in the meantime), or committing it as a new, persistent view.  You can also simply snapshot one memory view into a new persistent view; this is also an online operation that takes almost no time or resources, and your new view is immediately recoverable by using the existing snapshot and log of the base view plus an ongoing log on the new view.

A database can contain any assortment of table types, and you don't need to know a table's type when using it, except when it comes to specific features like the expected performance of a range or spatial lookup, or reverting a record to a previous version (which of course is only available on versioned tables).

The storage and recovery mechanisms are intended to allow consistent, recoverable backups of entire databases simply by copying the files on disk using standard operating system commands.  This would require an index rebuild, but it's simple, reliable, and requires no locking of any structures.

The two partial exceptions are Batches and Arrays.  Batches are designed to be programmatically checkpointed (using the snapshot mechanism), and are for cases where it is better for the database to recover to a known point than to the latest point.  So, when you run a batch process that stores information in its own memory, it can use a Batch view to store external data.  In the event of a crash, the server will restore the Batch to its last checkpoint and you can simply re-run the program.

Arrays are intended to be high performance, and support atomic matrix and sub-matrix manipulations that can modify thousands or millions of values at once.  For this reason, they are not logged, but as with Batches you can issue checkpoint operations as required.

I'll post some more details on the planned special view types - like Index, Braid, and Chord - soon.

P.S.  Yes, I'm aware that all the view types have five-letter names.  And yes, I am mildly OCD.

Update: Come to think of it, since Pita doesn't support cross-table transactions, I could hand off each database or even each table to a separate process and scale to multiple cores that way.  The potential problem there is that Merges and Braids would require interprocess traffic rather than running in a single shared memory space.  Hmm, and I couldn't maintain a single Lua instance per connection via the embedded API.  Okay, on third thoughts, that won't work on a per-table basis.  Per database is probably fine though.

Posted by: Pixy Misa at 08:41 AM | No Comments | Add Comment | Trackbacks (Suck)
Post contains 1381 words, total size 14 kb.

Tuesday, February 09

Geek

The Changer Things Are, The Samer They Get

Just priced a server for the office at my day job - all but one little server at the moment are hosted, and we need something local.

8 CPUs, 24GB RAM, 12TB disk, $5000.

I remember at my PPPPOE spending a good million dollars to get that kind of system.  Throw a couple of Intel E-series SSDs in there and we'd get the IO throughput of that million dollar system as well.

Posted by: Pixy Misa at 06:04 PM | No Comments | Add Comment | Trackbacks (Suck)
Post contains 79 words, total size 1 kb.

Geek

Tiny Bee Pistols!

I'd forgotten about this one: mxBeeBase.  Open source B+tree Python library.

Perfect!

So that's R-trees, Quadtrees and B+trees done.  Within hours of deciding I was going to write my own database I already have all the indexing dealt with. ;)

Oh, and Pita is going to be open source too.  BSD license, probably, unless I need to use something GPL.

Posted by: Pixy Misa at 02:31 AM | Comments (2) | Add Comment | Trackbacks (Suck)
Post contains 63 words, total size 1 kb.
