XPoint: A Better Flash Than Flash
No, not the Adobe one, but the one that powers your phones and tablets and increasingly, notebooks and desktops.
Intel and Micron have announced XPoint, a brand-new memory technology based on magic smoke and fairy dust* that is up to 1000x faster and 1000x longer-lasting than conventional flash memory.
Such announcements are not uncommon, but they are mostly of technical interest, because it takes a good ten years to get a new memory technology off the ground - if, that is, it doesn't run into serious problems of technical or financial viability, which happens, roughly speaking, 100% of the time.
Turns out in this case that Intel and Micron have been working on this for ten years already, and they're fabbing viable 128Gbit devices right now. Products are expected to ship by the end of the year. This year. 2015. AD. Update: It sounds like chips will be available this year, but end products aren't expected until 2016. Darn.
The big difference between these devices and common NAND flash is that XPoint is bit-addressable, like RAM, whereas with NAND flash you have to write a page (typically 4K) at a time and erase huge blocks (2MB or more), which requires a lot of fiddling behind the scenes to make work. With XPoint you just write what you want, when you want, where you want.
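To make the "fiddling" concrete, here's a minimal sketch of the difference. The function names and structures are mine, not from any real flash translation layer: on NAND, changing even one byte means reading the whole page, modifying it, and programming a fresh page, while a byte-addressable device just writes the byte.

```python
# Illustrative sketch only - real controllers also handle wear
# levelling, block erasure, and address remapping.

PAGE_SIZE = 4096  # typical NAND page

def nand_write_byte(pages, page_no, offset, value):
    """Read-modify-write: the smallest NAND program unit is a page."""
    page = bytearray(pages[page_no])   # read the full 4K page
    page[offset] = value               # modify one byte in the copy
    pages[page_no] = bytes(page)       # program an entire new page
    return PAGE_SIZE                   # bytes physically written

def xpoint_write_byte(cells, addr, value):
    """Bit/byte addressable: write just the cell you want."""
    cells[addr] = value
    return 1                           # bytes physically written
```

For a one-byte update, the NAND path physically writes 4096 bytes against the other path's one - and that's before counting the eventual block erase.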
Pricing is expected to be somewhere between NAND flash and DRAM - DRAM is about 20x the price of commodity flash.
Some use cases are obvious - solid-state caches for RAID controllers and enterprise-grade SSDs. But if we finally have a large-scale non-volatile storage solution that runs at close to main memory speeds, that is going to set the cat amongst the pigeons in the database world. It's the database holy grail, and the heart of every good database is software that tries to deliver that sort of performance without the requisite hardware unobtainium.
Now that unobtainium is due to be on store shelves for Christmas, the impossible is set to become commonplace.
* They haven't disclosed exactly how these devices work, so that's a bit of informed guesswork on my part.
Posted by: Pixy Misa at
| Comments (8)
| Add Comment
| Trackbacks (Suck)
Post contains 362 words, total size 3 kb.
Been working with databases for a few decades. Databases generally buffer some data in memory, partly because it's so much faster than going to disk. SAP has been talking up HANA for a while, which basically moves the whole database into memory. Also recently involved with implementing HANA-lite for a pricing subsystem. It will be interesting to see where pricing ends up.
A lot of my job has been creating indexes and queries that reduce the time required to get the requisite data from disk to memory. My understanding is that will largely go away, and accessing database tables will be more like working with internal tables -- you'll still need to do binary searches in large tables but the whole physical layer problem is basically eliminated.
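The "internal table" access the comment above describes might look something like this - a hedged sketch, with names of my own invention: once the physical I/O layer is gone, a large sorted table in memory still wants a binary search rather than a linear scan.

```python
# Sketch of a keyed lookup on a sorted in-memory table:
# O(log n) comparisons, no disk I/O anywhere in the path.
from bisect import bisect_left

def lookup(sorted_keys, rows, key):
    """Binary search parallel key/row lists; return the row or None."""
    i = bisect_left(sorted_keys, key)
    if i < len(sorted_keys) and sorted_keys[i] == key:
        return rows[i]
    return None
```

With the physical layer eliminated, the log-n comparison cost is essentially all that's left of the lookup.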
I wonder if in the long run this might affect AI more than anything. Seems like it would be a lot easier to model processes for massively parallel synaptic connections in addressable memory space. And real relationships are relational, which is why the relational database is so useful.
SSD is a huge benefit for PCs, extending the useful life of mine by a few years (well, maybe only because I stopped gaming).
Posted by: TallDave at Wednesday, July 29 2015 01:32 PM (74ZYB)
Yep. Most people are looking at this and saying, okay, that sounds nice, I guess, but us hard-core database guys are giggling and spinning our wheels. We've been promised something like this every couple of years since about 1970, so the normals will have to forgive us if we act a bit giddy.
SSDs are already a life-saver for database tuning, but XPoint sounds like pure magic.
Posted by: Pixy Misa at Wednesday, July 29 2015 01:53 PM (PiXy!)
What happens when Storage and RAM are indistinguishable....
Posted by: Mauser at Wednesday, July 29 2015 07:56 PM (TJ7ih)
Posted by: Pixy Misa at Wednesday, July 29 2015 07:57 PM (PiXy!)
Don't you mean byte addressable?
Posted by: conrad6 at Thursday, July 30 2015 04:43 AM (826FZ)
The array itself is bit-addressable - it's an X/Y(/Z) crossbar at the bit level. I assume the controller / interface will impose a word size over that, as with DRAM, which has signal lines to allow the CPU to write a single byte on a 128-bit or 256-bit bus, but not individual bits.
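The byte-enable scheme described above can be sketched roughly like this - an illustrative model of mine, not Intel's interface: the bus moves a wide word, and a per-lane mask decides which bytes actually get written.

```python
# Hedged sketch of byte-enable semantics on a 128-bit (16-byte) bus.
WORD_BYTES = 16

def masked_write(word, new_word, byte_enable):
    """Update only the byte lanes whose enable bit is set."""
    out = bytearray(word)
    for lane in range(WORD_BYTES):
        if byte_enable & (1 << lane):
            out[lane] = new_word[lane]
    return bytes(out)
```

So a single-byte CPU store becomes a full-width bus transaction with only one enable bit set, even though the underlying array could in principle address finer than that.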
Posted by: Pixy Misa at Thursday, July 30 2015 11:08 AM (PiXy!)
Yeah... they moved one of our DBs onto SSD a few years back, got about 10x faster.
Interestingly, it was almost exactly the same performance boost as I'd gotten from moving the database to an 8K blocksize (they were using a 1K DB blocksize on an OS with an 8K blocksize, so every read was 8x slower).
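The arithmetic behind that blocksize mismatch can be sketched in a couple of lines - a back-of-the-envelope model, assuming each logical read costs one full filesystem-sized physical read:

```python
def read_amplification(db_block, fs_block):
    """Physical bytes read per useful block, assuming no alignment help."""
    return max(fs_block, db_block) // db_block

# 1K database blocks on an 8K filesystem: 8x the necessary I/O.
```

Matching the database blocksize to the filesystem's brings that factor back down to 1.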
Users like it when your database gets 100x faster over a year.
Just today I worked on a program that does a nine-table join, and made it about 25x faster by finding a way to restrict one of the result sets at a different point in the query. Even when everything is indexed properly you can still have issues like this in large joins where some of the tables have tens of millions of records.
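The kind of fix described above - restricting a result set earlier rather than later - looks roughly like this in miniature (illustrative data and field names, not the actual query):

```python
# Filtering after the join forces the join to build every matching pair
# first; filtering before it shrinks the set the join has to touch.

def join_then_filter(orders, customers, region):
    joined = [(o, c) for o in orders for c in customers
              if o["cust"] == c["id"]]
    return [(o, c) for o, c in joined if c["region"] == region]

def filter_then_join(orders, customers, region):
    small = [c for c in customers if c["region"] == region]  # restrict first
    return [(o, c) for o in orders for c in small
            if o["cust"] == c["id"]]
```

Both return the same rows; the second touches far fewer intermediate pairs, which is where the 25x-style wins in big multi-table joins come from.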
Posted by: TallDave at Saturday, August 01 2015 04:18 PM (74ZYB)
| Add Comment
Powered by Minx 1.1.6c-pink.