Saturday, March 14
The world's a mess, as it always is, and Moore's Law is coming to an end. The automatic cycle of faster-better-cheaper that has helped drive the global economy for thirty years is winding down.
- 3D Chips
Flash memory is a near miracle combination of cheap, capacious, and lightning fast. I have a SanDisk 128GB micro SD card; it cost me about $150, stores the equivalent of a million Apple II floppies, and is the size of my little fingernail. I was afraid I would lose it before it was safely installed in my tablet.
The problem with flash memory is that it wears out over time. Not quickly - recent tests of consumer-grade SSDs showed that they last a long time even under appalling workloads. But as technology advances and flash chips get smaller, an unavoidable consequence is that they wear out faster. The newer your SSD or SD card, the shorter its lifespan.
The answer? Move into the third dimension. To pack more capacity into its flash chips without making them less reliable, Samsung is now stacking up to 32 layers of flash cells on top of each other. The drawback is that this requires a level of control that we don't yet have in the latest process technologies - around 20nm. Samsung had to go back to an older process, closer to 50nm.
But by combining larger memory cells with 3D stacking, Samsung can keep increasing capacities without sacrificing lifespan or reliability. The 850 Pro and 850 Evo SSDs, already shipping, use Samsung's second-generation 3D flash, and other manufacturers will follow in the next couple of years.
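The trade-off pays off on paper too. As a back-of-envelope sketch - assuming cell area scales with the square of the feature size, which is only a rough approximation - stacking 32 layers on a 50nm process still comes out well ahead of a single layer at 20nm:

```python
# Rough density comparison: 32-layer 3D flash on an older process vs
# planar flash on a cutting-edge process. Assumes cell area scales with
# the square of the feature size (illustrative numbers only).

def relative_density(feature_nm, layers, baseline_nm=20):
    """Density relative to a single-layer die at baseline_nm."""
    cells_per_layer = (baseline_nm / feature_nm) ** 2
    return cells_per_layer * layers

planar_20nm = relative_density(20, layers=1)    # modern planar process
stacked_50nm = relative_density(50, layers=32)  # older process, 32 layers

print(planar_20nm)   # 1.0
print(stacked_50nm)  # 32 * (20/50)^2 = 5.12
```

So even with cells more than twice as large in each dimension, the stack wins by a factor of five - with the bigger, hardier cells thrown in for free.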
- USB 3.1 and the Type-C Connector
Remember before USB? When printers had printer ports, and keyboards and mice had keyboard and mouse ports? Modems had serial ports, joysticks had joystick ports, and every mobile phone in the world had a different charger.
USB swept all that away with a single cable (cough) and a single connector (cough cough). Yes, since then we've had USB 1, 2, and 3.0, in a total of four different speeds and at least eight standard connectors.
USB 3.1 changes things just a little. First, it's 2.4 times the speed of the already fairly zippy USB 3.0, thanks to a higher bit rate and improved encoding standards. Second, it boosts the power supply standard from about 10W to 100W - from just about enough to charge a tablet to enough to charge a full-size notebook.
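That 2.4x figure is the combination of both improvements. USB 3.0 uses 8b/10b encoding (8 data bits for every 10 bits on the wire), while USB 3.1 Gen 2 doubles the line rate and switches to the much leaner 128b/132b encoding:

```python
# Where the "2.4 times the speed" figure comes from: double the line
# rate, plus less encoding overhead.

def effective_gbps(line_rate_gbps, data_bits, line_bits):
    """Usable data rate after line-encoding overhead."""
    return line_rate_gbps * data_bits / line_bits

usb30 = effective_gbps(5, 8, 10)      # 4.0 Gbps of actual data
usb31 = effective_gbps(10, 128, 132)  # ~9.7 Gbps of actual data

print(round(usb31 / usb30, 2))  # → 2.42
```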
And third, it introduces the Type-C connector.
The Type-C connector is the same at both ends of the cable. It works either way up. It's about the size of the USB 2 micro-B connector, but more robust (at least, it's designed to be), and has up to 100 times the usable bandwidth.
And it can carry a video signal to drive a 5K monitor at 60Hz.
Look for it to do to things like HDMI, DVI, and Thunderbolt what USB 1 did to parallel and serial ports, and to notebook power supplies what the micro-B port did to phone chargers.
- NBASE-T
That looks like gibberish if you don't know the history of Ethernet. Early Ethernet ran over shared coaxial cable, and used something called CSMA/CD to control the sharing of the bandwidth. That worked fine if the network wasn't busy, but congested networks just plain sucked. Plus, coax cable is a pain to work with.
It was slowly supplanted by 10BASE-T, a standard for running a 10-megabit network over cheap twisted pair cabling. That was supplanted by Fast Ethernet on the 100BASE-T (or, really, the 100BASE-TX) standard. And that has been replaced by 1000BASE-T, gigabit Ethernet.
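The reason congested CSMA/CD networks sucked is the collision recovery scheme. A simplified model of the binary exponential backoff used after each collision (not the full 802.3 algorithm, just the core idea):

```python
import random

# After each collision, a station waits a random number of slot times,
# with the range doubling on every failed attempt - so a busy network
# spends more and more time waiting instead of transmitting.
# Simplified sketch, not the full IEEE 802.3 algorithm.

def backoff_slots(attempt, max_exponent=10):
    """Pick a random backoff after the given collision attempt (1-based)."""
    k = min(attempt, max_exponent)
    return random.randint(0, 2 ** k - 1)

# Average wait grows quickly as collisions pile up:
for attempt in (1, 4, 8):
    expected = (2 ** min(attempt, 10) - 1) / 2
    print(attempt, expected)  # 1 → 0.5 slots, 4 → 7.5, 8 → 127.5
```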
There is a standard called 10GBASE-T - 10 gigabits over twisted pair. The standard was published in 2006, and you can count the number of consumer products implementing it on, well, zero hands. Even Apple's fancy Mac Pro doesn't have 10GBASE-T.
The slow uptake has been due to a combination of factors - the power requirements to drive a signal that fast, the chip design required to handle the signal processing, and the cost of ripping out all your old cables because they don't meet the 10GBASE-T spec.
NBASE-T basically says, well, if we can't do 10 gigabits, rather than dropping all the way back to the 1000BASE-T standard from 1999, let's see if we can run at 5 gigabits, or failing that, at 2.5 gigabits.
If that doesn't sound immediately exciting to you, think of how it would feel running any other piece of computer equipment from 1999.
- Retina Displays
Like computer monitors for example. The standard resolution - dots per inch - of computer monitors had remained largely unchanged for at least 20 years.
The screens got bigger, something made possible by LCD panels, because a 30" CRT is not something you can conveniently sit on your desk. But the number of pixels was tied directly to the size of the display.
No longer. Now, thanks to technology that started in the mobile phone and tablet markets, everything is all over the place. At the common screen size of 27", you now have a choice of 1920x1080, 2560x1440, 3840x2160, or 5120x2880 - anything from 2 million to 14 million pixels.
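The spread in pixel density is just as dramatic as the spread in pixel count. For a 27" panel, density is the diagonal resolution divided by the diagonal size:

```python
import math

# Pixel count and density for a 27-inch monitor at each of the
# resolutions mentioned above.

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch for a panel of the given resolution and size."""
    return math.hypot(width_px, height_px) / diagonal_in

for w, h in [(1920, 1080), (2560, 1440), (3840, 2160), (5120, 2880)]:
    print(f"{w}x{h}: {w * h / 1e6:.1f} Mpixels, {ppi(w, h, 27):.0f} ppi")
```

That runs from roughly 80 ppi at 1920x1080 - about what monitors delivered for those 20 static years - to well over 200 ppi at 5120x2880.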
The change has been so rapid that it's left video standards behind; 5K monitors require two DisplayPort cables to run at 60Hz, and likewise, a 4K monitor only runs at 30Hz over HDMI unless both your video card and monitor support the very recent HDMI 2.0.
And companies are forging ahead to 8K. Which is where it will likely stop, because 5K is already pushing the point of diminishing returns. Until we get to large, continuous workspace displays - where your desk is backed with a single curved sheet of glass a couple of feet high and six or more feet wide - 8K will do.
- 2.5D Chips
Nvidia have just released their new crown jewel, the Titan X, which is one-and-a-half GTX 980s squeezed into a 600mm² die. With 3072 shaders, 8 billion transistors, and 12GB of RAM on a 384-bit memory bus running at 7GHz, it's a technological tour-de-force, and not surprisingly, it will set you back $999.
But its reign as the fastest single-chip video card on the block may be short-lived. AMD is tipped to release their Radeon 390X, which will have a memory bus running at only 1GHz... But 4096 bits wide.
This trick is made possible by the prosaically named high bandwidth memory, a process of stacking and connecting individual memory dies vertically, and then presenting a very wide bus - up to 1024 bits wide - to the CPU or GPU.
The 390X is expected to come in 4GB and 8GB versions, with four memory stacks sitting right next to the GPU die on a tiny interposer board.
Which leaves the rest of the video card to do, basically, nothing.
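The arithmetic behind the two approaches is simple: peak bandwidth is bus width times transfer rate (treating the quoted clock figures as effective transfer rates):

```python
# Narrow-and-fast (Titan X's GDDR5) vs wide-and-slow (HBM on the
# rumoured 390X), using the figures quoted above.

def bandwidth_gbs(bus_bits, rate_ghz):
    """Peak memory bandwidth in GB/s."""
    return bus_bits / 8 * rate_ghz

titan_x = bandwidth_gbs(384, 7.0)   # 336 GB/s
hbm_390x = bandwidth_gbs(4096, 1.0) # 512 GB/s
print(titan_x, hbm_390x)
```

Running the bus at 1GHz instead of 7GHz also saves a great deal of power, which can then be spent on the GPU itself.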
- Shingles
As a bonus, Seagate recently released an 8TB disk drive. It doesn't cost the Earth, either; Amazon list it at $270. To bring that capacity down to that price, they've played a little trick.
The read head on modern disk drives has for a while been much smaller than the write head. After all, the write head needs to force the magnetic domains in the surface of the disk to change polarity; the read head only needs to sense the polarity as they zoom past.
In fact, the read head is about half the size of the write head. So in theory, you can read data from tracks only half as wide as the ones you write. Which sounds like a dumb theory that makes no sense - and it's exactly what Seagate have done.
The trick is this: The write head lays down a track of data - like tracks on a vinyl record, except they're invisible and largely intangible and they're concentric circles rather than a continuous spiral. And then the head moves inwards half a track, and writes the next track of data. And then it moves half a track again, and writes the next track.
So the tracks overlap each other like shingles - hence the name. The read head, being half the size of the write head, has no trouble reading just the half that isn't overlapped.
The complications arise when you need to go back and change or delete something - you can't just overwrite it directly, because that would wipe out the half-track next to it.
The solution is to put gaps into the tracks. I don't know the exact numbers Seagate have used, but let's say they started with a 5TB drive, used this shingle trick to double it to 10TB, and then left every fifth track empty, giving an effective 8TB.
When you want to overwrite something, you read four tracks from the disk into the buffer in the drive controller, make the change you want, and write them back again. That means you get twice the storage in the same space, and reading and writing files is as fast as ever, but changing data takes four times as long.
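A toy model makes the cost of an in-place update clear. The band size and layout here are illustrative, not Seagate's actual geometry:

```python
# Toy model of a shingled band: overwriting one track means reading the
# whole band, patching it in memory, and rewriting the lot, because each
# write clobbers half of the neighbouring track. Band size is assumed.

TRACKS_PER_BAND = 4  # shingled tracks between guard gaps (illustrative)

class ShingledBand:
    def __init__(self):
        self.tracks = ["" for _ in range(TRACKS_PER_BAND)]
        self.track_ops = 0  # tracks physically read or written

    def read(self, i):
        self.track_ops += 1
        return self.tracks[i]

    def write(self, i, data):
        # Read-modify-write: pull in the whole band, patch, write back.
        band = [self.read(t) for t in range(TRACKS_PER_BAND)]
        band[i] = data
        for t, d in enumerate(band):
            self.track_ops += 1
            self.tracks[t] = d

band = ShingledBand()
band.write(2, "new data")
print(band.track_ops)  # 8 track operations to change 1 track
```

Sequential writes and plain reads never pay this penalty - only updates to data already on the disk do.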
Since SSDs are phenomenally superior to regular disk drives for storing data that needs to be modified frequently, this makes for a good division of labour: Store your operating system, applications, documents, and databases on your SSD, and your media files - music, video, pictures, books - on the disk. Because 8TB of SSD isn't cheap, even now.
Posted by: Pixy Misa at 10:04 PM | Comments (6) | Add Comment | Trackbacks (Suck)
Post contains 1602 words, total size 10 kb.
Posted by: Pete Zaitcev at Thursday, March 19 2015 02:36 AM (RqRa5)
Just remember to differentiate between USB 3.1 Gen 1 and USB 3.1 Gen 2, because Gen 1 is USB 3.0 retroactively renamed, so it's only 5Gbps, or so I read at Ars Technica.
And I guess that not all Type C ports will support the full range of things like power or video that the spec allows, which is just lovely.
Posted by: Rick C at Thursday, March 19 2015 11:32 AM (0a7VZ)
It's useful for backup / archival storage, or when it's used together with an SSD. Either completely separate SSD and disk, software tiering, or an on-device cache. That's probably their next step.
But where you do have large amounts of data that just sit there most of the time, it will work just fine.
Posted by: Pixy Misa at Thursday, March 19 2015 03:37 PM (PiXy!)
Posted by: Pixy Misa at Thursday, March 19 2015 03:41 PM (PiXy!)
Posted by: Pete Zaitcev at Saturday, March 21 2015 02:41 AM (RqRa5)
As you say, the next logical step would be an SSHD version. That could be good for Swift.
Posted by: Pixy Misa at Saturday, March 21 2015 04:32 PM (2yngH)