It's been a busy time in the tech industry, with Computex, WWDC, and E3 following hard on each other's heels, so I thought a quick wrapup of the top stories might be useful to those who don't have the time or inclination to obsess over this stuff, but are nonetheless mildly interested.
I'll break it down by company, starting with
AMD: Eight Is The New Four
It's shaping up to be AMD's biggest year in a decade. So far they've released their Ryzen R7 and R5 processors and RX 500 series GPUs, announced the new Vega GPU and Epyc server processors for release this month, launched the ThreadRipper high-end workstation CPU, and partnered with Apple for the updated iMac and MacBook models and the brand new iMac Pro, and with Microsoft for the Xbox One X (aka the Xbonx).
The Microsoft Xbox One X, at 6TFLOPs so powerful that its graphics look better than reality.*
They've already completely upset the industry by selling 8-core CPUs at 4-core prices, and they'll be doubling down in the second half of this year - literally - by selling 16-core CPUs at 8-core prices and 32-core CPUs at 16-core prices. Look for 16 cores at less than $1000 and 32 cores at under $2000, dramatically cheaper than Intel's pricing (if Intel even had a 32-core chip, which they don't).
Dell's Inspiron 27 7000 is based on the 8-core Ryzen 7, with twin overhead cams and optional supercharger.*
AMD's entire revenue stream is less than Intel's R&D budget, so they had to get clever to pull this off. The entire line of CPUs, from 4 cores all the way up to 32, is based on a single universal design with two clusters of 4 cores. A chip with no defects can be sold as an 8-core part, with one or two defects as a 6-core part, with more defects as a 4-core part. Reports are that AMD is actually getting a yield of 80% defect-free chips, so many of the 6-core and 4-core parts probably work perfectly and just have some parts of the chip switched off.
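As a rough illustration of how that binning works, here's a sketch (this is not AMD's actual binning policy, and the SKU names are just for flavour) that maps the number of working cores in each 4-core cluster to a sellable part. One known detail is reflected here: cores are disabled symmetrically across the two clusters, so a 6-core part is 3+3, not 4+2.

```python
# Illustrative sketch (not AMD's actual binning policy): how one 8-core die
# with two 4-core clusters (CCXs) might be binned into SKUs by defect count.

def bin_die(good_cores_per_ccx):
    """Map the number of working cores in each 4-core cluster to a SKU.

    Cores are disabled symmetrically (e.g. 3+3, not 4+2) so both
    clusters stay balanced.
    """
    usable = min(good_cores_per_ccx)  # symmetric: limited by the worse cluster
    total = usable * 2
    if total >= 8:
        return "8-core (Ryzen 7)"
    if total >= 6:
        return "6-core (Ryzen 5)"
    if total >= 4:
        return "4-core (Ryzen 5/3)"
    return "scrap"

print(bin_die((4, 4)))  # no defects -> 8-core part
print(bin_die((4, 3)))  # one defect -> 6-core part
print(bin_die((3, 2)))  # more defects -> 4-core part
```

With an 80% defect-free yield, most dies fall into the first bucket, which is why so many of the cheaper parts are probably perfect chips with cores simply switched off.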
For their 16-core workstation chips they simply wire two of these standard 8-core modules together. The 32-core server parts likewise are made up of four 8-core modules. And if you need serious horsepower, you can plug two 32-core CPUs into a server motherboard for 64 total cores supporting up to 2TB of RAM.
That standard 8-core module also has a bunch of other features, including SATA ports, USB ports, network controllers, and 32 PCIe lanes. A two-socket Epyc server, without needing any chipset support, includes 128 available PCIe lanes, 32 memory slots, 16 SATA ports, 16 USB 3.1 ports, and a couple of dozen network controllers.
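The arithmetic behind those totals is simple enough to sketch. The per-die port counts below are assumptions chosen to match the totals quoted above, not a spec sheet; the one non-obvious wrinkle is that in a two-socket system, half of each socket's PCIe lanes are repurposed as the socket-to-socket interconnect, so the usable lane count stays at 128.

```python
# Back-of-the-envelope tally for a two-socket Epyc system, using the
# article's per-module figures (the per-die memory/SATA/USB splits here
# are assumptions chosen to match the quoted totals, not a spec sheet).

DIES_PER_SOCKET = 4  # each Epyc package is four 8-core modules
SOCKETS = 2

per_die = {
    "pcie_lanes": 32,    # stated in the article
    "memory_slots": 4,   # 2 channels x 2 DIMMs per channel (assumed)
    "sata_ports": 2,     # assumed split of the 16-port total
    "usb_ports": 2,      # assumed split of the 16-port total
}

totals = {k: v * DIES_PER_SOCKET * SOCKETS for k, v in per_die.items()}

# In a two-socket system, 64 of each socket's 128 PCIe lanes are repurposed
# as the socket-to-socket Infinity Fabric link, so the usable lane count
# stays at 128 -- the same as a single socket.
totals["pcie_lanes"] -= 64 * SOCKETS

print(totals)
```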
In the second half of this year AMD will be adding mobile parts with integrated graphics, a desktop chip also with integrated graphics, and a chip designed specifically for high-end networking equipment like routers and 5G wireless base stations.
Intel: Nine Is The New Seven
Intel has led both chip design and manufacturing for the last few years, and seems to have been caught napping by the success of AMD's Ryzen and the announcement of ThreadRipper. They've fired back with their Core i9 professional platform, but it's rather a mess. The low-end chips require a high-end motherboard they can't fully use; the mid-range chips require a high-end motherboard they can't fully use; the high-end chips are nice, but very expensive; and the super-high-end chips seem to have suffered a sudden total existence failure* - the 12 to 18-core parts cannot be found anywhere, not even as detailed specifications. We have a price for each, and a core count, and that's it.
Apple: Eighteen Is The New Eight

The first and last hours of Apple's interminable WWDC keynote were stultifying, with such landmark announcements as support for Amazon video (like everyone else) and a wireless speaker (like everyone else).
In between they finally refreshed the iMac to current hardware - Intel's current generation CPU, AMD's current generation graphics, the same Thunderbolt 3 that everyone else has had for eighteen months, and DDR4, which everyone else has had for even longer.
These are welcome changes: the specs are definitely better, prices are lower, and the screens are even better than before (and the screens on the current range of iMacs were already amazing).
And then they announced the iMac Pro. It has the same 27" 5K screen as the regular model, but with an 8-core CPU in the base model. The high-end model has 18 cores, up to 128GB RAM, a 4TB SSD, and an AMD Vega GPU with 16GB of video RAM. Also four Thunderbolt 3 ports, four USB 3.1 ports, and 10Gbit Ethernet.
The iMac Pro has so many cores that it can be seen from the International Space Station.*
It starts at $4999, which is awfully expensive for an iMac, but Apple claimed that it still works out cheaper than an equivalent workstation from anyone else. I configured a couple of systems from Dell and Lenovo, and I have to admit that Apple is right here. It costs no more, and possibly less, even though it includes a superb 5K monitor.
On the other hand, not one thing in the iMac Pro is user-upgradable. That's kind of a bummer.
nVidia: Bringing Skynet To Your Desktop, You Can Thank Us Later
nVidia have been a little quieter than their rivals at AMD, though more successful with their graphics parts so far - AMD's Vega is running several months late.
Their biggest announcement recently is their next-generation Volta GPU, which delivers over 120 TFLOPs (sort of) and, at over 800mm², is the biggest chip I have ever heard of.
That "sort of" is because the vast majority of the processing power comes in the form of low-precision math for AI programming, not anything that will be directly useful for graphics. And such a large chip - more than four times the size of AMD's Ryzen CPU parts - will be hellishly expensive to manufacture.
nVidia's Volta GPU is the largest chip ever manufactured. For scale, a row of Greyhound buses is parked along each edge of this picture.*
It's nonetheless an exciting development for anyone working in machine learning, and it certainly had a positive effect on nV's share price.
Speaking of graphics, now is not a good time to be trying to buy a new graphics card, because there aren't any. The shortage is worst for AMD, but it's starting to affect nVidia as well. A bubble in cryptocurrency prices, especially for a newer currency called Ethereum, has triggered a virtual goldrush, with miners buying every card they can get their hands on.
AMD cards are preferred for this for precisely the same reason nVidia's are preferred by most gamers: AMD's design is more general-purpose, less specifically optimised for games. For Ethereum mining, AMD's cards are roughly twice as effective as equivalent nVidia cards.
Result: No ETA.
AMD's entire production line captured by Bitcoin pirates.*
IBM: The Next Next Next Generation
One of the reasons AMD is having such a huge year is that they've spent most of the past five years stuck at the old 28nm process technology (called a "node"). The 20nm node that was supposed to replace it in 2014 wound up dead in a ditch*, with only Intel managing to make it work (because they moved to FinFETs earlier than anyone else).
Last year the rest of the industry collectively got their new process nodes on line - called 14nm or 16nm depending on who you talk to, but all based on FinFETs and all far superior to the old 28nm node - and started cranking out chips. This means that AMD can make 8-core parts that are faster, smaller, and more power-efficient than anything they had before, and do it more cheaply. They were years behind Intel and caught up in a single step.
IBM just announced the first test chips on a brand new 5nm node. To put that in perspective, they could put the CPU and GPU of the top-of-the-line model of the new iMac Pro on a single chip, add a gigabyte of cache, and run it at low enough power that you could use it in an Xbox.
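The back-of-the-envelope version of that claim is just area scaling. Node names are marketing labels these days, so treat this as an upper bound rather than a real shrink factor, but it gives a sense of the headroom:

```python
# Rough area-scaling arithmetic: how much more logic fits at 5nm vs 14nm.
# Node names are marketing labels, so this is an upper bound, not a
# measured shrink factor.

old_node, new_node = 14.0, 5.0
linear_shrink = old_node / new_node          # 2.8x in each dimension
area_shrink = linear_shrink ** 2             # ~7.8x the transistor density
print(f"~{area_shrink:.1f}x the transistors in the same area")
```

Nearly eight times the density is how you get to "CPU, GPU, and a gigabyte of cache on one chip" without the die becoming Volta-sized.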
IBM provided us with this die photo of their 5nm sample chip. Unfortunately it is invisible to the naked eye.*
They're planning to follow up with a 3nm process. This is pretty much the end of the road for regular silicon; we have to go to graphene or 3D lithography or quantum well transistors or some other exotic thing to move forward from there. But the amazing stuff we're getting right now is at 14nm, so 3nm is not shabby at all.
ARM: We're Here Too!
ARM sells a trillion chips a year,* dwarfing the combined sales of Intel, AMD, and nVidia, but they're constrained by power and price and can't make huge splashy announcements of mega-chips like nVidia's Volta or AMD's Vega and Epyc.
Nonetheless, they've come up with new high-end and low-end designs in the A75 and A55 cores. The A75 replaces the A72 and A73 cores, which are alternative designs for a high-performance 64-bit core with different strengths and weaknesses; the A75 combines the best features of both to be faster and more power-efficient than either.
An early ARM motherboard from an Acorn Archimedes A3000. Note that none of the chips have fans, or even heatsinks. That's because these machines were cooled by photino radiation, before this was banned for causing birth defects in igneous rocks.*
The A55 is a follow-up to the ubiquitous A53, which is found in just about every budget phone and tablet and many not-so-budget ones. The A53 is a versatile low-power part with decent performance; the A55 is designed to improve performance and power efficiency at the same time. It's not an exciting CPU, but ARM's manufacturing partners will ship them in astronomical volume.
The other thing to note about these new CPUs is that again, eight is the new four. Most phone CPUs currently have cores grouped into fours - commonly four fast cores and four power-saving cores - because that's as many as you could group together. The A75 and A55 allow you to have up to eight cores in a group. Which changes the perspective a little, because eight A75 cores is getting into typical desktop performance territory.
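A trivial sketch of what that cluster-limit change means in practice (illustrative only, not ARM's actual configuration API): under the old four-core limit, an eight-core phone chip had to be two clusters, each with its own cache and coherence domain; under the new limit it can be one.

```python
# Illustrative sketch of the old vs new cluster limits (not ARM's API):
# big.LITTLE designs grouped cores in clusters of at most four; the
# A75/A55 generation raises the per-cluster limit to eight.

OLD_MAX_PER_CLUSTER = 4
NEW_MAX_PER_CLUSTER = 8

def clusters_needed(cores, max_per_cluster):
    """How many clusters a design needs for a given core count."""
    return -(-cores // max_per_cluster)  # ceiling division

# A typical 2017 phone: 4 fast + 4 slow cores = two separate clusters.
print("old limit:", clusters_needed(8, OLD_MAX_PER_CLUSTER), "clusters")
# With the new limit, the same eight cores fit in a single cluster,
# sharing one cache and one coherence domain.
print("new limit:", clusters_needed(8, NEW_MAX_PER_CLUSTER), "cluster")
```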
1. Is the new Mac Pro using those imaginary Core i9s?
2. Regarding your comment about a 5nm cpu/gpu: I would imagine liquid nitrogen would be almost mandatory to run that at any reasonable power budget.
3. IIRC, those new ARMs will be able to join multiple 8-core clusters.
Posted by: Rick C at Friday, June 16 2017 04:35 AM (ECH2/)
1. No, it's using Xeon chips, which are expensive but at least ship reliably (except for the very highest-spec versions, which are impossible to find.)
2. IBM say that power consumption is reduced 75% over current-gen chips, meaning either 14nm or 10nm. That's the really impressive part - you could put two CPUs and two GPUs on a 5nm chip at the power consumption of a single 14nm chip.
3. Likely, yes. There are some dual-cluster A53 chips out now, that let you use all 8 cores at once.
Posted by: Pixy Misa at Friday, June 16 2017 11:12 AM (PiXy!)
Re: 2: that is impressive. And important. I've read that the 10nm chips are getting so small they can't easily transfer heat to the IHS.
Posted by: Rick C at Friday, June 16 2017 12:45 PM (ITnFO)
Yep. 5nm is not just a smaller process, they've changed the way they build the transistors, making them much more efficient. Won't be available until 2020 or so, but what we have right now is already pretty amazing.
Posted by: Pixy Misa at Friday, June 16 2017 02:27 PM (PiXy!)
Looks like Fry's has RX 580s and GTX 1080s in stock.
Posted by: Rick C at Friday, June 16 2017 11:57 PM (ECH2/)
Saw on Reddit that they've enforced a one-per-customer rule.
Posted by: Pixy Misa at Saturday, June 17 2017 11:11 AM (PiXy!)
Reviews are coming out for Skylake-X. Overclocking will require big watercooling. Tom's hardware: "All-in-ones like Corsair's H100i and Enermax's LiqTech 240 hit their limits at stock frequencies under Prime95. The custom loop threw in the towel at 4.6 GHz."
Posted by: Rick C at Tuesday, June 20 2017 06:03 AM (ECH2/)
Should've been clear: that's specifically for the highest-core part that will be available next week, the 10-core 7900X.
AIOs will work at stock speeds, but any level of overclocking requires a lot more. TH tested with a $1000 watercooling system and at 4.6GHz, the CPU was pulling well over 300W and still hitting 100C constantly, and then it shut down.
Posted by: Rick C at Tuesday, June 20 2017 06:07 AM (ECH2/)
Yeah, I saw one of the reviews. Clock speeds are up, which is great, but power consumption is through the roof. The 10-core system at stock speeds uses more than twice the power of an 8-core Ryzen 1700 system.
Not the CPU, the entire computer. So a 16-core AMD system will comfortably use less power than a 10-core Intel system.
Posted by: Pixy Misa at Tuesday, June 20 2017 02:35 PM (PiXy!)
Honestly, if AMD wasn't lurking in the shadows with ThreadRipper, it would be pretty impressive. 10 cores and a top speed of 4.5GHz? Shame about the price, but otherwise it's a no-compromise platform.
But AMD is lurking, so it's likely to get stomped into irrelevance by August.
Posted by: Pixy Misa at Tuesday, June 20 2017 04:21 PM (PiXy!)