Friday, October 27
Daily News Stuff 27 October 2023
Parks And Wrecks Edition
Top Story
- Humanity is at risk from an AI "race to the bottom". (The Guardian)
What's the risk?

A handful of tech companies are jeopardising humanity’s future through unrestrained AI development and must stop their "race to the bottom", according to the scientist behind an influential letter calling for a pause in building powerful systems.

"We're witnessing a race to the bottom that must be stopped," Tegmark told the Guardian. "We urgently need AI safety standards, so that this transforms into a race to the top. AI promises many incredible benefits, but the reckless and unchecked development of increasingly powerful systems, with no oversight, puts our economy, our society, and our lives at risk. Regulation is critical to safe innovation, so that a handful of AI corporations don't jeopardise our shared future."

In a policy document published this week, 23 AI experts, including two modern "godfathers" of the technology, said governments must be allowed to halt development of exceptionally powerful models.

The paper, whose authors include Geoffrey Hinton and Yoshua Bengio – two winners of the ACM Turing award, the "Nobel prize for computing" – argues that powerful models must be licensed by governments and, if necessary, have their development halted.

The unrestrained development of artificial general intelligence, the term for a system that can carry out a wide range of tasks at or above human levels of intelligence, is a key concern among those calling for tighter regulation.
None of these companies are working on AGI. All of them are dumping huge amounts of money into glorified typeahead systems that understand nothing.
Tech News
- OpenAI is forming a team to study catastrophic risks relating to AI. (Tech Crunch)
What is the risk?

... including nuclear threats.
I have a surefire mitigation for that.
If you even think of giving AI access to nuclear weapons, we will shoot you.
- Writing a simple virtual machine in less than 125 lines of C. (Andre Inc)
Which isn't bad, because my current test framework for opcode performance is double that.
But it's a really simple virtual machine. The only mathematical operation available is addition, for example.
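To give a sense of just how simple that kind of machine can be, here's a minimal sketch of a tiny stack-based bytecode interpreter where ADD is the only arithmetic operation. The opcode names and layout are my own illustration, not the article's actual code.

/* Minimal stack-based VM sketch: the only arithmetic opcode is ADD.
 * Illustrative only; not the code from the linked article. */
#include <stdio.h>
#include <stdint.h>

enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

static void run(const int32_t *code)
{
    int32_t stack[256];
    int sp = 0;          /* next free stack slot */
    int pc = 0;          /* program counter */

    for (;;) {
        switch (code[pc++]) {
        case OP_PUSH:                    /* push the next word as a literal */
            stack[sp++] = code[pc++];
            break;
        case OP_ADD:                     /* pop two values, push their sum */
            sp--;
            stack[sp - 1] += stack[sp];
            break;
        case OP_PRINT:                   /* print the top of the stack */
            printf("%d\n", (int)stack[sp - 1]);
            break;
        case OP_HALT:
            return;
        }
    }
}

int main(void)
{
    /* Computes 2 + 3 and prints 5. */
    const int32_t program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
    run(program);
    return 0;
}

Compile it with cc vm.c and it prints 5. Everything else the article adds on top of this - more opcodes, registers, a decoder - is just more cases in that switch.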
Posted by: Pixy Misa at 05:46 PM | Comments (5)
Post contains 363 words, total size 3 kb.
1
Risk: From all that verbiage, it almost looks like the risk will be the government not getting their "fair share" of any money AI will generate.
Posted by: Frank at Friday, October 27 2023 07:25 PM (0001W)
2
Or perhaps these "modern godfathers of the technology" (a phrase that only a truly educated journalist could write without laughing out loud) might not be able to help guide the future of research (by managing a decently lucrative grift)!
Posted by: normal at Friday, October 27 2023 08:49 PM (obo9H)
3
The risk is Skynet, of course. But I think Frank hit it on the head.
Posted by: Rick C at Friday, October 27 2023 11:09 PM (k3/O4)
4
There's also the possibility of Asimov's Multivac, but I think I prefer my thinking systems not hooked up to weapons or critical infrastructure until we've had a few centuries to sort out what it's likely to do.
Posted by: normal at Saturday, October 28 2023 06:22 AM (obo9H)
Powered by Minx 1.1.6c-pink.