Saturday, December 07
Daily News Stuff 7 December 2024
Oops All AI Edition
Top Story
- What is AI good for?
A reader (I have readers?) wrote noting that my coverage of AI is almost entirely negative and wondering what AI is actually good for, presumably on the basis that private investors would not throw that many billions of dollars into something that didn't have at least some chance of making money, unlike, for example, the government.
It's a good question.
First we should probably note that there are two broad classes of AI being actively researched right now: Generative AI and Discriminative AI.
Generative AI, driven by LLMs - large language models - is behind all the well-known AI products worth untold billions of dollars: OpenAI's ChatGPT, Twitter's Grok, Anthropic's Claude, Google's Gemini, and Microsoft's Copilot, plus open-source or nearly open-source models like Meta's LLaMA and Mistral's Mistral.
The goal of generative AI is to ingest a huge amount of information in advance, and then, given a short and simple prompt, process that information in order to produce a response.
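In practice that whole pipeline hides behind a prompt-in, text-out API call. Here's a minimal sketch against OpenAI's Python client - the model name and the prompt are illustrative placeholders, not endorsements:

# Minimal sketch of generative AI in use: a short prompt goes in, a
# synthetic response comes out. Model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": "List pardons of presidential family members."}],
)

# Fluent and confident - and, as we'll see below, not necessarily true.
print(response.choices[0].message.content)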
Discriminative AI does the opposite. Given data from the real world - video, or sound, or an image - it uses a classifier to determine what it is examining. Is this apple ripe enough for the robotic apple-picking machine to pick? Is it even an apple in the first place? What kind of spider is this that just bit me? Do I need to call an ambulance, or will it save me time to just lie down and die?
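A minimal sketch of that side, with made-up features and data - a real apple-picker would classify camera images with a CNN, but the shape of the problem is the same: measured features in, a label out.

# Toy discriminative classifier: is this apple ripe? The features
# (redness, firmness) and the training data are entirely made up.
from sklearn.tree import DecisionTreeClassifier

# Training data: [redness 0-1, firmness 0-1] -> label
X = [[0.9, 0.4], [0.8, 0.5], [0.7, 0.6],
     [0.3, 0.9], [0.2, 0.8], [0.4, 0.9]]
y = ["ripe", "ripe", "ripe", "unripe", "unripe", "unripe"]

clf = DecisionTreeClassifier().fit(X, y)

# The robotic picker asks: should I pick this one?
print(clf.predict([[0.85, 0.45]]))  # -> ['ripe']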
It's no secret that Generative AI is getting all the attention. But is it worthy of that attention? The Verge asked that question yesterday and the answer turned out to be no.
With Joe Biden's recent pardoning of his catspaw son Hunter, journalists were driven to defend him by digging up the little-known pardons of family members by former presidents, like George H. W. Bush's pardoning of his son Neil, or Woodrow Wilson's pardon for his brother-in-law.
The problem is, these things never happened.

Whatever happened in this case, there's a running pattern of people relying on ChatGPT or other AI services to provide answers, only to get hallucinations in return. Perhaps you remember earlier this year when a trailer for Francis Ford Coppola's Megalopolis was pulled because it contained fabricated quotes from critics. A generative AI, not identified, had made them up. In fact, ChatGPT is often "entirely wrong," according to the Columbia Journalism Review. Given 200 quotes and asked to identify the publisher that was the source of those quotes, ChatGPT was partially or entirely wrong more than three-quarters of the time.
Journalists, being journalists, asked ChatGPT to do their research for them.
ChatGPT, being ChatGPT, lied.
LLMs are language models. They model language - well, sort of. They don't model the language itself, but construct an abstract model of the dataset fed into them.
They don't understand facts. They don't actually have a notion of facts; nor do they have the contrary notion of falsehood. When they get information wrong, they are said to "hallucinate" rather than to have lied, because they have no basis for telling the difference between truth and falsehood.
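Here's a deliberately tiny sketch of what that means - a bigram model that predicts each next word from nothing but co-occurrence counts. (Purely illustrative: real LLMs are transformers over tokens, but the truth-blindness is the same.)

# Toy bigram "language model": it learns which word tends to follow
# which word, and nothing else. There is no store of facts to check
# against, so a statistically plausible falsehood comes out exactly
# as confidently as a memorised truth.
import random
from collections import defaultdict

corpus = ("the president pardoned his son . " * 3
          + "the senator pardoned his brother . ").split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)  # count co-occurrences, nothing more

def generate(word: str, length: int = 5) -> str:
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])  # sample the next word
        out.append(word)
    return " ".join(out)

# May emit "the president pardoned his brother ." - fluent, novel,
# and never present in the training data. That is a hallucination.
print(generate("the"))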
And that's intrinsic to the design of LLMs. Even before they enter "alignment" - a virtual lobotomisation that leaves AIs prone to crash when the wrong name is mentioned - they are fundamentally incapable of the kind of thought processes that most animals can do.
This leaves us with sophisticated composite AIs like the AI vtuber Neuro-sama, who can read every written language but is frequently unable to translate road signs, who has access to the sum total of human knowledge but insists that an anime figurine covered in glue is the perfect complement to your cookie recipe.
Neuro is supposed to be like that - an impish hyperintelligent five-year-old, the perfect foil to her long-suffering father Vedal - because the main purpose there is entertainment. But you can't really expect to hand your job off to a five-year-old and not be landed with unexpected consequences.
Or indeed entirely expected ones.
So if it's useless at answering questions, what is AI good for?
1. Image Generation
If you use Grok on Twitter and ask it to generate an image of a Jaguar concept car, it will take a couple of seconds before producing something that would have any rational CEO looking to fire the entire design and advertising departments.
Is it perfect? If you look closely you'll see signs that the image generator has run into its bête noire, Euclid. But I made no effort at all in selection here; I asked:
generate an image of a jaguar concept convertible in british racing green
And posted the first image that appeared. And it took seconds.
AI image generators have come a long way in a short time, mostly because they just have to look good, not produce a correct answer. The tendency to produce human figures with hands attached at the elbows has been sharply reduced (though not yet banished entirely). Now you more commonly see doors with hinges adjacent to the handles, or furniture that could only exist with access to Buckaroo Banzai's eighth dimension.
Or cats. Don't talk to me about AI cats.
2. Software Testing
If you write public-facing software, as I do daily, it's critical that the software be able to defend itself both from the generic nonsense that is the core competency of the internet, and from the malicious nonsense that comes from a certain corner of it.
When you've already tested all the known cases, there's a concept known as fuzzing that combines randomness and algorithms to generate horrible data to throw at your software to make sure that nothing falls apart in unexpected ways. You are permitted to fail, but you are not permitted to break.
Generative AI is perfect for fuzzing. While it can't really understand your code, it can generate test patterns that reflect its analysis of your code and directly test potential flaws. And it can do so nearly instantly, when writing an exhaustive test suite can take longer than writing the code in the first place.
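Here's a minimal sketch of the classic fuzz loop, with Python's json parser standing in for your public-facing code. An AI-assisted fuzzer keeps the same harness but replaces the random mutate() with model-generated inputs aimed at suspected weak spots in your code.

# Random-mutation fuzzer sketch: hammer the target with corrupted
# input and check that it only fails in the ways it is allowed to.
# json.loads stands in here for the code under test.
import json
import random

SEEDS = [b'{"user": "pixy", "id": 42}', b'[1, 2, 3]', b'"hello"']

def mutate(data: bytes) -> bytes:
    """Randomly flip, insert, or delete bytes in a seed input."""
    buf = bytearray(data)
    for _ in range(random.randint(1, 8)):
        op = random.choice(("flip", "insert", "delete"))
        if op == "flip" and buf:
            buf[random.randrange(len(buf))] ^= random.randrange(1, 256)
        elif op == "insert":
            buf.insert(random.randrange(len(buf) + 1), random.randrange(256))
        elif op == "delete" and buf:
            del buf[random.randrange(len(buf))]
    return bytes(buf)

for i in range(10_000):
    garbage = mutate(random.choice(SEEDS))
    try:
        json.loads(garbage)
    except ValueError:
        pass  # Rejecting garbage cleanly is permitted to fail...
    except Exception as exc:
        # ...but any other failure mode is breaking: a bug to report.
        raise AssertionError(f"iteration {i}: {exc!r} on {garbage!r}")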
3. Discriminative AI
Most of the flaws I listed arise from Generative AI. Discriminative AI is much more useful; naturally, it is also much harder, and receives much less attention and much less funding.
And... That's about it. If you want mediocrity and are unconcerned with correctness, AI can fill you in with a poem or a song. It's terrible at movies because it has the attention span of a frog in a blender, it's usually wrong but never uncertain, and it can't consistently count the number of letters in the word "the", but it is easy to use.
Tech News
- The "Hawk Tuah" girl launched a meme cryptocurrency. It went exactly as you would expect. (Web3 Is Going Great)
Whether this was a rug-pull by the creators or by experienced investors, her individual followers lost all their money but her "advisors" made millions.
- Maxsun, best known for its anime-themed video cards, has announced three models based on Intel's new B580 chip. (Tom's Hardware)
Two of those are boring, but the third adds two M.2 slots to the video card. Since these are low-end cards that use only eight lanes of PCIe while occupying a sixteen-lane slot, it's fairly simple to hand the eight spare lanes off to two M.2 slots.
If the CPU supports bifurcating the slot, which AMD's do but Intel's sometimes don't.
- Need a UUID? Here they are. (Every UUID)
All of them.
- The DC Circuit Court has declared the communists can go suck a lemon, handing a win to... The other communists. (CNN)
TikTok filed a suit against the law requiring it to either sell or shut down. It lost.
TikTok is based in China and also banned in China, providing sufficient reason to wonder why any country should permit it to operate within its territory, without even pausing to consider the innumerable other scandals TikTok has been caught up in over the past week.
- Generation X is called "generation lead" by psychiatrists who consider self-reliance and independence to be signs of severe mental illness. (USA Today)
"I tend to think of Generation X as 'generation lead,'" said Aaron Reuben, a study co-author and assistant professor of clinical neuropsychology at the University of Virginia. "We know they were exposed to it more and we're estimating they have gone on to have higher rates of internalizing conditions like anxiety, depression and symptoms of attention deficit hyperactivity disorder. So why the fuck aren't they all miserable like I am?"
Too busy, Aaron.
- Orico has announced the MiniMate, a Thunderbolt storage device designed to precisely match the new Mac Mini and add up to 8TB of SSD connected at 40Gbps. (Notebook Check)
Like the Mac Mini itself, it is not upgradeable in the slightest. You can't add to or replace the storage, and it doesn't daisy-chain to connect a second device to the first.
Thanks Orico.
Why Are We Peeling My Skin Off Video of the Day
Neuro and Evil Neuro make pizza. Help make pizza. Hinder making pizza.
Milet Video of the Day
Ananta - formerly Project Mugen - is a new free-to-play gacha game from Chinese developer Naked Rain. I post it here because it actually looks fun, unlike, for example, everything released by the mainstream western studios the past couple of years.
I mean, what is the last success for a major western game developer? Baldur's Gate 3?
Song is Seventh Heaven by Milet, who also sang the closing theme for the Frieren anime.
Disclaimer: Please note that this blog post is an experimental piece of content designed to challenge conventional logic and coherence. The information, ideas, or narrative presented herein do not adhere to traditional standards of sense-making or factual accuracy. Readers are advised to approach this content with an open mind, or perhaps not at all. Any attempt to derive meaning, purpose, or truth from this post is at your own risk. The author and publisher take no responsibility for confusion, enlightenment, or existential crises that may result from reading this material. Remember, in the grand tapestry of the universe, this post might just be the thread that unravels everything, or absolutely nothing at all. Enjoy at your own peril.
Disclaimer 2: That was Grok. Maybe there's a fourth thing AI is good for, used sparingly.
Posted by: Pixy Misa at 03:49 PM | Comments (4)
1
There are some other NN applications that also make sense, but are harder to summarize for a general audience. There are a lot of engineering problems that take the form of finding numbers that fit certain equations. NNs can be built which are an expensive (in data and in operations) way to get those numbers.
There are AI non-NN and non-AI ways to get those numbers. Many of these ways are bespoke, and involve engineers with masters degrees or PhDs. The least AI ways are the least bespoke.
The category of 'engineering problems' includes a lot of interesting questions of 'what is this' and 'is this an AI?' They make good tools in the hands of an engineer with the correct competence. Wholesale replacement of an engineer would require value judgements about good, and about bad, and those seem very very difficult to implement as an algorithm on digital computers.
The AI enthusiasts/'experts' have included people who wanted to do that with 'alignment'. They seem to be grossly mistaken.
Some of the actual AI 'experts', actual PhDs with actual theoretical contributions, are in a cult of AI, and have gone nuts. Basically, everyone talking about existential threats to humanity. (There was a deliberate PR push to drive people nuts, but...) Exterminating humanity is harder than a lot of people think it is, and also, almost none of the people who understand the real challenges of extermination understand automation. In general, a lot of people don't understand automation.
Private investors are not completely a good proxy for good investment, because some of them are insane, and because some can only rely on insane people. Universities are, depending on field, government propaganda, and some government factions want insane people who will start a civil war. Private industry can in fact waste billions on that stuff, because of who they might be filtering those decisions through.
And, some of these university trained lunatics are badly educated cripples, who do not think. They have evaluated certain AI implementations positively, because the quality is sufficient for their needs, but not so high that they personally feel threatened. They are pretty similar to a lot of the arts people, who did feel threatened.
Posted by: PatBuckman at Sunday, December 08 2024 03:51 AM (rcPLc)
2
Are there any AI generators that are better than average?
And can any of them do a situation map? The DoD's mapping utility is fun, but hand drawing the thing can get tiresome really fast.
Posted by: cxt217 at Tuesday, December 10 2024 12:52 PM (ZLF73)
3
I've been in a lot of AI-focused meetings recently, and between the (usually-deliberate) conflation of LLMs with other AI approaches and the abuse of the words "intelligence" and "reasoning", it's been difficult to keep my mouth shut. On the other hand, I did get to write ~150 lines of Python to create a podcast featuring a couple planning a trip to Tokyo; it sounded quite plausible if you'd never been there and didn't have a map handy.
-j
Posted by: J Greely at Wednesday, December 11 2024 07:26 AM (oJgNG)
4
In theory, logic gates should be capable of reasoning. However, in practice we mean 'using logic on ideas as humans store ideas', which includes almost all of the confounding factors that make logical thinking a challenge to learn to do well. A lot of the AI hype had some imagining that it would solve a problem that is really down to the imaginer having a poor understanding. Is the idea in your head the same as the idea in my head? The AI push was promising (to fools) a way around human-to-human TX/RX errors, and situations where people had accurately communicated and still disagreed. If a machine could 'fix' human factors, then there would not be fundamental obstacles to this or that theoretical ideal. Humans now have mutually alien views, and there is no easy path to utopia.
Posted by: PatBuckman at Wednesday, December 11 2024 09:47 AM (rcPLc)