Monday, June 05
Stop, Phantom Time Edition
- The end of programming as we know it, part one. (New York Times)
Farhad Manjoo, professional idiot, writes in the Times that AI is going to totally revolutionise programming and allow people who don't understand the question to somehow get the right answer.
This won’t necessarily be terrible for computer programmers — the world will still need people with advanced coding skills — but it will be great for the rest of us. Computers that we can all "program," computers that don’t require specialized training to adjust and improve their functionality and that don’t speak in code: That future is rapidly becoming the present.

Yes, it's called automatic programming, and we've had it since COBOL.
Programming used to be hard.
A.I. tools based on large language models — like OpenAI Codex, from the company that brought you ChatGPT, or AlphaCode, from Google’s DeepMind division — have already begun to change the way many professional coders do their jobs. At the moment, these tools work mainly as assistants — they can find bugs, write explanations for snippets of poorly documented code and offer suggestions for code to perform routine tasks (not unlike how Gmail offers ideas for email replies — "Sounds good"; "Got it").

And they are terrible at those things, with one exception: they can be useful for finding bugs. After all, if you write the code and then have an AI check it for bugs, the worst that can happen is that you waste some time verifying that the bug is not actually a bug, and at best you catch an embarrassing mistake before your customers' critical data ends up in a Laotian bot farm.
But A.I. coders are quickly getting smart enough to rival human coders. Last year, DeepMind reported in the journal Science that when AlphaCode’s programs were evaluated against answers submitted by human participants in coding competitions, its performance "approximately corresponds to a novice programmer with a few months to a year of training."

Which is rather like a doctor with a few months of training. We call such people... Well, we don't call them doctors.
"Programming will be obsolete," Matt Welsh, a former engineer at Google and Apple, predicted recently. Welsh now runs an A.I. start-up, but his prediction, while perhaps self-serving, doesn’t sound implausible.

Not unless you know what you're talking about, anyway.
AI may well take over programming tasks eventually, but not the form of AI currently being pushed by the same people who, a year ago, were pushing the blockchain as the cure for all our ills. To work for such tasks you need a fact model accompanying the language model, and systems like ChatGPT don't have one at all.
The lack of a fact model is also why ChatGPT lies constantly. One of the reasons. It's not that it lies deliberately, it's that it simply makes no distinction between true and false statements.
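The point can be sketched in a dozen lines, assuming nothing about any real model's internals (the toy "model" and its weights below are entirely made up for illustration): a language model picks the next words purely from learned frequencies, and nowhere in that loop is anything checked against a store of facts.

```python
import random

# Toy "language model": continuation frequencies learned from text.
# These weights encode only what was *said* often, not what is *true*.
model = {
    "the moon is": {"made of rock": 5, "made of cheese": 3},
}

def complete(prompt):
    """Sample a continuation by frequency alone."""
    choices = model[prompt]
    words = list(choices)
    weights = [choices[w] for w in words]
    # The sampling step: pick proportionally to how often each phrase
    # appeared. No fact model is consulted, so "made of cheese" comes
    # out a healthy fraction of the time, stated just as confidently.
    return random.choices(words, weights=weights)[0]
```

Real systems are vastly more sophisticated about estimating those frequencies, but the true/false blindness is the same.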
And that is what these people want to use to write the code that runs modern civilisation.
I'd suggest stocking up on gold, guns, ammo, and canned goods, but I expect this bubble to implode of its own accord. It's just too damn stupid.
- The end of programming as we know it, part two. (GitHub)
DreamBerd is the perfect programming language. We know this because the documentation says so.
Sadly this perfect language hasn't actually been implemented; rather it's a parody of every breathless announcement of a New Programming Language that is set to Change The World, like... What was that one that showed up last month? Mojo, that's it. The first programming language in the world with a waiting list.
- And the reason the AI bubble is going to implode sooner rather than later is, of all things, Facebook. (Slate)
Facebook open-sourced its own AI (we're referring to Large Language Models, because that's where all the noise is right now), and the open-source community picked it up and ran with it.
The open-source versions are faster, more efficient, and produce better results than the commercial versions, and they don't refuse to answer your questions if the answer would make a Berkeley philosophy grad student cry into his chai latte.
They still share the fundamental limitation of all LLMs - they don't actually know anything - but they lack the arbitrary restrictions imposed on ChatGPT and other big tech products.
- So, for example, Google's new AI-enhanced search is too slow to use. (The Verge)
While you're waiting for it to generate a wildly inaccurate summary, you can just... Read the search results.
- Blaseball is over. (The Verge)
Apparently an online fantasy baseball league simply cost too much to run.
The article calls it a "fake" fantasy baseball league, and I'm not sure whether I hope that's redundant or not.
Except for the word 'blern', that was complete gibberish.
Posted by: Frank at Monday, June 05 2023 09:08 PM (rglbH)
Posted by: StargazerA5 at Monday, June 05 2023 10:44 PM (NaPyl)
"On two occasions I have been asked [by members of Parliament], 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question." -- Charles Babbage
Posted by: J Greely at Monday, June 05 2023 11:05 PM (oJgNG)
Posted by: J Greely at Tuesday, June 06 2023 03:08 AM (oJgNG)
Posted by: Pixy Misa at Tuesday, June 06 2023 09:48 AM (PiXy!)
Posted by: thornharp at Wednesday, June 07 2023 02:09 AM (7lcFP)