Monday, June 05

Daily News Stuff 5 June 2023

Stop, Phantom Time Edition

Top Story

  • The end of programming as we know it, part one.  (New York Times)

    Farhad Manjoo, professional idiot, writes in the Times that AI is going to totally revolutionise programming and allow people who don't understand the question to somehow get the right answer.
    This won't necessarily be terrible for computer programmers — the world will still need people with advanced coding skills — but it will be great for the rest of us. Computers that we can all "program," computers that don't require specialized training to adjust and improve their functionality and that don't speak in code: That future is rapidly becoming the present.
    Yes, it's called automatic programming, and we've had it since COBOL.

    Programming used to be hard.
    A.I. tools based on large language models — like OpenAI Codex, from the company that brought you ChatGPT, or AlphaCode, from Google’s DeepMind division — have already begun to change the way many professional coders do their jobs. At the moment, these tools work mainly as assistants — they can find bugs, write explanations for snippets of poorly documented code and offer suggestions for code to perform routine tasks (not unlike how Gmail offers ideas for email replies — "Sounds good"; "Got it").
    And they are terrible at those things, with one exception: They can be useful for finding bugs.  After all, if you write the code and then have an AI check it for bugs, the worst that can happen is you waste some time verifying that the bug is not actually a bug, and at best you catch an embarrassing mistake before your customers' critical data ends up in a Laotian bot farm.
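    If you do want a model as a second pair of eyes, the safe shape is exactly the one described above: you write the code, the model only comments, and you verify everything it says. A minimal sketch of that workflow (the prompt wording and the `ask_llm` callback are hypothetical placeholders, not any real API):

```python
def make_bug_review_prompt(code: str) -> str:
    """Build a review-only prompt: the model critiques, it never writes code we ship."""
    return (
        "Review the following code for bugs. "
        "List each suspected bug with a one-line explanation. "
        "Do not rewrite the code.\n\n" + code
    )

def review(code: str, ask_llm) -> str:
    """ask_llm stands in for whatever model you call; its output is a tip
    to check by hand, never ground truth."""
    return ask_llm(make_bug_review_prompt(code))
```

    The worst case remains as stated: you waste a little time confirming a non-bug.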
    But A.I. coders are quickly getting smart enough to rival human coders. Last year, DeepMind reported in the journal Science that when AlphaCode’s programs were evaluated against answers submitted by human participants in coding competitions, its performance "approximately corresponds to a novice programmer with a few months to a year of training."
    Which is rather like a doctor with a few months of training.  We call such people...  Well, we don't call them doctors.
    "Programming will be obsolete," Matt Welsh, a former engineer at Google and Apple, predicted recently. Welsh now runs an A.I. start-up, but his prediction, while perhaps self-serving, doesn’t sound implausible.
    Not unless you know what you're talking about, anyway.

    AI might one day take over programming tasks, but not the form of AI currently being pushed by all the same people who were pushing the blockchain as the cure for all our ills a year ago.  To work for such tasks, you need a fact model accompanying the language model, and systems like ChatGPT don't have one at all.

    The lack of a fact model is also why ChatGPT lies constantly.  One of the reasons.  It's not that it lies deliberately, it's that it simply makes no distinction between true and false statements.
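    A toy illustration of the point (a deliberately trivial sketch, nothing to do with any real product): a bare language model scores text by how familiar it looks, not by whether it is true. Train a crude bigram counter on text that repeats a falsehood and it will rate the falsehood as more "likely" than the truth:

```python
from collections import defaultdict

def train_bigrams(corpus):
    """Count word-pair frequencies - all a bare language model 'knows'."""
    counts = defaultdict(int)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            counts[(a, b)] += 1
    return counts

def score(counts, sentence):
    """Sum bigram counts for a sentence - a crude familiarity score."""
    words = sentence.lower().split()
    return sum(counts[(a, b)] for a, b in zip(words, words[1:]))

# The training text repeats a falsehood; the model has no way to object.
corpus = ["the moon is made of cheese"] * 3 + ["the moon is made of rock"]
counts = train_bigrams(corpus)

# The false statement scores higher purely because it was seen more often.
assert score(counts, "the moon is made of cheese") > score(counts, "the moon is made of rock")
```

    Real LLMs are vastly more sophisticated, but the same principle applies: there is no truth table anywhere in the pipeline, only statistics over text.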

    And that is what these people want to use to write the code that runs modern civilisation.

    I'd suggest stocking up on gold, guns, ammo, and canned goods, but I expect this bubble to implode of its own accord.  It's just too damn stupid.

Tech News

  • The end of programming as we know it, part two.  (GitHub)

    DreamBerd is the perfect programming language.  We know this because the documentation says so.

    Sadly this perfect language hasn't actually been implemented; rather it's a parody of every breathless announcement of a New Programming Language that is set to Change The World, like...  What was that one that showed up last month?  Mojo, that's it.  The first programming language in the world with a waiting list.


  • And the reason the AI bubble is going to implode sooner rather than later is, of all things, Facebook.  (Slate)

    Facebook open-sourced its own AI (we're referring to Large Language Models, because that's where all the noise is right now), and the open-source community picked it up and ran with it.

    The open-source versions are faster, more efficient, and produce better results than the commercial versions, and they don't refuse to answer your questions if the answer would make a Berkeley philosophy grad student cry into his chai latte.

    They still share the same fundamental limitations of LLMs - they don't actually know anything - but they don't have the arbitrary limitations imposed on ChatGPT and other big tech products.


  • So, for example, Google's new AI-enhanced search is too slow to use.  (The Verge)

    While you're waiting for it to generate a wildly inaccurate summary, you can just...  Read the search results.


  • Blaseball is over.  (The Verge)

    Apparently an online fantasy baseball league simply cost too much to run.

    The article calls it a "fake" fantasy baseball league, and I'm not sure whether I hope that's redundant or not.


Disclaimer: Hey I'm starting to get the hang of this game. The blerns are loaded, the count's 3 blerns and 2 anti-blerns, and the in-field blern rule is in effect... right?
Except for the word 'blern', that was complete gibberish.

Posted by: Pixy Misa at 06:09 PM | Comments (6) | Add Comment | Trackbacks (Suck)
Post contains 836 words, total size 7 kb.

1 I remember back in 1970 when the chattering classes from the programming world were all insisting that COBOL was obsolete and would very shortly be replaced by _______ (fill in the blank with the current in-thing language). 

Posted by: Frank at Monday, June 05 2023 09:08 PM (rglbH)

2 "allow people who don't understand the question to somehow get the right answer."  With respect to my compatriots who are programmers, as somebody who manages Human Capital Management ERP systems, I find this often applies to those who are creating the software.  And for good reason: the people making the software get their requirements from Product Managers who have never done the work they are building requirements for, usually from conversations with people like me who are involved with the work day-to-day just enough to be dangerous and work closely with the actual practitioners.  I remember during my last period of unemployment applying for a Manager of Product Managers position with an up-and-coming vendor, and the Hiring Manager asking me if I "knew [insert methodology for prioritizing requests from customers that will likely be replaced in 2-3 years]."  "No, but I have 20+ years of experience working side-by-side day in and day out with the practitioners, understanding their needs and translating them to requirements."  "That doesn't matter."  The only contact that Senior Manager of HR Product Managers had with HR was their discussions when something wasn't going right or they needed input on org shaping.

Posted by: StargazerA5 at Monday, June 05 2023 10:44 PM (NaPyl)

3 "On two occasions I have been asked [by members of Parliament], 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question." -- Charles Babbage

Posted by: J Greely at Monday, June 05 2023 11:05 PM (oJgNG)

4 I have no idea how my quote ended up so large and bold...

-j

Posted by: J Greely at Tuesday, June 06 2023 03:08 AM (oJgNG)

5 Worth it, though.

Posted by: Pixy Misa at Tuesday, June 06 2023 09:48 AM (PiXy!)

6 The AI-enhanced search sounds like a relative of the Graphic Omniscient Device from David Gerrold's "When Harlie Was One".

Posted by: thornharp at Wednesday, June 07 2023 02:09 AM (7lcFP)








