Well that's good. Fantastic. That gives us 20 minutes to save the world and I've got a post office. And it's shut!

Monday, December 09

Geek

Daily News Stuff 9 December 2024

Driplet Edition

Top Story



Tech News



Disclaimer: Why does this bee have a tiny sombrero?

Posted by: Pixy Misa at 06:13 PM | No Comments | Add Comment | Trackbacks (Suck)
Post contains 303 words, total size 3 kb.

Sunday, December 08

Geek

Daily News Stuff 8 December 2024

Chicken Licken Edition

Top Story

Tech News


Disclaimer: I guess that counts as twelve.  Okay, Grok, you win this round.

Posted by: Pixy Misa at 06:41 PM | Comments (1) | Add Comment | Trackbacks (Suck)
Post contains 311 words, total size 3 kb.

Saturday, December 07

Geek

Daily News Stuff 7 December 2024

Oops All AI Edition

Top Story

  • What is AI good for?

    A reader (I have readers?) wrote in, noting that my coverage of AI is almost entirely negative and wondering what AI is actually good for - presumably on the basis that private investors would not throw that many billions of dollars into something that didn't have at least some chance of making money, unlike, for example, the government.

    It's a good question.

    First we should probably note that there are two broad classes of AI being actively researched right now: Generative AI and Discriminative AI.

    Generative AI, driven by LLMs - large language models - is behind all the well-known AI products worth untold billions of dollars: OpenAI's ChatGPT, xAI's Grok on Twitter, Anthropic's Claude, Google's Gemini, and Microsoft's Copilot, plus open-source or nearly open-source models like Meta's LLaMA and Mistral's Mistral.

    The goal of generative AI is to ingest a huge amount of information in advance, and then, given a short and simple prompt, process that information in order to produce a response.
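
    In code, that interaction is almost embarrassingly small - all of the interesting work happened during training, and the calling code is just plumbing.  Here's a minimal sketch using the openai Python client; the model name is a placeholder, and you need the package installed and an API key configured.

        # Minimal prompt-in, response-out loop.  Everything clever lives inside
        # the model; this code only delivers the prompt and prints the reply.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; substitute whatever model you have access to
            messages=[{"role": "user",
                       "content": "Summarise the plot of Hamlet in one sentence."}],
        )
        print(reply.choices[0].message.content)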

    Discriminative AI does the opposite. Given a data prompt of something in the real world - video, or sound, or an image - it uses a classifier to determine what it is examining. Is this apple ripe for the robotic apple-picking machine to pick it? Is it even an apple in the first place? What kind of spider is this that just bit me? Do I need to call an ambulance, or will it save me time to just lie down and die?

    It's no secret that Generative AI is getting all the attention. But is it worthy of that attention? The Verge asked that question yesterday and the answer turned out to be no.

    With Joe Biden's recent pardoning of his catspaw son Hunter, journalists were driven to defend him by digging up the little-known pardons of family members by former presidents, like George H. W. Bush's pardoning of his son Neil, or Woodrow Wilson's pardon for his brother-in-law.

    The problem is, these things never happened.
    Whatever happened in this case, there’s a running pattern of people relying on ChatGPT or other AI services to provide answers, only to get hallucinations in return. Perhaps you remember earlier this year when a trailer for Francis Ford Coppola’s Megalopolis was pulled because it contained fabricated quotes from critics. A generative AI, not identified, had made them up. In fact, ChatGPT is often "entirely wrong," according to the Columbia Journalism Review. Given 200 quotes and asked to identify the publisher that was the source of those quotes, ChatGPT was partially or entirely wrong more than three-quarters of the time.
    Journalists, being journalists, asked ChatGPT to do their research for them.

    ChatGPT, being ChatGPT, lied.

    LLMs are language models. They model language - well, sort of. They don't model the language itself, but construct an abstract model of the dataset fed into them.

    They don't understand facts. They don't actually have a notion of facts; nor do they have the contrary notion of falsehood. When they get information wrong, they are said to "hallucinate" rather than to have lied, because they have no basis for telling the difference between truth and falsehood.

    And that's intrinsic to the design of LLMs. Even before they enter "alignment" - a virtual lobotomisation that leaves AIs prone to crash when the wrong name is mentioned - they are fundamentally incapable of the kind of thought processes that most animals can do.

    This leaves us with sophisticated composite AIs like the AI vtuber Neuro-sama, who can read every written language but is frequently unable to translate road signs, who has access to the sum total of human knowledge but insists that an anime figurine covered in glue is the perfect complement to your cookie recipe.

    Neuro is supposed to be like that - an impish, hyperintelligent five-year-old, the perfect foil to her long-suffering father Vedal - because the main purpose there is entertainment.  But you can't really expect to hand your job off to a five-year-old and not get landed with unexpected consequences.

    Or indeed entirely expected ones.

    So if it's useless at answering questions, what is AI good for?


    1. Image Generation

    If you use Grok on Twitter and ask it to generate an image of a Jaguar concept car, it will take a couple of seconds before producing something that would have any rational CEO looking to fire the entire design and advertising departments.



    Is it perfect?  If you look closely you'll see signs that the image generator has run into its bête noire, Euclid.  But I made no effort at all in selection here; I asked:

    generate an image of a jaguar concept convertible in british racing green

    And posted the first image that appeared.  And it took seconds.

    AI image generators have come a long way in a short time, mostly because they just have to look good, not produce a correct answer.  The tendency to produce human figures with hands attached at the elbows has been sharply reduced (though not yet banished entirely).  Now you more commonly see doors with hinges adjacent to the handles, or furniture that could only exist with access to Buckaroo Banzai's eighth dimension.

    Or cats.  Don't talk to me about AI cats.


    2. Software Testing

    If you write public-facing software, as I do daily, it's critical that the software be able to defend itself both from the generic nonsense that is the core competency of the internet and from the malicious nonsense that comes from a certain corner of it.

    When you've already tested all the known cases, there's a concept known as fuzzing that combines randomness and algorithms to generate horrible data to throw at your software to make sure that nothing falls apart in unexpected ways.  You are permitted to fail, but you are not permitted to break.

    Generative AI is perfect for fuzzing.  While it can't really understand your code, it can generate test patterns that reflect its analysis of your code and directly target potential flaws.  And it can do so nearly instantly, whereas writing an exhaustive test suite by hand can take longer than writing the code in the first place.
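
    Here's a minimal sketch of what such a harness looks like in Python.  The handle_request() target and the seed corpus are hypothetical stand-ins for whatever your software actually exposes; the mutation loop is plain standard-library randomness, and the seed corpus is exactly the sort of thing a generative AI is good at padding out for you.

        import json
        import random

        def handle_request(data: bytes) -> None:
            # Hypothetical entry point for the service under test.
            # Rejecting bad input (ValueError) is failing politely; anything else is breaking.
            json.loads(data)

        SEEDS = [b'{"user": "alice"}', b'{"comment": "first!"}', b"\x00" * 16]

        def mutate(seed: bytes) -> bytes:
            # Randomly flip, insert, or delete bytes to produce hostile input.
            data = bytearray(seed)
            for _ in range(random.randint(1, 8)):
                pos = random.randrange(len(data) + 1)
                roll = random.random()
                if roll < 0.4 and data:
                    data[pos % len(data)] ^= random.randrange(1, 256)  # flip bits in a byte
                elif roll < 0.8:
                    data.insert(pos, random.randrange(256))            # insert a byte
                elif data:
                    del data[pos % len(data)]                          # drop a byte
            return bytes(data)

        for i in range(100_000):
            case = mutate(random.choice(SEEDS))
            try:
                handle_request(case)
            except ValueError:
                pass   # permitted to fail
            except Exception as exc:
                print(f"broke on case {i}: {exc!r} input={case[:40]!r}")  # not permitted to break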


    3. Discriminative AI

    Most of the flaws I listed arise from Generative AI.  Discriminative AI is much more useful; it is also much harder, and it receives much less attention and much less funding.
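
    For contrast, the discriminative side at its smallest is just a classifier answering one narrow question from measured features.  The apple-ripeness framing and the numbers below are invented for illustration - real systems run convolutional networks over camera images - but the shape of the problem is the same.

        from sklearn.linear_model import LogisticRegression

        # Hypothetical training data: [redness 0-1, firmness 0-1] per apple.
        X = [[0.10, 0.95], [0.20, 0.90], [0.35, 0.80],   # unripe
             [0.80, 0.45], [0.90, 0.40], [0.95, 0.30]]   # ripe
        y = [0, 0, 0, 1, 1, 1]                           # 0 = leave it, 1 = pick it

        clf = LogisticRegression().fit(X, y)

        # The robotic picker asks one question per apple and gets one answer back.
        print(clf.predict([[0.85, 0.42]]))         # expected: [1] - ripe, pick it
        print(clf.predict_proba([[0.50, 0.70]]))   # class probabilities for a borderline apple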


    And...  That's about it.  If you want mediocrity and are unconcerned with correctness, AI can knock out a poem or a song for you.  It's terrible at movies because it has the attention span of a frog in a blender, it's usually wrong but never uncertain, and it can't consistently count the number of letters in the word "the", but it is easy to use.


Tech News


Why Are We Peeling My Skin Off Video of the Day

 

Neuro and Evil Neuro make pizza.  Help make pizza.  Hinder making pizza.


Milet Video of the Day



Ananta - formerly Project Mugen - is a new free-to-play gacha game from Chinese developer Naked Rain.  I post it here because it actually looks fun, unlike, for example, everything released by mainstream western studios over the past couple of years.

I mean, what is the last success for a major western game developer?  Baldur's Gate 3?

Song is Seventh Heaven by Milet, who also sang the closing theme for the Frieren anime.




Disclaimer: Please note that this blog post is an experimental piece of content designed to challenge conventional logic and coherence. The information, ideas, or narrative presented herein do not adhere to traditional standards of sense-making or factual accuracy. Readers are advised to approach this content with an open mind, or perhaps not at all. Any attempt to derive meaning, purpose, or truth from this post is at your own risk. The author and publisher take no responsibility for confusion, enlightenment, or existential crises that may result from reading this material. Remember, in the grand tapestry of the universe, this post might just be the thread that unravels everything, or absolutely nothing at all. Enjoy at your own peril.


Disclaimer 2: That was Grok.  Maybe there's a fourth thing AI is good for, used sparingly.

Posted by: Pixy Misa at 03:49 PM | Comments (1) | Add Comment | Trackbacks (Suck)
Post contains 1735 words, total size 14 kb.

Friday, December 06

Geek

Daily News Stuff 6 December 2024

Friday Afternoon Edition

Top Story



Tech News


Disclaimer: Yes, Friday evening is the perfect time to let me know of an urgent systems requirement that you've been aware of for a month.  Thank you.

Posted by: Pixy Misa at 06:03 PM | Comments (1) | Add Comment | Trackbacks (Suck)
Post contains 271 words, total size 3 kb.

Thursday, December 05

Geek

Daily News Stuff 5 December 2024

Annual Eaten By Mouse Edition

Top Story

  • Had to renew the server's SSL certificate.  It was actually easier than usual, except for the part where I forgot to do it until after it expired, which wasn't so good.
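
    If you'd rather not repeat that experience, a small check is easy to script.  This is a minimal sketch in Python using only the standard library; the hostname is a placeholder, and if the certificate comes from Let's Encrypt then certbot's automatic renewal is the real fix - this is just a belt-and-braces nag.

        import datetime
        import socket
        import ssl

        def days_until_expiry(host: str, port: int = 443) -> int:
            # Complete a TLS handshake and read the expiry date from the server's certificate.
            ctx = ssl.create_default_context()
            with socket.create_connection((host, port), timeout=10) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    not_after = tls.getpeercert()["notAfter"]
            expires = datetime.datetime.fromtimestamp(
                ssl.cert_time_to_seconds(not_after), tz=datetime.timezone.utc)
            return (expires - datetime.datetime.now(datetime.timezone.utc)).days

        # Run from cron and nag yourself well before the deadline.
        if days_until_expiry("example.com") < 14:   # placeholder hostname
            print("certificate expires soon - renew it now")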


  • Sam Altman has done the 100% expected and redefined AGI (artificial general intelligence) into uselessness.  (The Verge)

    Sam Altman recently said that AGI was coming within "thousands of days".

    He now says it will likely arrive next year, but they'll be simulating the typical leftist who already responds like a low-grade LLM in any case, so you won't notice any difference.

    Well, he didn't say that part out loud.  He said this:
    "My guess is we will hit AGI sooner than most people in the world think and it matter much less," he said during an interview with Andrew Ross Sorkin at The New York Times DealBook Summit on Wednesday. "And a lot of the safety concerns that we and others expressed actually don't come at the AGI moment. AGI can get built, the world mostly goes on in mostly the same way, things grow faster, but then there is a long continuation from what we call AGI to what we call super intelligence."
    Translation: AI is useless, and your job is safe unless you are useless, like the quote design team unquote working at Jaguar, in which case you are totally fucked.


Tech News



Disclaimer: Probably.

Posted by: Pixy Misa at 07:27 PM | Comments (5) | Add Comment | Trackbacks (Suck)
Post contains 389 words, total size 3 kb.

Wednesday, December 04

Geek

Daily News Stuff 4 December 2024

Battlemist Edition

Top Story

  • As expected, Intel's second-generation "Battlemage" graphics cards are here.  (Tom's Hardware)

    The B580 comes with 12GB of VRAM and costs $249, while the B570 comes with 10GB and costs $219.

    The B580 will be on sale next week, while the B570 will wait until next month, though for $30 more you might as well go with the faster model.

    Intel has released a lot of updates for its graphics drivers since the rather shaky launch of the previous-generation "Alchemist" cards, and now looks like a viable alternative to Nvidia or AMD at the low end.


Tech News



Disclaimer: You put a what in the what?

Posted by: Pixy Misa at 06:34 PM | Comments (1) | Add Comment | Trackbacks (Suck)
Post contains 223 words, total size 3 kb.

Tuesday, December 03

Geek

Daily News Stuff 3 December 2024

Redecentralisation Edition

Top Story



Tech News

  • AMD's fastest CPU, the recently announced 192 core Epyc 9965, just received a 33% price cut.  (MSN)

    It still costs $10,000, but that's not a bad price for that powerful a chip.

    No scores on CPUBenchmark for this chip just yet; the fastest chip they have listed is the 96 core Epyc 9655P.


  • We don't need to fear Skynet.  Just mention Jonathan Turley and the whole system will reboot.  (Tech Crunch)
    Users of the conversational AI platform ChatGPT discovered an interesting phenomenon over the weekend: the popular chatbot refuses to answer questions if asked about a "David Mayer." Asking it to do so causes it to freeze up instantly. Conspiracy theories have ensued - but a more ordinary reason may be at the heart of this strange behavior.
    This was circulating on Twitter, but Tech Crunch actually did a bit of digging:
    Which brings us back to David Mayer. There is no lawyer, journalist, mayor, or otherwise obviously notable person by that name that anyone could find (with apologies to the many respectable David Mayers out there).

    There was, however, a Professor David Mayer, who taught drama and history, specializing in connections between the late Victorian era and early cinema. Mayer died in the summer of 2023, at the age of 94. For years before that, however, the British American academic faced a legal and online issue of having his name associated with a wanted criminal who used it as a pseudonym, to the point where he was unable to travel.

    So that's why there are restrictions on ChatGPT disseminating information about these individuals; they're victims of various types of identity fraud.  But why does it crash?

    Because AI is itself a fraud:
    The whole drama is a useful reminder that not only are these AI models not magic, but they are also extra-fancy auto-complete, actively monitored, and interfered with by the companies that make them. Next time you think about getting facts from a chatbot, think about whether it might be better to go straight to the source instead.
    ChatGPT behaves like a nightmare hodgepodge of nonsense held together by duct tape and an inflated share price, because that's precisely what it is.

  • AMD's upcoming Radeon 8800XT might be dramatically more powerful than the current 7800XT.  (WCCFTech)

    Claimed numbers put it 45% faster on ray tracing - and not just 45% faster than the 7800XT, but than the 7900XTX, a much more expensive card.

    And it's also claimed to compete with the RTX 4080, another much more expensive card, in non-ray-traced gaming.

    But the question of how much the 8800XT itself will cost is open.  If it is also a much more expensive card, none of that means anything.


Posted by: Pixy Misa at 06:03 PM | Comments (5) | Add Comment | Trackbacks (Suck)
Post contains 548 words, total size 5 kb.

Monday, December 02

Geek

Daily News Stuff 2 December 2024

For The Emperor Edition

Top Story

  • The most valuable gemstone on Earth is not blue diamond, or Padparadscha sapphire, or imperial jadeite.  It's a little-known mineral called kyawthuite.  (ScienceAlert)

    Composed of an unusual formulation of bismuth antimonate and formed in cooling magma flows, it gets its value from its scarcity.

    There's exactly one stone, and it's less than a quarter of an inch long.


Tech News



Disclaimer: Can't say rarer than that, then.

Posted by: Pixy Misa at 05:59 PM | Comments (3) | Add Comment | Trackbacks (Suck)
Post contains 144 words, total size 2 kb.

Sunday, December 01

Geek

Daily News Stuff 1 December 2024

Crumbudgeon Edition

Top Story

Tech News



Disclaimer: Right.  I tried that before.  No.

Posted by: Pixy Misa at 06:07 PM | Comments (2) | Add Comment | Trackbacks (Suck)
Post contains 247 words, total size 3 kb.

Saturday, November 30

Geek

Daily News Stuff 30 November 2024

Blursed Edition

Top Story

  • When you read about million dollar bananas, expect money laundering, not conceptual art.  Cryptocurrency entrepreneur Justin Sun eats $9.5 million banana artwork Comedian by Maurizio Cattelan.  (ABC / MSN)
    A cryptocurrency entrepreneur has eaten a $US6.2 million ($9.5 million) banana artwork he purchased.
    This is the Australian ABC, so it's really only a US$6 million banana, not a $9 million banana.
    The debut of the edible creation at the 2019 Art Basel show in Miami Beach sparked controversy and raised questions about whether it should be considered art - Mr Cattelan's stated aim.
    And no, the banana wasn't five years old.  The banana wasn't part of the "art".  There was merely the concept of a banana:
    The artwork owner is given a certificate of authenticity that the work was created by Mr Cattelan as well as instructions about how to replace the fruit when it goes bad.
    This is banana.  Is taped to wall.  Replace when become drippy.
    The 34-year-old crypto businessman was last year charged by the US Securities and Exchange Commission with fraud and securities law violation in relation to his crypto project Tron.
    Yeah, no shit.

Tech News

Disclaimer: Sitri needs to be put on a list.  Possibly several lists.  It's always the nice ones.

Posted by: Pixy Misa at 05:48 PM | Comments (4) | Add Comment | Trackbacks (Suck)
Post contains 473 words, total size 4 kb.
