Wednesday, June 21

Geek

Daily News Stuff 21 June 2023

Shark Day Edition

Top Story

  • A new LLM (large language model, the same sort of AI as ChatGPT) called phi-1, with 1.3 billion parameters, scores over 50% on the HumanEval programming problem set.  (Twitter)  There's a sketch of what that benchmark actually measures at the end of this item.

    GPT-4 scores 67% - but reportedly uses 1.7 trillion parameters.

    How did they achieve this miracle?  They trained phi-1 on textbook-quality data rather than on a raw scrape of the internet.

    And what does it mean?  It means you can produce an AI that is smart enough to perform simple tasks and small enough to run on your laptop - and probably your phone.

    What else does it mean?  It means that to score 85% on that test using the same approach as GPT-4 you'd need something like 2 quadrillion parameters - a rough extrapolation, sketched at the end of this item - which would cost billions of dollars to train even if you could find enough data to feed it.  And then years to "align", that is, to get it to stop giving obviously wrong answers because you stuffed it full of nonsense.

    Garbage in, garbage out.

    phi-1 took four days to train.  (Arxiv)

    Also, speaking of garbage, don't use textbooks published after 2010 or so.
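
    For anyone wondering what "scores over 50% on HumanEval" actually means: the benchmark hands the model a Python function signature plus a docstring, asks it to write the body, and only counts the answer if the result passes a set of hidden unit tests.  Here's a toy sketch of that loop - the problem, tests, and "model" below are invented stand-ins for illustration, not the real benchmark or phi-1:

        # Toy sketch of a HumanEval-style evaluation.
        # One illustrative problem: a prompt the model must complete, plus hidden tests.
        PROBLEM = {
            "prompt": (
                "def running_max(xs):\n"
                '    """Return a list where element i is the max of xs[:i+1]."""\n'
            ),
            "tests": [
                ([1, 3, 2, 5, 4], [1, 3, 3, 5, 5]),
                ([], []),
                ([7], [7]),
            ],
        }

        def fake_model(prompt: str) -> str:
            """Stand-in for the LLM: returns a candidate function body."""
            return (
                "    out, best = [], None\n"
                "    for x in xs:\n"
                "        best = x if best is None else max(best, x)\n"
                "        out.append(best)\n"
                "    return out\n"
            )

        def passes(problem: dict, completion: str) -> bool:
            """Run prompt + completion against the hidden tests (pass@1 style)."""
            namespace: dict = {}
            try:
                exec(problem["prompt"] + completion, namespace)  # the real harness sandboxes this
                fn = namespace["running_max"]
                return all(fn(args) == expected for args, expected in problem["tests"])
            except Exception:
                return False

        if __name__ == "__main__":
            solved = passes(PROBLEM, fake_model(PROBLEM["prompt"]))
            # Over the full benchmark, pass@1 = solved problems / total problems.
            print(f"solved: {solved}")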
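
    And the "something like 2 quadrillion" figure is roughly what falls out of a back-of-the-envelope extrapolation: take phi-1 (1.3 billion parameters, about 50%) and GPT-4 (reportedly 1.7 trillion, 67%), assume the score climbs linearly with the log of the parameter count - my assumption, purely illustrative, real scaling curves are messier - and ask how big a model would have to be to hit 85%:

        import math

        # Two rough data points: (parameter count, HumanEval score in percent).
        # phi-1's size and score are from its paper; GPT-4's size is only a rumour.
        phi1   = (1.3e9,  50.6)
        gpt4   = (1.7e12, 67.0)
        target = 85.0

        # Naive assumption: score rises linearly with log10(parameters).
        points_per_decade = (gpt4[1] - phi1[1]) / (math.log10(gpt4[0]) - math.log10(phi1[0]))
        decades_needed    = (target - gpt4[1]) / points_per_decade
        params_needed     = gpt4[0] * 10 ** decades_needed

        print(f"{points_per_decade:.1f} points per 10x in parameters")
        print(f"~{params_needed:.1e} parameters to reach {target}% on this naive line")
        # Prints a few quadrillion - the same ballpark as the figure above.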

Tech News



Disclaimer: Instead of office chair, package contained live shark.  Not complaining, but do you have any more of these?

Posted by: Pixy Misa at 06:06 PM







