This accidentally fell out of her pocket when I bumped into her. Took me four goes.

Friday, July 12

Geek

Daily News Stuff 12 July 2024

Dead Intranet Theory Edition

Top Story

  • Could AIs become conscious? Right now, we have no way to tell.  (Ars Technica)

    Because we haven't properly defined the terms.

    What kind of AI?  SHRDLU from 1968 (yes, I keep mentioning that) displayed more evidence of consciousness than ChatGPT, and an examination of its abilities shows clear indications of a primitive level of consciousness.

    There's a term in AI, sphexishness, referring to the golden digger wasp, Sphex ichneumoneus.  Like many wasps, the sphex wasp will paralyse its prey to serve as food for its larvae.  In this case it will paralyse an insect, drag it back to its burrow (it's a digger wasp), check that the burrow is safe, and then put the prey in the burrow to serve as a larder for the baby sphexes.

    The thing is, if you move the paralysed insect while the sphex is checking the burrow, it will move it back next to the burrow, inspect the burrow again, and then drag it into the burrow.

    And if you move the insect again while the sphex is re-checking the burrow, the cycle will repeat.  No matter how many times you do this, the sphex will not change its behaviour.
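    The fixed-action pattern is easy to caricature in code. A toy sketch - everything here is illustrative, not a model of real wasp neurology:

```python
def sphex_routine(times_prey_moved):
    """Toy model of sphexishness: the wasp restarts its fixed routine
    from the top every time the prey is moved, with no adaptation."""
    inspections = 0
    moves_left = times_prey_moved
    while True:
        inspections += 1        # drag prey to entrance, inspect burrow
        if moves_left > 0:      # researcher moves the prey again
            moves_left -= 1
            continue            # routine restarts, every single time
        return inspections      # prey finally goes into the burrow

print(sphex_routine(0))   # 1 inspection: the normal case
print(sphex_routine(40))  # 41 inspections: no learning, ever
```

    The point isn't the loop itself but that nothing in it ever consults a history of past iterations.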

    SHRDLU was able to explain why it did what it did, step by step.  It didn't get angry at being given repeated nonsensical orders, but it could account for every action that it took.

    Anyway, can computers become conscious?  Well, they are already more conscious than insects, and have been for decades.

    Can computers become conscious with all the complexity of human consciousness?  Certainly not currently; they are not remotely powerful enough.

    Can LLMs become conscious?  A single LLM, no.  LLMs are designed specifically to avoid that.

    A pair of LLMs in a feedback loop?  Maybe, yes.  But likely also psychotic.


Tech News


Disclaimer: Node.js is the prostate cancer of Computer Science.

Posted by: Pixy Misa at 06:35 PM | Comments (3) | Add Comment | Trackbacks (Suck)
Post contains 564 words, total size 5 kb.

Thursday, July 11

Geek

Daily News Stuff 11 July 2024

Megabels Edition

Top Story

  • Construction of a Bitcoin "mine" in a Texas town has caused a massive outbreak of hypochondria and innumeracy.  (Time)
    On an evening in December 2023, 43-year-old small business owner Sarah Rosenkranz collapsed in her home in Granbury, Texas and was rushed to the emergency room. Her heart pounded 200 beats per minute; her blood pressure spiked into hypertensive crisis; her skull throbbed. "It felt like my head was in a pressure vise being crushed," she says. "That pain was worse than childbirth."
    Sounds nasty, but what does that have to do with Bitcoin?
    Over the course of several months in 2024, TIME spoke to more than 40 people in the Granbury area who reported a medical ailment that they believe is connected to the arrival of the Bitcoin mine: hypertension, heart palpitations, chest pain, vertigo, tinnitus, migraines, panic attacks.
    Over several months, dozens of people had a variety of symptoms ranging from the purely physical to the neurological to the psychological.  That's in a town of 11,000 - and since they say "the Granbury area", note that the county is home to 60,000 people.
    "I’m sure it increases their cortisol and sugar levels, so you’re getting headaches, vertigo, and it snowballs from there," Bhaloo says. "This thing is definitely causing a tremendous amount of stress. Everyone is just miserable about it."
    You're sure.  Great.
    Not all data centers make noise.
    Yeah, this is not a serious article.
    Jenna Hornbuckle, 38, lost hearing in her right ear and was diagnosed with heart failure; ear exams document her hearing loss along with that of her 8-year-old daughter Victoria, who contracted ear infections that forced doctors to place a tube in her ear.
    Bitcoin is bad in many ways.  It does not cause heart failure or ear infections.
    As rock music blares from the speakers and other patrons chatter away, Rosenkranz pulls out her phone and clocks 72 decibels on a sound meter app—the same level that she records in Indigo’s bedroom in the dead of night. In early 2023, her daughter began waking up, yelling and holding her ears. Indigo’s room directly faces the mine, which sits about a mile and a half away.
    Okay, there's a tiny problem there.
    Shirley sticks his recorder out the window and the numbers on it flicker up and down as the roar washes over it. Eventually, the recorder caps out at 91 decibels, which the CDC estimates as roughly in between the output of a lawnmower and a chainsaw.
    91 decibels is pretty loud.  Even on a still night, at a distance of a mile and a half, it would be barely louder than leaves rustling in the trees.

    To register 72 decibels at that distance, the Bitcoin "mine" would have to be as loud as a Shuttle launch, 24 hours a day, 7 days a week.
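    The arithmetic behind both claims is simple inverse-square attenuation. A back-of-envelope sketch, assuming free-field propagation and ignoring atmospheric absorption (which only makes distant sources quieter still):

```python
import math

MILE_M = 1609.34  # metres per mile

def level_at(distance_m, source_db, ref_m=1.0):
    """Point source in free field: level drops 20*log10(d/ref) dB,
    i.e. about 6 dB per doubling of distance."""
    return source_db - 20 * math.log10(distance_m / ref_m)

def source_needed(distance_m, measured_db, ref_m=1.0):
    """Invert the same law: how loud must the source be at ref_m
    to measure measured_db at distance_m?"""
    return measured_db + 20 * math.log10(distance_m / ref_m)

d = 1.5 * MILE_M
print(round(level_at(d, 91)))       # ~23 dB: rustling-leaves territory
print(round(source_needed(d, 72)))  # ~140 dB at 1 m, running 24/7
```

    So a 91 dB reading near the fence line fades to a barely audible murmur a mile and a half out, while a genuine 72 dB in a bedroom at that distance would require a continuous source roughly as loud as a jet engine at close range.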

Tech News



Disclaimer: That seems like a you problem.

Posted by: Pixy Misa at 06:11 PM | Comments (2) | Add Comment | Trackbacks (Suck)
Post contains 597 words, total size 5 kb.

Wednesday, July 10

Geek

Daily News Stuff 10 July 2024

AGL Edition

Top Story

  • The FTC has banned messaging app NGL (not gonna lie) from hosting minors after catching them systematically lying. (Washington Post)

    Surprise, surprise, surprise.
    The complaint alleged that NGL tricked users into paying for subscriptions by sending them computer-generated messages appearing to be from real people and offering a service for as much as $9.99 a week to find out their real identity. People who signed up received only "hints" of those identities, whether they were real or not, enforcers said.
    After users complained about the "bait-and switch tactic," executives at the company "laughed off" their concerns, referring to them as "suckers," the FTC said in an announcement.
    And now NGL has to fork over five million dollars.

    Suckers indeed.

Tech News



Disclaimer: Unplugging it works too.

Posted by: Pixy Misa at 06:17 PM | Comments (4) | Add Comment | Trackbacks (Suck)
Post contains 334 words, total size 3 kb.

Tuesday, July 09

Geek

Daily News Stuff 9 July 2024

Polonium Enema Edition

Top Story

  • Goldman Sachs released a report on generative AI - the hot new thing pushing stock valuations of key tech companies into the trillions. In short: It's trash. (Where's Your Ed)

    The article linked above is not short but it is a good read. The full report is available for download and it doesn't pull any punches either:
    The promise of generative AI technology to transform companies, industries, and societies continues to be touted, leading tech giants, other companies, and utilities to spend an estimated ~$1tn on capex in coming years, including significant investments in data centers, chips, other AI infrastructure, and the power grid. But this spending has little to show for it so far beyond reports of efficiency gains among developers. And even the stock of the company reaping the most benefits to date - Nvidia - has sharply corrected.
    From the article:
    In essence, on top of generative AI not having any killer apps, not meaningfully increasing productivity or GDP, not generating any revenue, not creating new jobs or massively changing existing industries, it also requires America to totally rebuild its power grid, which Janous regrettably adds the US has kind of forgotten how to do.
    There is that, yes.
    Generative AI is not going to become AGI, nor will it become the kind of artificial intelligence you've seen in science fiction. Ultra-smart assistants like Jarvis from Iron Man would require a form of consciousness that no technology currently - or may ever - have - which is the ability to both process and understand information flawlessly and make decisions based on experience, which, if I haven't been clear enough, are all entirely distinct things.
    Right. Although the understanding doesn't have to be flawless, merely good enough for the task at hand, and cheap enough that it's not simpler to just train a human and pay them to do it.

    Generative AI doesn't understand anything - it is a language model, not a fact model; doesn't gain experience, at least not in its current form, which is trained once at enormous expense and then left to rot; and doesn't make decisions.

    It's not AGI and has no path to become AGI. Terry Winograd's SHRDLU from 1968 is in important respects more sophisticated than ChatGPT, even though it was written by a single grad student on a 36-bit computer more than fifty years ago.


Tech News



Bottom Story



Disclaimer: Overhead, without any fuss, the stars were going out.

Posted by: Pixy Misa at 06:22 PM | Comments (1) | Add Comment | Trackbacks (Suck)
Post contains 746 words, total size 7 kb.

Monday, July 08

Geek

Daily News Stuff 8 July 2024

Rhino Barn Edition

Top Story

  • Boeing is planning to plead guilty to criminal fraud.  (CNBC)

    No, not for that.  Not for that, either.  Yes, for the two fatal 737 Max crashes, and more specifically, for the flawed flight control system that caused those crashes.

    Boeing will be fined $240 million and be required to spend at least $450 million on new compliance and safety programs, as well as having government compliance officials operating directly within its facilities.

    Much as I loathe the administrative state, the alternative looks like murder trials, sooner or later.


Tech News

  • China has seen the creation of one hundred competing LLMs.  (Tom's Hardware)

    Resulting in a massive outflow of money from China to Nvidia for AI hardware, before 98 of those projects (plus or minus five) inevitably crater.


  • Current leading AI models cost around $100 million to train.  The next generation currently in development could cost closer to $1 billion.  (Tom's Hardware)

    So a hundred of those would cost...  Carry the twelve...  A lot.
    "Right now, 100 million. There are models in training today that are more like a billion." Amodei also added, "I think if we go to ten or a hundred billion, and I think that will happen in 2025, 2026, maybe 2027, and the algorithmic improvements continue apace, and the chip improvements continue apace, then I think there is in my mind a good chance that by that time we'll be able to get models that are better than most humans at most things."
    For a hundred billion dollars, you get something that is incapable of learning (LLMs are trained once) and is better than humans mostly at things that aren't particularly useful.

    Hooray.  We're saved.


  • The looming spectre of Mt Gox paying back its users wiped $170 billion off the global crypto market.  (CNBC)

    Pay back users?  What insanity is this?


  • Fedora Linux 41 will retire Python 2.7.  (Fedora Project)

    Python 2.7 still works, but it was released in 2010, and support ended in 2020.  The current version is 3.12, with 3.13 in beta.

    You can actually still get a supported release of Python 2.7 in the form of PyPy, an alternative Python implementation whose toolchain is written in RPython, a restricted subset of Python 2.7.  Since it's written in Python 2.7 and can run Python 2.7 (as well as more recent versions like 3.10), the project plans to support Python 2.7 indefinitely.


  • What has it got in its pockets?  (Liliputing)

    An eight core Ryzen 8840U, 32GB of RAM, an M.2 2230 SSD, USB4, and wifi, all packed into a folding keyboard.

    It even has the Four Essential Keys, sort of.  Dedicated Home and End, and four keys marked L1 through L4.  It's a little cramped, but if you want a powerful computer that can fit in your coat pocket, it is one.


Disclaimer: And it still is if you don't.

Posted by: Pixy Misa at 06:35 PM | Comments (4) | Add Comment | Trackbacks (Suck)
Post contains 473 words, total size 5 kb.

Sunday, July 07

Geek

Daily News Stuff 7 July 2024

Revoked Edition

Top Story

  • Merle Meyers did not kill himself: A former Boeing inspector says parts marked for scrap ended up being built into planes. (CNN)
    Meyers, a 30-year veteran of Boeing, described to CNN what he says was an elaborate off-the-books practice that Boeing managers at the Everett factory used to meet production deadlines, including taking damaged and improper parts from the company’s scrapyard, storehouses and loading docks.
    This, if true, should result in felony convictions.
    Beginning in the early 2000s, Meyers says that for more than a decade, he estimates that about 50,000 parts "escaped" quality control and were used to build aircraft. Those parts include everything from small items like screws to more complex assemblies like wing flaps. A single Boeing 787 Dreamliner, for example, has approximately 2.3 million parts.

    Most of the parts that were meant to be scrapped were often painted red to signify they were unsuitable for assembly lines, Meyers said. Yet, in some cases, that didn’t stop them from being put into planes being assembled, he said.
    Lots of felony convictions.

     


Tech News



Almost Relevant Music Video of the Day



Disclaimer: She's a model and she's spewing nonsense. (Keyboard riff.)

Posted by: Pixy Misa at 04:40 PM | Comments (2) | Add Comment | Trackbacks (Suck)
Post contains 568 words, total size 5 kb.

Saturday, July 06

Geek

Daily News Stuff 6 July 2024

Daily News Stuff Edition

Top Story

  • OpenAI got hacked, but definitely not by Chinese spies.  (New York Times)  (archive site)

    In April.  2023.

    And they didn't report it until now.  In fact, they didn't report it at all.  One of their employees alluded to the event on a podcast, and was fired for his trouble.
    Mr. Aschenbrenner said OpenAI had fired him this spring for leaking other information outside the company and argued that his dismissal had been politically motivated. He alluded to the breach on a recent podcast, but details of the incident have not been previously reported. He said OpenAI’s security wasn’t strong enough to protect against the theft of key secrets if foreign actors were to infiltrate the company.
    OpenAI denies this - well, not the hack, nor the firing, nor the fact that Aschenbrenner leaked the information, but the causal connection, in an announcement apparently written by a special release of ChatGPT trained exclusively on modified limited hangouts:
    "We appreciate the concerns Leopold raised while at OpenAI, and this did not lead to his separation," an OpenAI spokeswoman, Liz Bourgeois, said.
    Liz Bourgeois?  Obviously a made up name.
    Referring to the company's efforts to build artificial general intelligence, a machine that can do anything the human brain can do, she added, "While we share his commitment to building safe A.G.I., we disagree with many of the claims he has since made about our work. This includes his characterizations of our security, notably this incident, which we addressed and shared with our board before he joined the company."
    OpenAI is making no efforts to build AGI, safe or otherwise.  OpenAI's focus is building something and then redefining the term AGI to claim they have achieved it.



Tech News



Bagger 288 Video of the Day



It was a simpler, more innocent time.


Disclaimer: And we've got another 287 where that came from.

Posted by: Pixy Misa at 05:52 PM | Comments (7) | Add Comment | Trackbacks (Suck)
Post contains 651 words, total size 6 kb.

Friday, July 05

Geek

Daily News Stuff 5 July 2024

Cinco De Julio Edition

Top Story

Tech News

Disclaimer: Scrape sixteen sites, and what do you get?  Half a terabyte of furry porn and the worst Joe Biden deepfakes yet. 

Posted by: Pixy Misa at 05:17 PM | Comments (2) | Add Comment | Trackbacks (Suck)
Post contains 315 words, total size 3 kb.

Thursday, July 04

Geek

Daily News Stuff 4 July 2024

Happy Freedom Day

Top Story

Tech News


Perception Check Music Video of the Day



Every character in the video is a Hololive member. The human bard is Kureiji Ollie of Hololive Indonesia, who on hearing the song announced ME IN DnD. The long-suffering DM is Calliope Mori who is in fact the long-suffering DM of all of Hololive English's tabletop games.


Twitter has fixed embeds for real, it seems. They were broken for months, and then flaky for many more months.


Disclaimer: Happy Freedom Day. Strictly one eagle per customer. No rainchecks, no refunds.

Posted by: Pixy Misa at 05:42 PM | Comments (2) | Add Comment | Trackbacks (Suck)
Post contains 694 words, total size 7 kb.

Wednesday, July 03

Geek

Daily News Stuff 3 July 2024

Centrifugal Bumblefuck Edition

Top Story

  • Everything new is old again: A critical vulnerability in OpenSSH that was fixed all the way back in 2006 is back again.  (ZDNet)

    Oops.  Also, fuck.

    Dubbed regreSSHion - it has a cute name, so you know it's serious - the bug lets you log into a server by not logging into it.

    That is, you start the login process repeatedly - a hundred times in parallel, if you can - and never complete it, and attach a sneaky payload that has a tiny chance of blowing up on the target server when your login times out.

    On older 32-bit systems it takes a few hours on average for this to work.

    On 64-bit systems it's more complicated to exploit and would take a week or more of constant effort; since the bug has only just been reported nobody has demonstrated a successful attack against a 64-bit system yet, so it may take even longer.
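    If you want to do the same triage at home, here's a rough version-check sketch. The affected range (8.5p1 through 9.7p1) is taken from the public advisory; treat the boundaries as assumptions, since distro packages often backport fixes without bumping the version:

```shell
#!/bin/sh
# Rough regreSSHion (CVE-2024-6387) triage by OpenSSH version number.
# Affected: 8.5p1 through 9.7p1; 9.8 is patched. First pass only -
# always confirm against your distro's security advisories.
vulnerable() {
    major=${1%%.*}              # "9.7p1" -> "9"
    rest=${1#*.}                # -> "7p1"
    minor=${rest%%p*}           # -> "7"
    [ "$major" -eq 8 ] && [ "$minor" -ge 5 ] && return 0
    [ "$major" -eq 9 ] && [ "$minor" -le 7 ] && return 0
    return 1
}

for v in 8.4 8.5 9.6 9.8; do
    if vulnerable "$v"; then echo "$v: check this one"; else echo "$v: fine"; fi
done
```

    The commonly cited stopgap until you can patch - at the cost of easier connection-flood denial of service - is setting LoginGraceTime to 0 in sshd_config.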

    Reviewing all the servers at work turned up one vulnerable system; every other server was properly locked down.  I don't know who set it up, but I curse their name.  Whatever it is.


Tech News



Disclaimer: We choose to nuke the Moon and do the other things, not because it is easy, but because fuck you, Gandhi.

Posted by: Pixy Misa at 06:32 PM | Comments (5) | Add Comment | Trackbacks (Suck)
Post contains 471 words, total size 4 kb.
