I'm in the future. Like hundreds of years in the future. I've been dead for centuries.
Oh, lovely, you're a cheery one, aren't you?

Saturday, July 13


Daily News Stuff 13 July 2024

Fast News Day Edition

Top Story

  • That's going to leave a mark: Call and text records for the past six months for almost all AT&T customers have been stolen in the slow-motion train wreck that is the Snowflake breach.  (TechCrunch)

    Snowflake was an online data analytics platform with some major customers including Ticketmaster and Santander Bank.  Technically I think Snowflake is still operating but I wouldn't expect them to be around for much longer.

    While Ticketmaster is international and the number of customers affected far exceeds AT&T's, in that breach the hackers got your name, address, the last four digits of your credit card, that kind of stuff.

    In this hack they got the phone numbers of everyone 110 million people called or texted over a period of six months.  And it's all out there, forever.

    It does not (according to AT&T) include the contents of the text messages, and calls, even if recorded for whatever reason, would not be stored in this kind of database.  I hope.  That would raise it from a mere disaster to a catastrophe.

    And if you're not an AT&T customer you might still be affected if your phone company uses the AT&T network behind the scenes.

Tech News

  • Meta has dropped the special restrictions it had placed on Donald Trump's Facebook and Instagram accounts.  (The Verge)

    You can enjoy the screaming in the comments there, if you like.

  • Emmanuel Goldstein's X faces big EU fines as paid checkmarks are ruled deceptive.  (Ars Technica)

    Sorry, I mean Elon Musk.  It's pure coincidence that every Ars Technica article mentioning SpaceX, Twitter, Tesla, or Starlink has the name ELON MUSK in it as a signal for the zombies to swarm.

    Anyway, the headline is a lie.  There is no such ruling.

    But EU Bookburner General Terry Britain claims that blue checkmarks used to denote trustworthy sources of information, which is either a direct lie or a sign of galloping early-onset Alzheimer's.

    Musk fired back asserting that the EU offered Twitter an illegal secret deal under which they would hold off on fines if the company would enact secret censorship under their direction.

    Terry Britain claimed that the EU doesn't do that, but...  It does.

    Also interesting to see on that article, Ars Technica's creative director Aurich Lawson getting savaged by his own mob.

  • Don't run your Ryzen 9 9950X at 60W.  (WCCFTech)

    More leaked benchmarks, but there's something interesting here because the same tests were run at power levels of 60W, 90W, 120W, 160W, and 230W, with the last of those being the maximum boost power setting without manual overclocking.

    At 60W the CPU ran cool at 41C and managed 4GHz on all cores.  Bumping it up to 90W pushed the temperature to 49C but the clock to just over 5GHz.  At 160W it reached 5.55GHz, and at 230W just 5.6GHz, so there's no real point in going over 160W.
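    A back-of-the-envelope sketch of how steep the diminishing returns are, reading "just over 5GHz" as 5.05GHz (a guess; the leak doesn't give an exact figure):

```python
# Leaked Ryzen 9 9950X figures quoted above: (package power in W, all-core clock in GHz).
# The 5.05 value for 90W is an assumption standing in for "just over 5GHz".
points = [(60, 4.00), (90, 5.05), (160, 5.55), (230, 5.60)]

def marginal_mhz_per_watt(points):
    """MHz of extra all-core clock bought by each additional watt, per step."""
    return [(f1 - f0) * 1000 / (p1 - p0)
            for (p0, f0), (p1, f1) in zip(points, points[1:])]

for (p0, _), (p1, _), gain in zip(points, points[1:], marginal_mhz_per_watt(points)):
    print(f"{p0}W -> {p1}W: {gain:.1f} MHz per extra watt")
```

    Roughly 35MHz per extra watt up to 90W, about 7MHz per watt up to 160W, and well under 1MHz per watt beyond that.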

  • Emmanuel Gold - sorry.  SpaceX's Falcon 9 has been grounded after issues with a second stage caused the latest cluster of Starlink satellites to miss their orbital target.  (Reuters)

    They are trying to boost the satellites into the correct orbit with the on-board ion drives - really - but even if that works it will reduce their service life.

    This comes after, if you weren't keeping track, 365 consecutive Falcon 9 launches without issue.

  • Looking to buy a Z80 computer before supply of the chip runs out forever?  Tindie (who) and Zeal have you covered.  (Tindie)

    For $180 you get a 10MHz Z80, 256k of flash storage, 512k of RAM, VGA graphics, and four-voice sound synthesis.

    It's kind of neat if you're into retrocomputing.

  • Speaking of retrocomputing, the German navy is working on phasing out eight-inch floppy disks.  (Ars Technica)

    I have no idea why they are used, because the ships involved were commissioned in the mid-nineties, by which time even 3.5" floppies had been around for a decade and eight-inch models had been dead for years.

  • A "red team" from CISA broke into another federal agency and had free rein in its network, undetected, for five months.  (The Register)

    Wait, the federal government noticed a massive problem in only five months?

  • Peer review is essential for science.  Unfortunately it's fucked.  (Ars Technica)

    Well, they use the term broken, but I felt it needed a little more oomph.
    The practice of peer review was developed in a different era, when the arguments and analysis that led to a paper’s conclusion could be succinctly summarized within the paper itself. Want to know how the author arrived at that conclusion? The derivation would be right there. It was relatively easy to judge the "wrongness" of an article because you could follow the document from beginning to end, from start to finish, and have all the information you needed to evaluate it right there at your fingertips.

    That's now largely impossible with the modern scientific enterprise so reliant on computers.
    Specifically because scientific papers very rarely include the code used to analyse the data.  All the assumptions - and the errors - in that code are hidden from reviewers.

    Noting that peer review is f - broken, the author, not a journalist but real-life astrophysicist Paul Sutter, moves on to science's second line of defense, replication, where things are no better:
    There's a complex swirling septic tank of problems that all contribute to the replication crisis, but the first issue is that replication isn't sexy. You don't get to learn new things about the world around us; you just get to confirm whether someone else learned new things about the world around us. As an added ugly bonus, non-results often don't even get published. Novelty is seen as a virtue, and if you run an experiment and it doesn’t provide a positive result, journals are less likely to be interested in your manuscript. Additionally, because replication isn't seen as sexy, when it is done, it isn't read. Replication studies do not get published in high-impact-factor journals, and authors of replication studies do not get as many citations for their work. This means that their h-index is lower, which lowers their chances of getting grants and promotions.
    What's the solution?  I'm glad you asked:
    Altogether, the scientific and technical publishing industry rakes in about ten billion dollars a year, often with double-digit profit margins. Of course they don't want this ship to change course. I can't really blame them; they’re just playing by the accepted rules of their own game.
    Yes, there isn't one.

  • Despite selling literally dozens of units, several of which were not returned, developers are ignoring Apple's wildly expensive and basically useless Vision Pro AR headset.  (Ars Technica)

    Not even porting existing iOS apps.

  • OpenAI is reportedly working on an AI that can reason, instead of just regurgitating pre-digested Reddit posts and telling you that backpacks are as safe as parachutes when jumping out of a plane.  (Ars Technica)

    Well, first up, Terry Winograd, SHRDLU, 1968.
    Person: Had you touched any pyramid before you put the green one on the little cube?
    Computer: YES, THE GREEN ONE.
    Person: When did you pick it up?
    Computer: WHILE I WAS STACKING UP THE RED CUBE, A LARGE RED BLOCK AND A LARGE GREEN CUBE.
    Person: Why?
    Computer: TO GET RID OF IT.
    Person: Why did you do that?
    Computer: TO CLEAN OFF THE RED CUBE.
    Person: Why did you clear off that cube?
    Computer: TO PUT IT ON A LARGE GREEN CUBE.
    Person: Why did you do that?
    Computer: TO STACK UP THE RED CUBE, A LARGE RED BLOCK AND A LARGE GREEN CUBE.
    Person: Why did you do that?
    Computer: BECAUSE YOU ASKED ME TO.
    Second, that parachutes and backpacks thing is really an answer given by Google's AI Overview.  (Fast Company)

    And it's based on a real result from a real scientific study.  (NPR)

    But it's a study of whether people actually read scientific papers.

    The plane was parked on the ground.

Disclaimer: The author has not made this disclaimer available in your location.

Posted by: Pixy Misa at 04:59 PM | Comments (9) | Add Comment | Trackbacks (Suck)
Post contains 1338 words, total size 11 kb.

Friday, July 12


Daily News Stuff 12 July 2024

Dead Intranet Theory Edition

Top Story

  • Could AIs become conscious? Right now, we have no way to tell.  (Ars Technica)

    Because we haven't properly defined the terms.

    What kind of AI?  SHRDLU from 1968 (yes, I keep mentioning that) displayed more evidence of consciousness than ChatGPT, and examining its abilities we see clear indications of a primitive level of consciousness.

    There's a term in AI, sphexishness, referring to the great golden digger wasp, Sphex ichneumoneus.  Like many wasps, the sphex will paralyse its prey to serve as food for its larvae.  In this case it will paralyse an insect, drag it back to its burrow (it's a digger wasp), check that the burrow is safe, and then put the prey in the burrow to serve as a larder for the baby sphexes.

    The thing is, if you move the paralysed insect while the sphex is checking the burrow, it will move it back next to the burrow, inspect the burrow again, and then drag it into the burrow.

    And if you move the insect again while the sphex is re-checking the burrow, the cycle will repeat.  No matter how many times you do this, the sphex will not change its behaviour.

    SHRDLU was able to explain why it did what it did, step by step.  It didn't get angry at being given repeated nonsensical orders, but it could account for every action that it took.

    Anyway, can computers become conscious?  Well, they are already more conscious than insects, and have been for decades.

    Can computers become conscious with all the complexity of human consciousness?  Certainly not currently; they are not remotely powerful enough.

    Can LLMs become conscious?  A single LLM, no.  LLMs are designed specifically to avoid that.

    A pair of LLMs in a feedback loop?  Maybe, yes.  But likely also psychotic.

Tech News

Disclaimer: Node.js is the prostate cancer of Computer Science.

Posted by: Pixy Misa at 06:35 PM | Comments (3) | Add Comment | Trackbacks (Suck)
Post contains 564 words, total size 5 kb.

Thursday, July 11


Daily News Stuff 11 July 2024

Megabels Edition

Top Story

  • Construction of a Bitcoin "mine" in a Texas town has caused a massive outbreak of hypochondria and innumeracy.  (Time)
    On an evening in December 2023, 43-year-old small business owner Sarah Rosenkranz collapsed in her home in Granbury, Texas and was rushed to the emergency room. Her heart pounded 200 beats per minute; her blood pressure spiked into hypertensive crisis; her skull throbbed. "It felt like my head was in a pressure vise being crushed," she says. "That pain was worse than childbirth."
    Sounds nasty, but what does that have to do with Bitcoin?
    Over the course of several months in 2024, TIME spoke to more than 40 people in the Granbury area who reported a medical ailment that they believe is connected to the arrival of the Bitcoin mine: hypertension, heart palpitations, chest pain, vertigo, tinnitus, migraines, panic attacks.
    Over several months, dozens of people had a variety of symptoms ranging from the purely physical to the neurological to the psychological.  In a town of 11,000, and since they say "the Granbury area", let's note that the county is home to 60,000 people.
    "I’m sure it increases their cortisol and sugar levels, so you’re getting headaches, vertigo, and it snowballs from there," Bhaloo says. "This thing is definitely causing a tremendous amount of stress. Everyone is just miserable about it."
    You're sure.  Great.
    Not all data centers make noise.
    Yeah, this is not a serious article.
    Jenna Hornbuckle, 38, lost hearing in her right ear and was diagnosed with heart failure; ear exams document her hearing loss along with that of her 8-year-old daughter Victoria, who contracted ear infections that forced doctors to place a tube in her ear.
    Bitcoin is bad in many ways.  It does not cause heart failure or ear infections.
    As rock music blares from the speakers and other patrons chatter away, Rosenkranz pulls out her phone and clocks 72 decibels on a sound meter app—the same level that she records in Indigo’s bedroom in the dead of night. In early 2023, her daughter began waking up, yelling and holding her ears. Indigo’s room directly faces the mine, which sits about a mile and a half away.
    Okay, there's a tiny problem there.
    Shirley sticks his recorder out the window and the numbers on it flicker up and down as the roar washes over it. Eventually, the recorder caps out at 91 decibels, which the CDC estimates as roughly in between the output of a lawnmower and a chainsaw.
    91 decibels is pretty loud.  Even on a still night, at a distance of a mile and a half, it would be barely louder than leaves rustling in the trees.

    To register 72 decibels at that distance, the Bitcoin "mine" would have to be as loud as a Shuttle launch, 24 hours a day, 7 days a week.
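    A rough sanity check on those figures, using the free-field inverse-square law (sound pressure level drops about 6 decibels per doubling of distance) and assuming the 91dB reading was taken around ten metres from the site, which the article doesn't actually say:

```python
import math

def spl_at_distance(spl_ref_db, ref_m, dist_m):
    """Free-field inverse-square falloff from a point source:
    level drops 20*log10(distance ratio) decibels."""
    return spl_ref_db - 20 * math.log10(dist_m / ref_m)

DIST_M = 1.5 * 1609  # a mile and a half, in metres

# 91 dB measured ~10 m from the facility (an assumed reference distance),
# heard from a mile and a half away on a still night:
print(round(spl_at_distance(91, 10, DIST_M)))  # rustling-leaves territory

# Source level (referenced to 1 m) needed to still register 72 dB out there:
print(round(72 + 20 * math.log10(DIST_M)))  # rocket-launch loud
```

    That works out to around 43dB at the house from the 91dB reading, and a roughly 140dB source needed to explain the 72dB bedroom measurement.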

Tech News

Disclaimer: That seems like a you problem.

Posted by: Pixy Misa at 06:11 PM | Comments (2) | Add Comment | Trackbacks (Suck)
Post contains 597 words, total size 5 kb.

Wednesday, July 10


Daily News Stuff 10 July 2024

AGL Edition

Top Story

  • The FTC has banned messaging app NGL (not gonna lie) from hosting minors after catching them systematically lying. (Washington Post)

    Surprise, surprise, surprise.
    The complaint alleged that NGL tricked users into paying for subscriptions by sending them computer-generated messages appearing to be from real people and offering a service for as much as $9.99 a week to find out their real identity. People who signed up received only "hints" of those identities, whether they were real or not, enforcers said.
    After users complained about the "bait-and-switch tactic," executives at the company "laughed off" their concerns, referring to them as "suckers," the FTC said in an announcement.
    And now NGL has to fork over five million dollars.

    Suckers indeed.

Tech News

Disclaimer: Unplugging it works too.

Posted by: Pixy Misa at 06:17 PM | Comments (4) | Add Comment | Trackbacks (Suck)
Post contains 334 words, total size 3 kb.

Tuesday, July 09


Daily News Stuff 9 July 2024

Polonium Enema Edition

Top Story

  • Goldman Sachs released a report on generative AI - the hot new thing pushing stock valuations of key tech companies into the trillions. In short: It's trash. (Where's Your Ed)

    The article linked above is not short but it is a good read. The full report is available for download and it doesn't pull any punches either:
    The promise of generative AI technology to transform companies, industries, and societies continues to be touted, leading tech giants, other companies, and utilities to spend an estimated ~$1tn on capex in coming years, including significant investments in data centers, chips, other AI infrastructure, and the power grid. But this spending has little to show for it so far beyond reports of efficiency gains among developers. And even the stock of the company reaping the most benefits to date - Nvidia - has sharply corrected.
    From the article:
    In essence, on top of generative AI not having any killer apps, not meaningfully increasing productivity or GDP, not generating any revenue, not creating new jobs or massively changing existing industries, it also requires America to totally rebuild its power grid, which Janous regrettably adds the US has kind of forgotten how to do.
    There is that, yes.
    Generative AI is not going to become AGI, nor will it become the kind of artificial intelligence you've seen in science fiction. Ultra-smart assistants like Jarvis from Iron Man would require a form of consciousness that no technology currently - or may ever - have - which is the ability to both process and understand information flawlessly and make decisions based on experience, which, if I haven't been clear enough, are all entirely distinct things.
    Right. Although the understanding doesn't have to be flawless, merely good enough for the task at hand, and cheap enough that it's not simpler to just train a human and pay them to do it.

    Generative AI doesn't understand anything - it is a language model, not a fact model; doesn't gain experience, at least not in its current form, which is trained once at enormous expense and then left to rot; and doesn't make decisions.

    It's not AGI and has no path to become AGI. Terry Winograd's SHRDLU from 1968 is in important respects more sophisticated than ChatGPT, even though it was written by a single grad student on a 36-bit computer more than fifty years ago.

Tech News

Bottom Story

Disclaimer: Overhead, without any fuss, the stars were going out.

Posted by: Pixy Misa at 06:22 PM | Comments (1) | Add Comment | Trackbacks (Suck)
Post contains 746 words, total size 7 kb.

Monday, July 08


Daily News Stuff 8 July 2024

Rhino Barn Edition

Top Story

  • Boeing is planning to plead guilty to criminal fraud.  (CNBC)

    No, not for that.  Not for that, either.  Yes, for the two fatal 737 Max crashes, and more specifically, for the flawed flight control system that caused those crashes.

    Boeing will be fined $240 million and be required to spend at least $450 million on new compliance and safety programs, as well as having government compliance officials operating directly within its facilities.

    Much as I loathe the administrative state, the alternative looks like murder trials, sooner or later.

Tech News

  • China has seen the creation of one hundred competing LLMs.  (Tom's Hardware)

    Resulting in a massive outflow of money from China to Nvidia for AI hardware, before 98 of those projects (plus or minus five) inevitably crater.

  • Current leading AI models cost around $100 million to train.  The next generation currently in development could cost closer to $1 billion.  (Tom's Hardware)

    So a hundred of those would cost...  Carry the twelve...  A lot.
    "Right now, 100 million. There are models in training today that are more like a billion." Amodei also added, "I think if we go to ten or a hundred billion, and I think that will happen in 2025, 2026, maybe 2027, and the algorithmic improvements continue a pace, and the chip improvements continue a pace, then I think there is in my mind a good chance that by that time we'll be able to get models that are better than most humans at most things."
    For a hundred billion dollars, you get something that is incapable of learning (LLMs are trained once) and is better than humans mostly at things that aren't particularly useful.

    Hooray.  We're saved.

  • The looming spectre of Mt Gox paying back its users wiped $170 billion off the global crypto market.  (CNBC)

    Pay back users?  What insanity is this?

  • Fedora Linux 41 will retire Python 2.7.  (Fedora Project)

    Python 2.7 still works, but it was released in 2010, and support ended in 2020.  The current version is 3.12, with 3.13 in beta.

    You can actually still get a supported release of Python 2.7 in the form of PyPy, an alternative Python implementation with a just-in-time compiler.  Since PyPy is itself written in RPython, a restricted subset of Python 2.7, and provides a Python 2.7 interpreter (as well as more recent versions like 3.10), they are planning to support Python 2.7 indefinitely.

  • What has it got in its pockets?  (Liliputing)

    An eight core Ryzen 8840U, 32GB of RAM, an M.2 2230 SSD, USB4, and wifi, all packed into a folding keyboard.

    It even has the Four Essential Keys, sort of.  Dedicated Home and End, and four keys marked L1 through L4.  It's a little cramped, but if you want a powerful computer that can fit in your coat pocket, it is one.

Disclaimer: And it still is if you don't.

Posted by: Pixy Misa at 06:35 PM | Comments (4) | Add Comment | Trackbacks (Suck)
Post contains 473 words, total size 5 kb.

Sunday, July 07


Daily News Stuff 7 July 2024

Revoked Edition

Top Story

  • Merle Meyers did not kill himself: A former Boeing inspector says parts marked for scrap ended up being built into planes. (CNN)
    Meyers, a 30-year veteran of Boeing, described to CNN what he says was an elaborate off-the-books practice that Boeing managers at the Everett factory used to meet production deadlines, including taking damaged and improper parts from the company’s scrapyard, storehouses and loading docks.
    This, if true, should result in felony convictions.
    Beginning in the early 2000s, Meyers says that for more than a decade, he estimates that about 50,000 parts "escaped" quality control and were used to build aircraft. Those parts include everything from small items like screws to more complex assemblies like wing flaps. A single Boeing 787 Dreamliner, for example, has approximately 2.3 million parts.

    Most of the parts that were meant to be scrapped were often painted red to signify they were unsuitable for assembly lines, Meyers said. Yet, in some cases, that didn’t stop them from being put into planes being assembled, he said.
    Lots of felony convictions.


Tech News

Almost Relevant Music Video of the Day

Disclaimer: She's a model and she's spewing nonsense. (Keyboard riff.)

Posted by: Pixy Misa at 04:40 PM | Comments (2) | Add Comment | Trackbacks (Suck)
Post contains 568 words, total size 5 kb.

Saturday, July 06


Daily News Stuff 6 July 2024

Daily News Stuff Edition

Top Story

  • OpenAI got hacked, but definitely not by Chinese spies.  (New York Times)  (archive site)

    In April.  2023.

    And they didn't report it until now.  In fact, they didn't report it at all.  One of their employees alluded to the event on a podcast, and was fired for his trouble.
    Mr. Aschenbrenner said OpenAI had fired him this spring for leaking other information outside the company and argued that his dismissal had been politically motivated. He alluded to the breach on a recent podcast, but details of the incident have not been previously reported. He said OpenAI’s security wasn’t strong enough to protect against the theft of key secrets if foreign actors were to infiltrate the company.
    OpenAI denies this - well, not the hack, nor the firing, nor the fact that Aschenbrenner leaked the information, but the causal connection, in an announcement apparently written by a special release of ChatGPT trained exclusively on modified limited hangouts:
    "We appreciate the concerns Leopold raised while at OpenAI, and this did not lead to his separation," an OpenAI spokeswoman, Liz Bourgeois, said.
    Liz Bourgeois?  Obviously a made up name.
    Referring to the company's efforts to build artificial general intelligence, a machine that can do anything the human brain can do, she added, "While we share his commitment to building safe A.G.I., we disagree with many of the claims he has since made about our work. This includes his characterizations of our security, notably this incident, which we addressed and shared with our board before he joined the company."
    OpenAI is making no efforts to build AGI, safe or otherwise.  OpenAI's focus is building something and then redefining the term AGI to claim they have achieved it.

Tech News

Bagger 288 Video of the Day

It was a simpler, more innocent time.

Disclaimer: And we've got another 287 where that came from.

Posted by: Pixy Misa at 05:52 PM | Comments (7) | Add Comment | Trackbacks (Suck)
Post contains 651 words, total size 6 kb.

Friday, July 05


Daily News Stuff 5 July 2024

Cinco De Julio Edition

Top Story

Tech News

Disclaimer: Scrape sixteen sites, and what do you get?  Half a terabyte of furry porn and the worst Joe Biden deepfakes yet. 

Posted by: Pixy Misa at 05:17 PM | Comments (2) | Add Comment | Trackbacks (Suck)
Post contains 315 words, total size 3 kb.

Thursday, July 04


Daily News Stuff 4 July 2024

Happy Freedom Day

Top Story

Tech News

Perception Check Music Video of the Day

Every character in the video is a Hololive member. The human bard is Kureiji Ollie of Hololive Indonesia, who on hearing the song announced ME IN DnD. The long-suffering DM is Calliope Mori who is in fact the long-suffering DM of all of Hololive English's tabletop games.

Twitter has fixed embeds for real, it seems. They were broken for months, and then flaky for many more months.

Disclaimer: Happy Freedom Day. Strictly one eagle per customer. No rainchecks, no refunds.

Posted by: Pixy Misa at 05:42 PM | Comments (2) | Add Comment | Trackbacks (Suck)
Post contains 694 words, total size 7 kb.
