Oh, lovely, you're a cheery one, aren't you?
Sunday, December 04
Well, that was exciting.
The migration was complicated by the old server disappearing for 24 hours just before I was set to start doing this.
Also, looks like I can stop paying for that backup server, since it's kind of deceased.
Wednesday, September 28
Server took an unscheduled nap.
Saturday, September 03
"We recive complain. If not be resolved after 24H your services will be closed."

As if I needed another reason to get off this server.
Sunday, April 24
Finally have a script running to finish restoring all the posts from the backup into the live database after the Big Mess last year when the other datacenter caught fire and everyone had to squeeze onto a single $50 server for three weeks.
It's moving pretty quickly so if your posts from 2012-2016 aren't back yet they should be within the next few hours.
Monday, January 17
The site has been fixed.
I have to get us moved.
Saturday, September 25
I mean, it's not relevant to this particular post, but it is relevant to the world at large and the frauds in charge.
This is one of my favorite troll responses. These fake Russia stories were The Biggest Thing On Earth, for years, when the crooks behind them still hoped they'd work. Now they just want to get away clean, and people like you say, "Can't you just let it go?" https://t.co/KFvYTksorG — Matt Taibbi (@mtaibbi) September 23, 2021
Read your own tweet: the Beacon originally funded the "research firm" that created the dossier, not the dossier itself. Timeline: Beacon drops Fusion, Perkins Coie hires Fusion, Fusion hires Steele. No one disputes this. It's been testified to countless times. https://t.co/xDnRVXTvDg pic.twitter.com/8Z2WvQvPw — Matt Taibbi (@mtaibbi) September 24, 2021
Every single person who works at the media corporations that spread the CIA lie that the Biden archive was "Russian disinformation" knows they lied to protect Biden. — Glenn Greenwald (@ggreenwald) September 24, 2021
But they also know their audience doesn't care if they get caught lying as long as it's for the right Party.
Also, this is literally the objectification of women.
Our new issue is here! On the cover — 'Periods on display' and the cultural movement against menstrual shame and #PeriodPoverty. — The Lancet (@TheLancet) September 24, 2021
Plus, @WHO air quality guidelines, low #BackPain management, community-acquired bacterial #meningitis, and more. Read: https://t.co/eP1Lx7D116 pic.twitter.com/DchfiHnYEs
Wednesday, August 04
This server is crashing almost every day right now.
I have a new server, and I have complete and up-to-date backups on the new server.
What I haven't had so far is any time to configure this system on the new server.
Should happen in the next few days. I pulled about a forty-hour week just between 5PM Friday and 9AM Monday, but that was the last drama from that product launch, and the next launch isn't for, oh, a week at least.
It's not until October that things will get really crazy.
I did get a raise, and we've hired a bunch of new staff, so it should get less crazy by the end of the year. I just need to survive long enough to see it...
Thursday, May 20
The server was getting overloaded with crappy requests again, but I couldn't see any difference between the crappy requests overloading the server and the usual crappy requests that only take about 50 milliseconds and cause no problems at all.
Except that we were also getting indexed by Google and the Google bot was tracking links to RSS feeds in places where RSS feeds don't really belong but the server will do its best to fulfil anyway.
So I blocked a couple of those. Not all of them, just a couple.
And the problem was resolved.
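The post doesn't say how the blocking was done. One lightweight way, assuming an nginx front end and that the stray feed links end in .rss (both of those are assumptions, not from the post), is to turn them away before they reach the application:

```nginx
# Hypothetical pattern - the post doesn't give the actual URLs blocked.
# Rejecting here means the app server never spends those expensive
# milliseconds generating a feed nobody asked for.
location ~ \.rss$ {
    return 403;
}
```

A robots.txt Disallow line would also work eventually, but a hard 403 stops the crawler's requests from loading the server immediately.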
Sunday, April 18
Yes, Akane is back - our shiny Ryzen 3700X, with its 64GB of ECC RAM and enterprise NVMe storage - in a shade under two weeks.
Doing a full offsite backup, followed by software updates, then we can return to something approaching normality.
Meanwhile I'm getting errors on the backup drive I had them swap into the new server at my day job, which is annoying because I'll need to re-take or re-verify 11TB of backups, but nothing is actually down which is refreshing.
Backups of the backups of the work server are ongoing, since there's 60x as much data over there. (11TB vs. 180GB.)
That server is running LXD virtualisation on ZFS. This gives you two ways to do backups:
lxc snapshot, which is simple and instantaneous and uses minimal disk space, but is stored on the local drive
lxc export, which gives you a complete portable backup in a single file, but by default backs everything up straight into your root filesystem
You can configure it not to do that, but it's not very well documented, and by not very well documented I mean have fun trawling through Stack Overflow, sucker.
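A sketch of what that looks like, assuming a storage pool named default and a container named web01 (both names are assumptions). The storage.backups_volume server setting is the documented way to move export staging off the root filesystem:

```shell
# Stage export tarballs on a dedicated storage volume instead of
# the root filesystem (the poorly-documented bit mentioned above).
lxc storage volume create default backups
lxc config set storage.backups_volume default/backups

# Option 1: instant, space-efficient, but lives on the same local pool.
lxc snapshot web01 pre-update

# Option 2: a complete portable single-file backup, compressed with pigz.
lxc export web01 /mnt/offsite/web01.tar.gz --compression pigz
```

The snapshot protects against bad software updates; only the export protects against losing the drive, which is why both are worth running.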
Anyway, since right now everyone is on this server and that server is free, I thought I'd try updating the software and configuring proper backups with lxc export.
Tried it on a small container - around 1GB - with pigz (parallelised GZip) compression, and it completed in 15 seconds. Great!
Oh, and it doesn't give you any progress information, not even Microsoft level where the indicator sometimes runs backwards.
And you can't stop a running backup.
This is garbage.
Update: you can stop one by killing pigz; the backup process will abort cleanly.
Monday, April 12
Deployed an Nginx instance configured as a caching proxy and it seems to be helping out a lot. Load average has dropped from 40 to - right now - 2. Wait, 10. Wait, 7. It's still bouncing around a bit, but not getting out of control as it was earlier.
That's a combination of (1) disabling sessions on static files, (2) caching said static files, and (3) people not impatiently hitting F5 when the site is slow to load because the site mostly isn't slow to load.
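The actual config didn't make it into the post; a minimal sketch of points (1) and (2), with the backend port, cache path, and zone name all assumed:

```nginx
# Hypothetical cache zone - path, size, and name are assumptions.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static:10m
                 max_size=1g inactive=60m;

server {
    listen 80;

    # (1) + (2): strip cookies on static assets so every visitor shares
    # one cached copy instead of triggering a session, and serve repeats
    # from the cache without touching the app server.
    location ~* \.(css|js|png|jpe?g|gif|ico)$ {
        proxy_pass http://127.0.0.1:8080;   # assumed backend port
        proxy_cache static;
        proxy_cache_valid 200 60m;
        proxy_ignore_headers Set-Cookie;
        proxy_hide_header Set-Cookie;
    }

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```

Point (3) then follows for free: pages load fast enough that nobody hammers F5.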
I didn't much enjoy this bit, though:
2021/04/12 13:29:43 [emerg] 4954#4954: "proxy_busy_buffers_size" must be less than the size of all "proxy_buffers" minus one buffer in /etc/nginx/nginx.conf:66
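For reference, the constraint nginx is enforcing there: with proxy_buffers N size, proxy_busy_buffers_size has to be smaller than (N - 1) × size. A sketch with made-up numbers that satisfies it:

```nginx
# 8 buffers × 16k = 128k total; busy buffers must stay below
# 128k - 16k = 112k, or nginx refuses to start with the error above.
proxy_buffer_size       16k;
proxy_buffers           8 16k;
proxy_busy_buffers_size 96k;   # 96k < 112k, so this passes the check
```

The error message states the rule, but not the arithmetic, which is why it takes a minute to work out which number to change.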
55 queries taking 0.3037 seconds, 235 records returned.
Powered by Minx 1.1.6c-pink.