...it stands for "Beige Box of Misery".
I won't get all skiffy in telling the tale, as RX did, but I too just had a UPS adventure, which threatened to run through the night and into my Monday morning.
The server for Gumbyware is normally plugged into an APC Smart-UPS 1250, which, when it's working properly, gives about 4 hours of backup time.
A couple of months ago, the UPS started complaining of a bad battery. This seemed odd, since according to the sticker I'd last replaced the batteries on March 11, 2011 - rather less than 2 years ago.
Anyway, it was running awfully warm, which presumably doesn't do the batteries any good, and every few hours it would emit an annoying alarm sound for a couple of minutes.
Well, it seems that when the battery is going dead, that particular beige box ceases to be an uninterruptible power supply and becomes an interrupting power supply, 'cause every so often - lately, every couple of days - it would spontaneously cut the server's power for some fraction of a second and trigger a reboot, without any line dropout big enough to bother any of the unprotected computers.
...And, because of a configuration issue which I still haven't quite sorted out, every server reboot requires manual intervention to get Apache running. I know, conceptually, where the problem is, but I haven't quite got the hang of LSB-style init scripts.
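For anyone else wrestling with these, the magic in an LSB-style init script is mostly in the comment header at the top, which the boot system reads to decide when, relative to everything else, the script should run. A minimal sketch - assuming a Debian-ish system where the Apache control program is apache2ctl; your paths and names may well differ - looks something like this:

    #!/bin/sh
    ### BEGIN INIT INFO
    # Provides:          apache2
    # Required-Start:    $local_fs $remote_fs $network
    # Required-Stop:     $local_fs $remote_fs $network
    # Default-Start:     2 3 4 5
    # Default-Stop:      0 1 6
    # Short-Description: Apache HTTP server
    ### END INIT INFO

    # Minimal sketch, not a production script; the location of
    # apache2ctl varies by distribution.
    case "$1" in
      start)   /usr/sbin/apache2ctl start ;;
      stop)    /usr/sbin/apache2ctl stop ;;
      restart) /usr/sbin/apache2ctl restart ;;
      *)       echo "Usage: $0 {start|stop|restart}"; exit 1 ;;
    esac
    exit 0

If the Required-Start line doesn't declare the right dependencies, the script can run before, say, the network is up - which is exactly the sort of thing that leaves Apache sulking until a human starts it by hand.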
Anyway: after a late-afternoon unsolicited reboot, I decided it was time to pull the UPS pending a battery change, or even complete replacement of the UPS. (If I decide to keep it, maybe I'll have a go at adding a cooling fan.)
So... into the computer kennel, rummaging among the dusty cabling. Finally get all the appropriate plugs plugged into an outlet strip plugged directly into the wall socket. And, we're on the air again! Except...
I try making what I think is the right change to the Apache init script. And reboot. And the BIOS hangs.
There follows a panicky period of about two hours in which I fiddle around with disconnecting things, bypassing the KVM switch, pulling cards, re-seating RAM modules, clearing the nonvolatile memory....
Eventually, it started working again. I still don't know why the BIOS was hanging just after enumerating the SATA disk drives, but the cure was moving the CMOS-memory power jumper to OFF and then back to ON. (With the jumper in the OFF position, the BIOS still got wedged, apparently in the same place.)
While I was at it, I swapped in a new gigabit Ethernet switch, based on (a) a sneaking suspicion that some performance issues I've been seeing might be network-related, and (b) the new switch having been on sale, cheap, a couple of weeks ago. The performance issues appear unaffected, so I need to look for some other culprit.
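If I wanted to actually pin the network down, rather than just throwing hardware at it, the obvious move would be a raw throughput test between the two boxes with something like iperf - assuming it's installed on both ends; the address below is made up:

    # On the server:
    iperf -s
    # On the workstation, aimed at the server's address:
    iperf -c 192.168.1.10

A healthy gigabit link should report somewhere north of 900 megabits per second; a number far below that would point the finger back at the network after all.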
As a final insult, when I brought the workstation back up, it couldn't find the server. Eek? My cellphone could find the server via the WiFi access point, which plugs into the same network, so... I swapped the cables for workstation and access point, and now both are working.