Had you checked Jim’s comments, you would know that the server couldn’t be rebooted, neither remotely, nor by the “server farm’s” staff.
It was an equipment problem.
Believe me, I did check the comments on two different backup boards.
...you would know that the server couldn't be rebooted, neither remotely...
Did you even read what I wrote??
...nor by the "server farm's" staff.
I didn't read that, nor do I believe it, unless the "server farm's staff" was trying to remotely boot it from a different IP block. The server farm's staff can physically walk over to the rack that holds the server and toggle what we used to laughingly call "the real non-maskable interrupt", a.k.a. the power switch.
So I find it difficult to believe that they couldn't, although I can fully believe that they wouldn't (laziness), even though that kind of attitude tends to lose a data center customers. Unless they were just too d@mn busy trying to fix the whole downed /23 block. Now that I can believe.
But the fact is that turning the power off and on to reboot that specific machine won't do anything if the whole /23 or /24 CIDR block is offline for some other reason. And I can assure you that my traceroutes all ended at the router with the address mg-1.a01.mlpsca01.us.da.verio.net (129.250.24.195). That includes traceroutes to 209.157.64.201, 209.157.64.224, and 209.157.64.8. The traceroute to 209.157.63.254 didn't even get that far; it died earlier, at 208.175.175.26 (208.175.175.26).
So what I'm saying is that literally hundreds of IP addresses were inaccessible, not just FR's.
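To put a number on "literally hundreds": a /23 covers 512 addresses, and the three traceroutes above all land inside the same /23. A quick sketch with Python's ipaddress module (the 209.157.64.0/23 block boundary is my inference from the addresses listed, not something confirmed by the routing tables):

```python
import ipaddress

# Assumed /23 containing the three addresses that all died at the Verio router.
block = ipaddress.ip_network("209.157.64.0/23")

probed = ["209.157.64.201", "209.157.64.224", "209.157.64.8", "209.157.63.254"]
for ip in probed:
    # The first three fall inside the block; 209.157.63.254 does not,
    # which is consistent with its traceroute dying at a different hop.
    print(ip, ipaddress.ip_address(ip) in block)

print("addresses in a /23:", block.num_addresses)  # 512
```

So if that whole block was unreachable, every one of those 512 addresses went dark together, regardless of whether any individual box got power-cycled.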
But I will, of course, defer completely to John Robinson's opinions and thoughts on the matter.