This is not my site. But, again, the webmaster has not expressed any objection to saving the site’s pages for personal use, and he HAS been involved in the thread about the shutdown. Multiple members have been saving individual pages with copy/paste, etc. But even for a small forum, that’s not a practical way to save the whole thing or maintain any structure.
It’s ALMOST as if I heard FR was shutting down and I wanted to archive it, except that FR is MUCH, MUCH bigger. (Probably 10,000x if I had to guess.)
SFAIK it’s not a WordPress type site, but I am not sure. It IS a forum.
If it’s a forum, there’s a 99.9% chance it has a database for storing posts. Almost any website scraper/downloader will just grab the HTML that a browser gets from that database and save it as HTML pages, plus images.
I’m on Linux/Ubuntu and I’ve used httrack for that, though someone else mentioned a different tool. You have to be careful what depth of links you grab; you might need 2-3 levels depending on how the site works. If the forum allows embedded YouTube videos, the download could get big unless you figure out how to filter those out.
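For reference, here’s roughly what an httrack run with a depth limit and a YouTube filter might look like. Everything here is a placeholder sketch (forum.example.com, the depth, the output directory); adjust for the actual site. The script just assembles and prints the command so you can check it before running it for real.

```shell
# Sketch of an httrack mirror run (assumes httrack is installed;
# forum.example.com and the depth are placeholders -- adjust for the real site).
URL="https://forum.example.com/"
DEPTH=3                  # link depth: 2-3 is usually enough for a forum
OUT="./forum-mirror"     # where the mirrored pages land

# -r$DEPTH        limit how deep links are followed
# "-*youtube*"    filter: skip anything matching youtube (embedded videos)
# -O "$OUT"       output directory for the mirror
CMD="httrack \"$URL\" -r$DEPTH \"-*youtube*\" -O \"$OUT\""
echo "$CMD"              # prints the command; drop the echo to actually run it
```

Dropping the `echo` wrapper and running `httrack` directly will start the mirror; expect it to take a while and re-run it with a deeper `-r` if pages are missing.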
Like everything in tech, the answer is “it depends” on some pesky things called variables.
Plug the URL into builtwith.com and you might get an idea what it’s ‘built with’. At any rate, you’re only going to be able to scrape the HTML pages that get rendered for the browser.
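Along the same lines as builtwith.com, you can sometimes guess the stack yourself from the HTTP response headers. A rough sketch of that heuristic; the sample headers below are made up for illustration, not from the real site:

```python
def guess_stack(headers):
    """Very rough guess at a site's stack from its HTTP response headers."""
    clues = []
    server = headers.get("Server", "").lower()
    powered = headers.get("X-Powered-By", "").lower()
    cookies = headers.get("Set-Cookie", "").lower()
    if "nginx" in server:
        clues.append("nginx front end")
    if "apache" in server:
        clues.append("Apache front end")
    if "php" in powered:
        clues.append("PHP (classic forum packages: phpBB, vBulletin, XenForo)")
    if "wordpress" in cookies:
        clues.append("WordPress")
    if "phpbb" in cookies:
        clues.append("phpBB")
    return clues or ["no obvious clues"]

# Made-up example headers for a typical PHP forum:
sample = {"Server": "Apache/2.4", "X-Powered-By": "PHP/8.1",
          "Set-Cookie": "phpbb3_sid=abc123"}
print(guess_stack(sample))
```

Headers can be grabbed with `curl -sI <url>` or Python’s urllib; many servers hide or fake them, so treat the result as a hint, not proof.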
Go to the URL of your site and add “/wp-admin” to the end. For example, if your site is www.example.com, you would go to www.example.com/wp-admin. If you get a login page, it’s a WordPress site, and there are WordPress plugins that do full backups.
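The check above can be scripted if you have a pile of sites to test. A minimal sketch using only the standard library (www.example.com is a placeholder; the actual fetch is left out since it needs network access):

```python
from urllib.parse import urljoin

def wp_admin_url(site):
    """Build the /wp-admin URL described above for a given site."""
    if not site.startswith("http"):
        site = "https://" + site
    return urljoin(site + "/", "wp-admin")

def looks_like_wp_login(html):
    """Rough check: does a fetched page look like a WordPress login form?"""
    html = html.lower()
    return "wp-login" in html or "wordpress" in html

print(wp_admin_url("www.example.com"))
```

You’d fetch `wp_admin_url(...)` with urllib or curl and feed the body to `looks_like_wp_login`; a match is a strong hint it’s WordPress.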
Paul,
I started downloading the site, and after successfully downloading a dozen or so pages, the server blocked my access with a “Code 401” (unauthorized) error.
The server admin / webmaster would have to tweak the server’s security settings to allow the site to be downloaded by something like SiteSucker.
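If the block is rate-limiting or user-agent filtering rather than a hard auth wall, sometimes slowing down and sending a browser-like User-Agent gets past it. A standard-library sketch (the URL and header string are placeholders; whether this works depends entirely on the server’s rules, and it’s worth getting the webmaster’s okay first):

```python
import time
import urllib.request

def make_request(url, user_agent="Mozilla/5.0 (X11; Linux x86_64)"):
    """Build a request with a browser-like User-Agent
    (some servers reject bare scrapers with 401/403)."""
    return urllib.request.Request(url, headers={"User-Agent": user_agent})

def fetch_pages(urls, delay=2.0):
    """Fetch pages one at a time with a pause between requests."""
    pages = []
    for url in urls:
        req = make_request(url)
        with urllib.request.urlopen(req) as resp:  # actual network call
            pages.append(resp.read())
        time.sleep(delay)  # be polite; hammering is what gets you blocked
    return pages
```

If it’s a genuine 401 (the server wants a login), no amount of throttling helps and the admin really does have to open it up, as noted above.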