It is mildly interesting that three major crawlers happened to crawl one specific domain on the same day, but that can hardly be considered “scrubbing”. Any changes made between the previous crawl and today’s would simply go unnoticed by the bots and never be cached. Does that have the same effect as “scrubbing data”? In essence you end up with the same result, but correlation does not equal causation.
What I would find more likely is that either the webmaster / designer made a request to the bots (”SO’s” [sic] as you call them - you might have to help me with that acronym) to crawl the domain at a given time, or the webmaster simply knows the schedule (pretty easy to figure out, since the web cache providers publicly show the dates of the last several crawls). That would leave them free to toy around whenever they knew it wouldn’t get cached.
My question would be as follows: is the most recent crawl (today’s) in line with what has occurred in the recent past, as far as the timelines of the respective bots go?
I did just check http://obamaphone.net/robots.txt and there is a robots.txt file there. It is minimal, and the directives it uses are standard ones from the original Robots Exclusion Protocol. They are as follows:
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
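If you want to see exactly what those two rules do and do not block, Python’s standard urllib.robotparser can evaluate them directly (the paths tested below are just examples, not anything I verified on the live site):

```python
from urllib import robotparser

# The exact rules served at obamaphone.net/robots.txt, as listed above.
rules = [
    "User-agent: *",
    "Disallow: /wp-admin/",
    "Disallow: /wp-includes/",
]

rp = robotparser.RobotFileParser()
rp.parse(rules)

# WordPress admin paths are blocked for every crawler...
print(rp.can_fetch("Googlebot", "http://obamaphone.net/wp-admin/"))   # False

# ...but ordinary pages remain fully crawlable and cacheable.
print(rp.can_fetch("Googlebot", "http://obamaphone.net/some-page/"))  # True
```

So this file does nothing to keep the bots away from the site’s actual content; it only fences off the WordPress back end.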
Much more info is available at: http://googlewebmastercentral.blogspot.com/2008/06/improving-on-robots-exclusion-protocol.html
Doubtless there are several other directives and methods for dealing with bots of this nature and for exerting control over how they index a given site. Follow the links and read up; there are X-Robots-Tag headers as well as other meta tags that can be used to force a bot’s cached copy of a page to expire.
I do not subscribe to coincidence. The odds on the timing of these crawls are astronomical; surely you can see that. Your question above is thought provoking. Since crawls do not follow a fixed schedule, the true answer has to be founded in probability based on the observed pattern. In short, my reply would be no. Looking at the recent patterns, the odds do not favor the bots crawling the same domain not just on the same day, but within hours of each other. The crawls were directed and purposeful.