Er, no. The only way a higher mutation rate could "assist" the expression of a mutant recessive allele is if the rate were so insanely high that there'd be a decent chance of the exact same mutation recurring in the population within a few generations - and a mutation rate *that* high would be just as likely to destroy the original mutation as to add another copy of it (not to mention that it would make successful reproduction impossible due to mutation load).
Instead, single mutations (recessive or otherwise) actually have a counter-intuitively high chance of "fixing" in the population through chance alone. "Fixing" means that the mutation - which of course starts out as a single copy in a single individual - eventually reaches the point where it has not only "found" other copies of itself (in subsequent generations, as carriers produce multiple offspring bearing the mutation), but has actually managed to *replace* every alternative version (allele) of the same gene.
The probability of a new mutation eventually fixing in the population works out to 1/N, where N is the size of the breeding population (strictly, it's 1 divided by the number of copies of the gene, i.e. 1/(2N) for a diploid population of N individuals, but I'll stick with the simpler per-individual count here). That's for the case where the mutation is entirely neutral - which is effectively true for a recessive allele "masked" by a dominant version, since a rare recessive is almost always carried in heterozygotes. For mutations which confer a benefit, the odds of fixation are even greater, of course, because then natural selection kicks in to "help" the mutant relative to non-mutant copies of the gene.
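That 1/N result is easy to check empirically. Here's a minimal simulation sketch (assuming an idealized Wright-Fisher model of genetic drift, with a made-up population size of 50 gene copies) that follows a single new neutral mutation until it is either lost or fixed:

```python
import random

def fixation_probability(pop_size, trials, rng=random.Random(42)):
    """Estimate the chance that a single new neutral mutation fixes in
    an idealized (Wright-Fisher) population of `pop_size` gene copies."""
    fixed = 0
    for _ in range(trials):
        count = 1  # the mutation starts as one copy in one individual
        while 0 < count < pop_size:
            # Each copy in the next generation is drawn independently
            # from the current generation, so the mutant count follows
            # binomial sampling around its current frequency.
            freq = count / pop_size
            count = sum(1 for _ in range(pop_size) if rng.random() < freq)
        if count == pop_size:
            fixed += 1
    return fixed / trials

# Theory says the estimate should come out close to 1/50 = 0.02.
print(fixation_probability(50, 10_000))
```

With no selection at all, the estimate lands near 1/N; if you biased the sampling slightly in favour of the mutant copies, the odds would rise above that, which is the "natural selection kicks in" case.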
So although it might seem intuitive that "Otherwise, of course, the recessive gene disappears after just a few generations" if it is not aided by natural selection, it actually has a decent chance not just of persisting, but of becoming *ubiquitous* in the population. In a breeding population of 1000 individuals, for example, any given non-harmful mutation has a 0.1% chance of spreading through the entire population and replacing all "competitors" - far higher than most people would guess (many would presume, as you did, that it has an effectively zero chance of persisting).
An interesting corollary is that while the odds of any one mutation "fixing" in the population drop as the population size rises, the *number* of novel mutations per generation goes up (since there are more individuals in which mutations can occur), in a way that exactly cancels the increased "difficulty" of any particular mutation fixing. The end result is that for populations of ALL sizes, the average number of new (neutral) mutations achieving fixation per generation is *exactly* equal to the rate of new mutations per individual.

For example, in a breeding population of 1,000,000 individuals, if 2 new mutations occur in each individual, that's 2,000,000 new mutations per generation in the whole population, each with a 1/1,000,000 chance of reaching fixation by pure chance - meaning that, on average, 2 of those 2 million new mutations will eventually "take over" the whole population and become ubiquitous. And likewise for the mutations in the next generation, and so on. So while a lot of mutations get "shuffled out", a significant number from every new generation always "make it" and eventually become universal in the population.
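The cancellation above is just arithmetic, and can be sketched in a few lines of Python (the per-individual rate of 2 mutations is the number from the example; the population sizes are arbitrary):

```python
# Neutral substitution rate = (new mutations per generation) x (fixation chance)
#                           = (N * mu) * (1 / N) = mu, independent of N.
mu = 2.0  # new neutral mutations per individual per generation (as in the example)

for pop_size in (1_000, 1_000_000, 1_000_000_000):
    new_mutations_per_generation = pop_size * mu
    fixations_per_generation = new_mutations_per_generation * (1 / pop_size)
    print(pop_size, fixations_per_generation)  # ~2.0 regardless of pop_size
```

The population size appears once in the numerator (more individuals mutating) and once in the denominator (harder for any one mutation to fix), so it drops out entirely.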
The only effect that higher mutation rates (due to stress, or whatever) will have is to increase the total number of new mutations in each generation (and ultimately the number of mutations which achieve fixation); they won't increase the odds of any particular mutation "making it", because that depends *only* on population size (and, in the case of beneficial mutations, on natural selection as well).
That certainly is counter-intuitive. Can you give me a readily accessible source for that? I'm not challenging what you say, but I would like to understand how that result comes about.
Very neat. It somehow feels like a game of musical chairs. But there must be a better model.