I did run stressCPU2 on my folding machine, and it ran for 24 hours without errors. I will try starting it up again, because the problem may just be the new projects.
However, my new PC, which I have overclocked, will also run stressCPU2 just fine, yet it will not even complete old WUs like the supervillins. The OC is stable with everything else, so I just don’t use this one for folding. One day I will take it back to stock settings just to see whether it was the OC. This is the machine with the nVidia 8800 in it, which I wish I could use.
I have read several threads and flame wars on the subject of NaNs. It’s just like back in my programming PM days, trying to keep engineers and programmers from coming to blows across the table.
My opinion is to keep running F@H, and if you see too many NaNs, just stop the service, erase everything, and start over.
One of my systems had about 80 EUEs a few weeks ago. I rebooted; it had a few more, and then suddenly everything was fine.
I know that F@H is an excellent system tester, catching hardware failures early on. I also know that F@H will eventually find every buggy device driver in your system, along with a few bugs in the underlying GROMACS and AMBER code.
New proteins always have about 50% EUEs as the software completes its boundary testing the hard way: compute until something goes illegal.
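For anyone curious what “compute until illegal” looks like in practice, the core essentially checks each step’s numbers and throws an EUE the moment they stop being finite. Here is a minimal sketch of that idea in CUDA. This is not the actual GROMACS or AMBER code; the kernel name check_finite, the d_flag variable, and the whole harness are made up purely for illustration.

#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

// Hypothetical guard kernel: scan the force array after an MD step and
// raise a flag if any component is NaN or Inf.  The real cores do something
// morally equivalent before deciding to report an EUE.
__global__ void check_finite(const float *forces, int n, int *d_flag)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && !isfinite(forces[i]))
        atomicExch(d_flag, 1);          // any thread can trip the flag
}

int main()
{
    const int n = 1 << 20;
    float *d_forces;  int *d_flag, h_flag = 0;
    cudaMalloc(&d_forces, n * sizeof(float));
    cudaMalloc(&d_flag, sizeof(int));
    cudaMemset(d_forces, 0, n * sizeof(float));   // pretend these came from a force kernel
    cudaMemset(d_flag, 0, sizeof(int));

    check_finite<<<(n + 255) / 256, 256>>>(d_forces, n, d_flag);
    cudaMemcpy(&h_flag, d_flag, sizeof(int), cudaMemcpyDeviceToHost);

    if (h_flag)
        printf("NaN/Inf detected: this is where the client would report an EUE\n");
    else
        printf("step looks sane, keep folding\n");

    cudaFree(d_forces);  cudaFree(d_flag);
    return 0;
}

The only point is that a single NaN or Inf anywhere in the array is enough to trip the flag, which is roughly where the client decides to throw an EUE instead of continuing.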
To all nVidia owners, here is a post from an nVidia developer forum by Mark Harris of nVidia:
“Actually, we never thought it was a bug in brook — the question was briefly raised, but it didn’t take long to verify that this was not the case. We have been cooperating with Mike Houston (Stanford) in tracking down the bug; the application is complex and required someone familiar with the application code to narrow the problem down to something that was easier to isolate. Mike has done that and I have filed a bug. Unfortunately, our driver engineers are busy with many other application issues to solve and features to implement. They will get to this issue in due course.”
“I too would like to see F@H on NVIDIA GPUs, but our engineering managers have to set priorities appropriately based on many factors competing for their team members’ time.
“We’re currently focused on CUDA for GPGPU applications, because the architecture and programming model (especially the on-chip shared memory and thread synchronization) have clear and proven benefits to performance compared to pure “streaming” approaches (aka GPGPU via OpenGL or Direct3D) for many parallel computations.
Mark”
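For context on what Mark means by on-chip shared memory and thread synchronization: in a pure “streaming” (OpenGL/Direct3D) approach each shader invocation works alone against off-chip memory, while a CUDA thread block can stage data in fast on-chip shared memory, synchronize, and cooperate on it. The toy block-sum reduction below is only meant to illustrate that programming model; it is not anything from the F@H cores, and the kernel name block_sum is made up.

#include <cstdio>
#include <cuda_runtime.h>

// Toy example of the CUDA features the quote refers to: each block stages
// its slice of the input in on-chip shared memory, synchronizes with
// __syncthreads(), and cooperatively reduces it to one partial sum.
// A pure "streaming" shader has no equivalent of this inter-thread reuse.
__global__ void block_sum(const float *in, float *out, int n)
{
    extern __shared__ float tile[];              // on-chip shared memory
    int tid = threadIdx.x;
    int i   = blockIdx.x * blockDim.x + tid;

    tile[tid] = (i < n) ? in[i] : 0.0f;
    __syncthreads();                             // thread synchronization

    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride)
            tile[tid] += tile[tid + stride];
        __syncthreads();
    }
    if (tid == 0)
        out[blockIdx.x] = tile[0];               // one partial sum per block
}

int main()
{
    const int n = 1 << 16, threads = 256, blocks = (n + threads - 1) / threads;
    float *d_in, *d_out;
    cudaMalloc(&d_in,  n * sizeof(float));
    cudaMalloc(&d_out, blocks * sizeof(float));

    // Fill the input with 1.0f so the expected total is simply n.
    float *h_in = new float[n];
    for (int i = 0; i < n; ++i) h_in[i] = 1.0f;
    cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);

    block_sum<<<blocks, threads, threads * sizeof(float)>>>(d_in, d_out, n);

    float *h_out = new float[blocks], total = 0.0f;
    cudaMemcpy(h_out, d_out, blocks * sizeof(float), cudaMemcpyDeviceToHost);
    for (int b = 0; b < blocks; ++b) total += h_out[b];
    printf("sum = %.0f (expected %d)\n", total, n);

    delete[] h_in; delete[] h_out;
    cudaFree(d_in); cudaFree(d_out);
    return 0;
}

The shared-memory tile plus __syncthreads() is exactly the kind of on-chip data reuse a streaming shader cannot express, which is the performance argument in the quote.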