Doing something appropriate is not crashing, but instead giving me a usable number that's effectively rounded to infinity. It's wasteful to make the code check for it, and it causes problems when the programmer forgets.
The value 0.1 cannot be represented exactly in floating point, and this doesn't cause anyone any grief when I use the rounded value instead. I'm of the opinion that computer hardware should do the same for divide-by-zero exceptions, and return a rounded value of infinity that works in succeeding calculations. Returning a NaN merely trashes everything afterward; it's not a usable option.
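If anyone wants to see both points at the keyboard, here's a quick Python sketch. (One caveat: Python itself raises ZeroDivisionError on `1.0 / 0.0` instead of returning infinity the way IEEE hardware does by default, so the infinity is constructed directly here.)

```python
import math
from decimal import Decimal

# The double nearest to 0.1 is slightly above 0.1 -- it's already rounded.
print(Decimal(0.1))      # 0.1000000000000000055511151231257827...

# An infinity behaves like a very large rounded number in later math:
inf = math.inf
print(inf + 1.0 == inf)  # True -- it keeps flowing through arithmetic
print(1.0 / inf)         # 0.0
print(inf > 1e308)       # True -- still ordered against real numbers

# A NaN trashes everything afterward -- every comparison comes back False:
nan = math.nan
print(nan + 1.0)         # nan
print(nan == nan)        # False
print(nan > 0.0)         # False
```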
Then you haven't been trying hard enough. A certain large-scale system that I worked on needed very accurate software timing, to about 15 ms. In order to do this, it would read a very accurate time standard on initialization and reckon the number of seconds since some epoch, storing the result in a 64-bit floating point number. Ten times a second, the time standard would interrupt it and the software would respond by incrementing its time by "0.1" seconds. (Stop me if you see where this is going.)
The system is only initialized once every few months in normal operation. When it was young, the "epoch" was close to "now". With time, the number of seconds between "then" (the epoch) and "now" grew, and so did the magnitude of the stored floating point value. The round-off error of adding "0.1" to that ever-larger number grew along with it. Remember, this operation occurs 864,000 times a day, and since the round-off error is the same each time, there is no error cancellation.
Ten months after epoch, it would start to lose more than 0.02 seconds a DAY! The fix was to cast time times ten as an integer, add one, cast the result back to float, and divide by ten. Of course, if we had started out representing time as a 64-bit integer with LSB = 0.1 seconds, we would have run out of numbers in about 1,312,870,859 centuries.
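The fix looks like this in Python (a sketch of the same snap-to-tick idea; I use round() rather than a bare truncating cast so a value sitting a hair below a tenth boundary can't make a tick repeat, but the principle is the author's):

```python
def tick(t: float) -> float:
    """Advance t by 0.1 s the drift-free way: snap the current time to an
    integer count of tenths, add one tick, then convert back to float."""
    tenths = round(t * 10) + 1
    return tenths / 10.0

# One simulated day of ticks, starting at the epoch:
t = 0.0
for _ in range(864_000):
    t = tick(t)
print(t)   # exactly 86400.0 -- one day, no accumulated drift
```

Because every update is re-derived from an integer tick count, the round-off error never accumulates; each stored value is the correctly rounded double for that exact multiple of 0.1 s.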