What does that have to do with the value of nullity?
Either that, or we designate the lowest or highest possible value for a given variable (either -2,147,483,648 or 2,147,483,647, since we're using 32-bit signed integers) as a divide by zero condition.
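Something like this rough sketch, in Java just for illustration (the safeDivide helper and the reserved constant are names I'm making up here, not from any library):

```java
public class SentinelDivide {
    // Reserve the lowest 32-bit signed value as the "divide by zero" marker.
    static final int DIV_BY_ZERO = Integer.MIN_VALUE; // -2,147,483,648

    // Hypothetical helper: returns the sentinel instead of throwing.
    // Note the cost: the sentinel can no longer be a legitimate result.
    static int safeDivide(int a, int b) {
        return (b == 0) ? DIV_BY_ZERO : a / b;
    }

    public static void main(String[] args) {
        int r = safeDivide(10, 0);
        if (r == DIV_BY_ZERO) {
            System.out.println("division by zero detected");
        }
    }
}
```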
By "divide by zero condition" do you mean "nullity"? If that's the case, we have to do that for all types (signed and unsigned 8-bit, 16-bit, 32-bit, 64-bit integer values, floats, doubles, etc.). Now all of a sudden "nullity" is represented by many different values depending on the data type. That just doesn't make sense!
It doesn't fall on the number line, so it's impossible to represent in binary unless you special-case the value, which is exactly what you're doing in your code now to avoid divide by zero errors. So defining "nullity" doesn't help you at all in your programs. You're better off doing precondition checks to ensure you don't divide by zero, or handling it after the fact by catching the exception.
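For example (Java just for illustration, since the thread hasn't settled on a language; divideChecked is a made-up name):

```java
public class DivideByZeroHandling {
    // Option 1: precondition check, so the bad division never happens.
    static int divideChecked(int a, int b) {
        if (b == 0) {
            throw new IllegalArgumentException("divisor must not be zero");
        }
        return a / b;
    }

    public static void main(String[] args) {
        // Option 1 in action: validate before dividing.
        System.out.println(divideChecked(10, 2)); // 5

        // Option 2: handle it after the fact by catching the exception
        // that integer division by zero already raises.
        int a = 10, b = 0;
        try {
            System.out.println(a / b);
        } catch (ArithmeticException e) {
            System.out.println("caught: " + e.getMessage());
        }

        // Floating-point division is the existing "special-cased value" story:
        // it never throws, it just yields Infinity or NaN.
        System.out.println(1.0 / 0.0); // Infinity
        System.out.println(0.0 / 0.0); // NaN -- the closest thing to "nullity" we already have
    }
}
```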