Python can tell you
>>> a = 20
>>> b = 1
>>> c = a + b
>>> type(c)
<type 'int'>
Python doesn't care; it's all an int, or a float if there's a decimal point.
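For instance (a quick sketch at the same Python 2 prompt as the transcripts here), the form of the literal alone decides which type you get:

>>> type(2)
<type 'int'>
>>> type(2.0)
<type 'float'>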
No decimal point required:
>>> a = float(2)
>>> type(a)
<type 'float'>
>>> print a
2.0
This looseness is also what I hate about Visual Basic and most other scripting languages. Sure, it lets you prototype faster, but there's a lot more to programming than the prototype. This is my main complaint in my recent foray into Objective-C. I like the strong, static, safe, nominal typing of C#. But then I'm a control freak when it comes to programming.
But you're okay with this behavior?
#include <stdio.h>

int main(void)
{
    char a[] = "When in the Course of human events "
               "it becomes necessary for one people to "
               "dissolve the political bands which have "
               "connected them with another";
    char b, *c;

    b = 'a';    /* 'a' is just the integer 97 */
    c = a + b;  /* pointer arithmetic: 97 characters into the string */
    printf("%s\n", c);
    return 0;
}

# gcc test.c
# ./a.out
bands which have connected them with another
I figured that, being dynamic, Python would automatically assign it as a float with a decimal point.
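It doesn't, at least not in the Python 2 of these transcripts: an operation on two ints stays an int, and you only get a float if one operand already is one. A minimal sketch (division is chosen just as an illustration, since the original expression isn't quoted here):

>>> 20 / 8
2
>>> type(20 / 8)
<type 'int'>
>>> 20 / 8.0
2.5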
But you're okay with this behavior?
I don't much like C. There's a reason why they have an obfuscated C competition -- the language lends itself well to that.