Posted on 11/08/2005 8:48:52 AM PST by RightWingAtheist
Our brains have become too small to understand math, says a rebel mathematician from Britain. Or rather, math problems have grown too big to fit inside our heads. And that means mathematicians are finally losing the power to prove things with absolute certainty.
Math has been the only sure form of knowledge since the ancient Greeks, 2,500 years ago.
You can't prove the sun will rise tomorrow, but you can prove two plus two equals four, always and everywhere.
But suddenly, Brian Davies of King's College London is shaking the foundations of certainty.
He says our brains can't grasp today's complex, computer-generated math proofs.
"We are beginning to see the limits of our ability to understand things. We are animals, and our brains have a certain amount of capacity to understand things, and there are parts of mathematics where we are beginning to reach our limit.
"It is almost an inevitable consequence of the way mathematics has been done in the last century," he said in an interview.
Mathematicians work in huge groups, and with big computers.
A few still do it the old-fashioned way, he says: "By individuals sitting in their rooms for long periods, thinking.
"But there are other areas where the complexity of the problems is forcing people to work in groups or to use computers to solve large bits of work, ending up with the computer saying: 'Look, if you formulated the problem correctly, I've gone through all the 15 million cases and they all are OK, so your theorem's true'."
But the human brain can't grasp all this. And for Davies, knowing that a computer checked something isn't what matters most. It's understanding why the thing works that matters.
"What mathematicians are trying to get is insight and understanding. If God were to say, 'Look, here's your list of conjectures. This one's true, then false, false, true, true,' mathematicians would say: 'Look, I don't care what the answers are. I want to know why (and) understand it.' And a computer doesn't understand it.
"This idea that we can understand anything we believe is gradually disappearing over the horizon."
One example is the Four Colour Theorem.
Imagine a mapmaker wants to produce a colour map, where each country will be a different colour from any country touching it. In other words, France and Germany can't both be blue. That would be confusing.
So, what's the smallest number of colours that will work?
A kid can work out you need four colours. But can you prove it? Can anyone be certain, as with two-plus-two?
The answer turns out to be a hesitant Yes, but the proof depends on having a computer to work through page after page of stuff so complex that no single person can take it all in.
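To get a feel for what "working through the cases" means, here is a minimal, purely illustrative sketch in Python. It brute-forces a four-colouring of a tiny made-up map; the country names, borders, and colours are hypothetical, and this is nothing like the actual 1976 proof, which reduced the problem to a large but finite list of configurations checked by computer.

from itertools import product

# Hypothetical borders: each pair of "countries" that touch.
borders = [("France", "Germany"), ("France", "Belgium"),
           ("Germany", "Belgium"), ("Germany", "Poland"),
           ("France", "Spain")]
countries = sorted({c for pair in borders for c in pair})
colours = ["red", "green", "blue", "yellow"]

def four_colouring(countries, borders, colours):
    """Try every assignment of colours; return one that works, or None."""
    for assignment in product(colours, repeat=len(countries)):
        colouring = dict(zip(countries, assignment))
        if all(colouring[a] != colouring[b] for a, b in borders):
            return colouring
    return None

print(four_colouring(countries, borders, colours))

Brute force like this only works for toy maps; the theorem itself says four colours always suffice, no matter how complicated the map gets.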
And it's getting worse, Davies writes in an article called "Whither Mathematics?" in today's edition of Notices of the American Mathematical Society, a math journal.
Mathematicians have tried to write a grand scheme for classifying "finite simple groups," a family of mathematical objects as basic to this discipline as the periodic table of the elements is to chemistry -- but much bigger.
The full body of work runs to some 10,000 difficult pages. No human can ever understand all of it, either.
A year ago, Britain's Royal Society held a special symposium to tackle this question of certainty.
But many in the math community still shrug off the issue, Davies says. "Basically, mathematicians are not very good philosophers."
"Which finger do you use to represent 0 in base 10? If you don't need it in base 10, you don't need it in base 8 either."
OK. I get you. 1,2,3,4,5,6,7,10. You're right. I have a cold, so I'm not thinking as clearly as usual today.
There are 10 kinds of people in this world. Those that understand binary math, and those that don't! ;-P
I was interested to find that the Sumerians and Babylonians used Base 60 math. There's an interesting web site that discusses this:
http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Babylonian_numerals.html
There are only two kinds of people in the world: those who think there are only two kinds of people in the world and those who don't.
"Gödel is best known for his proof of "Gödel's Incompleteness Theorems". In 1931 he published these results in Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme. He proved fundamental results about axiomatic systems, showing in any axiomatic mathematical system there are propositions that cannot be proved or disproved within the axioms of the system. In particular the consistency of the axioms cannot be proved. This ended a hundred years of attempts to establish axioms which would put the whole of mathematics on an axiomatic basis. One major attempt had been by Bertrand Russell with Principia Mathematica (1910-13). Another was Hilbert's formalism which was dealt a severe blow by Gödel's results. The theorem did not destroy the fundamental idea of formalism, but it did demonstrate that any system would have to be more comprehensive than that envisaged by Hilbert. Gödel's results were a landmark in 20th-century mathematics, showing that mathematics is not a finished object, as had been believed. It also implies that a computer can never be programmed to answer all mathematical questions."
I've never been wrong.
Wait... once I thought I was wrong, and THEN I realized I was wrong.
Heh, what's your mathematician's opinion about this?
And I can prove that there are no uninteresting numbers. (If there were any, the smallest uninteresting number would be interesting for that very reason.)
Oops! Meant to send #88 to someone else!
True, 2 + 2 does equal 11 in base 3, but it still equals 4.
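For anyone who wants to check for themselves, here's a small throwaway Python helper (the function name is mine, just for illustration) that writes a number out in any base from 2 to 10.

def to_base(n, base):
    """Return non-negative integer n written in the given base (2-10)."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % base))
        n //= base
    return "".join(reversed(digits))

print(to_base(2 + 2, 3))   # "11" -- four, written in base 3
print(to_base(2 + 2, 10))  # "4"  -- the same number in base 10
print(to_base(2, 2))       # "10" -- the "10 kinds of people" joke

Either way the quantity is the same; only the notation changes.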
Mathematician's opinion?
"Or rather, math problems have grown too big to fit inside our heads"
Paging Seymour Cray!
LOL
This must be a derivative of the same formula for establishing a liberal world view.
Joke (sort of):
A couple of guys are sitting around bored.
"What's the biggest number you can think of, Joe?"
"Um...er...two. You?"
"Uh...three!"
Likewise, there are legends of African tribes whose math system amounts to "1, 2, 3, many".
Sounds dumb, right? But have you ever seriously thought about how you count things at a glance? Given a bunch of stuff to count, aside from going "1, 2, 3, ...", the best most people can do is visually lump the items into groups of 2 or 3, then conceptually add those groups up. Go ahead: drop 5 pennies on the desk in front of you and count them, paying strict attention to how you do it. You might go "1, 2, 3, 4, 5" and effectively perceive only one at a time, or you might mentally divide the group into subgroups of 2 and 3 and add 2 + 3 = 5. However you do it, you're isolating subsets no larger than what your mind can hold in a single mental grasp... strictly speaking, the biggest number you can really think of is 3!
So if the human mind can REALLY only perceive "3" as the largest number, how do we get to pi, infinity, sqrt(-1), and proving the 4-color theorem? The same way we get to "5": divide, isolate, symbolize, combine, repeat - and through the use of tools.
Once we have a mental set of symbols representing whatever it is we are doing mathematically, we again rapidly run out of mental processing space. It wasn't long in human history before Og the caveman went from counting stuff to marking rocks so he could be reminded of what he had counted and what the number was. From mud on cave walls to impressions on clay tablets to abacuses to ink on parchment to pencil on paper to magnetic field directions to transistor states to pits on DVDs, humans have used external tools to store expressions of mental symbolism. And it's not cheating, and it's not considered the end of mathematics.
So on we go to the next phase: tool-assisted computation - and it's not new either. Where a mathematician decided that proving something required checking all the permutations of a problem, he would work through an algorithmic process to check every possibility. Given a big enough problem, he might hire other people - professionally called "computers" - to work as a team on the solution. Then Charles Babbage and Lady Ada concocted a machine to do what those people were doing (at which point a politician, clueless as ever in history, asked "if you put the wrong figures in, will you get the right figures out?" - but I digress). Von Neumann extended the concept into a theory of automatic computation, and soon the modern electronic digital computer was born, followed by huge networks of incredibly fast sub-nanosecond-cycle computing devices solving enormously complicated mathematical problems in shockingly short periods of time (calculations that once took teams of human computers years are now done in milliseconds).
So what's the upshot? Advanced math has not carried us beyond the limits of the human mind. We have simply figured out, again, how to divide up a problem, assign symbols to the pieces and their relationships, create an algorithm to process them, and, rather than doing the processing by hand, turn it over to a human-built machine for rapid processing.
I'm a software engineer with an affinity for mathematics. Trust me (it's my profession): it's all simple. It's just a matter of breaking the problem down into manageable chunks - ultimately into rearrangements of 1s and 0s - and telling a machine in detail what process to follow. It's all still understandable by humans; we just get a machine to do the hard, tedious work for us. When someone claims we've hit the human limit, that's usually the time to get ready for a whole new way of doing things that goes far beyond that limit.
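As a concrete (and deliberately toy) example of the "go through all the cases" style the article describes, here's a short Python sketch that checks Goldbach's conjecture (every even number from 4 up is the sum of two primes) for every even number up to 10,000. Checking a finite range proves nothing about the full conjecture, of course; the point is only to show the divide-symbolize-iterate pattern being handed off to a machine.

def is_prime(n):
    """Trial-division primality test; fine for small numbers."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_counterexample_up_to(bound):
    """Return the first even number <= bound that is NOT a sum of two primes, else None."""
    for even in range(4, bound + 1, 2):
        if not any(is_prime(p) and is_prime(even - p)
                   for p in range(2, even // 2 + 1)):
            return even
    return None

print(goldbach_counterexample_up_to(10000))  # None: every case up to 10,000 checks out

The machine does the tedious part, but a person can still read every line and understand exactly why the check is valid.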
Principia Mathematica, by the way, spent hundreds of pages building up to a proof that 1+1=2.