Actually, you do know how the decision is made: you have programmed the computer to recognize and translate this or that jumble of ones and zeros as "m" or "w". All computers have this symbolic translation capability, and it was put there by human beings. The machine did not 'reason' out that 11000110 = "m", for example (btw my assembler is rusty, so I don't even know if that statement is actually valid!); someone told it to recognize it that way, and it does. The machine did no autonomous 'thinking' of its own and would be incapable of doing so unless someone told it how to think.
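For what it's worth, the mapping in question is easy to check: in ASCII, "m" is code 109, which is 01101101 in binary (not 11000110). A quick Python sketch of exactly this "symbolic translation" between bit patterns and characters:

```python
# Show how characters map to bit patterns under ASCII encoding.
for ch in "mw":
    code = ord(ch)              # integer code point of the character
    bits = format(code, "08b")  # same value as an 8-bit binary string
    print(ch, code, bits)

# And the reverse translation: bits back to a character.
assert chr(int("01101101", 2)) == "m"
```

The point stands either way: the mapping is a convention people defined (the ASCII standard), and the machine simply applies it.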
Since we do not know the exact mechanics of thinking, we can only approximate it through the application of logic, which is not thinking per se, but merely a result of the thought process. Therefore, until we understand the mechanics of thought, AI is only a dream, held back by human failing. Until we overcome this shortcoming, we cannot create an autonomous machine, Star Trek notwithstanding.
You have a very simplistic understanding of intelligent systems. Logic has nothing to do with intelligence, and all generally intelligent systems natively express forms of non-axiomatic reasoning. In other words, your entire argument is a strawman based on ill-informed assumptions that haven't been updated since Reagan was president.
You might want to bone up on the rather extensive advances in mathematics and theoretical computer science in this area. We know far more about intelligent systems today than you imagine, and it is not like anything you seem to believe. The problems these days are obscure theoretical questions and difficult engineering ones, not matters of fundamental mathematics or theory.