Free Republic
Browse · Search
News/Activism
Topics · Post Article

To: Alamo-Girl
The Kolmogorov complexity doesn't strongly depend on the type of computer. There is a theorem (the invariance theorem) which says that the complexities assigned by any two universal Turing machines differ by at most an additive constant. What's important is that this constant depends only on the two machines, not on the length of the input. Thus, complexities measured with a one-instruction machine or with a 256-instruction machine will be essentially the same.
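A crude way to see the flavor of this claim (not the theorem itself): compressed length is a standard computable upper bound on Kolmogorov complexity, and two different compressors play the role of two different "machines." This is only an illustrative sketch; `k_upper_bounds` is my own name, and compressors are far weaker than universal machines.

```python
# Illustrative only: compressed length as a crude, computable upper bound
# on Kolmogorov complexity.  Two different "machines" (here, compressors)
# give estimates that track each other as the input grows, echoing the
# additive-constant idea from the invariance theorem.
import bz2
import zlib

def k_upper_bounds(s: bytes):
    """Return (zlib, bz2) compressed lengths as crude K upper bounds."""
    return len(zlib.compress(s, 9)), len(bz2.compress(s, 9))

for n in (100, 1000, 10000):
    data = b"ab" * (n // 2)          # highly regular input
    z, b = k_upper_bounds(data)
    print(n, z, b)                   # both bounds stay tiny for regular data
```

For the highly regular input above, both estimates stay small and close to each other no matter how long the string gets, which is the behavior the additive constant is meant to capture.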
497 posted on 06/21/2003 10:02:05 PM PDT by Doctor Stochastic (Vegetabilisch = chaotisch is der Charakter der Modernen. - Friedrich Schlegel)
[ Post Reply | Private Reply | To 495 | View Replies ]


To: Doctor Stochastic
Thank you so much for your reply!

However, I continue to disagree. In the “instruction” example you gave, the program used a location being “pointed at” for accumulation; a Universal Turing Machine would instead have to read through the tape. Even though both can compute the same function, a RAM machine is not interchangeable with a Turing machine for figuring Kolmogorov Complexity.

In other words, before introducing Kolmogorov Complexity, the computer/machine we are speaking of must be normalized, or else we are comparing apples and oranges. If a RAM machine can do something in one instruction because the operation is hard-wired, but a Turing machine requires 1,000 instructions, then the measured Kolmogorov Complexity of the result will depend on whether we are counting the “instructions” of the RAM computer or those of the equivalent Universal Turing Machine.
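The standard answer to this worry is that the additive constant in the invariance theorem is exactly the one-time cost of an interpreter: once machine A carries a fixed interpreter for machine B, every B-program runs on A at a fixed extra description cost, independent of the program or its input. A toy sketch of that idea, with entirely hypothetical mini-languages (`run_b`, `INTERPRETER_FOR_B`, and the description-length functions are all my own inventions, not real machine models):

```python
# Toy sketch: the "additive constant" is the one-time cost of an
# interpreter.  Machine A runs B-programs by prefixing a fixed
# interpreter text, so the description length on A exceeds the length
# on B by len(INTERPRETER_FOR_B) only, regardless of program or input.

def run_b(program, x):
    """Reference semantics of a toy B language: (opcode, operand) pairs."""
    acc = x
    for op, n in program:
        if op == "add":
            acc += n
        elif op == "mul":
            acc *= n
    return acc

# Fixed interpreter text machine A would carry; its length is the constant.
INTERPRETER_FOR_B = "def run_b(program, x): ..."  # stands in for the source above

def description_length_on_a(program):
    # A-description = fixed interpreter + encoded B-program
    return len(INTERPRETER_FOR_B) + len(repr(program))

def description_length_on_b(program):
    return len(repr(program))

prog = [("add", 3), ("mul", 2)]
overhead = description_length_on_a(prog) - description_length_on_b(prog)
print(run_b(prog, 5), overhead)  # overhead == len(INTERPRETER_FOR_B)
```

The overhead is the same for every B-program, which is why the complexities assigned by the two machines can differ by at most that constant.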

Also, it occurs to me that we ought to be looking at biological autonomous self-organizing complexity through the same kind of “lens” we use in quantum information theory, e.g., introducing von Neumann entropy for computational density matrices. IMHO, the variance in nature may be closer to quantum states than to classical physics.
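For concreteness, the von Neumann entropy mentioned above is S(ρ) = −Tr(ρ log ρ), computed from the eigenvalues of a density matrix. A minimal sketch (assumes NumPy is available; `von_neumann_entropy` is my own helper name):

```python
# Minimal sketch of the von Neumann entropy S(rho) = -Tr(rho log rho),
# computed in bits from the eigenvalues of a density matrix.
import numpy as np

def von_neumann_entropy(rho: np.ndarray) -> float:
    """Entropy in bits of a density matrix (Hermitian, trace 1)."""
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]        # drop numerical zeros
    return float(-np.sum(eigvals * np.log2(eigvals)))

pure = np.array([[1.0, 0.0], [0.0, 0.0]])     # pure state: S = 0
mixed = np.eye(2) / 2                         # maximally mixed qubit: S = 1 bit
print(von_neumann_entropy(pure), von_neumann_entropy(mixed))
```

A pure state has zero entropy, while the maximally mixed qubit reaches the maximum of one bit, which is the sense in which this entropy quantifies the “variance” of a quantum state.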

499 posted on 06/21/2003 11:04:07 PM PDT by Alamo-Girl
[ Post Reply | Private Reply | To 497 | View Replies ]

To: Doctor Stochastic
There is a theorem (the invariance theorem) which says that the complexities assigned by any two universal Turing machines differ by at most an additive constant. What's important is that this constant depends only on the two machines, not on the length of the input.

One of the things I've had to prove in the last year to the satisfaction of some others (I am lazy, lazy, lazy about actually doing proper rigorous publication of mathematics) is that this theorem suggests a greater degree of freedom than actually exists.

For all finite TMs, the additive constant for a given function implementation is the same, making the intrinsic Kolmogorov complexities of the systems identical. The constant is undefined for the UTM case, but then the intrinsic Kolmogorov complexity of a proper Universal Turing Machine is infinite, making the analysis a meaningless exercise. The assumption of infinite memory has obscured the usefulness of the theorem.

Therefore, all equivalent finite computational systems have the same Kolmogorov complexity even if their control functions vary. One of my Big Things has been rewriting computational information theory from the assumption of purely finite systems, treating the infinite case as an edge case rather than the default. By removing the assumptions of infinities I've come across a number of small but important differences in the basic theorems of computational information theory, enough so that I've managed to drag some previously skeptical mathematicians into the exercise.
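One concrete property of the purely finite setting: once the description language is fixed and finite, the minimal description length of any string is exactly computable by exhaustive search, with no machine-dependent slack. A toy sketch under heavy assumptions (the run-length-style `decode` language and `toy_k` are my own inventions, far simpler than any universal machine):

```python
# Toy model of "purely finite" complexity: fix one small description
# language (a run-length-style decoder, invented here for illustration)
# and compute the exact minimal description length by brute force.
# In a finite setting this minimum is well-defined for every string.

def decode(desc: str) -> str:
    """Toy decoder: 'k:unit' means repeat unit k times."""
    k, unit = desc.split(":", 1)
    return unit * int(k)

def toy_k(target: str) -> int:
    """Exact minimal description length under `decode`, by exhaustive search."""
    best = None
    for unit_len in range(1, len(target) + 1):
        if len(target) % unit_len:
            continue                          # unit must tile the target
        k = len(target) // unit_len
        desc = f"{k}:{target[:unit_len]}"
        if decode(desc) == target and (best is None or len(desc) < len(best)):
            best = desc
    return len(best)

print(toy_k("abababab"), toy_k("abcdefgh"))   # regular vs. irregular string
```

The regular string gets a short exact description while the irregular one does not, and unlike the asymptotic theory there is no additive-constant ambiguity: the minimum is a single definite number.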

507 posted on 06/22/2003 10:48:43 AM PDT by tortoise (Would you like to buy some rubber nipples?)
[ Post Reply | Private Reply | To 497 | View Replies ]
