However, I continue to disagree. In the instruction example you gave, the program used a pointed-at memory location as an accumulator; a Universal Turing Machine would have to read through the tape to reach it. Even though both can compute the same function, a RAM machine is not the same machine for the purpose of figuring Kolmogorov complexity.
In other words, before introducing Kolmogorov complexity, the computer/machine we are speaking of must be normalized, or else we are comparing apples and oranges. If a RAM machine can do something in one instruction because the operation is hard-wired, but a Turing machine requires 1,000 instructions, then the Kolmogorov complexity of the result will depend on whether we are measuring against the RAM computer's instruction set or the equivalent Universal Turing Machine.
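To make the machine-dependence concrete, here is a toy sketch (mine, not from the discussion above) that uses two off-the-shelf compressors, zlib and bz2, as stand-ins for two different reference machines. Compressed length is a standard upper-bound proxy for Kolmogorov complexity, and the same string gets a different description length under each "machine"; the gap is exactly the machine-dependent constant that the invariance theorem sweeps under the rug.

```python
import bz2
import zlib

def k_estimate(data: bytes, machine) -> int:
    """Upper-bound estimate of Kolmogorov complexity: the length of the
    compressed description of `data` under a particular reference
    'machine' (here, a compressor standing in for that machine)."""
    return len(machine(data))

# The same highly regular string, measured against two "machines".
x = b"abcd" * 1000
under_a = k_estimate(x, zlib.compress)  # description length on machine A
under_b = k_estimate(x, bz2.compress)   # description length on machine B

# Both bounds are far below the raw length, but they are not equal:
# the difference is the additive, machine-dependent constant.
print(under_a, under_b, abs(under_a - under_b))
```

The numbers themselves do not matter; the point is that neither bound is "the" complexity of x until you fix which machine you are talking about.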
Also, it occurs to me that we ought to be looking at biological, autonomous, self-organizing complexity with the same kind of lens we use in quantum information theory, e.g., introducing von Neumann entropy for computational density matrices. IMHO, the variance in nature may be closer to quantum states than to classical physics.
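For what it's worth, the von Neumann entropy itself is easy to compute. The idea of a "computational density matrix" is speculative on my part, but the entropy formula S(rho) = -Tr(rho log2 rho) is standard quantum information theory, and a minimal sketch looks like this:

```python
import numpy as np

def von_neumann_entropy(rho: np.ndarray) -> float:
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues of the
    density matrix (Hermitian, trace 1, positive semidefinite)."""
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]  # 0 * log 0 = 0 by convention
    return float(-np.sum(eigvals * np.log2(eigvals)))

# A pure state carries zero entropy...
pure = np.array([[1.0, 0.0],
                 [0.0, 0.0]])
# ...while the maximally mixed qubit state carries exactly one bit.
mixed = np.eye(2) / 2

print(von_neumann_entropy(pure))
print(von_neumann_entropy(mixed))
```

The pure/mixed contrast is the analogy I have in mind: classical Shannon entropy cannot distinguish a superposition from a statistical mixture, whereas the density-matrix formalism can.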
One of the things I've had to prove in the last year, to the satisfaction of some others (I am lazy, lazy, lazy about actually doing proper, rigorous publication of mathematics), is that this theorem allows a greater degree of freedom than actually exists.
For all finite TMs, the additive constant for a given function implementation is the same, making the intrinsic Kolmogorov complexities of the systems identical. The constant is undefined in the UTM case, but then the intrinsic Kolmogorov complexity of a proper Universal Turing Machine is infinite, which makes the analysis a meaningless exercise. The assumption of infinite memory has confused the usefulness of the theorem.
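For reference, the theorem I mean is the usual invariance theorem: for any two universal machines U and V there is a constant, independent of the string being described, such that

```latex
K_U(x) \le K_V(x) + c_{U,V} \quad \text{for all strings } x,
```

where c_{U,V} depends only on the pair of machines (it is the length of a program for U that simulates V). My complaint is about what happens to that constant at the extremes: for fixed finite machines it is pinned down by the machines themselves, while granting U an unbounded tape is precisely what lets the bookkeeping degenerate.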
Therefore, all equivalent finite computational systems have the same Kolmogorov complexity, even if their control functions vary. One of my Big Things has been rewriting computational information theory from the assumption of purely finite systems, treating the infinite case as an edge case rather than the assumed case. I've come across a number of small but important differences in the basic theorems of computational information theory once the assumptions of infinities are removed, enough so that I've managed to drag some previously skeptical mathematicians into the exercise.