1) Randomness is, in a sense, a measure of compressibility. A physically random system is one whose dynamics we cannot predict and that we must simulate in order to know its behavior. It is temporally incompressible. Of course, real physical processes tend to exhibit degrees of randomness: we can say some things about them, but perhaps only a limited number of things, to a limited accuracy, for a limited amount of time.
Is “that which cannot be predicted except by a full-scale simulation” a proper definition of a random physical process? How does this relate to Kolmogorov complexity?
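The connection to Kolmogorov complexity can be made concrete with a crude experiment. A general-purpose compressor gives only an upper bound on Kolmogorov complexity, but it is enough to separate an ordered process from a physically random one; this is a minimal sketch, with the byte counts chosen arbitrarily for illustration:

```python
import os
import zlib

# An ordered (periodic) byte string versus physically random bytes.
structured = b"abc" * 10_000     # 30,000 highly ordered bytes
random_ish = os.urandom(30_000)  # 30,000 bytes from the OS entropy source

def ratio(s: bytes) -> float:
    """Compressed size over original size: a crude upper bound
    on Kolmogorov complexity per byte."""
    return len(zlib.compress(s, 9)) / len(s)

print(f"structured: {ratio(structured):.3f}")  # far below 1
print(f"random:     {ratio(random_ish):.3f}")  # about 1 (incompressible)
```

The random stream is "temporally incompressible" in exactly the sense above: no shorter description than the data itself, so nothing to do but record (or simulate) it step by step.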
2) We need a measure of how difficult it is to recognize the order in an apparently random process. Kolmogorov complexity is a good description of the compressibility of a process, but it doesn’t tell us how hard it is to find that minimal description. We may think a process is quite random, give it a long description, and thus believe it has high Kolmogorov complexity, but with just one little insight we might find some beautiful underlying structure, and our minimal description length (and with it our estimate of the Kolmogorov complexity) drops enormously. Or it could take centuries of great thought to make such a simplification. Or it might not even be possible: Kolmogorov complexity is uncomputable in general, so no procedure is guaranteed to find the minimal description. That Kolmogorov complexity says nothing about the distribution of descriptions in “description space,” or about the ease or difficulty of discovering the minimal one, seems to me a serious shortcoming. A better understanding of this “difficulty of description” might tell us something interesting about scientific theories and why some types are “easier” or “harder” than others to discover.
These thoughts are inspired by a lunchtime conversation with http://www.mit.edu/~hyuen/ today on the nature and importance of randomness and a conversation last week with the USC theoretical computer science lunch bunch over a long-lost, unpublished paper by Len Adleman about randomness.