An obvious problem of the incremental approach is local minima in compression. Is it possible that the probability of ending up in a local minimum decreases if the first compression step is large? It would be very cool if that could be proved. What if greediness is even the optimal thing to do in this context? That would be sheerly amazing. What does “local minimum” actually mean in this context? Let’s say we have an encoding y of a sequence x whose length lies between that of the optimal program p and the original: l(p) ≤ l(y) ≤ l(x). A local minimum would be present if you cannot compress the encoding further. But what is y then? Can it be random, i.e. K(y) ≈ l(y)? No: after all, if the encoding is invertible, we can get y from x. And since we can get x from the optimal program p with l(p) ≤ l(y), it follows that K(y) ≤ l(p) + O(1), thus p can generate y as well. However, if only such a detour is possible, is that not exactly what is meant by a local minimum? That the decoding path from y to x does not pass through p but instead goes via x and inverse encodings? Yes, that is exactly what is meant.

Thus, the question is whether there are suboptimal codes such that they cannot be compressed further without going back to x. Of course, abundantly. Imagine a string of n zeros and a suboptimal code that splits it into two blocks at index i, filling each with zeros. The optimal code takes about log n bits, while the suboptimal one takes log i + log(n − i). Since i is arbitrary, there is no way the code can be compressed further: a truly random i can be taken around n/2, and the loss is maximal.
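As a toy sanity check on those bit counts (the `bits` function below is a crude estimate of the length of an integer written in binary; no self-delimiting overhead is modeled):

```python
from math import ceil, log2

def bits(k: int) -> int:
    """Bits needed to write the positive integer k in binary."""
    return max(1, ceil(log2(k + 1)))

n = 1_000_000
optimal = bits(n)                    # "n zeros": encode the single number n
i = 472_913                          # arbitrary, essentially random split index
suboptimal = bits(i) + bits(n - i)   # "i zeros" followed by "n - i zeros"

print(optimal, suboptimal)           # 20 vs 39 bits
```

The split wastes roughly an extra log n bits, and since i carries no regularity, nothing in the suboptimal code can be squeezed back out without reconstructing the original string.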

Thus, there are plenty of local minima. And that is maybe already the hint: if a compression step is suboptimal, then it looks more random than the optimal compression step. But is this really true? Can we not find examples where finding a small and a big regularity are interchangeable? There are many such examples, exactly when we talk about orthogonal features that reflect compression steps of different size. For example, if the points are on a circle of fixed radius and the angle goes from 0° to 90°, we can first define the quadrant (upper left or so) and then define the angle, or vice versa. Fully interchangeable. But we are not talking about features or partial compression. One step in the hierarchy is a fully generative model of the whole data set, not of a partial aspect or feature of it.

It is somehow like this: if you make a suboptimal compression step, you could have captured more regularity but did not do so, thereby introducing some randomness into your code, which will have to remain uncompressed. On the other hand, consider the sequence 1, 2, 3, …, n, 1, 2, 3, …, n. Representing this as two concatenated incremental functions leads to (1, 1, n), (1, 1, n), which takes around 2 log n bits, and we can keep going. Representing it as constant functions defined on position sets leads in a first step to a description about as long as the data, which is much larger. But ultimately both can be compressed as successfully. We need an example that would run into a local minimum, a dead end. This “introducing randomness” concept needs clarification. Why, given a sequence, do we introduce randomness if we split it into two arbitrary parts? This has actually happened in the current compressor version. Trying a random node on a sequence leads to such separations. Then we’d have to go back and try something different. There are many ways a suboptimal path could be taken.
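The two representations can be sketched like this (the encodings `(start, step, length)` and `(value, positions)` are illustrative assumptions, not a fixed format):

```python
# Two ways to represent 1, 2, ..., n, 1, 2, ..., n.
n = 100
seq = list(range(1, n + 1)) * 2

# A: two concatenated incremental functions, each (start, step, length).
rep_a = [(1, 1, n), (1, 1, n)]

# B: constant functions, one per value v, defined on the positions where v occurs.
rep_b = [(v, [v - 1, v - 1 + n]) for v in range(1, n + 1)]

decoded_a = [start + step * k for (start, step, length) in rep_a for k in range(length)]
decoded_b = [0] * (2 * n)
for v, positions in rep_b:
    for pos in positions:
        decoded_b[pos] = v

assert decoded_a == seq and decoded_b == seq
```

Both decode to the same sequence; the first is already tiny, while the second is as long as the data but leaves its regular parameters available for further compression steps.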

Maybe the line of argument should be that the longer the string is, the more probable it is to get into a dead end? Therefore, choose to reduce the length of the string as much as possible? The number of different partitions definitely increases heavily with the string length (the Bell number). But how should the probability of a dead end be computed? Via the number of programs able to print the sequence? Via the probability of getting a random number? That is, a number with low compressibility? Another argument may be that the number of random numbers is much greater for longer strings, since most are random anyway. But they are not truly random, they are just dead ends. How shall dead ends be defined?

I could still be in a dead end if I partly reconstruct the original. Hence, the criterion of incompressibility without going back to the original is not appropriate. A dead end could be present if the only thing you can do is to unpack at least part of the data again in order to compress things further. Hence, in order to represent dead ends, one has to break down compression into a set of programs. Let x = p_0 be the original sequence to be compressed. Incremental compression is defined as the process of finding a list of programs p_1, …, p_k such that for i = 1, …, k:

U(p_i) = p_{i−1}

and

l(p_i) < l(p_{i−1}).

The compression is optimal if l(p_k) = K(x). What is a dead end? If the length of some program has to increase again temporarily. However, this temporary operation can always be subsumed into a single operation. Also, the second condition can always be made true by subsuming non-decreasing program lengths into larger blocks.

This boils down to a more basic question: why would one use all those steps in the first place? Because it seems much simpler to find partial compression than the optimal one. Why is this so? Is it the nature of things? When doing practical compression, one often considers only a small part of the sequence, tries to find regularities there, and then tries to extend them to a larger part of the sequence, partitioning it on the way if necessary. Finding a compressing representation of a small part is much easier than for the whole sequence. Then, using universal induction, it is fairly probable that the sequence can be predicted at least to some extent and one gets more parts “for free”. And universal induction can predict sequences optimally! I think that something can come out of it.

The line of argument could be the following. Since it is easier to compress small subsequences of a sequence, it is reasonable to partition the sequence into such easily compressible subsets. The respective programs can then be concatenated and form a new sequence to be compressed. Doing so recursively may substantially decrease the time complexity of the compression / inversion algorithm. Let’s try some numbers. Assume a sequence of length n to be divided into subsets of length l. Finding an optimal program for the ith subset that minimizes Levin complexity takes about 2^l · t steps, where t is the runtime bound. Since we want to take small and simple subsets, l may be very small, making that search tractable. Ignoring the encoding of the subset positions, we continue. The new sequence will consist of the n/l concatenated programs, with the time to find them being (n/l) · 2^l · t. We keep going like this recursively until no further compression is possible. Assume that at every recursion step the length decreases by a factor α < 1. Then the number k of recursion steps needed is restricted by α^k n ≥ 1, thus k ≤ log n / log(1/α). The total computation time then amounts to

T = Σ_{j=0}^{k−1} (α^j n / l) · 2^l · t.

This can be approximated further since Σ_{j≥0} α^j = 1/(1−α). Thus we get

T ≤ (n/l) · 2^l · t / (1−α).

Thus, it is obvious that this approach is much better, since l can be picked fairly small in practice. Yeah, that’s what my intuition told me: this approach should be exponential in the size of the small problems and otherwise grow linearly with the number of recursion levels, string length and execution time. This is much, much, much faster than Levin search, which would take around 2^{K(x)} · t directly.
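Plugging in illustrative numbers (all constants arbitrary; `levin_cost` uses the standard 2^{l(p)} · t budget of Levin search):

```python
def levin_cost(k_x: int, t: int) -> float:
    """Direct Levin search over programs of length K(x): ~ 2^K(x) * t steps."""
    return 2.0 ** k_x * t

def incremental_cost(n: int, l: int, t: int, alpha: float) -> float:
    """Geometric-series bound on the recursive scheme: (n/l) * 2^l * t / (1 - alpha)."""
    return (n / l) * 2 ** l * t / (1 - alpha)

n, l, t, alpha, k_x = 10**6, 8, 100, 0.5, 1000
print(incremental_cost(n, l, t, alpha))   # 6.4e9 steps
print(levin_cost(k_x, t))                 # ~1e303 steps: hopeless
```

Exponential only in the small block size l, linear in the string length n, exactly as the bound suggests.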

This could be generalized fairly easily. The crux of the problem, however, is to show that such shorter sets of programs exist. In particular, it is important to show that there always exists a partition of the sequence such that each subsequence can be compressed. But for that I will need universal induction, I guess. And I have to learn the theory much more thoroughly.

#### Let’s collect what we may need.

Definition 4.5.3 says: monotone machines compute partial functions ψ: {0,1}* → {0,1}* ∪ {0,1}^∞ such that for all inputs p and q we have that ψ(p) is a prefix of ψ(pq).

Consider a sequence x, consisting of subsequences y and z. Then, the subadditive property of prefix complexity dictates

K(x) ≤ K(y) + K(z) + O(1)

by Example 3.1.2. However, we need a partition where we have roughly equality. But equality is only reached in Theorem 3.9.1 (symmetry of information):

K(y, z) = K(y) + K(z | y, K(y)) + O(1).

It is clear that the reason is the K-complexity of the information in y about z (Definition 3.9.1), which is basically the algorithmic version of mutual information, except that it is not symmetric:

I(y : z) = K(z) − K(z | y).

If it is zero, we get equality. Hence, if we want incremental compression of subsequences, we have to find partitions with minimal mutual information between the parts. But we have to take the complexity of the position sets into account as well. We may define a subsequence exactly as one minimizing the mutual information between it and the rest, plus the complexity of the position set on which it is defined. However, even if this leads to a subsequence not identical to the whole sequence, there is no guarantee that this kind of compression leads to further incremental compression.

Could it be that the fractal and self-similar nature of the world may be exactly the sort of data that is incrementally compressible? Maybe the complexity of the world has been built up in “slices”?

Why is the math of SOC systems so horrendously complex? Maybe because there is no short description of those phenomena?! If mathematics is a description, then only simple and regular things can be described mathematically; otherwise, we don’t get our heads around it. On the other hand, there is chaos, which is used to model randomness. Why are chaotic systems good generators of pseudo-random numbers? After all, the law is often very simple, hence the true complexity is fairly low. Thus we get numbers that seem very random / complex, but are in fact very simple.

It looks like the term “complexity” is not really captured by Kolmogorov’s definition. A random number is not complex. Let’s read about “logical depth”. “Both gases and crystals are structurally trivial” (p. 589) is fairly revealing. But their Kolmogorov complexities are fairly different. It is about structure. “A deep object is something really simple but disguised by complicated manipulations of nature or computation by computer.”

That reflects my intuition. We need to restrict ourselves to the “deep” subset of strings with low algorithmic complexity. Deep and simple strings.

One way of relating incremental compression to string depth is to acknowledge that it takes time to recursively unpack a representation. There is a lot of recursive reuse of computation output.

I started to consider partitions but partitions are only one way of identifying different components of a string. For example, ICA identifies that a data set is the sum of several components. This is also a type of compression, of course. Just thinking of partitions is too restrictive. However, it could be useful for our function network approach to define the scope of the representation and to derive an expression for the time complexity of the algorithm.

It’s funny: ICA just identifies a stimulus as the sum of simple stimuli. Let’s say we ignore that and research interval partitions first, then general partitions. What should be done is to investigate what fraction of all sequences is covered this way.

It depends on the number of levels k. If k = 1, then the program generates the output directly. This corresponds to all sequences anyway. If k = 2, one intermediate level p_1 is necessary to generate the output x. Here, we impose U(p_2) = p_1 and U(p_1) = x. What does that mean? Obviously, it means that the reference machine is able to create the output x from the intermediate program p_1. In the context of output reuse, one could think of the number of times that a square of a single-tape machine has been read and rewritten before the machine halts. We can restrict that to stage-wise computation by requiring that a square is not read-written the (s+1)st time before all other squares have been read-written the sth time. This is how stage-wise computation can be defined.

We can define a square as “used” once it is read for the first time after having been written.

Actually, the computation process of any deep string can be expressed as stage-wise computation. After all, before writing to a square the (s+1)st time, one can read the content of all squares at stage s and write the very same content back into them (that is, without changing them). This way, they arrive at stage s+1. This proves that long computation time is equivalent to computation with many stages!

What else would I like to prove? That compressing deep sequences is much faster than Levin search.

***

Let’s pose the question differently. Let’s say the shortest program generating x is p. And let q be an intermediate program, x = U(q), with U(p) = q and l(p) < l(q) < l(x). Usually, Levin search allows finding p in time 2^{l(p)} · t. The crucial question is whether the existence of the intermediate stage q allows finding p faster.
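For intuition, here is a minimal brute-force enumeration in the spirit of Levin search, over a toy program space of arithmetic expression strings (a real Levin search dovetails all programs on a universal machine with budget ~2^{l(p)} · t; this sketch only shows the exponential blow-up with program length):

```python
from itertools import product

def brute_search(target: int, max_len: int = 6):
    """Enumerate expression strings shortest-first until one evaluates to target."""
    alphabet = "0123456789+*"
    tried = 0
    for length in range(1, max_len + 1):      # shorter programs first
        for prog in product(alphabet, repeat=length):
            src = "".join(prog)
            tried += 1
            try:
                if eval(src) == target:       # "run" the candidate program
                    return src, tried
            except Exception:
                continue                      # syntactically invalid program
    return None, tried

prog, tried = brute_search(36)
print(prog, tried)   # -> 36 55
```

The number of candidates grows as |alphabet|^length, the toy analogue of the 2^{l(p)} factor; partial matches give no credit, which is exactly the blindness of the search.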

The intuition is as follows. Levin search is so slow because partial progress does not help in any way to achieve further progress. This is the case since p is random, being the shortest program. Therefore, after having found, say, a partial program p′ with l(p′) < l(p), the search for p is in no way easier, since p′ does not contain any information about the rest of p; otherwise p would not be random.

However, what if knowing q made things faster? That would require that the knowledge of q could in any way accelerate finding p. How so? After all, knowing x does not accelerate finding p through Levin search. Levin search for q is also deadly, since l(q) > l(p). What if we split q = q_1 q_2 with l(q_1) + l(q_2) = l(q), such that q_1 generates x_1, which is part of x: x = x_1 x_2, on a monotone machine? In that case, Levin search will find q_1 fairly quickly. Now, x_2 is not random and can be predicted given a program generating x_1. The correct program would be q, of course. Hmm…

How can p ever be synthesized, if it is random? It has to depend on x. If p could be concatenated from two independently searchable programs, then the cost of finding p would reduce drastically. Why can it not be done on a monotone machine? Is this not always the case on a monotone machine? No: the second part cannot generate its share of the output unless the first part has been run before. This temporal contingency is mediated by the work tape. That’s why. No, the reason is different. Based on x_1 there is no guarantee that we will find the prefix of p. We will likely find a shorter program p′, which is not a prefix of p. Thus, the search won’t ever find p that way unless it tries all strings.

That doesn’t help either. In my demonstrator, it does come to mind that higher levels do get increasingly simpler; the entropy decreases. Maybe q is easier to crack, since it is “simpler” than x? That does not make sense, since q and x have got the same algorithmic complexity, which is l(p)! But in the demonstrator, the numbers tend to get smaller and the intervals narrower.

Let’s turn back to predicting x_2. Finding q from scratch is too difficult, since it takes 2^{l(q)} combinations. But one could find a smaller program that not only explains x_1 but also a part of x_2. That’s the whole point. The hierarchy is split up like a tree: every program part generates several parts. Therefore, the notation has to be different: let p generate q = q_1 q_2, let q_1 generate x_1 x_2 and q_2 generate x_3 x_4. Of course, l(q) < l(x) and l(p) < l(q). But it is also true that l(q_1) < l(x_1 x_2) and l(q_2) < l(x_3 x_4). What follows? We can use x_1 to find q_1. From q_1 we can already hypothesize about p and potentially find it. Or find a different program that extends q_1 correctly to q. The point is that x_2 comes for free given q_1. And x_3 and x_4 are then predicted readily. Even q_1 could be enough to find p. There has to be a synergy between those layers! Just like in the hierarchical Bayesian approach. And here is the synergy. In the bottom-up direction, although q_2 is necessary to find p, q_1 narrows down the set of possible programs generating q. In the top-down direction, given p we can generate q_2, even if we only know q_1.

The problem is, even if q_1 narrows down the set of possible programs p, does it really relieve us from having to loop through all 2^{l(p)} combinations? No, certainly not in the beginning. And we also don’t get around looping through the 2^{l(q_1)} combinations for q_1. But how does it help to find q_2? Or even p, given that we found q first? It would already be very helpful if we could search for q_2 independently. And if that works, it should do so only because of the presence of the intermediate layer.

***

The crux of the problem is that Levin search does not “use” the information in the sequence that it tries to compress. It only loops through all programs until one of them generates the sequence. That’s the most stupid thing one can do. What the demonstrator does is to detect regularities and use them to compress the sequence incrementally. If detecting the regularities is sufficiently cheap, then this might lead to a substantial decrease in the time complexity of the algorithm. Cheapness is exactly what follows from the large number of stages. We want to have those stages exactly because each stage transition is much cheaper than finding the final shortest program in one step.

Can we use the formalization of the P-test in order to measure the partial regularities at each stage? According to Definition 2.4.1, we require for a P-test δ that

Σ { P(x) : δ(x) ≥ m, l(x) = n } ≤ 2^{−m}

for all n. For a uniform test, we have

$latex d\left\{ x:\delta(x)\ge m,l(x)=n\right\} \le2^{n-m}$

The statement being: for a sequence of length n drawn randomly from a uniform distribution, any feature δ reaches level m with probability less than 2^{−m}.
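The bound can be checked numerically for a toy test (assumption: δ counts leading zeros, a simple enumerable feature; for it the bound happens to be tight):

```python
# Count how many n-bit strings reach test level m, and compare to 2^(n-m).
n, m = 10, 3

def delta(bits: str) -> int:
    """A simple randomness-deficiency test: number of leading zeros."""
    return len(bits) - len(bits.lstrip("0"))

count = sum(1 for i in range(2 ** n)
            if delta(format(i, f"0{n}b")) >= m)

assert count <= 2 ** (n - m)   # d{x : delta(x) >= m, l(x) = n} <= 2^(n-m)
print(count, 2 ** (n - m))     # -> 128 128
```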

If we want a universal test for randomness, then we have to consider a test dominating *all* such δ’s. However, if we fix a particular sequence x, then only a subset of tests is enough to enumerate all its nonrandom features, and a much smaller test is enough to measure its randomness. What if at each stage a different nonrandom feature is eliminated until a fully random shortest program is reached? The problem is that the δ’s are just tests for randomness and not full-fledged representations. I have to show somehow that I can eliminate one nonrandom feature at a time.

***

I somehow have to formulate the intuition that it is much simpler to make an incremental compression step than to find the shortest program immediately. Why is this so anyway? Because the compression goes along the lines of the unpacking, the generation of the sequence. It is just the reverse trajectory that is traversed. Returning to the previous reasoning, the reason why it is simpler is that one can take a fraction of sequence x, say x_1, and use it to find large parts of the intermediate code q, say q_1. It would be much harder to find any part of p, because of the intricacy of the computation. But is exhaustive universal search not what we do for small fractions like x_1? Is that not the process of detecting regularities? Consider a sequence like 1, 3, 4, 6, 7, 9, 10, …. Taking differences of neighbors leads to 2, 1, 2, 1, …, which in turn leads to implementing an alternation function and an incremental one, neglecting lengths. In my demonstrator, the criterion for adapting is the decrease in entropy. The hunch is that sequences with lower entropy are “easier” to predict. But is this true? After all, the algorithmic complexity has remained the same. The word “ease” is used here in the sense of the extent to which a small part of the sequence can be used to predict large parts of it. In our case, that happens quite often. After all, 2, 1, 2, 1, … is much easier to predict than 1, 3, 4, 6, 7, 9, 10, …. Which means that a fairly small program comes to mind quickly in the first case. Does it mean that small parts of low-entropy sequences have lower complexity than equally small parts of high-entropy ones?

That’s a very interesting question. If it is true, then one could set up a function h(m) = K(x_{1:m}) to characterize the situation. For low-level sequences such as our x, h would increase quickly to the complexity of the full sequence: h(m) ≈ K(x) for small m already, while for the high-level sequence q it takes longer. What does it mean? After all, the complexity is equal: K(x) = K(q) + O(1). After all, easy to predict means that the initial complexity of the prefix is low. Or does it just mean that the depth of the sequence goes down? Probably the latter. After all, high-level sequences like q are more shallow than low-level ones like x. But that’s true by definition. “Easier” means that it takes less time to find p given q than given x. After all, the only way to find p from x is via q, unless one uses universal search. But why? It must be because trying to explain a small fraction of q leads to finding p more probably than the corresponding fraction of x does.

I have confused something. The program generating q is of course simpler and shorter than the one generating x via q! After all, the latter has to encode how to unpack q after having generated it! Probably, one can just concatenate those programs. Thus, we have x = U(e p) with q = U(p) and x = U(q), on a monotone machine. p does not generate x directly. It requires an additional program e that tells the machine to execute whatever p outputs, recursively.

Let’s imagine things concretely. Let’s say we have a universal monotone machine with one-directional input and output tapes, T_in and T_out respectively, and two read-write, bi-directional work tapes, T_1 and T_2. A program e m p enters the input tape, where e is a fixed subprogram which tells the machine to execute a program recursively m times. Thus, when we write U(e m p) = x, at each counter value i the machine takes the current program from T_1 (or from T_in if i = 1), copies it to T_2 and treats it as input. It executes it, writes the output to T_1 and increases the counter i. When i = m, the machine executes the program on T_1, writes the result to T_out and halts. This procedure is encoded in e.
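A minimal emulation of this recursive unpacking (assuming, purely for illustration, that a “program” is a Python expression string whose value is the next-stage string):

```python
def unpack(program: str, m: int) -> str:
    """The counter loop that e implements: execute the current program,
    treat its output as the next program, repeat m times."""
    current = program
    for _ in range(m):
        current = eval(current)   # run stage i; its output feeds stage i+1
    return current

q = "'ab' * 10"   # a program generating x = "abab...ab"
p = repr(q)       # a second-level program generating q
x = unpack(p, 2)  # two-stage unpacking: p -> q -> x
assert x == "ab" * 10
```

The fixed interpreter loop plays the role of e; only the innermost program changes from stage to stage.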

Thus, in our 2-stage case, we have x = U(e 2 p), while q = U(p) and x = U(q). That’s the real relationship. Therefore, K(x) ≤ l(p) + l(e) + O(1). And generally, K(x) ≤ K(q) + l(e) + O(1). But e is fairly small and basically constant, thus the complexities of x and q are still roughly equal: K(x) ≈ K(q).

How about the following example: x = 1, 1, 2, 2, 3, 3, …, n, n. Detecting the regularity that two neighbors are always equal makes things simpler and reduces us to 1, 2, 3, …, n plus a short program telling the machine to copy every entry, which we are going to neglect. This is much easier to compress to (1, 1, n) than to do that with x directly. But should we really neglect the copying program? Is it not exactly the one making things easier? Is it not the one stripping away complexity? It is. It is exactly one of the decomposing factors of the sequence that is represented incrementally.

Let’s choose a different representation then and say that a program consists of an operator f and a parameter p, where the parameters are the part of the program being compressed further. In the above example, the higher level consists of f_1(p_1), where f_1 is the incremental function that uses p_1 = (1, 1, n) to create 1, 2, …, n, while the operator is simply copied! Then, f_0 is the copying/constant function/operator and p_0 = 1, 2, …, n. Together, they create x = f_0(p_0). Remarkably, the operators are probably not compressed any more, simply copied into the level above. In that case f_0 is a part of the level above: f_0 f_1(p_1). It is the parameters that devour most of the entropy / program length. Thus, the general rule is to compress like x = f_0(p_0), p_0 = f_1(p_1), p_1 = f_2(p_2), …, until we arrive at the highest compression level p_k. If x is decomposable in such a way, then it seems plausible that a compression algorithm tries to unravel this nested computation.
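A sketch of this nested operator/parameter generation for the example above (the names `f0`, `f1`, `p1` mirror the text; the implementation is illustrative):

```python
def f1(params):
    """Incremental operator: (start, step, length) -> arithmetic sequence."""
    start, step, length = params
    return [start + step * k for k in range(length)]

def f0(seq):
    """Copying operator: repeat every entry twice."""
    return [v for v in seq for _ in range(2)]

n = 5
p1 = (1, 1, n)   # the parameter, the only part left to compress further
p0 = f1(p1)      # intermediate level: 1, 2, ..., n
x = f0(p0)       # data level: 1, 1, 2, 2, ..., n, n
assert x == [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
```

The operators stay fixed across levels; all the remaining entropy lives in the parameter tuple p1.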

The idea was to somehow argue that a single compression step

p_{i−1} → p_i with U(p_i) = p_{i−1} and l(p_i) < l(p_{i−1})

is “easier” in terms of time complexity than finding the shortest program in a brute-force way, requiring 2^{l(p)} · t time steps. Could it mean that it is easier to find features of the data than a whole feature basis? Maybe that is the relation to features. Are features the correct term? Usually, those mean partial descriptions of data. I really mean full bases / descriptions stacked on top of each other.

An important aspect of the demonstrator is that, in the previous notation, p_1 is inferred from x together with a function **computing** p_1 from x. Given that, all the rest of p_1 can be computed from x. That’s what makes things easy in uncovering the intermediate layer. And the only criterion that I have for such a function g is that it creates a shorter description than the original one: p_1 = g(x) with l(p_1) < l(x). Therefore, already fairly simple functions can do the job. Assume we can find g through Levin search such that it explains a part x_1, and invert the computation to get x_1 back from p_1, with g being sufficiently short, even much shorter than p_1, so that this is possible fairly quickly. Then apply this function to other parts of x, if possible. This will lead to a shorter description. If that’s possible, then we can be essentially as fast as O(k · 2^{l(g)} · t) or so, where k is the number of levels.

Li and Vitányi write on page 403: “we identify a property of elements of a set with the subset consisting of all elements having the property”. That means that if A is such a property, then the statement that x has property A simply means that x ∈ A. Of course, the description length of x is then immediately bounded not just by l(x), but by log |A|. The subsequent discovery of such properties with each level means that x has many properties and is in their intersection: x ∈ A_1 ∩ … ∩ A_k. Typically, the size of each such subset is exponentially smaller than the size of the full set of sequences. Hence, the central question is how quickly we can find a property. Let a property A be defined by a partial explanation of x, which basically means that x ∈ A. The idea is that a small part of x can be used to find A. Why so? Because it is improbable that a property holding for one part of x does not hold for any other part of x. The intriguing part is that those other parts of the description can be computed directly from those other parts of x without reverting to exhaustive search! Why is it possible? But there is no guarantee that it is possible. What one can try is to extend the current explanation as far as possible, and start afresh for the rest of the sequence. I think the significant compression is achieved by being able to extend the current little explanation to other parts of x, like x_2, such that p_1 can generate x_2, while p_1 was found using x_1 only. Why is this possible? Or probable? Well, because of universal induction. If some compression is achieved, chances are that it is predictive. Yes.
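A toy illustration of how stacked properties shrink the candidate set (the predicates below are arbitrary stand-ins for nonrandom features; the point is only the log-of-intersection bound):

```python
from math import log2

n = 16
universe = range(2 ** n)                   # all "strings" of length 16, as integers

A1 = {x for x in universe if x % 2 == 0}   # property: even
A2 = {x for x in universe if x % 3 == 0}   # property: divisible by 3
A3 = {x for x in universe if x < 2 ** 10}  # property: small

candidates = A1 & A2 & A3                  # x lies in the intersection
print(n, round(log2(len(candidates)), 2))  # description drops from 16 to ~7.42 bits
```

Each discovered property cuts the remaining candidate set by a roughly constant factor, so k properties cost only about the sum of their individual savings.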

And why is this not possible directly with p? By definition. Since x is deep, the only way to generate it from p is through q or other intermediate stages! We could, for a start, restrict the set of sequences to hierarchies, with x = x_1 ⋯ x_s, q = q_1 ⋯ q_s and U(q_i) = x_i, thus rendering the different parts of a sequence independent. Notice that the x_i’s are mapped to q_i’s, which may form different partitions than those used for further computation. Hence, those will have to be looked for independently, each taking up 2^{l(q_i)} · t. In total for a level, we get Σ_i 2^{l(q_i)} · t, which is much, much less than 2^{Σ_i l(q_i)} · t. However, this is a trivial result, given the restriction. Hence, I get out just what I have put in. Damn it.

Maybe I can just learn from it that if independent parts of a sequence occur, then things become easy very quickly.

It has to be related to those damn feature bases: a direct mapping from the data x to a slightly compressed description, the parameters of which can be compressed further.