Mining extensible motifs
The motif extraction procedure described in Table 1 essentially constructs the inexact suffix tree of [1] implicitly, in a different order. The input is a string s of size n and two positive integers, K and D.
The extensibility parameter D is interpreted in the sense that up to D dot characters are allowed between two consecutive solid characters. The output is all maximal extensible (with D spacers) patterns that occur at least K times in s. Incidentally, the algorithm can be adapted to extract rigid motifs as a special case; for this, it suffices to interpret D as the maximum number of dot characters between two consecutive solid characters.
The algorithm works by converting the input into a sequence of possibly overlapping cells: a cell is the smallest substring of any pattern on s that has exactly two solid characters, one at the start and the other at the end position of the substring. A maximal extensible pattern is a sequence of cells.
Initialization phase
The cell is the smallest extensible component of a maximal pattern, and the string can be viewed as a sequence of overlapping cells. If no don't-care characters are allowed in the motifs, then the cells are non-overlapping. The initialization phase has the following steps.
Step 1: Construct all patterns that have exactly two solid characters, separated by no more than D dot (".") characters. This is done by scanning the string s from left to right; for each pattern, we store the start and end position of every occurrence. For example, if s = abzdabyxd and K = 2, D = 2, then the patterns generated at this step are: ab, a.z, a..d, bz, b.d, b..a, zd, z.a, z..b, da, d.b, d..y, a.y, a..x, by, b.x, b..d, yx, y.d, xd, each with its occurrence list. Thus ℒ(ab) = {(1, 2), (5, 6)}, ℒ(a.z) = {(1, 3)}, and so on.
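The following sketch illustrates Step 1 (the function name and representation are ours, not the paper's): it enumerates every cell together with its occurrence list, using 1-based positions as in the example above.

from collections import defaultdict

def generate_cells(s, D):
    """Enumerate all cells: two solid characters separated by 0..D dots."""
    cells = defaultdict(list)  # pattern -> list of (start, end) occurrences
    for i in range(len(s)):
        for gap in range(D + 1):       # number of dot characters in between
            j = i + gap + 1
            if j >= len(s):
                break
            cells[s[i] + "." * gap + s[j]].append((i + 1, j + 1))  # 1-based
    return cells

cells = generate_cells("abzdabyxd", D=2)
print(cells["ab"])   # [(1, 2), (5, 6)]
print(cells["a.z"])  # [(1, 3)]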
Step 2: The extensible cells are constructed by combining all the cells with at least one dot character and the same start and end solid characters. The location list is updated to reflect the start and end position of each occurrence. Continuing the previous example, b-d is generated at this step with ℒ(b-d) = {(2, 4), (6, 9)}. All cells m with |ℒ(m)| < K are discarded. In the example, the only surviving cells are ab and b-d, with ℒ(ab) = {(1, 2), (5, 6)} and ℒ(b-d) = {(2, 4), (6, 9)}.
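A companion sketch for Step 2, reusing generate_cells and the defaultdict import from the sketch above (again, the helper names are ours): dotted cells sharing the same pair of solid characters collapse into one extensible "-" cell, and cells occurring fewer than K times are discarded.

def merge_extensible_cells(cells, K):
    """Collapse dotted cells with equal solid end characters into '-' cells."""
    merged = defaultdict(list)
    for pattern, occurrences in cells.items():
        if "." in pattern:
            key = pattern[0] + "-" + pattern[-1]  # e.g. b.d and b..d -> b-d
        else:
            key = pattern
        merged[key].extend(occurrences)
    # keep only cells that meet the minimum quorum K
    return {p: sorted(occ) for p, occ in merged.items() if len(occ) >= K}

surviving = merge_extensible_cells(generate_cells("abzdabyxd", D=2), K=2)
print(surviving)  # {'ab': [(1, 2), (5, 6)], 'b-d': [(2, 4), (6, 9)]}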
Iteration phase
Let B be the collection of cells. If m = Extract(B), then m ∈ B and there does not exist m' ∈ B such that m' ∗ m holds, where m1 ∗ m2 if one of the following holds: (1) m1 has only solid characters and m2 has at least one non-solid character; (2) m2 has the "-" character and m1 does not; (3) m1 and m2 have d1, d2 > 0 dot characters respectively and d1 < d2.
Further, m1 is ~-compatible with m2 if the last solid character of m1 is the same as the first solid character of m2. If m1 is ~-compatible with m2, then m = m1 ~ m2 is the concatenation of m1 and m2 with an overlap at the common end and start character, and ℒ(m) = {(x, y) | (x, l) ∈ ℒ(m1) and (l, y) ∈ ℒ(m2)}. For example, if m1 = ab and m2 = b.d, then m1 is ~-compatible with m2 and m1 ~ m2 = ab.d. However, m2 is not ~-compatible with m1.
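In code, the ~ operation might look as follows (a sketch with our own names; the location lists are joined on the shared position l):

from collections import defaultdict

def tilde_compatible(m1, m2):
    """m1 ~ m2 is defined when m1's last character equals m2's first."""
    return m1[-1] == m2[0]

def tilde_concat(m1, occ1, m2, occ2):
    """Concatenate with a one-character overlap; join the location lists."""
    assert tilde_compatible(m1, m2)
    starts = defaultdict(list)        # occurrences of m2, indexed by start
    for l, y in occ2:
        starts[l].append(y)
    pattern = m1 + m2[1:]             # overlap the common character
    occurrences = [(x, y) for x, l in occ1 for y in starts[l]]
    return pattern, occurrences

# ab ~ b.d = ab.d; joining ℒ(ab) = {(1, 2), (5, 6)} with ℒ(b.d) = {(2, 4)}:
print(tilde_concat("ab", [(1, 2), (5, 6)], "b.d", [(2, 4)]))
# ('ab.d', [(1, 4)])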
NodeInconsistent(m) is a routine that checks whether the new motif m is non-maximal with respect to earlier non-ancestral nodes, by examining the location lists. The procedure is best described by the pseudocode shown in Table 1; Steps G:18–19 detect the suffix motifs of already detected maximal motifs. The result is the collection of all the maximal extensible patterns.
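Since Table 1 is not reproduced here, the following is only a plausible illustration of the kind of location-list test involved (our own formulation, not the paper's actual routine): a motif m is uninteresting when each of its occurrences falls inside an occurrence of an earlier motif m', so that m could be made more specific without changing its occurrence set.

def covered_by(occ_m, occ_m_prime):
    """True if every occurrence interval of m lies inside one of m'."""
    return all(any(x2 <= x and y <= y2 for x2, y2 in occ_m_prime)
               for x, y in occ_m)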
A tight time complexity for the procedure is not easy to come by; however, if we let M be the number of extensible maximal motifs and S the size of the output – i.e., the sum of the sizes of the motifs and of the corresponding location lists – then the time taken by the algorithm is O(SM log M). In experiments of the kind described later in the paper, on a 3 GHz machine, running times typically ranged from a few minutes to half an hour.
Compression by extensible motifs
Traditionally, the design of codebooks used in compression proceeds from specifications that are either statistical or syntactic. The quintessential statistical approach is represented by Huffman codes, in which symbols are ranked according to their frequencies and then assigned, in order of decreasing probability, to longer and longer codewords. In a syntactic approach, the codebook is built out of patterns that display certain features, e.g., robustness in the face of noise, loss of synchronization, etc. The focal point in these developments is the structure of the codewords. For instance, a codeword is a pattern w of length m such that any other codeword must be at a distance of at least d from w, the distance being measured in terms of errors of a certain type. We can have only substitutions in the Hamming variant; substitutions, insertions and deletions in the Levenshtein variant; and so on. Of course, the two aspects blend in the final code. With Huffman codes, for instance, once the characters are statistically ranked, a code with certain syntactic characteristics, notably the prefix property, is built. Likewise, once the codebook of an error-correcting code is designed, the statistics of the source are taken into account for encoding. However, these two stages are, as a rule, carried out somewhat independently.
The notion of a motif that we adopt tightly combines the structure of the motif pattern, as described by its syntactic specification, with the statistical measure of its occurrence count. This supports a notion of saturation that finds natural use in the dual contexts of structural inference and compression. As said, this saturation condition mandates that motifs that could be made more specific without altering their set of occurrences do not bear interest and may be discarded.
In this section, we present lossy off-line data compression techniques by textual substitution, in which the patterns used in compression are chosen among the extensible motifs found to recur in the textstring with a minimum pre-specified frequency. As mentioned, motif discovery and motif-driven parses of various kinds have been previously introduced and used in [5]; however, the motifs considered in those studies are "rigid".
The transition from rigid to extensible motifs requires a complete restructuring of the combinatorial and computational tools for their extraction and implementation. Specifically, one needs:
- An algorithm for the extraction of flexible motifs.
- A criterion for choosing and encoding the motifs to be used in compression.
- A new suite of software programs implementing the whole.
The orchestration of these ingredients is briefly described next. We regard the motif discovery process as distributed over two stages: the first stage unearths motifs endowed with a certain set of properties, and the second implements them in the compression. The first part was dealt with in the preceding section. As with rigid motifs in [5], the flexible ones presented here may be restored at the receiver using information about gap filling, to be transmitted separately. In images, for instance, a tremendous amount of compression is attained, albeit with a large loss of 40% or so, yet simple predictors in the form of linear interpolation restore more than 95% of the original.
The methods presented here belong to a class of off-line textual-substitution schemes that try to reap, through greedy approximation, the benefits of otherwise intractable optimal macro schemes [9]. The specific heuristic followed here is based on a greedy iterative selection (see, e.g., [10]), which consists of identifying and using, at each iteration, a substring w of the text x such that encoding all instances of w in x yields the highest possible contraction of x. This process may also be interpreted as learning a "straight-line" grammar of minimum description length for the sourcestring, for which we refer to [5, 11, 12] and references therein. Off-line methods are not always practical and can be computationally imposing even in approximate variants, yet they do find use in contexts and applications such as mass production of CD-ROMs, backup archiving, etc. (see, e.g., [13]). Paradigms of steepest-descent approximation have delivered good performances in practice and also appear to be the best candidates in terms of the approximation achieved to optimum descriptor sizes [14].
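A skeleton of this greedy iterative selection, in our own notation (extract_candidates, estimated_gain and substitute are placeholders for the motif extraction of the previous section, the gain estimate G(m) derived below, and the actual encoding step):

def greedy_compress(text, extract_candidates, estimated_gain, substitute):
    """Steepest descent: repeatedly encode the motif of highest estimated gain."""
    while True:
        candidates = extract_candidates(text)
        best = max(candidates, key=estimated_gain, default=None)
        if best is None or estimated_gain(best) <= 0:
            return text                # no further compression is achieved
        text = substitute(text, best)  # encode all occurrences of best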
Our steepest descent paradigm performs a number of phases, each consisting of the selection of the pattern to be used for compression, followed by the actual substitution and encoding. The process stops when no further compression is achieved. The sequence representation at the outset is finally pipelined into some of the popular encoders, and the best among the overall scores thus achieved is retained. Clearly, at any stage it is impossible to choose the motif on the basis of the actual compression eventually conveyed by that motif. The decision must be based on an estimate that takes into account the mechanics of encoding. In practice, we estimate at log(i) the number of bits needed to encode the integer i (we refer to, e.g., [4] for reasons that legitimate this choice). In one scheme [10], one eliminates all occurrences of m and records in succession m, its length, and the total number of its occurrences, followed by the actual list of such occurrences. Letting |m| denote the length of m, D_m the number of extensible characters in m, f_m the number of occurrences of m in the textstring, s_m the number of characters occupied by the motif m in all its occurrences on s, |Σ| the cardinality of the alphabet and n the size of the input string, the compression brought about by m is estimated by subtracting from the s_m log |Σ| bits originally encumbered by this motif on s the expression |m| log |Σ| + log |m| + f_m D_m log D + f_m log n + log f_m charged by encoding, thereby obtaining:

G(m) = (s_m - |m|) log |Σ| - log |m| - f_m (D_m log D + log n) - log f_m

This is accompanied by a loss L(m), represented by the total number of don't cares introduced by the motif, expressed as a percentage of the original length. If d_m is the total number of such gaps introduced across all of m's occurrences, this is: L(m) = d_m / s_m.
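Numerically, with log taken to base 2 (an assumption on our part, consistent with counting bits), the estimate can be evaluated as below; the sample figures are invented purely for illustration.

from math import log2

def G(s_m, m_len, f_m, D_m, sigma, D, n):
    """Estimated gain in bits, per the formula above (log = log2 assumed)."""
    return ((s_m - m_len) * log2(sigma) - log2(m_len)
            - f_m * (D_m * log2(D) + log2(n)) - log2(f_m))

def L(d_m, s_m):
    """Loss: don't cares introduced, as a fraction of the covered characters."""
    return d_m / s_m

# e.g. a motif of length 10 with one extensible character, occurring 20 times
# and covering 240 characters of a string of length 10000 over 4 symbols, D = 2:
print(G(s_m=240, m_len=10, f_m=20, D_m=1, sigma=4, D=2, n=10000))  # ~166.6 bits
print(L(d_m=40, s_m=240))                                          # ~0.167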
Other encodings are possible (see, e.g., [10]). In one scheme, for example, every occurrence of the chosen pattern m is substituted by a pointer to a common dictionary copy, and we need to add one bit to distinguish original characters from pointers. The original encumbrance posed by m on the text is in this case (log |Σ| + 1) s_m, from which we subtract |m| log |Σ| + f_m D_m log D + log |m| + f_m (log r + 1), where r is the size of the dictionary, in itself a parameter to be either fixed a priori or estimated.
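Under the same conventions as the sketch above (log base 2 assumed; figures invented), the pointer-based estimate reads:

from math import log2

def G_dict(s_m, m_len, f_m, D_m, sigma, D, r):
    """Gain estimate for the dictionary-pointer scheme described above."""
    original = (log2(sigma) + 1) * s_m            # each char plus one flag bit
    charged = (m_len * log2(sigma) + f_m * D_m * log2(D)
               + log2(m_len) + f_m * (log2(r) + 1))
    return original - charged

print(G_dict(s_m=240, m_len=10, f_m=20, D_m=1, sigma=4, D=2, r=256))  # ~496.7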