To improve the success rate of LCM, a shape decomposition method is needed that computes feasible fragments, i.e. fragments that fulfill certain constraints on size and morphology. We propose an algorithm for constrained polygon decomposition using a skeleton-based approach.
Skeletonization
Our approach is based on the medial axis or skeleton of the shape. The medial axis is defined as the set of points that have more than one closest point on the boundary of the shape. The medial axis was introduced for the description of biological shapes [20, 21] but is now widely used in other applications such as object recognition, medical image analysis and shape decomposition (see [10] for a survey). An important property is that the medial axis represents the object and its geometrical and topological characteristics while having a lower dimension [22, 23].
Formally, the medial axis of a shape D is defined as the set of centers of maximal disks in D. A closed disk \(B\subset D\) is maximal in D if every other disk that contains B is not contained in D. A point s is called a skeleton point if it is the center of a maximal disk B(s) (see Fig. 2). For a skeleton point s, we call the points where B(s) touches the boundary the contact points—every skeleton point has at least two contact points. A skeleton S is given as a graph consisting of connected arcs \(S_k\), which are called skeleton branches and meet at branching points. For a simple polygon without holes, the skeleton is an acyclic graph.
There are various methods for the computation of the medial axis in practice [10]. In general, the medial axis is very sensitive to noise in the boundary of the object. This is a problem that often occurs in digital images and leads to spurious skeleton branches. Procedures that remove these uninformative branches are known as pruning methods. Pruning can be applied after skeletonization [24,25,26] or is included in the computation of the skeleton [27,28,29,30]. For our application, we utilize the skeletonization and pruning method of Bai et al. [29], which was previously used for other bioimaging applications [12, 14]. This algorithm produces a discrete and pruned skeleton, which consists of a finite number of skeleton pixels that serve as our skeleton points. This is favorable for our practical application as we have a discrete input and a discrete output is expected. Furthermore, the computed skeleton has the property that every branching point has a degree of exactly three.
Skeleton-based polygon decomposition
We consider the following problem: Given a simple polygon P, compute an optimal feasible decomposition of P. A decomposition is feasible if every subpolygon is feasible, in the sense that it fulfills certain conditions on, for instance, its size and shape. We present an algorithmic framework that allows the integration of various criteria for both feasibility and optimization, which are discussed later. For now, we only consider criteria that are locally evaluable.
In our skeleton-based approach, we only allow cuts that are line segments between a skeleton point and its corresponding contact points. Thus, the complexity of our algorithm mainly depends on the number of skeleton points rather than the number of boundary points of the polygon. Every subpolygon in our decomposition is generated by two or more skeleton points. We present two decomposition algorithms: one in which we restrict the subpolygons to be generated by exactly two skeleton points, and a general method. In the first case, each subpolygon belonging to a skeleton branch can be decomposed on its own; in the second case, the whole polygon is decomposed at once.
Decomposition based on linear skeletons
First, we consider the restriction that the subpolygons are generated by exactly two skeleton points. In this case, the corresponding skeleton points have to be on the same skeleton branch \(S_k\). In our computed skeleton, a branching point belongs to exactly three branches and thus has three contact points. Each combination of two out of the three possible cut line segments corresponds to one of these branches. Due to the Domain Decomposition Lemma (see Fig. 3, proof in [22]) and the following corollary, we can decompose each skeleton branch on its own.
Theorem 1
(Domain Decomposition Lemma) Given a domain D with skeleton S(D), let \(p\in S(D)\) be some skeleton point and let B(p) be the corresponding maximal disk. Suppose \(A_1,A_2,\ldots ,A_k\) are the connected components of \(D\setminus B(p)\). Define \(D_i = A_i \cup B(p)\) for all i. Then:
$$\begin{aligned} S(D) = \bigcup _{i=1}^k S(D_i). \end{aligned}$$
Moreover, we have
$$\begin{aligned} S(D_i) \cap S(D_j) = \{p\} \;\forall \; i\ne j. \end{aligned}$$
Corollary 2
Let \(p\in S(D)\) and \(A_1,A_2,\ldots ,A_k\) be as above. For each skeleton point \(q\ne p\), there exists an i such that all contact points of q are contained in \(A_i\).
Let \(S_k\) be a skeleton branch with a linear skeleton of size \(n_k\) and let \(P_k\) be the polygon belonging to this branch. By \(P_k(i,j)\), we denote a subpolygon that is generated by two skeleton points i and j on \(S_k\) (see Fig. 4). Thus, we have \(P_k(1,n_k)=P_k\). First, we consider the decision problem, which can be solved using dynamic programming. For each skeleton point i from \(n_k\) down to 1, we determine X(i). X(i) is True if there exists a feasible decomposition of the polygon \(P_k(i,n_k)\). This is the case if either

a) \(P_k(i,n_k)\) is feasible, or

b) there exists \(j>i\) such that \(P_k(i,j)\) is feasible and \(P_k(j,n_k)\) has a feasible decomposition.
This is illustrated in Fig. 5. By choosing optimal points j during the computation, we can include different optimization goals. If X(1) is True, the entire polygon has a feasible decomposition, which can be computed via backtracking.
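As an illustration (our own Python sketch, not part of the original work), the decision procedure for a single branch can be written as follows; `feasible(i, j)` is a hypothetical predicate deciding whether the subpolygon \(P_k(i,j)\) satisfies the chosen constraints:

```python
def decompose_linear(n_k, feasible):
    """Decide whether P_k(1, n_k) has a feasible decomposition and, if so,
    return the chosen cut points via backtracking (1-indexed skeleton points).
    `feasible(i, j)` is a hypothetical feasibility test for P_k(i, j)."""
    X = [False] * (n_k + 1)   # X[i]: P_k(i, n_k) has a feasible decomposition
    choice = [0] * (n_k + 1)  # choice[i]: cut point j realizing X[i]
    X[n_k] = True             # nothing left to decompose beyond point n_k
    for i in range(n_k - 1, 0, -1):
        for j in range(i + 1, n_k + 1):
            # case a) is j == n_k (X[n_k] is True), case b) is j < n_k
            if feasible(i, j) and X[j]:
                X[i], choice[i] = True, j
                break
    if not X[1]:
        return None           # no feasible decomposition of P_k exists
    cuts, i = [], 1           # backtracking: follow the stored choices
    while i != n_k:
        i = choice[i]
        cuts.append(i)
    return cuts
```

Here the first feasible cut point j is taken; with an optimization goal, the loop would instead select an optimal j.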
Lemma 3
Given a subpolygon \(P_k\) with a linear skeleton \(S_k\) consisting of \(n_k\) points, one can compute a feasible decomposition of \(P_k\) based on \(S_k\) in time \({\mathcal {O}}({n_k}^2F)\), with F being a factor depending on the feasibility criteria.
Proof
We initialize \(X(n_k)=\texttt {True}\). For every skeleton point i, for \(i=n_k-1\) down to 1, we compute X(i) such that X(i) equals True if there exists a feasible decomposition of \(P_k(i,n_k)\). To compute X(i), we consider \({\mathcal {O}}(n_k)\) other values X(j) for \(i<j\le n_k\) and check in time \({\mathcal {O}}(F)\) if the polygon \(P_k(i,j)\) is feasible. The correctness follows inductively. \(\square\)
The factor F is determined by the runtime it takes to decide whether a subpolygon is feasible. This factor depends on, for instance, the number of points in the skeleton or in the boundary of the polygon. We discuss examples in the following "Feasibility constraints and optimization" section. After computing decompositions for each subpolygon corresponding to a skeleton branch, we can combine those to obtain a decomposition of the entire polygon. This leads to the following result.
Theorem 4
Given a simple polygon P with skeleton S consisting of n points, one can compute a feasible decomposition of P based on the skeleton branches of S in time \({\mathcal {O}}(n^2F)\), with F being a factor depending on the feasibility criteria.
Note that there might not exist a feasible decomposition of the entire polygon or for certain subpolygons. By using this method, we are able to obtain partial decompositions. Thus, this approach can be favorable in practice.
General decomposition
In the general setting, subpolygons are allowed to be generated by more than two skeleton points. In this paper, we will briefly explain the idea of our method (see [31] for a more detailed description and the corresponding formulas). Recall that our skeleton is an acyclic graph consisting of a finite number of vertices, i.e. skeleton points. The skeleton computed for our application (method of Bai et al. [29]) has the property that the maximal degree of a skeleton point is three. We represent the skeleton as a rooted tree by selecting one branching point as the root (see Fig. 6). Since branching points belong to three different branches, these nodes are duplicated in the skeleton tree such that each node corresponds to the cut edges on the respective branch. Our method and its runtime are based on two main observations.
Observation 5
The maximal number of skeleton points that can generate a subpolygon is equal to the number of endpoints in the skeleton, i.e. the number of leaves in the skeleton tree.
Observation 6
Every subpolygon can be represented as the union of subpolygons generated by just two skeleton points.
Let i be a node in the skeleton tree and \(T_i\) the subtree rooted in i. By P(i), we denote the subpolygon ending in the skeleton point i. This polygon corresponds to the subtree \(T_i\) in the given tree representation (see Fig. 7). For each node i (bottom-up), we compute if there exists a feasible decomposition of the polygon P(i). Such a decomposition exists if either

a) P(i) is feasible, or

b) there exists a feasible polygon \(P'\) ending in i and feasible decompositions of the connected components of \(P(i)\setminus P'\).
Thus, we have to consider all combinations of skeleton points that together with i can form such a polygon \(P'\). In a top-down manner, we consider the different combinations of nodes \(\left[ i_1,i_2,\ldots ,i_l\right]\) such that \(i_j\in T_i\) and \(T_{i_j} \cap T_{i_{j'}}= \emptyset\) for all \(j\ne j'\). The polygon \(P'\) corresponds to the subtree rooted in i with \(i_1,i_2,\ldots , i_l\) as the leaves, depicted in blue in Fig. 8. Note that we can compute \(P'\) iteratively as a union of subpolygons. We check whether \(P'\) is feasible and whether we have feasible decompositions for each \(P(i_j)\), i.e. for every subtree \(T_{i_j}\) (gray in Fig. 8). Because of Observation 5, we know that \(l\le k\), for k being the number of leaves in the skeleton tree. We have a feasible decomposition of the whole polygon if there exists one for the polygon P(r), where r is the chosen root. This computation dominates the runtime; the maximum number of combinations to consider is in \({\mathcal {O}}(n^k)\). Note that this approach does not depend on the initial choice of the root node.
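The combinatorial core of this step—enumerating the candidate leaf sets \(\left[ i_1,\ldots ,i_l\right]\) with pairwise disjoint subtrees—can be sketched as follows (our own Python illustration; `children` is a hypothetical adjacency map of the rooted skeleton tree):

```python
from itertools import product

def frontiers(children, i):
    """Return all node lists [i_1, ..., i_l] that can serve as the leaves of
    a subpolygon P' rooted at node i: every branch below i is cut either
    directly at a child or deeper inside that child's subtree."""
    kids = children.get(i, [])
    if not kids:
        return [[i]]          # skeleton endpoint: P' must end here
    options = []
    for c in kids:
        opts = [[c]]          # cut directly at child c
        if children.get(c):   # or extend P' past c and cut deeper
            opts += frontiers(children, c)
        options.append(opts)
    # combine one choice per child; subtrees of distinct children are disjoint
    return [sum(combo, []) for combo in product(*options)]
```

For example, for a root 0 with children 1 and 2, where node 2 has two leaf children 3 and 4, the candidate leaf sets are [1, 2] and [1, 3, 4]; the number of such sets is bounded by \({\mathcal {O}}(n^k)\), matching the stated runtime.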
Theorem 7
Given a simple polygon P with skeleton S consisting of n points with degree at most three, one can compute a feasible decomposition of P based on S in \({\mathcal {O}}(n^kF)\) time, with k being the number of leaves in the skeleton tree and F as above.
Feasibility constraints and optimization
The proposed polygon decomposition method is a versatile framework that can be adjusted for different feasibility constraints and optimization goals. With regard to the application in LCM, we considered criteria based on size and shape. As stated before, it is assumed that the main cause for unsuccessful dissections lies in an incorrect size or morphology of the considered fragments. In LCM, a laser separates a tissue fragment from its surrounding sample, leaving a small connecting bridge that serves as the impact point of a subsequent laser pulse, which catapults the fragment into a collecting device. As the laser burns part of the boundary, the fragment has to have a certain minimal size to ensure that enough material is supplied to be analyzed. On the other hand, the size cannot be too large, as otherwise the force of the laser pulse does not suffice for the transfer process. Furthermore, observations show that the dissection often fails due to an irregular shape of the fragment. Specifically, elongated shapes or fragments with narrow regions (bottlenecks) seem to be problematic. The tissue can tear at these bottlenecks and is then transferred only partly or not at all because the laser pulse is concentrated on only a small part of the boundary.
In the following, we describe the implementation of different constraints that we considered based on our application. For simplicity, we limit the description to the decomposition algorithm for linear polygons, but all mentioned constraints can be applied to the general decomposition method as well.
Feasibility constraints
For the size constraint, we restricted the area of the subpolygons. Given two bounds l and u, a polygon P is feasible if \(l\le A(P)\le u\), for A(P) being the area of the polygon. One could also apply this constraint to the number of boundary points instead of the area. We implemented different shape constraints. On the one hand, we considered approximate convexity: a polygon P is feasible if every inner angle lies between two given bounds. As this criterion does not prevent elongated shapes, we considered fatness instead. Fatness can be used as a roundness measure and is defined by the aspect ratio AR(P) of a polygon, which is the ratio between its width and its diameter [32,33,34]. For a simple polygon, the diameter is defined as the diameter of the minimum circumscribed circle and the width as the diameter of the maximum inscribed circle. A polygon P is called \(\alpha\)-fat if \(AR(P)\ge \alpha\). For the fatness constraint, we define a polygon as feasible if it is \(\alpha\)-fat for some given parameter \(\alpha \in \left( 0,1\right]\). Higher values of \(\alpha\) result in fragments that are more circular and less elongated in shape.
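For illustration (our own sketch, not from the original method), the size constraint reduces to an area test; the area of a simple polygon can be computed from its boundary vertices with the shoelace formula:

```python
def polygon_area(vertices):
    """Unsigned area of a simple polygon given as a list of (x, y) vertices
    in boundary order (shoelace formula)."""
    a = 0.0
    for (x1, y1), (x2, y2) in zip(vertices, vertices[1:] + vertices[:1]):
        a += x1 * y2 - x2 * y1
    return abs(a) / 2.0

def size_feasible(vertices, l, u):
    """Size constraint: a subpolygon P is feasible if l <= A(P) <= u."""
    return l <= polygon_area(vertices) <= u
```

For example, the unit square `[(0, 0), (1, 0), (1, 1), (0, 1)]` has area 1 and is feasible for bounds l = 0.5 and u = 2.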
For area as well as approximate convexity, we can compute the required values incrementally if the values for all subsequent subpolygons are given beforehand. Therefore, we can check the feasibility in constant time. Thus, a feasible decomposition using these criteria can be computed in time \({\mathcal {O}}(n^2+m)\) for n being the number of skeleton points and m the number of boundary vertices. If the fatness criterion is used, one has to calculate the aspect ratio of each polygon, which takes \({\mathcal {O}}(m\log m)\) time and therefore results in a runtime of \({\mathcal {O}}(n^2m\log m)\).
Optimization goals
The algorithm computes the value X(i) for each skeleton point i. For the decision problem, we defined X(i) to be True if there exists a feasible decomposition of the polygon P(i, n). With a redefinition of X(i), we can implement a variety of optimization goals. For a point i, let I be the set of points j such that P(i, j) is feasible. One possible optimization goal is finding a minimal decomposition by minimizing the number of fragments (MinNum). We define X(i) as the number of subpolygons in an optimal feasible decomposition of P(i, n), set \(X(n) = 0\) and compute \(X(i) = \min _{j\in I}X(j)+1\). By minimizing the length of cut edges (MinCut), shorter cuts are preferred and thus preferably placed at bottlenecks in the polygon. Since every skeleton point i is the center of a maximal disk, we can obtain the cut length from the corresponding radius r(i). We can define X(i) either as the length of the longest cut or as the sum of cut lengths in an optimal decomposition of P(i, n) and compute \(X(i) = \max \{\min _{j\in I}X(j), r(i)\}\) or \(X(i)=\min _{j\in I}X(j)+r(i)\), respectively. The runtime for both MinNum and MinCut is the same as for the decision problem. Furthermore, we considered maximizing the fatness (MaxFat) as an optimization goal. A decomposition is optimal if the smallest aspect ratio is maximized. We define X(i) as the smallest aspect ratio in an optimal decomposition of P(i, n) and compute \(X(i) = \max _{j\in I}\{\min \{X(j),AR(P(i,j))\}\}\). Applying fatness as a feasibility constraint or an optimization goal results in the same runtime because both approaches require the calculation of the aspect ratios of all subpolygons.
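As a sketch of the MinNum redefinition (Python, with the same hypothetical `feasible` predicate as before), the Boolean recurrence becomes a minimization over the index set I:

```python
import math

def min_num_decomposition(n, feasible):
    """X[i] = minimal number of subpolygons in a feasible decomposition of
    P(i, n), or math.inf if none exists (1-indexed skeleton points).
    `feasible(i, j)` is a hypothetical feasibility test for P(i, j)."""
    X = [math.inf] * (n + 1)
    X[n] = 0                  # nothing left to decompose
    for i in range(n - 1, 0, -1):
        # I = {j : P(i, j) is feasible}; take the best continuation
        X[i] = min((X[j] + 1 for j in range(i + 1, n + 1) if feasible(i, j)),
                   default=math.inf)
    return X[1]
```

MinCut and MaxFat follow the same pattern with the recurrences given above, e.g. \(X(i)=\min _{j\in I}X(j)+r(i)\) for the sum of cut lengths.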
Comparison of criteria
Our algorithm facilitates the use of a wide range of feasibility criteria and optimization goals, which can be combined with each other. Note that for certain combinations other (faster) methods might exist. One example is finding the minimal (MinNum) decomposition in which the area of the subpolygons is bounded. For polygons with linear skeletons this can be modeled as finding the minimal segmentation of a weighted trajectory (in \({\mathcal {O}}(n\log n)\) time [35]). For general polygons, this problem can be modeled as computing the minimal (l, u)partition of a weighted cactus graph (in \({\mathcal {O}}(n^6)\) time [36, 37]).
The selection of constraints used for the algorithm obviously affects the resulting decomposition. The number of subpolygons as well as the position of cuts varies noticeably. Depending on the underlying application, one might choose suitable constraints. In the following, we present decompositions for different combinations of criteria and assess their suitability for our specific application. The typical results are exemplified using a ROI polygon of a lung tissue sample (see Fig. 9).
Panels A and B in Fig. 9 illustrate the effect of the size criterion. Having a larger upper bound obviously results in fewer subpolygons. The MinNum optimization goal minimizes the number of subpolygons, but the solutions are not necessarily unique and one optimal decomposition is chosen arbitrarily. This can be observed at the bottommost skeleton branch in both decompositions, as the cuts in A would also be feasible under the constraints from B. In panels C and D of Fig. 9, the fatness constraint was applied in the form of a lower bound on the aspect ratio of the subpolygons. In comparison to the decomposition depicted in A, this criterion avoids the tendency towards elongated fragments. However, tighter bounds do not necessarily result in better outcomes, as a feasible decomposition might not exist at all. This case is illustrated in panel D, where the algorithm did not find a feasible decomposition for the polygon parts depicted in gray. For our application, this would not be favorable as it reduces the amount of extracted tissue material.
We applied the different optimization goals denoted by MinNum (panel A), MinCut (panel E) and MaxFat (panel F) with the same feasibility constraint. Choosing a different optimization goal will not influence the amount of area in the decomposition, but the quantity and positions of cut edges may change considerably. These changes are expected to affect the number of successfully dissected tissue fragments in the microdissection. When looking at the decompositions of the top left skeleton branch in those three polygons, one notices that the ones in A and E have the same number of fragments, but with MinCut a shorter cut is chosen. Maximizing the fatness usually results in a higher number of subpolygons. As can be seen in panel F, the resulting subpolygons are less elongated and more circular in shape. We expect these to be the desired shapes for our application. Hence, we used the area constraint in combination with the MaxFat optimization in our experiments and the following comparison of decomposition methods for LCM.