Automata networks
Definition 1 introduces the formalism of automata networks (AN) [6] (see Fig. 1), which allows a finite number of discrete levels, called local states, to be modeled within several automata. A local state is denoted \(a_i\), where a is the name of the automaton, usually corresponding to a biological component, and i is a level identifier within a. At any time, exactly one local state of each automaton is active, modeling the current level of activity or the internal state of the automaton. The set of all active local states is called the global state of the network.
The possible local evolutions inside an automaton are defined by local transitions. A local transition is a triple noted \(a_i \overset{\ell }{\rightarrow } a_j\) and is responsible, inside a given automaton a, for the change of the active local state (\(a_i\)) to another local state (\(a_j\)), conditioned by a set \(\ell \) of local states belonging to other automata that must be active in the current global state. Such a local transition is playable if and only if \(a_i\) and all local states in the set \(\ell \) are active. Thus, it can be read as “all the local states in \(\ell \) can cooperate to change the active local state of a by making it switch from \(a_i\) to \(a_j\)”. It is required that \(a_i\) and \(a_j\) are two different local states in automaton a, and that \(\ell \) contains no local state of automaton a. We also note that \(\ell \) should contain at most one local state per automaton, otherwise the local transition is unplayable; \(\ell \) can also be empty.
Definition 1
(Automata network) An automata network is a triple \((\Sigma ,\mathcal {S},\mathcal {T})\) where:
-
\(\Sigma = \{a, b,\ldots \}\) is the finite set of automata identifiers;
-
For each \(a \in \Sigma \), \(\mathcal {S}_a = \{a_i,\ldots ,a_j\}\) is the finite set of local states of automaton a; \(\mathcal {S}= \prod _{a \in \Sigma }\mathcal {S}_a\) is the finite set of global states; \(\mathbf {LS} = \bigcup _{a \in \Sigma } \mathcal {S}_a\) denotes the set of all the local states;
-
For each \(a \in \Sigma \), \(\mathcal {T}_a = \{ a_i \overset{\ell }{\rightarrow } a_j \in \mathcal {S}_a \times \wp (\mathbf {LS} \setminus \mathcal {S}_a) \times \mathcal {S}_a \mid a_i \ne a_j \}\) is the set of local transitions on automaton a; \(\mathcal {T}= \bigcup _{a \in \Sigma } \mathcal {T}_a\) is the set of all local transitions in the model.
For a given local transition \(\tau = a_i \overset{\ell }{\rightarrow } a_j\), \(a_i\) is called the origin of \(\tau \), \(\ell \) the condition and \(a_j\) the destination, and they are respectively noted \(\mathsf {ori}(\tau )\), \(\mathsf {cond}(\tau )\) and \(\mathsf {dest}(\tau )\).
Example 1
Figure 1 represents an AN \((\Sigma , \mathcal {S}, \mathcal {T})\) with 4 automata (two of which contain 2 local states and the other two contain 3 local states) and 12 local transitions:
-
\(\Sigma = \{a, b, c, d\}\),
-
\(\mathcal {S}_a = \{a_0, a_1\}\), \(\mathcal {S}_b = \{b_0, b_1, b_2\}\), \(\mathcal {S}_c = \{c_0, c_1\}\), \(\mathcal {S}_d = \{d_0, d_1, d_2\} \),
-
\(\mathcal {T}= \{ \begin{array}[t]{ll} a_0 \overset{\{c_1\}}{\longrightarrow } a_1, a_1 \overset{\{b_2\}}{\longrightarrow } a_0, & b_0 \overset{\{d_0\}}{\longrightarrow } b_1, b_0 \overset{\{a_1, c_1\}}{\longrightarrow } b_2, b_1 \overset{\{d_1\}}{\longrightarrow } b_2, b_2 \overset{\{c_0\}}{\longrightarrow } b_0, \\ c_0 \overset{\{a_1, b_0\}}{\longrightarrow } c_1, c_1 \overset{\{d_2\}}{\longrightarrow } c_0, & d_0 \overset{\{b_2\}}{\longrightarrow } d_1, d_0 \overset{\{a_0, b_1\}}{\longrightarrow } d_2, d_1 \overset{\{a_1\}}{\longrightarrow } d_0, d_2 \overset{\{c_0\}}{\longrightarrow } d_0 \}\text {.} \end{array}\)
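To make the formalism concrete, the following Python sketch (an illustration only, not part of the original formalization; the names SIGMA and T are ours) encodes the AN of Fig. 1: a local state is a pair (automaton, level), and a local transition is a triple (origin, condition, destination) whose condition is a frozenset of local states of other automata.

```python
# Hypothetical Python encoding of the AN of Fig. 1 (illustration only).
# Local state: a pair (automaton, level); local transition: (origin, condition, destination).

SIGMA = {"a": [0, 1], "b": [0, 1, 2], "c": [0, 1], "d": [0, 1, 2]}  # local states of each automaton

T = [  # the 12 local transitions of Fig. 1
    (("a", 0), frozenset({("c", 1)}),           ("a", 1)),
    (("a", 1), frozenset({("b", 2)}),           ("a", 0)),
    (("b", 0), frozenset({("d", 0)}),           ("b", 1)),
    (("b", 0), frozenset({("a", 1), ("c", 1)}), ("b", 2)),
    (("b", 1), frozenset({("d", 1)}),           ("b", 2)),
    (("b", 2), frozenset({("c", 0)}),           ("b", 0)),
    (("c", 0), frozenset({("a", 1), ("b", 0)}), ("c", 1)),
    (("c", 1), frozenset({("d", 2)}),           ("c", 0)),
    (("d", 0), frozenset({("b", 2)}),           ("d", 1)),
    (("d", 0), frozenset({("a", 0), ("b", 1)}), ("d", 2)),
    (("d", 1), frozenset({("a", 1)}),           ("d", 0)),
    (("d", 2), frozenset({("c", 0)}),           ("d", 0)),
]
```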
The local transitions given in Definition 1 thus define concurrent interactions between automata. They are used to define the general dynamics of the network, that is, the possible global transitions between global states, according to a given update scheme. In the following, we only focus on the (purely) asynchronous and (purely) synchronous update schemes, which are the most widespread in the literature. The choice of an update scheme mainly depends on the biological phenomena being modeled and on the mathematical abstractions chosen by the modeler.
Update schemes and dynamics of automata networks
As explained in the previous section, a global state of an AN is a set of local states of automata, containing exactly one local state of each automaton. In the following, we introduce some notation related to global states, then we define the global dynamics of an AN.
The active local state of a given automaton \(a \in \Sigma \) in a global state \(\zeta \in \mathcal {S}\) is noted \({\zeta [a]}\). For any given local state \(a_i \in \mathbf {LS} \), we also write \(a_i \in \zeta \) if and only if \({\zeta [a]} = a_i\), which means that the biological component a is at the discrete expression level labeled i in state \(\zeta \). For a given set of local states \(X \subseteq \mathbf {LS} \), we extend this notation to \(X \subseteq \zeta \) if and only if \(\forall a_i \in X, a_i \in \zeta \), meaning that all local states of X are active in \(\zeta \).
Furthermore, for any given local state \(a_i \in \mathbf {LS} \), \(\zeta \Cap a_i\) represents the global state that is identical to \(\zeta \), except for the local state of a which is substituted with \(a_i\): \({(\zeta \Cap a_i)[a]} = a_i \wedge \forall b \in \Sigma \setminus \{ a \}, {(\zeta \Cap a_i)[b]} = {\zeta [b]}\). We generalize this notation to a set of local states \(X \subseteq \mathbf {LS} \) containing at most one local state per automaton, that is, \(\forall a \in \Sigma , |X \cap \mathcal {S}_a| \le 1\), where \(|S|\) is the number of elements in set S; in this case, \(\zeta \Cap X\) is the global state \(\zeta \) in which the local state of each automaton has been replaced by the local state of the same automaton in X, if such a local state exists: \(\forall a \in \Sigma , (X \cap \mathcal {S}_a = \{ a_i \} \Rightarrow {(\zeta \Cap X)[a]} = a_i) \wedge (X \cap \mathcal {S}_a = \emptyset \Rightarrow {(\zeta \Cap X)[a]} = {\zeta [a]})\).
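These notations translate directly into code. A minimal Python sketch (an illustration only; a global state is represented as a dict mapping each automaton name to its active level, and a local state as a pair (automaton, level), as in the encoding given after Example 1):

```python
def active(zeta, a):
    """zeta[a]: the active local state of automaton a in the global state zeta."""
    return (a, zeta[a])

def is_active(zeta, X):
    """X is a subset of zeta: every local state (b, j) in X is active in zeta."""
    return all(zeta[b] == j for (b, j) in X)

def substitute(zeta, X):
    """zeta ⋒ X, where X contains at most one local state per automaton."""
    new = dict(zeta)
    for (b, j) in X:
        new[b] = j          # replace the active level of automaton b by j
    return new

# Example: substitute({"a": 1, "b": 2, "c": 0, "d": 1}, {("c", 1)})
# gives {"a": 1, "b": 2, "c": 1, "d": 1}.
```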
In Definition 2 we formalize the notion of playability of a local transition which was informally presented in the previous section. Playable local transitions are not necessarily used as such, but combined depending on the chosen update scheme, which is the subject of the rest of the section.
Definition 2
(Playable local transitions) Let \(\mathcal {AN}= (\Sigma ,\mathcal {S},\mathcal {T})\) be an automata network and \(\zeta \in \mathcal {S}\) a global state. The set of playable local transitions in \(\zeta \) is called \(P_\zeta \) and defined by: \(P_\zeta = \{ a_i \overset{\ell }{\rightarrow } a_j \in \mathcal {T}\mid \ell \subseteq \zeta \wedge a_i \in \zeta \}\).
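As an illustration, Definition 2 can be transcribed almost literally in Python (a sketch using the hypothetical encoding introduced above; the function name is ours):

```python
def playable(transitions, zeta):
    """P_zeta: local transitions whose origin and whole condition are active in zeta."""
    return [
        (orig, cond, dest)
        for (orig, cond, dest) in transitions
        if zeta[orig[0]] == orig[1]                  # a_i is active in zeta
        and all(zeta[b] == j for (b, j) in cond)     # every local state of l is active in zeta
    ]

# Example: in the global state <a_1, b_2, c_0, d_1> of Fig. 1, three local
# transitions are playable (see the discussion of non-determinism below).
# print(playable(T, {"a": 1, "b": 2, "c": 0, "d": 1}))
```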
The dynamics of the AN is a composition of global transitions between global states, which consist in applying a set of local transitions. Such sets differ depending on the chosen update scheme. In the following, we give the definition of the asynchronous and synchronous update schemes by characterizing the sets of local transitions that can be “played” as global transitions. The asynchronous update sets (Definition 3) are made of exactly one playable local transition; thus, a global asynchronous transition changes the local state of exactly one automaton. On the other hand, the synchronous update sets (Definition 4) consist of exactly one playable local transition for each automaton (except the automata where no local transition is playable); in other words, a global synchronous transition changes the local state of every automaton that can evolve at a given time. Empty update sets are not allowed in either update scheme. In the definitions below, we deliberately mix the notions of “update scheme” and “update set”, which are equivalent here.
Definition 3
(Asynchronous update scheme) Let \(\mathcal {AN}= (\Sigma , \mathcal {S}, \mathcal {T})\) be an automata network and \(\zeta \in \mathcal {S}\) a global state. The set of global transitions playable in \(\zeta \) for the asynchronous update scheme is given by:
$$\begin{aligned} U^{\mathsf {asyn}}(\zeta ) = \{ \{ a_i \overset{\ell }{\rightarrow } a_j\} \mid a_i \overset{\ell }{\rightarrow } a_j\in P_\zeta \}. \end{aligned}$$
Definition 4
(Synchronous update scheme) Let \(\mathcal {AN}= (\Sigma ,\mathcal {S},\mathcal {T})\) be an automata network and \(\zeta \in \mathcal {S}\) a global state. The set of global transitions playable in \(\zeta \) for the synchronous update scheme is given by:
$$\begin{aligned} U^{\mathsf {syn}}(\zeta )&= \{ u \subseteq \mathcal {T}\mid u \ne \emptyset \wedge \forall a \in \Sigma , (P_\zeta \cap \mathcal {T}_a = \emptyset \Rightarrow u \cap \mathcal {T}_a = \emptyset ) \wedge \\& \quad(P_\zeta \cap \mathcal {T}_a \ne \emptyset \Rightarrow |u \cap \mathcal {T}_a| = 1) \}. \end{aligned}$$
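A hedged Python sketch of Definitions 3 and 4 (the function names are ours; the argument p stands for the set \(P_\zeta \), as returned by a helper like the `playable` sketch above): the asynchronous update sets are singletons, while the synchronous update sets pick exactly one playable transition per automaton that has at least one.

```python
from itertools import product

def asynchronous_updates(p):
    """U_asyn(zeta), given p = P_zeta: one singleton update set per playable transition."""
    return [[t] for t in p]

def synchronous_updates(p):
    """U_syn(zeta), given p = P_zeta: exactly one playable transition for each
    automaton having at least one playable transition."""
    per_automaton = {}
    for (orig, cond, dest) in p:
        per_automaton.setdefault(orig[0], []).append((orig, cond, dest))  # group by automaton
    if not per_automaton:          # P_zeta is empty: no update set at all (zeta is a fixed point)
        return []
    return [list(choice) for choice in product(*per_automaton.values())]
```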
Once an update scheme has been chosen, it is possible to compute the corresponding dynamics of a given AN. Thus, in the following, when it is not ambiguous and when results apply to both of them, we will denote by \(U^{}\) a chosen update scheme among \(U^{\mathsf {asyn}}\) and \(U^{\mathsf {syn}}\). Definition 5 formalizes the notion of a global transition depending on a chosen update scheme \(U^{}\).
Definition 5
(Global transition) Let \(\mathcal {AN}= (\Sigma ,\mathcal {S},\mathcal {T})\) be an automata network, \(\zeta _1, \zeta _2 \in \mathcal {S}\) two states and \(U^{}\) an update scheme (i.e., \(U^{}\in \{ U^{\mathsf {asyn}}, U^{\mathsf {syn}}\}\)). The global transition relation between two states \(\zeta _1\) and \(\zeta _2\) for the update scheme represented by \(U^{}\), noted \(\zeta _1 \rightarrow _{U^{}} \zeta _2\), is defined by:
$$\begin{aligned} \zeta _1 \rightarrow _{U^{}} \zeta _2 \Longleftrightarrow \exists u \in U^{}(\zeta _1), \quad \zeta _2 = \zeta _1 \Cap \{ \mathsf {dest}(\tau ) \in \mathbf {LS} \mid \tau \in u \}. \end{aligned}$$
The state \(\zeta _2\) is called a successor of \(\zeta _1\).
We note that in a deterministic dynamics, each state has at most one successor. However, in the case of a non-deterministic dynamics, such as the asynchronous and synchronous update schemes of this paper, each state may have several possible successors.
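Definition 5 then translates into a small successor computation. The sketch below (same hypothetical encoding and names as above) applies \(\zeta \Cap \{ \mathsf {dest}(\tau ) \mid \tau \in u \}\) for every update set u returned by one of the update-set functions sketched earlier.

```python
def successor(zeta, update_set):
    """zeta ⋒ {dest(tau) | tau in update_set} (Definition 5)."""
    new = dict(zeta)
    for (orig, cond, dest) in update_set:
        new[dest[0]] = dest[1]      # replace the active local state of dest's automaton
    return new

def successors(zeta, update_sets):
    """All successors of zeta, given U_asyn(zeta) or U_syn(zeta)."""
    return [successor(zeta, u) for u in update_sets]
```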
Example 2
Figures 2 and 3 illustrate respectively the asynchronous and synchronous update schemes on the model of Fig. 1. Each global transition is depicted by an arrow between two global states. Only an interesting subset of the whole dynamics is depicted in both figures.
At this point, it is important to recall that the empty set never belongs to the update schemes defined above: \(\forall \zeta \in \mathcal {S}, \emptyset \notin U^{\mathsf {asyn}}(\zeta ) \wedge \emptyset \notin U^{\mathsf {syn}}(\zeta )\). The consequence on the dynamics is that a global state can never be its own successor. In other words, even when no local transition can be played in a given global state (i.e., \(P_\zeta = \emptyset \)), we do not add a “self-transition” on this state. Instead, such a state has no successor and is called a fixed point, as defined later in this section.
Definition 6 introduces the notion of in-conflict local transitions, which is of interest for the synchronous update scheme. Two local transitions are in-conflict if they belong to the same automaton and produce some non-determinism inside this automaton. Such a phenomenon arises when both local transitions have the same origin and compatible conditions but different destinations; in other words, there exists a global state in which both are playable. In such a case, they allow the automaton to evolve into two different local states from the same active local state, thus producing a non-deterministic behavior. This definition will be used in the discussion of the next section and in "Length n attractors enumeration".
Definition 6
(In-conflict local transitions) Let \(\mathcal {AN}= (\Sigma ,\mathcal {S},\mathcal {T})\) be an automata network, \(a \in \Sigma \) an automaton and \(\tau _1, \tau _2 \in \mathcal {T}_a\) two local transitions in this automaton. \(\tau _1\) and \(\tau _2\) are said in-conflict if and only if:
$$\begin{aligned} \mathsf {ori}(\tau _1) = \mathsf {ori}(\tau _2) \wedge \mathsf {dest}(\tau _1) \ne \mathsf {dest}(\tau _2) \wedge \exists \zeta \in \mathcal {S}\quad \text{ such that } \tau _1 \in P_\zeta \wedge \tau _2 \in P_\zeta . \end{aligned}$$
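The existential condition of Definition 6 does not require enumerating all global states: two local transitions with the same origin are jointly playable in some global state exactly when their conditions never require two different local states of the same automaton. A hedged Python sketch (hypothetical encoding and names as above):

```python
def in_conflict(t1, t2):
    """Definition 6: same origin, different destinations, and some global state
    exists in which both transitions are playable (their conditions are compatible)."""
    (o1, c1, d1), (o2, c2, d2) = t1, t2
    if o1 != o2 or d1 == d2:
        return False
    levels = {}
    for (b, j) in c1 | c2:                 # union of the two conditions
        if levels.setdefault(b, j) != j:
            return False                    # two different levels required for automaton b
    return True

# Example from Fig. 1: b_0 --{d_0}--> b_1 and b_0 --{a_1, c_1}--> b_2 are in-conflict.
```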
Finally, Definition 7 introduces the notions of path and trace which are used to characterize a set of successive global states with respect to a global transition relation. Paths are useful for the characterization of attractors that are the topic of this work. The trace is the set of all global states traversed by a given path (thus disregarding the order in which they are visited). We note that a path is a sequence and a trace is a set.
Definition 7
(Path and trace) Let \(\mathcal {AN}= (\Sigma ,\mathcal {S},\mathcal {T})\) be an automata network, \(U^{}\) an update scheme and \(n \in \mathbb {N}\setminus \{ 0 \}\) a strictly positive integer. A sequence \(H = ( H_i )_{i \in \llbracket 0; n \rrbracket } \in \mathcal {S}^{n+1}\) of global states is a path of length n if and only if: \(\forall i \in \llbracket 0; n-1 \rrbracket , H_i \rightarrow _{U^{}} H_{i+1}\). H is said to start from a given global state \(\zeta \in \mathcal {S}\) if and only if: \(H_0 = \zeta \). Finally, the trace related to such a path is the set of the global states that have been visited: \(\mathsf {trace}(H) = \{ H_j \in \mathcal {S}\mid j \in \llbracket 0; n \rrbracket \}\).
In the following, when we define a path H of length n, we use the notation \(H_i\) to denote the ith element in the sequence H, with \(i \in \llbracket 0; n \rrbracket \). We also use the notation \(|H| = n\) to denote the length of a path H, which allows us to write \(H_{|H|}\) to refer to its last element. We also recall that a path of length n models the succession of n global transitions, and thus features up to n + 1 states (some states may be visited more than once).
Example 3
The following sequence is a path of length 6 for the asynchronous update scheme:
$$\begin{aligned} H&= ( \langle a_1, b_2, c_1, d_1 \rangle ; \langle a_0, b_2, c_1, d_1 \rangle ; \langle a_1, b_2, c_1, d_1 \rangle ;\\ &\quad \langle a_1, b_2, c_1, d_0 \rangle ; \langle a_0, b_2, c_1, d_0 \rangle ; \langle a_0, b_2, c_1, d_1 \rangle ;\\ &\quad \langle a_1, b_2, c_1, d_1 \rangle ) \end{aligned}$$
We have: \(\mathsf {trace}(H) = \{ \langle a_1, b_2, c_1, d_1 \rangle , \langle a_0, b_2, c_1, d_1 \rangle , \langle a_1, b_2, c_1, d_0 \rangle , \langle a_0, b_2, c_1, d_0 \rangle \}\) and: \(|\mathsf {trace}(H)| = 4\). We note that \(H_0 = H_2 = H_6\) and \(H_1 = H_5\).
When there are one or several repetitions in a given path of length n (i.e., if a state is visited more than once), its trace is of size strictly less than n + 1. More precisely, one can compute the size of the trace corresponding to a given path by subtracting the number of repetitions in that path (Lemma 1). For this, we formalize in Definition 8 the notion of repetitions in a path, that is, the global states that are featured several times, designated by their indices.
Definition 8
(Repetitions in a path) Let \(\mathcal {AN}= (\Sigma ,\mathcal {S},\mathcal {T})\) be an automata network, \(n \in \mathbb {N}{\setminus}\{0\}\) a strictly positive integer and H a path of length n. The set of repetitions in H is given by:
$$\begin{aligned} \mathsf {sr}(H) = \{ i \in \llbracket 1; n \rrbracket \mid \exists j \in \llbracket 0; i-1 \rrbracket , H_j = H_i \}. \end{aligned}$$
Lemma 1
(Size of a trace) Let \(H\) be a path of length \(n\). The number of elements in its trace is given by:
$$\begin{aligned} |\mathsf {trace}{(H)}| = n + 1 - |\mathsf {sr}(H)|. \end{aligned}$$
Proof of Lemma 1
This follows from the definition of a set (in which every element is unique), and from the fact that \(|\mathsf {sr}(H)|\) counts the states of H that already occur elsewhere in H with a smaller index. \(\square \)
We note that if there is no repetition in a path of length n (\(\mathsf {sr}(H)=\emptyset \Rightarrow |\mathsf {sr}(H)|=0\)), then the number of visited states is exactly: \(|\mathsf {trace}{(H)}| = n+1\).
Example 4
We can check Lemma 1 on the path H given in Example 3. Indeed, \(\langle a_1, b_2, c_1, d_1 \rangle \) is featured 3 times, at \(H_0\), \(H_2\) and \(H_6\). According to Definition 8, this state is thus repeated twice, at \(H_2\) and \(H_6\), because the first visit of this state is not counted in \(\mathsf {sr}(H)\). In addition, the state \(\langle a_0, b_2, c_1, d_1 \rangle \) is featured twice in this path, at \(H_1\) and \(H_5\), and is therefore repeated once, at \(H_5\). Thus, \(\mathsf {sr}(H)=\{2,5,6\}\), \(|\mathsf {sr}(H)|=3\) and \(|\mathsf {trace}(H)| = 6 + 1 - 3 = 4\).
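Definition 8 and Lemma 1 can also be checked mechanically. The short Python sketch below (an illustration only; global states are written as tuples of levels in the order a, b, c, d) recomputes the values of Example 4:

```python
def trace(path):
    """The set of global states visited by a path (Definition 7)."""
    return set(path)

def repetitions(path):
    """sr(H): indices i >= 1 such that H_i already occurred at a smaller index (Definition 8)."""
    return {i for i in range(1, len(path)) if path[i] in path[:i]}

# The path H of Example 3, written as tuples (a, b, c, d):
H = [(1, 2, 1, 1), (0, 2, 1, 1), (1, 2, 1, 1), (1, 2, 1, 0),
     (0, 2, 1, 0), (0, 2, 1, 1), (1, 2, 1, 1)]

n = len(H) - 1                                              # path length: 6
assert repetitions(H) == {2, 5, 6}                          # |sr(H)| = 3
assert len(trace(H)) == n + 1 - len(repetitions(H)) == 4    # Lemma 1
```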
Determinism and non-determinism of the update schemes
In the general case, in multi-valued networks, both the asynchronous and synchronous update schemes are non-deterministic, which means that a global state can have several successors.
In the case of the asynchronous update scheme, the non-determinism may come from in-conflict local transitions, but it actually mainly comes from the fact that exactly one local transition is taken into account for each global transition (see Definition 3). Thus, for a given state \(\zeta \in \mathcal {S}\), as soon as \(|P_\zeta | > 1\), several successors may exist. In the model of Fig. 1, for example, the global state \(\langle a_1, b_2, c_0, d_1 \rangle \) (in green on Fig. 2) has three successors: \(\langle a_1, b_2, c_0, d_1 \rangle \rightarrow _{U^{\mathsf {asyn}}} \langle a_0, b_2, c_0, d_1 \rangle \), \(\langle a_1, b_2, c_0, d_1 \rangle \rightarrow _{U^{\mathsf {asyn}}} \langle a_1, b_0, c_0, d_1 \rangle \) and \(\langle a_1, b_2, c_0, d_1 \rangle \rightarrow _{U^{\mathsf {asyn}}} \langle a_1, b_2, c_0, d_0 \rangle \).
In the case of the synchronous update scheme (see Definition 4), however, the non-determinism on the global scale is only generated by in-conflict local transitions (see Definition 6), that is, local transitions that create non-determinism inside an automaton. For example, the model of Fig. 1 features two local transitions \(b_0 \overset{\{d_0\}}{\longrightarrow } b_1\) and \(b_0 \overset{\{a_1, c_1\}}{\longrightarrow } b_2\) that can produce the two following global transitions from the same state (depicted by red arrows on Fig. 3): \(\langle a_1, b_0, c_1, d_0 \rangle \rightarrow _{U^{\mathsf {syn}}} \langle a_1, b_1, c_1, d_0 \rangle \) and \(\langle a_1, b_0, c_1, d_0 \rangle \rightarrow _{U^{\mathsf {syn}}} \langle a_1, b_2, c_1, d_0 \rangle \). Note that for this particular case, these transitions also exist for the asynchronous scheme (also depicted by red arrows on Fig. 2).
Therefore, it is noteworthy that if every automaton contains only two local states (such a network is often called “Boolean”) then the synchronous update scheme becomes completely deterministic. Indeed, it is not possible to find in-conflict local transitions anymore because for each possible origin of a local transition, there can be only one destination (due to the fact that the origin and destination of a local transition must be different). This observation can speed up the computations in this particular case.
Fixed points and attractors in automata networks
Studying the dynamics of biological networks has been the focus of many works, which explains the diversity of existing modeling frameworks and the different methods developed to identify some patterns, such as attractors [9, 11, 17, 21, 22]. In this paper we focus on several related sub-problems: we seek to identify the steady states and the attractors of a given network. Steady states and attractors are the two long-term structures into which any dynamics eventually falls. Indeed, they consist of terminal (sets of) global states that cannot be escaped, and in which the dynamics always ends.
In the following, we consider a BRN modeled as an AN \((\Sigma ,\mathcal {S},\mathcal {T})\), and we formally define these dynamical properties. We note that since the AN formalism encompasses Thomas modeling [8], all our results can be applied to models described in this formalism, as well as to any other framework that can be described in AN (such as Boolean networks, Biocham [23]...).
A fixed point is a global state which has no successor, as given in Definition 9. Such global states are of particular interest as they denote conditions in which the model stays indefinitely. The existence of several of these states denotes multistability, and possible bifurcations in the dynamics [1].
Definition 9
(Fixed point) Let \(\mathcal {AN}= (\Sigma ,\mathcal {S},\mathcal {T})\) be an automata network, and \(U^{}\) be an update scheme (\(U^{}\in \{ U^{\mathsf {asyn}}, U^{\mathsf {syn}}\}\)). A global state \(\zeta \in \mathcal {S}\) is called a fixed point (or equivalently steady state) if and only if no global transition can be played in this state:
$$\begin{aligned} U^{}(\zeta ) = \emptyset . \end{aligned}$$
It is notable that the set of fixed points of a model (that is, the set of states with no successor) is the same in both the asynchronous and synchronous update schemes [24, 25]: \(\forall \zeta \in \mathcal {S}, U^{\mathsf {asyn}}(\zeta ) = \emptyset \Longleftrightarrow U^{\mathsf {syn}}(\zeta ) = \emptyset .\)
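Since a fixed point is simply a global state in which no local transition is playable, and since this property does not depend on the update scheme, the fixed points of a small model can be enumerated by brute force over all global states. A hedged Python sketch (the function takes the hypothetical SIGMA, T and playable objects sketched earlier as arguments); on the AN of Fig. 1, it should return the three fixed points listed in Example 5 below.

```python
from itertools import product

def fixed_points(sigma, transitions, playable):
    """All global states zeta with P_zeta empty (identical under both update schemes)."""
    automata = sorted(sigma)
    for levels in product(*(sigma[a] for a in automata)):
        zeta = dict(zip(automata, levels))
        if not playable(transitions, zeta):   # no playable local transition: fixed point
            yield zeta

# On the AN of Fig. 1, this should yield {"a": 1, "b": 1, "c": 1, "d": 0},
# {"a": 1, "b": 1, "c": 0, "d": 0} and {"a": 0, "b": 0, "c": 0, "d": 1}.
# print(list(fixed_points(SIGMA, T, playable)))
```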
Example 5
The state-transition graphs of Figs. 2 and 3 depict three fixed points colored in red: \(\langle a_1, b_1, c_1, d_0 \rangle \), \(\langle a_1, b_1, c_0, d_0 \rangle \) and \(\langle a_0, b_0, c_0, d_1 \rangle \). Visually, they can be easily recognized because they have no outgoing arrow (meaning that they have no successor). Although these figures do not represent the whole dynamics, they allow one to check that in both update schemes the fixed points are the same, at least on this subset of the overall behavior.
Another complementary dynamical pattern is the notion of non-unitary trap domain (Definition 10), which is a (non-singleton) set of states that the dynamics cannot escape, and thus in which the system remains indefinitely. In this work, we focus more precisely on (non-singleton) attractors (Definition 11), which are cyclic and minimal trap domains in terms of set inclusion. In order to characterize such attractors, we use the notion of cycle (Definition 12), which is a looping path. Indeed, the trace of a cycle is a strongly connected component (Lemma 2), which allows us to give an alternative definition of an attractor (Lemma 3). Formally speaking, fixed points can be considered as attractors of size 1. However, in the scope of this paper and for the sake of clarity, we call “attractors” only non-unitary attractors, that is, only sets containing at least two states. This is justified by the very different approaches developed for fixed points and for attractors in the next sections.
Definition 10
(Trap domain) Let \(\mathcal {AN}= (\Sigma ,\mathcal {S},\mathcal {T})\) be an automata network and \(U^{}\) an update scheme. A set of global states \(\mathbf {T}\), with \(|\mathbf {T}| \ge 2\), is called a trap domain (regarding a scheme \(U^{}\)) if and only if the successors of each of its elements are also in \(\mathbf {T}\):
$$\begin{aligned} \forall \zeta _1 \in \mathbf {T}, \forall \zeta _2 \in \mathcal {S}, \quad \zeta _1 \rightarrow _{U^{}} \zeta _2 \Rightarrow \zeta _2 \in \mathbf {T}. \end{aligned}$$
Definition 11
(Attractor) Let \(\mathcal {AN}= (\Sigma ,\mathcal {S},\mathcal {T})\) be an automata network and \(U^{}\) an update scheme. A set of global states \(\mathbf {A}\), with \(|\mathbf {A}| \ge 2\), is called an attractor (regarding scheme \(U^{}\)) if and only if it is a minimal trap domain in terms of inclusion.
Definition 12
(Cycle) Let \(\mathcal {AN}= (\Sigma ,\mathcal {S},\mathcal {T})\) be an automata network, \(U^{}\) an update scheme and \(\mathbf {C}\) a path of length n for this update scheme. \(\mathbf {C}\) is called a cycle of length n (regarding a scheme \(U^{}\)) if and only if it loops back to its first state:
$$\begin{aligned} \mathbf {C}_n = \mathbf {C}_0. \end{aligned}$$
Example 6
The path H of length 6 given in Example 3 is a cycle because \(H_0 = H_6\).
Lemma 2 states that the set of (traces of) cycles in a model is exactly the set of strongly connected components. Indeed, a cycle allows one to “loop” between all the states that it contains, and conversely, a cycle can be built from the states of any strongly connected component. This equivalence is used in the next lemma.
Lemma 2
(The traces of cycles are the SCCs) The traces of the cycles are exactly the strongly connected components (with respect to the global transition relation).
Proof of Lemma 2
(\(\Rightarrow \)) From any state of a cycle, it is possible to reach all the other states (by possibly cycling). Therefore, the trace of this cycle is a strongly connected component. (\(\Leftarrow \)) Let \(\mathbf {S} = \{ \zeta _{i} \}_{i \in \llbracket 0; n \rrbracket }\) be a strongly connected component in which the elements are arbitrarily labeled. Because it is a strongly connected component, for all \(i \in \llbracket 0; n \rrbracket \), there exists a path \(H^i\) made of elements of \(\mathbf {S}\) so that \(H^i_0 = \zeta _i\) and \(H^i_{|H^i|} = \zeta _{i+1}\) (or \(H^n_{|H^n|} = \zeta _0\) for \(i = n\)). We create a path \(\mathbf {C}\) by concatenation of all paths \(H^0, H^1, \ldots , H^n\) by merging the first and last element of each successive path, which is identical: \(\forall i \in \llbracket 0; n-1 \rrbracket , H^i_{|H^i|} = \zeta _{i+1} = H^{i+1}_0\). \(\mathbf {C}\) is a cycle, because \(\mathbf {C}_0 = H^0_0 = \zeta _0 = H^n_{|H^n|} = \mathbf {C}_{|\mathbf {C}|}\). Furthermore, \(\forall i \in \llbracket 0; n \rrbracket , \zeta _i = H^i_0 \in \mathsf {trace}(\mathbf {C})\), thus \(\mathbf {S} \subseteq \mathsf {trace}(\mathbf {C})\). Finally, only states from \(\mathbf {S}\) have been used to build \(\mathbf {C}\), thus \(\mathsf {trace}(\mathbf {C}) \subseteq \mathbf {S}\). Therefore, \(\mathsf {trace}(\mathbf {C}) = \mathbf {S}\). \(\square \)
In Definition 11, attractors are characterized in the classical way, that is, as minimal trap domains. However, we use an alternative characterization of attractors in this paper, due to the specifics of ASP: Lemma 3 states that an attractor can alternatively be defined as a trap domain that is also a cycle, and conversely. In other words, the minimality requirement is replaced by a cyclical requirement.
Lemma 3
(The attractors are the trap cycles) The attractors are exactly the traces of cycles which are trap domains.
Proof of Lemma 3
(\(\Rightarrow \)) By definition, an attractor is a trap domain. It is also a strongly connected component, and thus, from Lemma 2, it is the trace of a cycle. (\(\Leftarrow \)) Let \(\mathbf {C}\) be both a cycle and a trap domain. From Lemma 2, \(\mathbf {C}\) is also a strongly connected component. Let us prove by contradiction that \(\mathbf {C}\) is a minimal trap domain, by assuming that it is not minimal. This means that there exists a smaller trap domain \(\mathbf {D} \subsetneq \mathbf {C}\). Let us consider \(x \in \mathbf {D}\) and \(y \in \mathbf {C} \setminus \mathbf {D}\). Because \(\mathbf {D}\) is a trap domain, there exists no path from x to y; this is in contradiction with \(\mathbf {C}\) being a strongly connected component (as both x and y belong to \(\mathbf {C}\)). Therefore, \(\mathbf {C}\) is a minimal trap domain, and thus an attractor. \(\square \)
As explained before, Lemma 3 will be used in "Length n attractors enumeration". Indeed, directly searching for minimal trap domains would be too cumbersome; instead, we enumerate cycles of length n in the dynamics of the model and filter out those that are not trap domains. The remaining results are the attractors formed of cycles of length n. The previous lemma ensures the soundness and completeness of this search for a given value of n.
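This filtering step can be sketched in Python as follows (an illustration only, not the ASP encoding used later in the paper): given a successor function for the chosen update scheme and states represented as hashable tuples, the trace of a cycle is kept only if no state of the trace has a successor outside it (Definition 10 and Lemma 4).

```python
def is_trap_domain(states, successors):
    """Definition 10: every successor of every state in `states` is itself in `states`."""
    return all(succ in states for zeta in states for succ in successors(zeta))

def is_attractor_cycle(cycle, successors):
    """Lemma 3: the trace of a cycle that is also a trap domain is an attractor."""
    if cycle[0] != cycle[-1]:        # Definition 12: the path must loop back to its first state
        return False
    return is_trap_domain(set(cycle), successors)
```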
Lemma 4
(Characterization of non-attractors) Let \(\mathbf {A} \subset \mathcal {S}\) be a set of states. If \(\exists \zeta _1 \in \mathbf {A}\) and \(\exists \zeta _2 \in \mathcal {S}\setminus \mathbf {A}\) such that \(\zeta _1 \rightarrow _{U^{}} \zeta _2\), then \(\mathbf {A}\) is not an attractor.
Proof of Lemma 4
By definition, \(\mathbf {A}\) is not a trap domain (Definition 10) and thus it is not an attractor (Definition 11). \(\square \)
Example 7
The state-transition graphs of Figs. 2 and 3 feature different attractors:
-
\(\{ \langle a_0, b_1, c_0, d_0 \rangle , \langle a_0, b_1, c_0, d_2 \rangle \}\) is depicted in blue and appears in both figures. It is a cyclic attractor, because it contains exactly one cycle.
-
\(\{ \langle a_0, b_2, c_1, d_0 \rangle , \langle a_0, b_2, c_1, d_1 \rangle , \langle a_1, b_2, c_1, d_1 \rangle , \langle a_1, b_2, c_1, d_0 \rangle \}\) is only present for the asynchronous update scheme and is depicted in yellow on Fig. 2. It is a complex attractor, that is, a composition of several cycles.
-
\(\{ \langle a_1, b_2, c_1, d_1 \rangle , \langle a_0, b_2, c_1, d_0 \rangle \}\) is, on the contrary, only present for the synchronous update scheme and is depicted in gray on Fig. 3. It is also a cyclic attractor.
For each of these attractors, the reader can check that it can be characterized as a cycle that is a trap domain. For instance, the second attractor can be found by considering the following cycle:
$$\begin{aligned} \mathbf {A} = ( \langle a_0, b_2, c_1, d_0 \rangle ; \langle a_0, b_2, c_1, d_1 \rangle ; \langle a_1, b_2, c_1, d_1 \rangle ; \langle a_1, b_2, c_1, d_0 \rangle ; \langle a_0, b_2, c_1, d_0 \rangle ) \end{aligned}$$
and checking that its trace is a trap domain (which is visually confirmed in Fig. 2 by the absence of outgoing arrows from any of the yellow states).
On the other hand, the following cycle is not an attractor:
$$\begin{aligned} \mathbf {C} = ( \langle a_1, b_2, c_0, d_1 \rangle ; \langle a_1, b_2, c_0, d_0 \rangle ; \langle a_1, b_2, c_0, d_1 \rangle ). \end{aligned}$$
Indeed, although it is a cycle, it features outgoing transitions (such as, for instance, transition \(\langle a_1, b_2, c_0, d_0 \rangle \rightarrow _{U^{\mathsf {asyn}}} \langle a_0, b_2, c_0, d_0 \rangle \)) and thus is not a trap domain.
The aim of the rest of this paper is to tackle the enumeration of fixed points ("Fixed points enumeration") and attractors ("Length n attractors enumeration") in an AN. For this, we use ASP ("Answer set programming") which is a declarative paradigm dedicated to the resolution of complex problems.