# Calculating Valid Domains for BDD-Based Interactive Configuration

Tarik Hadzic, Rune Moller Jensen, Henrik Reif Andersen
Computational Logic and Algorithms Group, IT University of Copenhagen, Denmark
tarik@itu.dk, rmj@itu.dk, hra@itu.dk

Abstract. In these notes we formally describe the functionality of calculating valid domains from the BDD representing the solution space of valid configurations. The formalization is largely based on the CLab [1] configuration framework.

## 1 Introduction

Interactive configuration problems are special applications of Constraint Satisfaction Problems (CSP) where a user is assisted in interactively assigning values to variables by a software tool. This software, called a configurator, assists the user by calculating and displaying the available, valid choices for each unassigned variable in what are called valid domains computations. Application areas include customising physical products (such as PCs and cars) and services (such as airplane tickets and insurances). Three important features are required of a tool that implements interactive configuration: it should be complete (all valid configurations should be reachable through user interaction), backtrack-free (a user is never forced to change an earlier choice due to incompleteness in the logical deductions), and it should provide real-time performance (feedback should be fast enough to allow real-time interactions). The requirement of obtaining backtrack-freeness while maintaining completeness makes the problem of calculating valid domains NP-hard. The real-time performance requirement further demands that runtime calculations are bounded in polynomial time. According to user-interface design criteria, for a user to perceive interaction as being real-time, system response needs to be within about 250 milliseconds in practice [2].
Therefore, the current approaches that meet all three conditions use off-line precomputation to generate an efficient runtime data structure representing the solution space [3,4,5,6]. The challenge with this approach is that the solution space is almost always exponentially large and NP-hard to compute. Despite the bad worst-case bounds, it has nevertheless turned out in real industrial applications that the data structures can often be kept small [7,5,4].

## 2 Interactive Configuration

The input *model* to an interactive configuration problem is a special kind of Constraint Satisfaction Problem (CSP) [8,9] where constraints are represented as propositional formulas:
The significance of this demand is that it guarantees the user a backtrack-free assignment to variables as long as he selects values from valid domains. This reduces cognitive effort during the interaction and increases usability. At each step of the interaction, the configurator reports the valid domains to the user, based on the current partial assignment ρ resulting from his earlier choices. The user then picks an unassigned variable x_j ∈ X \ dom(ρ) and selects a value from the calculated valid domain, v_j ∈ D^ρ_j. The partial assignment is then extended to ρ ∪ {(x_j, v_j)} and a new interaction step is initiated.

## 3 BDD-Based Configuration

In [5,10] the interactive configuration was delivered by dividing the computational effort into an *offline* and an *online* phase. First, in the offline phase, the authors compiled a BDD representing the solution space of all valid configurations, Sol = {ρ | ρ ⊨ F}. Then, the functionality of *calculating valid domains* (**CVD**) was delivered online, by efficient algorithms executing during the interaction with a user. The benefit of this approach is that the BDD needs to be compiled only once, and can be reused for multiple user sessions. The user interaction process is illustrated in Fig. 2.

```
InCo(Sol, ρ)
1: while |Sol^ρ| > 1
2:   compute D^ρ = CVD(Sol, ρ)
3:   report D^ρ to the user
4:   the user chooses (x_i, v) for some x_i ∉ dom(ρ), v ∈ D^ρ_i
5:   ρ ← ρ ∪ {(x_i, v)}
6: return ρ
```

Fig. 2. The interactive configuration algorithm, working on a BDD representation of the solutions Sol, reaches a valid total configuration as an extension of the argument ρ.

An important requirement for online user interaction is the guaranteed real-time experience of the user-configurator interaction. Therefore, the algorithms that are executed in the online phase must be provably efficient in the size of the BDD representation. This is what we call the *real-time guarantee*.
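To make the valid-domains demand concrete, here is a minimal Python sketch of the interaction loop's core step (Fig. 2), with the solution space held as an explicit list of total assignments rather than a BDD; the function names (`restrict`, `valid_domains`) and the tiny example are ours, not from the paper.

```python
# Minimal sketch of valid domains computation over an explicit solution space.
# In the paper, Sol is a BDD; here it is a plain list of tuples for clarity.

def restrict(sol, rho):
    """Solutions consistent with the partial assignment rho (a dict var -> value)."""
    return [s for s in sol if all(s[i] == v for i, v in rho.items())]

def valid_domains(sol, rho):
    """D^rho_i: for each unassigned variable, the values that extend rho to at
    least one solution -- this is what guarantees backtrack-freeness."""
    sols = restrict(sol, rho)
    n = len(sol[0])
    return {i: sorted({s[i] for s in sols})
            for i in range(n) if i not in rho}

# Example: 3 variables, solution space listed explicitly.
sol = [(0, 0, 0), (0, 1, 0), (0, 1, 1), (1, 1, 1)]
rho = {0: 0}                       # the user fixed x0 = 0
print(valid_domains(sol, rho))     # {1: [0, 1], 2: [0, 1]}
```

Any value the user now picks from a reported domain is guaranteed to be extendable to a full valid configuration, so no backtracking is ever needed.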
As the **CVD** functionality is NP-hard, and the online algorithms are polynomial in the size of the generated BDD, there is no hope of providing polynomial size guarantees for the worst-case BDD representation. However, it suffices that the BDD size is small enough for all the configuration instances occurring in practice [10].

## 3.1 Binary Decision Diagrams

A reduced ordered Binary Decision Diagram (BDD) is a rooted directed acyclic graph representing a Boolean function on a set of linearly ordered Boolean variables. It has one or two terminal nodes labeled 1 or 0 and a set of variable nodes. Each variable node
is associated with a Boolean variable and has two outgoing edges, *low* and *high*. Given an assignment of the variables, the value of the Boolean function is determined by a path starting at the root node and recursively following the high edge if the associated variable is true, and the low edge if the associated variable is false. The function value is *true* if the label of the reached terminal node is 1; otherwise it is *false*. The graph is ordered such that all paths respect the ordering of the variables. A BDD is reduced such that no pair of distinct nodes u and v are associated with the same variable and the same low and high successors (Fig. 3a), and no variable node u has identical low and high successors (Fig. 3b). Due to these reductions, the number of nodes in a BDD for many functions encountered in practice is often much smaller than the number of truth assignments of the function. Another advantage is that the reductions make BDDs canonical [11]. Large space savings can be obtained by representing a collection of BDDs in a single multi-rooted graph where the sub-graphs of the BDDs are shared. Due to the canonicity, two BDDs are identical if and only if they have the same root. Consequently, when using this representation, equivalence checking between two BDDs can be done in constant time. In addition, BDDs are easy to manipulate. Any Boolean operation on two BDDs can be carried out in time proportional to the product of their sizes. The size of a BDD can depend critically on the variable ordering. Finding an optimal ordering is a co-NP-complete problem in itself [11], but a good heuristic for choosing an ordering is to place dependent variables close to each other in the ordering. For a comprehensive introduction to BDDs and *branching programs* in general, we refer the reader to Bryant's original paper [11] and the books [12,13].

## 3.2 Compiling the Configuration Model

Each of the finite domain variables x_i with domain D_i = {0, ..., |D_i| − 1} is encoded by k_i = ⌈log |D_i|⌉ Boolean variables x_i^0, ..., x_i^{k_i−1}. Each j ∈ D_i corresponds to a
binary encoding v_0 ... v_{k_i−1}, denoted as v_0 ... v_{k_i−1} = enc(j). Also, every combination of Boolean values v_0 ... v_{k_i−1} represents some integer j ≤ 2^{k_i} − 1, denoted as j = dec(v_0 ... v_{k_i−1}). Hence, the atomic proposition x_i = v is encoded as a Boolean expression x_i^0 = v_0 ∧ ... ∧ x_i^{k_i−1} = v_{k_i−1}. In addition, *domain constraints* are added to forbid those assignments to v_0 ... v_{k_i−1} which do not translate to a value in D_i, i.e. where dec(v_0 ... v_{k_i−1}) ≥ |D_i|. Let the solution space Sol over the ordered set of variables x_0 < ... < x_{k−1} be represented by a Binary Decision Diagram B(V, E, X_b, R, var), where V is the set of nodes u, E is the set of edges e, and X_b = {0, 1, ..., |X_b| − 1} is an ordered set of variable indexes, labelling every non-terminal node u with var(u) ≤ |X_b| − 1 and labelling the terminal nodes T_0, T_1 with index |X_b|. The set of variable indexes X_b is constructed by taking the union of the Boolean encoding variables ∪_{i=0}^{n−1} {x_i^0, ..., x_i^{k_i−1}} and ordering them in a natural layered way, i.e. x_{i1}^{j1} < x_{i2}^{j2} iff i1 < i2, or i1 = i2 and j1 < j2. Every directed edge e = (u_1, u_2) has a starting vertex u_1 = π_1(e) and an ending vertex u_2 = π_2(e). R denotes the root node of the BDD.

Example 2. The BDD representing the solution space of the T-shirt example introduced in Sect. 2 is shown in Fig. 4. In the T-shirt example there are three variables: x_1, x_2 and x_3, whose domain sizes are four, three and two, respectively. Each variable is represented by a vector of Boolean variables. In the figure, the Boolean vector for the variable x_i with domain D_i is (x_i^0, x_i^1, ..., x_i^{l_i−1}), where l_i = ⌈lg |D_i|⌉. For example, in the figure, the variable x_2, which corresponds to the size of the T-shirt, is represented by the Boolean vector (x_2^0, x_2^1). In the BDD, any path from the root node to the terminal node 1 corresponds to one or more valid configurations.
For example, the path from the root node to the terminal node 1 with all the variables taking low values represents the valid configuration (black, small, MIB). Another path, with x_1^0, x_1^1, and x_2^0 taking low values and x_2^1 taking the high value, represents two valid configurations: (black, medium, MIB) and (black, medium, STW). In this path the variable x_3^0 is a don't-care variable and hence can take both the low and the high value, which leads to two valid configurations. Any path from the root node to the terminal node 0 corresponds to invalid configurations. ♦

## 4 Calculating Valid Domains

Before showing the algorithms, let us first introduce the appropriate notation. If an index k ∈ X_b corresponds to the (j+1)-st Boolean variable x_i^j encoding the finite domain variable x_i, we define var_1(k) = i and var_2(k) = j to be the appropriate mappings. Now, given the BDD B(V, E, X_b, R, var), V_i denotes the set of all nodes u ∈ V that are labelled with a BDD variable encoding the finite domain variable x_i, i.e. V_i = {u ∈ V | var_1(u) = i}. We think of V_i as defining a layer in the BDD. We define In_i to be the set of nodes u ∈ V_i reachable by an edge originating from outside the V_i layer, i.e. In_i = {u ∈ V_i | ∃(u′, u) ∈ E. var_1(u′) < i}. For the root node R, labelled with i_0 = var_1(R), we define In_{i_0} = V_{i_0} = {R}. We assume that in the previous user assignment, a user fixed a value for a finite domain variable x = v, x ∈ X, extending the old partial assignment ρ_old to the current
assignment ρ = ρ_old ∪ {(x, v)}. For every variable x_i ∈ X, the old valid domains are denoted as D^{ρ_old}_i, i = 0, ..., n − 1, and the old BDD B_{ρ_old} is reduced to the restricted BDD B_ρ(V, E, X_b, var). The **CVD** functionality is to calculate the valid domains D^ρ_i for the remaining unassigned variables x_i ∉ dom(ρ) by extracting values from the newly restricted BDD B_ρ(V, E, X_b, var). To simplify the following discussion, we will analyze the isolated execution of the CVD algorithms over a given BDD B(V, E, X_b, var). The task is to calculate the valid domains VD_i from the starting domains D_i. The user-configurator interaction can be modelled as a sequence of these executions over restricted BDDs B_ρ, where the valid domains are D^ρ_i and the starting domains are D^{ρ_old}_i. The **CVD** functionality is delivered by executing the two algorithms presented in Fig. 5 and Fig. 6. The first algorithm is based on the key idea that if there is an edge e = (u_1, u_2) crossing over V_j, i.e. var_1(u_1) < j < var_1(u_2), then we can include all the values from D_j into the valid domain, VD_j ← D_j. We refer to e as a *long edge* of length var_1(u_2) − var_1(u_1). Note that it skips var(u_2) − var(u_1) Boolean variables, and therefore compactly represents a part of the solution space of size 2^{var(u_2)−var(u_1)}. For the remaining variables x_i, whose valid domain was not copied by **CVD-Skipped**, we execute CVD(B, x_i) from Fig. 6. There, for each value j in the domain D_i we check whether it can be part of the valid domain VD_i. The key idea is that if j ∈ VD_i then there must be u ∈ V_i such that traversing the BDD from u with the binary encoding of j
```
CVD-Skipped(B)
 1: for each i = 0 to n − 1
 2:   L[i] ← i + 1
 3: T ← TopologicalSort(B)
 4: for each k = 0 to |T| − 1
 5:   u1 ← T[k], i1 ← var1(u1)
 6:   for each u2 ∈ Adjacent[u1]
 7:     L[i1] ← max{L[i1], var1(u2)}
 8: S ← {}, s ← 0
 9: for i = 0 to n − 2
10:   if i + 1 < L[s]
11:     L[s] ← max{L[s], L[i + 1]}
12:   else
13:     if s + 1 < L[s] then S ← S ∪ {s}
14:     s ← i + 1
15: for each j ∈ S
16:   for i = j to L[j]
17:     VD_i ← D_i
```

Fig. 5. In lines 1-7 the L[i] array is created to record the longest edge e = (u_1, u_2) originating from the V_i layer, i.e. L[i] = max{var_1(u′) | ∃(u, u′) ∈ E. var_1(u) = i}. The execution time is dominated by TopologicalSort(B), which can be implemented as depth-first search in O(|E| + |V|) = O(|E|) time. In lines 8-14, the overlapping long segments are merged in O(n) steps. Finally, in lines 15-17 the valid domains are copied in O(n) steps. Hence, the total running time is O(|E| + n).

```
CVD(B, x_i)
1: VD_i ← {}
2: for each j = 0 to |D_i| − 1
3:   for each k = 0 to |In_i| − 1
4:     u ← In_i[k]
5:     u′ ← Traverse(u, j)
6:     if u′ ≠ T_0
7:       VD_i ← VD_i ∪ {j}
8: return
```

Fig. 6. The classical CVD algorithm. enc(j) denotes the binary encoding of the number j into k_i values v_0, ..., v_{k_i−1}. If Traverse(u, j) from Fig. 7 ends in a node different from T_0, then j ∈ VD_i.
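The Traverse/CVD pair of Figs. 6-7 can be sketched in Python on a toy BDD. The dict-based node layout, the `enc` helper and the single-node example are illustrative assumptions, not the paper's implementation; for brevity the sketch omits CVD-Skipped.

```python
# Hedged sketch of CVD (Fig. 6) with its Traverse routine (Fig. 7) on a toy
# BDD stored as plain dicts. Node layout and example are ours.

T0, T1 = "T0", "T1"          # terminal nodes

def enc(j, k):
    """Binary encoding of j as k bits v0..v_{k-1} (v0 most significant)."""
    return [(j >> (k - 1 - s)) & 1 for s in range(k)]

def traverse(bdd, u, j, k, var1, var2, marked, i):
    """Follow the encoding of j from node u through layer V_i.
    Returns the node reached; T0 means j cannot lead to a solution via u."""
    bits = enc(j, k)
    while u not in (T0, T1) and var1[u] == i:
        if marked.get(u) == j:      # node already seen in an earlier j-traversal
            return T0
        marked[u] = j
        _, low, high = bdd[u]
        u = high if bits[var2[u]] else low
    return u

def cvd(bdd, root, i, dom_size, k, var1, var2):
    """Valid domain VD_i: the values j whose traversal avoids T0."""
    in_i = [root]                    # In_i: here the layer is entered only at the root
    marked = {}
    vd = set()
    for j in range(dom_size):
        for u in in_i:
            if traverse(bdd, u, j, k, var1, var2, marked, i) != T0:
                vd.add(j)
    return sorted(vd)

# Toy model: one variable x0 with D0 = {0,1,2}, valid values {0,2} (i.e. the
# second encoding bit must be 0). The reduced BDD is a single node on bit v1.
bdd = {"n1": ("v1", "T1", "T0")}     # node: (label, low edge, high edge)
var1 = {"n1": 0}                     # n1 encodes the finite variable x0
var2 = {"n1": 1}                     # n1 tests encoding bit v1
print(cvd(bdd, "n1", 0, 3, 2, var1, var2))   # [0, 2]
```

The `marked` dictionary realizes the O(|V_i| · |D_i|) bound: for a fixed j, each node of the layer is expanded at most once across all traversals.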
will lead to a node other than T_0, because then there is at least one satisfying path to T_1 allowing x_i = j.

```
Traverse(u, j)
 1: i ← var1(u)
 2: v0, ..., v_{ki−1} ← enc(j)
 3: s ← var2(u)
 4: if Marked[u] = j return T0
 5: Marked[u] ← j
 6: while s ≤ ki − 1
 7:   if var1(u) > i return u
 8:   if vs = 0 then u ← low(u) else u ← high(u)
 9:   if Marked[u] = j return T0
10:   Marked[u] ← j
11:   s ← var2(u)
```

Fig. 7. For a fixed u ∈ V, i = var_1(u), Traverse(u, j) iterates through V_i and returns the node in which the traversal ends up.

When traversing with Traverse(u, j) we mark the already traversed nodes u_t with j, Marked[u_t] ← j, and prevent processing them again in future j-traversals Traverse(u′, j). Namely, if Traverse(u, j) reached the T_0 node through u_t, then any other traversal Traverse(u′, j) reaching u_t must as well end up in T_0. Therefore, for every value j ∈ D_i, every node u ∈ V_i is traversed at most once, leading to a worst-case running time of O(|V_i| · |D_i|). Hence, the total running time for all variables is O(Σ_{i=0}^{n−1} |V_i| · |D_i|). The total worst-case running time for the two **CVD** algorithms is therefore O(Σ_{i=0}^{n−1} |V_i| · |D_i| + |E| + n) = O(Σ_{i=0}^{n−1} |V_i| · |D_i| + n).

## References

1. Jensen, R.M.: CLab: A C++ library for fast backtrack-free interactive product configuration. http://www.itu.dk/people/rmj/clab/ (2007)
2. Raskin, J.: The Humane Interface. Addison Wesley (2000)
3. Amilhastre, J., Fargier, H., Marquis, P.: Consistency restoration and explanations in dynamic CSPs - application to configuration. Artificial Intelligence 1-2 (2002) 199-234. ftp://fpt.irit.fr/pub/IRIT/RPDMP/Configuration/
4. Madsen, J.N.: Methods for interactive constraint satisfaction. Master's thesis, Department of Computer Science, University of Copenhagen (2003)
5.
Hadzic, T., Subbarayan, S., Jensen, R.M., Andersen, H.R., Møller, J., Hulgaard, H.: Fast backtrack-free product configuration using a precompiled solution space representation. In: PETO Conference, DTU-tryk (2004) 131-138
6. Møller, J., Andersen, H.R., Hulgaard, H.: Product configuration over the internet. In: Proceedings of the 6th INFORMS Conference on Information Systems and Technology (2002)
7. Configit Software A/S. http://www.configit-software.com (online)
8. Tsang, E.: Foundations of Constraint Satisfaction. Academic Press (1993)
9. Dechter, R.: Constraint Processing. Morgan Kaufmann (2003)
10. Subbarayan, S., Jensen, R.M., Hadzic, T., Andersen, H.R., Hulgaard, H., Møller, J.: Comparing two implementations of a complete and backtrack-free interactive configurator. In: CP'04 CSPIA Workshop (2004) 97-111
11. Bryant, R.E.: Graph-based algorithms for boolean function manipulation. IEEE Transactions on Computers 8 (1986) 677-691
12. Meinel, C., Theobald, T.: Algorithms and Data Structures in VLSI Design. Springer (1998)
13. Wegener, I.: Branching Programs and Binary Decision Diagrams. Society for Industrial and Applied Mathematics (SIAM) (2000)
As mentioned before, the default sequence-weighting method used by the HMMER package is a high-quality algorithm. Therefore, we combine the default HMMER M matrix in (3) and the Ms structural matrix in (4) element-wise, as shown in (5).

$$\mathbf{M_{s}^{'}}=\left(\begin{array}{ccc}{{w_{11}m_{11}}}&{{\ldots}}&{{w_{1L}m_{1L}}}\\ {{\vdots}}&{{}}&{{\vdots}}\\ {{w_{N1}m_{N1}}}&{{\ldots}}&{{w_{NL}m_{NL}}}\end{array}\right)\tag{5}$$

However, introducing weights affects the computation of the observed frequencies. More precisely, the observed frequency c_j(σ) shown in (1) is now found through equation (6), where s_ij = w_ij m_ij is the structural weight of residue σ, according to the M′_s matrix.

$$c_{j}(\sigma)=\sum_{i}^{N}f(\sigma)\quad\mbox{where}\quad f(\sigma)=\left\{\begin{array}{ll}s_{ij},&\mbox{if }\sigma\mbox{ is the amino-acid in position }ij\\ 0,&\mbox{otherwise}\end{array}\right.\tag{6}$$

In the same way, we apply equation (7) to determine the c_kl shown in (2). If the k and l states are either M or I states, c_kl can be calculated through the arithmetic mean of s_ik and s_il. If at least one state is a D state, c_kl is either s_ik, if l ∈ {D}, or s_il, if k ∈ {D}. Last, if both are D states, c_kl is 1.

$$c_{kl}=\sum_{i}^{N}f_{kl}\quad\mbox{where}\quad f_{kl}=\left\{\begin{array}{ll}\frac{s_{ik}+s_{il}}{2},&\mbox{if }k,l\in\{M,I\}\\ s_{ik},&\mbox{if }l\in\{D\}\mbox{ and }k\notin\{D\}\\ s_{il},&\mbox{if }k\in\{D\}\mbox{ and }l\notin\{D\}\\ 1,&\mbox{if }k,l\in\{D\}\end{array}\right.\tag{7}$$

## 2.3 The Ms Structural Weight Matrices

As explained above, our algorithm considers a number of different sources of structural information. Next, we describe how this information was obtained and used to build the Ms matrix.

2.3.1 Secondary structural elements

Secondary structure is often conserved among homologue proteins.
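Equation (6) amounts to a weighted residue count per alignment column. A minimal sketch, assuming a toy column and invented weights (`weighted_counts` is our name, not from the paper):

```python
# Sketch of the weighted observed frequency of Eq. (6): the count of residue
# sigma in column j is the sum of the structural weights s_ij = w_ij * m_ij of
# the sequences carrying sigma at that position. Toy data only.

def weighted_counts(column, weights):
    """c_j(sigma) for one alignment column, given per-sequence weights s_ij."""
    counts = {}
    for residue, s in zip(column, weights):
        if residue != "-":                    # skip gap characters
            counts[residue] = counts.get(residue, 0.0) + s
    return counts

# Column j of a toy alignment of 4 sequences, with structural weights s_ij.
column  = ["A", "A", "G", "A"]
weights = [2.0, 1.0, 4.0, 1.0]     # e.g. w_ij * m_ij entries from Eq. (5)
print(weighted_counts(column, weights))   # {'A': 4.0, 'G': 4.0}
```

Note how the single structurally weighted G counts as much as three A's, which is exactly the effect the structural weights are meant to achieve.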
Indeed, *motifs* (Branden *et al*., 1991), consensus sequences in homologue proteins, usually include a combination of well-conserved secondary structure elements (Chakrabarti *et al*., 2004). In order to build an Ms matrix based on secondary structure elements, we need to identify secondary structure elements in the original sequences. This is possible because we assume we have full structural data for the *training* sequences. In this work, we chose to utilize the SSTRUCT program, part of the widely used JOY package (Mizuguchi *et al*., 1998), to extract secondary structure elements from the PDB files. SSTRUCT output is a character sequence, such that the characters {L=loop, H=helix, C=sheet} match a secondary structure element against a residue, as shown in figure 2. Following Deane's work on the relative frequency of conserved regions (Deane *et al*., 2003), we mapped each SSTRUCT element as follows: L → 1, H → 2, and C → 4. Our mapping thus favours conservation in sheets, and gives default weight to loops. Although the active site of proteins can be found in loops, these regions often contain *indel* segments. Figure 2 shows an example of structural weight attributions for proteins in a partial alignment.

2.3.2 Solvent Inaccessibility

The hydrophobic interactions of nonpolar side chains in amino-acids are believed to contribute significantly to the stability of the tertiary structures of proteins. Hydrophobic amino-acids tend to cluster together, not as a result of attraction, but as a result of their repulsion by the hydrogen-bond water network in which the protein is dissolved. Therefore, these amino-acids will preferentially be located away from the surface of the molecule. Since they form the core of the protein, they tend to be more conserved and are thus more useful for identifying remote evolutionary relationships. We have utilized the PSA (Lee *et al*., 1971) program to provide solvent inaccessibility information. PSA is part of the JOY package.
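The per-residue weighting of Sect. 2.3.1 above can be sketched in a few lines; the SSTRUCT string is invented, and treating unknown characters as loops (weight 1) is our assumption:

```python
# Sketch of the secondary-structure weighting of Sect. 2.3.1: each SSTRUCT
# character is mapped to a structural weight (L -> 1, H -> 2, C -> 4), giving
# one row of the Ms matrix. The example annotation string is invented.

SS_WEIGHT = {"L": 1, "H": 2, "C": 4}

def ss_row(sstruct_line):
    """Structural weights for one sequence, from its SSTRUCT annotation."""
    return [SS_WEIGHT.get(ch, 1) for ch in sstruct_line]  # default: loop weight

print(ss_row("LLHHHHCCL"))   # [1, 1, 2, 2, 2, 2, 4, 4, 1]
```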
The Ms matrix was built giving weight 3 to inaccessible residues and weight 1 to the others. The weights are based on (Chakrabarti *et al*., 2004), which demonstrated empirically that inaccessible amino-acids are three times more conserved than accessible amino-acids. The Ms matrix represents the structural weights that were used to build the model *pHMMAcc*, as shown in figure 1.

2.3.3 Packing density

The tertiary structure of proteins stems from a very large number of atomic interactions. In regions where the interactions are stronger, residues tend to be packed together. It is well known that densely packed regions tend to be preserved, and hence that amino-acids belonging to those regions are usually more conserved than other amino-acids. TJ Ooi created a measure, called the Ooi number (Nishikawa *et al*., 1986), that estimates the amino-acid packing density. Essentially, the Ooi number counts, for a residue, the number of neighboring C-α atoms within a radius of 14 Å of the given residue's own C-α. Although crude, this measure does give a good impression of which parts of the structure are buried and which are exposed on the surface. We again use the JOY package to obtain the Ooi number and estimate packing density. Figure 3 shows a stretch of JOY output, in which the numbers represent the Ooi measure for the Dehaloperoxidase protein in the Globins family (16wc PDB code). We used these numbers to build the structural weight matrix Ms. The structural weights were then used to build the model *pHMMOoi*, as shown in figure 1.

Fig. 3. Ooi measure for the Dehaloperoxidase protein of the Globins family (16wc PDB code); each number represents the number of neighbor amino-acids inside a radius of 14 Å.
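The Ooi measure itself can be approximated in a few lines of Python; the coordinates below are invented, and a real run would read C-α positions from a PDB file rather than hard-code them:

```python
# Rough sketch of the Ooi number (Sect. 2.3.3): for each residue, count the
# other C-alpha atoms within 14 Angstrom of its own C-alpha.

import math

def ooi_numbers(ca_coords, radius=14.0):
    """Ooi number for each residue, from its C-alpha coordinates."""
    n = len(ca_coords)
    counts = []
    for i in range(n):
        c = sum(1 for j in range(n)
                if j != i and math.dist(ca_coords[i], ca_coords[j]) <= radius)
        counts.append(c)
    return counts

# Three invented C-alpha positions: the first two are close, the third is far.
coords = [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (30.0, 0.0, 0.0)]
print(ooi_numbers(coords))   # [1, 1, 0]
```

The quadratic all-pairs scan is fine for single protein chains; a spatial index would only matter at much larger scales.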
2.3.4 Homologous Core Structure

Structural similarity among proteins can provide valuable insights into their functionality. One way to find structural similarities is through three-dimensional alignment of proteins, also called *structural alignment*. The goal is to align two or more proteins by trying to overlap the three-dimensional coordinates of their atoms. When multiple homologue proteins are structurally aligned, we tend to observe that there is a subset of coordinates whose spatial locations are better conserved across the structural alignment. This subset is called the homologous core structure (HCS) (Matsuo *et al*., 1999). According to the results reported by Gerstein *et al*. (1995), the HCS can be utilized to detect homologue proteins. Our goal was to estimate the HCS of a set of proteins. As a first approximation, we propose a method to extract it from the structural alignment by calculating how much aligned residues from different proteins tend to be close together. Following MAMMOTH, we represent residues through the coordinates of their C-α atoms. In other words, we assume that closeness between C-α atoms will approximate overlapping among amino-acids. To find out how close together amino-acids are, we utilize the Euclidean distance measure, as shown in equation (8). It represents the shortest distance between two points in space.

$$de_{a,b}=\sqrt{(x_{a}-x_{b})^{2}+(y_{a}-y_{b})^{2}+(z_{a}-z_{b})^{2}}\tag{8}$$

The degree of overlap between aligned residues in the structural alignment was calculated through the relative distance di_j, equation (9). This distance is the average distance between the amino-acid in position ij and the other amino-acids in column j of the alignment.

$$di_{j}=\frac{\sum_{b\neq i}de_{(i,j),(b,j)}}{n-1}\tag{9}$$

Finally, the relative distance was normalized according to (10), and it was used to determine the degree of overlap of each residue.
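Equations (8)-(9) can be sketched as follows, reading di_j as the average distance from residue (i, j) to the other residues of column j; the coordinates and function name are invented:

```python
# Sketch of Eqs. (8)-(9): Euclidean distance between aligned C-alpha atoms,
# and the relative distance di_j of residue (i, j) as its average distance to
# the other residues of the same alignment column.

import math

def relative_distance(col_coords, i):
    """Average distance from residue i's C-alpha to the others in the column."""
    others = [c for b, c in enumerate(col_coords) if b != i]
    return sum(math.dist(col_coords[i], c) for c in others) / len(others)

# One alignment column of 3 structurally aligned proteins (invented coords).
col = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
print(round(relative_distance(col, 0), 3))   # 1.5
```

A small di_j means the residue overlaps well with its column and is likely part of the HCS; the normalization of (10) then turns it into a large structural weight.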
These measures were normalized using equation (10), where d_min is the minimal distance and O_max_i is the maximal Ooi measure for protein i.

$$m_{ij}=\frac{d_{min}*O_{max_{i}}}{di_{j}}\tag{10}$$

After this step, we built the Ms matrix, where each m_ij matrix element corresponds to the relative distance of amino-acid ij in the structural alignment. This matrix represents the structural weights that were used to build the model *pHMM3D*, shown in figure 1.

## 2.4 Library Of Structural Models

In a second step, we join the models built from these matrices to form a library of structural models, aiming at building a single model to represent the structural patterns under different aspects. We used the hmmpfam HMMER tool to combine the models together. Libraries of models have been used in a number of studies, such as (Bateman *et al*., 2004; Haft *et al*., 2003; Gough *et al*., 2001), and they are known to achieve better results than those achieved by single models.

## 2.5 Test Procedure

The main concern of our study is to build pHMMs that can be helpful in remote homology detection. Therefore, our experiments considered proteins with identity below 30%. To do so, we used the SCOP database (Andreeva *et al*., 2004), and more specifically ASTRAL SCOP version 1.67 PDB40 (with 6600 protein sequences). ASTRAL SCOP is particularly interesting for our study because it describes structural and evolutionary relationships among proteins, such that none of the sequences in ASTRAL SCOP present > 40% sequence identity. Thus, it is an excellent dataset to evaluate the performance of remote homology detection methods and has been widely used to reach this goal (Espadaler *et al*., 2005; Wistrand *et al*., 2005; Hou *et al*., 2004; Alexandrov *et al*., 2004). SCOP classifies all protein domains of known structure into a hierarchy with four levels: class, fold, super family and family.
In our study, we work at the super family level, which gathers families in such a way that a common evolutionary origin is not obvious from sequence identity, but probable from an analysis of structure and from functional features. We believe that this level better represents remote homologs. Moreover, we used cross-validation (Mitchell, 1997) to compare the different approaches. First, we divided the SCOP database by super family level. Next, from ASTRAL PDB40, we chose those super families containing at least three families and at least 20 sequences. We eventually tested 39 super families, as listed in Table 1. This whittled down the number of sequences used for model building to 1137. Third, we implemented leave-one-family-out cross-validation. For any super family x having n families, we built n profiles so that each profile P was built from the sequences in the remaining n − 1 families. Thus, the sequences of these n − 1 families form the training set for profile P. The test set for profile P will be the remaining sequences (test positives) plus all other database sequences (test negatives).

Table 1. Superfamily SCOP-Ids

| a.1.1. | a.138.1. | a.25.1. | a.26.1. | a.3.1. | a.39.1. | a.4.1. | b.121.4. |
|----------|------------|-----------|-----------|----------|-----------|----------|------------|
| b.18.1. | b.29.1. | b.36.1. | b.47.1. | b.55.1. | b.60.1. | b.6.1. | b.71.1. |
| b.82.1. | c.1.10. | c.23.1. | c.26.1. | c.36.1. | c.52.1. | c.55.1. | c.55.3. |
| c.67.1. | d.108.1. | d.14.1. | d.144.1. | d.15.1. | d.153.1. | d.169.1. | d.3.1. |
| d.58.7. | d.92.1. | g.3.11. | g.3.6. | g.3.7. | g.37.1. | g.39.1. | |

SCOP super families used in our experiments. We only considered super families with at least 20 proteins and three or more families.

In order to assess HMMER-STRUCT performance, we used the HMMER package. We did not compare with the SAM (Hughey *et al*., 1996) package.
First, because our goal was to evaluate whether structural properties can improve pHMMs, not to compare the two packages, and second, because a related previous study on the same dataset actually showed HMMER outperforming SAM (Bernardes *et al*., 2007). The same study also indicated better results in the "twilight zone" using structural alignment tools, such as MAMMOTH-mult and 3DCOFFEE. We used MAMMOTH in this study. Results were graphically analyzed by building ROC and Precision/Recall curves. ROC curves are a common measure of performance that is widely used in bioinformatics applications. They are based on the relation between false positives (non-homologue proteins) and true positives (homologue proteins), and are obtained by varying a parameter that affects these quantities. We further present Precision/Recall curves, as they give a good perspective on true positive, false positive and false negative hits. In both cases, the bigger the area under the curve (AUC), the better the analyzed tool performs. In both cases we used the minimal *e-value* required to accept a match as the parameter used to build the curves. We ranged e-values between 10^−50 and 10. Finally, we used the paired two-tailed t-test to assess significance, and assumed that results with p ≤ 0.05 (i.e. 95% confidence) are significant.

## 3 Results

As a first step, we built a model for each structural property and evaluated it according to the methodology described in the Methods section. The ROC curves are presented in figure 4 and the Precision/Recall curves in figure 5. Both figures show all models, that is, *pHMM2D* (secondary structural model), *pHMMOi* (Ooi measure model), *pHMMAcc* (inaccessibility model) and *pHMM3D* (three-dimensional structure model), outperforming the HMMER model.
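The e-value sweep used to build the curves can be sketched as follows; the hits, labels and thresholds below are toy data, not our results:

```python
# Sketch of the evaluation of Sect. 2.5: sweep the e-value cutoff, record
# (false positive rate, true positive rate) points, and approximate the area
# under the ROC curve with the trapezoidal rule.

def roc_points(evalues, labels, thresholds):
    """(FPR, TPR) for each e-value cutoff; a hit matches if e-value <= cutoff."""
    pos = sum(labels)
    neg = len(labels) - pos
    pts = []
    for t in thresholds:
        tp = sum(1 for e, l in zip(evalues, labels) if e <= t and l)
        fp = sum(1 for e, l in zip(evalues, labels) if e <= t and not l)
        pts.append((fp / neg, tp / pos))
    return sorted(pts)

def auc(points):
    """Trapezoidal area under a sorted (FPR, TPR) curve."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

evalues = [1e-40, 1e-12, 1e-6, 1e-3, 1.0, 5.0]   # toy hit list
labels  = [1,     1,     1,    0,    1,   0]      # 1 = true homologue
ts      = [10.0 ** k for k in range(-50, 2)]      # cutoffs from 1e-50 to 10
pts = [(0.0, 0.0)] + roc_points(evalues, labels, ts) + [(1.0, 1.0)]
print(round(auc(sorted(set(pts))), 3))            # 0.875
```

A perfect ranking would give AUC 1.0; the one misranked true homologue in the toy data costs the curve part of its area.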
Table 3. HMMER-STRUCT paired t-test

| Model | p-value |
|---------|---------|
| HMMER | 10^−4 |
| pHMMAcc | 10^−4 |
| pHMMOi | 10^−3 |
| pHMM2D | 10^−3 |
| pHMM3D | 10^−4 |

Paired two-tailed t-test when comparing the performance of each HMMER-STRUCT component with the combined model.

## 4 Discussion

The accuracy of homology detection methods is essential for the problem of inferring the function of unknown-function proteins. However, improving accuracy becomes hard when similarity between sequences is low. We proposed a method to improve pHMM sensitivity by adding structural properties in the model building stage. We showed that the pHMMs trained according to this method are more sensitive than pHMMs trained from multiple sequence alignments, even if the alignment itself relied on structural properties. Our experiments demonstrated the best performance for *pHMM2D*, which used secondary structural properties, and for *pHMMOi*, which used residue packing density. Both pHMMs present similar performance. We believe that the good results obtained with the *pHMMOi* model can be attributed to the fact that tight packing is important for protein stability, and follow well-known results indicating that amino-acids located in the protein core are more conserved than amino-acids located in other sites (Privalov, 2000). In the same way, the *pHMM2D* model achieves good performance, as secondary structure elements are responsible for maintaining the form of homologue proteins. These elements form motifs and domains, which are related to protein function. Conserved sites may point to functionally and structurally important regions. These observations may explain the higher performance of models based on residue packing and on secondary structural properties. The *pHMMAcc* models, based on amino-acid inaccessibility, and the *pHMM3D* models, based on three-dimensional coordinates, did not perform as well. The *pHMMAcc* models did not achieve statistically significant results when compared with HMMER.
On the other hand, we observe that the inaccessibility property can be explained by hydrophobic effects, as it is the amino-acids with hydrophobic side-chains that move toward the protein core, forming tightly packed regions. Hydrophobicity was therefore already represented in the *pHMMOi* model, which achieved good performance. Our results suggest that the difference between the models stems from the packing information used by *pHMMOi* being more accurate and precise than the information used when building *pHMMAcc*. Indeed, we believe the inaccessibility property is already represented appropriately by the *pHMMOi* model, since amino-acids with high packing density are already inaccessible. Therefore, *pHMMOi* outperformed *pHMMAcc*, as *pHMMOi* carries more information than *pHMMAcc*. The chief contribution of our method was achieved when all the models work together. The combined models performed significantly better than any single model. We believe that this results from the fact that each trained pHMM represents a different structural property. Therefore, combining the models increases sensitivity by exploiting the different structural properties. Our method shows that structural information can be added during the training phase of a pHMM to improve sensitivity, without major changes to the usage of the pHMM methodology, and can be applied to recently discovered proteins for which there is little structural information.

## 5 Conclusion

The increasing number of studies involving pHMMs and the use of structural information has been quite remarkable (Hou *et al*., 2004; Alexandrov *et al*., 2004; Bystroff *et al*., 2000). Most of these approaches build structural models based on three-dimensional coordinates. In contrast, we present a novel methodology to train pHMMs based on structural alignment and other structural properties using a set of homologue protein sequences. Our method builds five models from an aligned homologue sequence set.
Each model represents a different structural property, and the union of the models represents the structural context of the aligned proteins. The properties used were primary, secondary and tertiary structure, accessibility and residue packing. Note that previous attempts have already used secondary and tertiary structural properties to train pHMMs, though in quite a different way; accessibility and residue packing, however, were used for the first time in pHMM training, with good results in the latter case. In order to build each model, we developed a novel sequence-weighting algorithm based on structural weights attributed to each amino-acid. Traditional weighting algorithms give the same weight to every residue in a protein. Instead, we propose a method that gives a different weight to each amino-acid in a protein, according to structural properties that suggest it may be in a conserved region. Our results relied on prior work (Chakrabarti *et al*., 2004; Deane *et al*., 2003; Nishikawa *et al*., 1986) that suggested interesting properties and estimated their weight. Nowadays, the most popular approach to discovering the function of a newly found protein is through sequence similarity search. In fact, it is well known that structure is more conserved than sequence, and thus structural similarity can suggest functional similarity. On the other hand, structural data is sparse and usually not available for proteins with unknown function. Therefore, it is very important that methods that use structural properties to build models do not need to rely on structural information for a new protein. Our method makes use of structural properties only at the model building stage, not at scoring. Our results show that the use of structural properties can improve the sensitivity of remote homology methods. Moreover, the combination of different models (one for each property) outperforms the use of individual properties.
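The per-residue weighting idea above can be sketched in a few lines. The scoring and normalization below are illustrative assumptions, not the paper's exact algorithm: each residue receives a raw structural-property score (here a made-up packing density) which is normalized into a weight, in contrast to traditional schemes that assign one weight per sequence.

```python
# Hedged sketch of per-residue structural weighting. The packing-density
# values are HYPOTHETICAL; the normalization rule is an assumption for
# illustration, not the paper's published algorithm.

def residue_weights(property_scores):
    """Normalize raw per-residue structural scores into weights summing to 1."""
    total = sum(property_scores)
    return [s / total for s in property_scores]

# Hypothetical packing-density scores for a 6-residue fragment:
# higher score = more tightly packed = more likely conserved.
packing = [0.9, 0.4, 0.7, 0.2, 0.8, 0.5]
weights = residue_weights(packing)
print([round(w, 3) for w in weights])
```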
A number of future research directions present themselves. It will be interesting to include more models, such as one based on hydrogen-bond properties. It will also be interesting to apply our methodology to other remote homology tools, such as SAM (Hughey *et al*., 1996) and T-HMM (Qian *et al*., 2004). Ultimately, we believe that our work is a step towards the major challenge of finding the set of structural properties or features that precisely represent membership of a superfamily.
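The conclusion emphasizes that combining the five property-specific models outperforms any single one. This excerpt does not specify the combination rule, so the sketch below uses one simple assumed possibility: score a query against every model and keep the best-scoring hit.

```python
# Hedged sketch: combining per-property pHMM scores at query time.
# Taking the maximum bit score across the five models is an ASSUMED
# combination rule for illustration; the paper's actual rule is not
# given in this excerpt. The scores below are made up.

def combined_score(per_model_scores):
    """per_model_scores: dict mapping model name -> bit score for one query."""
    best_model = max(per_model_scores, key=per_model_scores.get)
    return best_model, per_model_scores[best_model]

scores = {"pHMM": 12.1, "pHMMAcc": 10.4, "pHMMOi": 15.3,
          "pHMM2D": 14.8, "pHMM3D": 9.7}   # hypothetical bit scores
model, score = combined_score(scores)
print(model, score)
```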
## Acknowledgement

We are grateful to CNPq for financial support.

## References

Alexandrov,V., Gerstein,M. (2004) Using 3D Hidden Markov Models that explicitly represent spatial coordinates to model and compare protein structures, *BMC Bioinformatics*, 5, 110.
Altschul,F., Gish,W., Miller,W., Myers,E., Lipman,D. (1990) A basic local alignment search tool, *Journal of Molecular Biology*, 215, 403-410.
Altschul,S., Madden,T., Schaffer,A., Zhang,J., Zhang,Z., Miller,W., Lipman,D. (2000) PSI-BLAST searches using hidden markov models of structural repeats: prediction of an unusual sliding DNA clamp and of beta-propellers in UV-damaged DNA-binding protein, *Nucleic Acids Research*, 28, 3570-3580.
Andreeva,A., Howorth,D., Brenner,S., Hubbard,T., Chothia,C., Murzin,A. (2004) SCOP database in 2004: refinements integrate structure and sequence family data, *Nucleic Acids Research*, 32, 226-229.
Attwood,T., Bradley,P., Flower,D., Gaulton,A., Maudling,N., Mitchell,A. (2005) Article title, *Bioinformatics*, 21, 3255-3263.
Bateman,A., Coin,L., Durbin,R., Finn,R., Hollich,V., Griffiths-Jones,S., Khanna,A., Marshall,M., Moxon,S., Sonnhammer,E., Studholme,D., Yeats,C., Eddy,S. (2004) The Pfam Protein Families Database, *Nucleic Acids Research*, 32, 138-141.
Mitchell,T. (1997) Machine Learning, *McGraw-Hill*.
Bernardes,J., Dávila,A., Costa,V., Zaverucha,G. (2007) Improving model construction of profile HMMs for remote homology detection through structural alignment, *BMC Bioinformatics*, 8, 435:1-12.
Branden,C., Tooze,J. (1991) Introduction to Protein Structure, chapter Motifs of protein structure, *Garland Publishing*, 11-29.
Brown,M., Hughey,R., Krogh,A., Mian,I., Sjölander,K., Haussler,D. (1993) Using Dirichlet mixture priors to derive hidden markov models for protein families, *Proc. of First Int. Conf. on Intelligent Systems for Molecular Biology*, 1, 47-55.
Bystroff,C., Baker,D.
(2000) HMMSTR: A hidden Markov model for local sequence-structure correlation in proteins, *Journal of Molecular Biology*, 301, 173-190.
Chakrabarti,S., Sowdhamini,R. (2004) Regions of minimal structural variation among members of protein domain superfamilies: application to remote homology detection and modelling using distant relationships, *FEBS*, 569, 31-36.
Deane,C., Pedersen,J., Lunter,G. (2003) Insertions and deletions in protein alignment, unpublished.
Eddy,S. (1996) Hidden markov models, *Current Opinion in Structural Biology*, 6, 361-365.
Eddy,S. (1998) Profile hidden Markov models, *Bioinformatics*, 14, 755-763.
Espadaler,J., Aragues,R., Eswar,N., Marti-Renom,M., Querol,E., Aviles,F., Sali,A., Oliva,B. (2005) Detecting remotely related proteins by their interactions and sequence similarity, *National Academy of Sciences*, 102, 7151-7156.
Gerstein,M., Sonnhammer,E., Chothia,C. (1994) Volume changes in protein evolution, *Journal of Molecular Biology*, 236, 1067-1078.
Gerstein,M., Altman,R. (1995) Average core structures and variability measures for protein families: application to the immunoglobulins, *Journal of Molecular Biology*, 251, 165-175.
Gough,J., Karplus,K., Hughey,R., Chothia,C. (2001) Assignment of homology to genome sequences using a library of hidden Markov models that represent all proteins of known structure, *Journal of Molecular Biology*, 313, 903-919.
Goyon,F., Tuffry,P. (2004) SA-Search: A web tool for protein structure mining based on a structural alphabet, *Nucleic Acids Research*, 32, 545-548.
Gribskov,M., McLachlan,A., Eisenberg,D. (1987) Profile analysis: detection of distantly related proteins, *National Academy of Sciences*, 84, 4355-4358.
Haft,D., Selengut,J., White,O. (2003) The TIGRFAMs database of protein families, *Nucleic Acids Research*, 31, 371-373.
Helen,M., Westbrook,J., Feng,Z., Gilliland,G., Bhat,T., Weissig,H., Shindyalov,I., Bourne,P. (2000) The Protein Data Bank, *Nucleic Acids Research*, 28, 235-242.
Hou,Y., Hsu,W., Lee,M., Bystroff,C. (2004) Remote homolog detection using local sequence-structure correlations, *Journal of Molecular Biology*, 340, 385-395.
Hughey,R., Krogh,A. (1996) Hidden Markov models for sequence analysis: extension and analysis of the basic method, *Computer Applications in the Biosciences*, 12, 95-107.
Karplus,K., Karchin,R., Shackelford,G., Hughey,R. (2005) Calibrating E-values for hidden Markov models using reverse-sequence null models, *Bioinformatics*, 21, 4107-4115.
Krogh,A., Brown,M., Mian,I., Sjolander,K., Haussler,D. (1994) Hidden markov models in computational biology: applications to protein modeling, *Journal of Molecular Biology*, 235, 1501-1531.
Lee,B., Richards,F. (1971) The interpretation of protein structure: estimation of static accessibility, *Journal of Molecular Biology*, 14, 379-400.
Matsuo,Y., Bryant,S. (1999) Identification of homolog core structures, *Proteins*, 35, 70-79.
Mizuguchi,K., Deane,C., Johnson,M., Blundell,T., Overington,J. (1998) Article title, *Journal Name*, 14, 617-623.
Nishikawa,K., Ooi,T. (1986) Radial locations of amino acid residues in a globular protein: correlation with the sequence, *Journal of Biochemistry*, 100, 1043-1047.
Park,J., Karplus,K., Barrett,C., Hughey,R., Haussler,D., Hubbard,T., Chothia,C. (1998) Sequence comparisons using multiple sequences detect three times as many remote homologues as pairwise methods, *Journal of Molecular Biology*, 284, 1201-1210.
Pearson,W. (1985) Rapid and sensitive sequence comparisons with FASTP and FASTA, *Methods Enzymol*, 183, 63-98.
Privalov,P. (1996) Intermediate states in protein folding, *Journal Name*, 258, 707-725.
Qian,B., Goldstein,R. (2004) Performance of an iterated T-HMM for homology detection, *Bioinformatics*, 20, 2175-2180.
Wistrand,M., Sonnhammer,E. (2005) Improved profile HMM performance by assessment of critical algorithmic features in SAM and HMMER, *BMC Bioinformatics*, 6, 1-10.
Schölkopf,B., Burges,C., Smola,A. (1999) Advances in kernel methods: support vector learning, *MIT Press*.
## Bayesian Approach To Rough Set

Tshilidzi Marwala and Bodie Crossingham
University of the Witwatersrand
Private Bag x3, Wits, 2050, South Africa
e-mail: t.marwala@ee.wits.ac.za

This paper proposes an approach to training rough set models using a Bayesian framework trained with the Markov Chain Monte Carlo (MCMC) method. The prior probabilities are constructed from the prior knowledge that good rough set models have fewer rules. Markov Chain Monte Carlo sampling is conducted by sampling in the rough set granule space, with the Metropolis algorithm used as the acceptance criterion. The proposed method is tested on estimating the risk of HIV given demographic data. The results obtained show that the proposed approach is able to achieve an average accuracy of 58%, with the accuracy varying up to 66%. In addition, the Bayesian rough set gives the probabilities of the estimated HIV status as well as linguistic rules describing how the demographic parameters drive the risk of HIV.

## Introduction

Rough set theory (RST) was introduced by Pawlak (1991) and is a mathematical tool which deals with vagueness and uncertainty. It is of fundamental importance to artificial intelligence (AI) and cognitive science and is highly applicable to the tasks of machine
differences in posterior probabilities between two states that are in transition (Metropolis et al., 1953). This algorithm ensures that states with high probability form the majority of the Markov chain and is mathematically represented as:

$$\text{If}\ P(M_{n+1}\mid D)>P(M_{n}\mid D)\ \text{then accept}\ M_{n+1},\qquad(14)$$

$$\text{else accept}\ M_{n+1}\ \text{if}\ \frac{P(M_{n+1}\mid D)}{P(M_{n}\mid D)}\geq\xi,\ \text{where}\ \xi\in[0,1],\qquad(15)$$

else reject and randomly generate another model $M_{n+1}$.

## Experimental Investigation: Modelling Of HIV

The proposed method is applied to create a model that uses demographic characteristics to estimate the risk of HIV. In the last 20 years, over 60 million people have been infected with HIV (human immunodeficiency virus), and of those cases, 95% are in developing countries (Lasry et al., 2007). HIV has been identified as the cause of AIDS. Early studies on HIV/AIDS focused on individual characteristics and behaviors in determining HIV risk, and Fee and Krieger (1993) refer to this as biomedical individualism. It has since been determined that the study of the distribution of health outcomes and their social determinants is of more importance, and this is referred to as social epidemiology (Poundstone et al., 2004). This study uses individual characteristics as well as social and demographic factors in determining the risk of HIV, using rough set models formulated in a Bayesian framework and trained using a Monte Carlo method. Previously, computational intelligence techniques have been used extensively to analyze HIV: Leke et al. (2006, 2007) used autoencoder network classifiers, inverse neural networks, as well as conventional feed-forward neural networks to estimate HIV risk from demographic factors. Although good accuracy was achieved when using the
autoencoder method, it is disadvantageous due to its "black box" nature, which means it is not transparent. To improve transparency, Bayesian rough set theory (RST) is proposed to forecast and interpret the causal effects of HIV. Rough sets have been used in various biomedical and engineering applications (Ohrn, 1999; Peña et al., 1999; Tay and Shen, 2003; Golan and Ziarko, 1995). However, in most applications RST is used primarily for prediction, and this paper proposes Bayesian rough set models for HIV prediction. Rowland et al. (1998) compared the use of RST and neural networks for the prediction of ambulation following spinal cord injury, and although the neural network method produced more accurate results, its "black box" nature makes it impractical for rule extraction problems. Poundstone et al. (2004) related demographic properties to the spread of HIV. In their work they justified the use of demographic properties to create a model to predict HIV from a given database, as is done in this study. In order to achieve good accuracy, the rough set partitions, or discretisation process, need to be well chosen, and this is done by sampling through the granulization space and accepting the samples with high posterior probability using the Metropolis algorithm (Metropolis et al., 1953). The data set used in this paper was obtained from the South African antenatal sero-prevalence survey of 2001 (Department of Health, 2001). The data was obtained through questionnaires completed by pregnant women attending selected public clinics, and the survey was conducted concurrently across all nine provinces in South Africa. The six demographic variables considered are: race, age of mother, education, gravidity, parity, and age of father, with the outcome or decision being either HIV positive or negative. The HIV status is the decision, represented in binary form as either 0 or 1, with 0 representing HIV negative and 1 representing HIV positive. The input data was discretised into four partitions. This
number was chosen as it gave a good balance between computational efficiency and accuracy. The parents' ages are given and discretised accordingly; education is given as an integer, where 13 is the highest level of education, indicating tertiary education. Gravidity is defined as the number of times that a woman has been pregnant, whereas parity is defined as the number of times that she has given birth. It must be noted that multiple births during a pregnancy are indicated with a parity of one. Gravidity and parity also provide a good indication of the reproductive health of pregnant women in South Africa. The rough set models were trained by sampling in the input space and accepting or rejecting samples using the Metropolis algorithm (Metropolis et al., 1953). The sample input space and the

| LowA | MedA | MedB | HighB | MedC | … | HighD | LowE | HighE | Accuracy | Rules |
|------|------|------|-------|------|---|-------|------|-------|----------|-------|
| 6.14 | 27.03 | 5.86 | 9.31 | 1.63 | … | 2.56 | 2.38 | 10.85 | 55.50 | 191.00 |
| 11.44 | 15.77 | 9.21 | 10.19 | 2.76 | … | 5.71 | 0.59 | 32.67 | 59.87 | 299.00 |
| 8.56 | 24.08 | 7.01 | 8.10 | 4.62 | … | 3.65 | 7.83 | 28.55 | 56.44 | 202.00 |
| 1.78 | 3.76 | 0.00 | 1.71 | 0.27 | … | 3.39 | 6.54 | 19.84 | 60.77 | 130.00 |
| 4.12 | 6.33 | 1.25 | 6.86 | 1.77 | … | 4.15 | 10.37 | 28.81 | 57.52 | 226.00 |
| 7.83 | 20.49 | 1.45 | 4.99 | 1.13 | … | 3.36 | 5.00 | 23.70 | 62.54 | 283.00 |
| 2.68 | 25.31 | 4.98 | 6.24 | 0.32 | … | 3.72 | 0.79 | 14.97 | 56.37 | 204.00 |

from a total of 13087. The input data was therefore the demographic characteristics explained earlier and the output was the plausibility of HIV, with 1 representing 100%
plausibility that a person is HIV positive and -1 indicating 100% plausibility of HIV negative. When training the rough set models using Markov Chain Monte Carlo, 500 samples were accepted and retained, meaning 500 sets of rules, where each set contained between 50 and 550 rules, with an average of 222 rules, as can be seen in Figure 1. 500 samples were retained because by then the simulation had converged to a stationary distribution. This figure must be interpreted in light of the fact that, in calculating the posterior probability, we used the knowledge that fewer rules are more desirable than many. Therefore, the Bayesian rough set framework is able to select the number of rules in addition to the partition sizes.

[Figure 1: distribution of the number of rules across the 500 retained samples]
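The training loop described above can be sketched as follows. The `evaluate` function is a hypothetical stand-in for building a rough set model on a candidate granulization and measuring its accuracy A and number of rules N; it is not the paper's actual model-building code, and the hyperparameter value is an assumption. The accept/reject step follows the Metropolis rule (equations 14-15) applied to the rule-penalized posterior of equation 11.

```python
# Hedged sketch of MCMC sampling over granulizations with a
# fewer-rules prior. `evaluate` is a PLACEHOLDER for real rough set
# model building; LAMBDA's value is assumed for illustration.
import math
import random

random.seed(0)
LAMBDA = 0.001   # assumed value for the fewer-rules hyperparameter

def evaluate(granules):
    """Stand-in: return (accuracy A, number of rules N) for a granulization."""
    acc = 0.5 + 0.02 * sum(granules) / len(granules)        # placeholder accuracy
    n_rules = 40 * max(granules) + random.randint(0, 50)    # placeholder rule count
    return acc, n_rules

def log_posterior(granules):
    acc, n_rules = evaluate(granules)
    return acc - 1 - LAMBDA * n_rules   # log of equation (11), up to log z

current = [4] * 6                        # four partitions per demographic variable
current_lp = log_posterior(current)
retained = []
for _ in range(2000):
    candidate = [max(2, g + random.choice((-1, 0, 1))) for g in current]
    cand_lp = log_posterior(candidate)
    # Metropolis rule: accept uphill moves outright, downhill moves
    # with probability equal to the posterior ratio (equations 14-15)
    if cand_lp >= current_lp or random.random() < math.exp(cand_lp - current_lp):
        current, current_lp = candidate, cand_lp
        retained.append(current)
print(len(retained), "accepted samples")
```

In the paper's experiment the analogous chain retained 500 samples after convergence; here the counts are whatever the placeholder posterior produces.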
Lower Approximation Rules

1. If Race = African and Mothers Age = 23 and Education = 4 and Gravidity = 2 and Parity = 1 and Fathers Age = 20 Then HIV = Most Probably Positive
2. If Race = Asian and Mothers Age = 30 and Education = 13 and Gravidity = 1 and Parity = 1 and Fathers Age = 33 Then HIV = Most Probably Negative

Upper Approximation Rules

1. If Race = Coloured and Mothers Age = 33 and Education = 7 and Gravidity = 1 and Parity = 1 and Fathers Age = 30 Then HIV = Positive with plausibility = 0.33333
2. If Race = White and Mothers Age = 20 and Education = 5 and Gravidity = 2 and Parity = 1 and Fathers Age = 20 Then HIV = Positive with plausibility = 0.06666

## Conclusion

Rough sets were formulated within a Bayesian framework and trained using the Markov Chain Monte Carlo method. The Bayesian framework is found to offer probabilistic interpretations of rough sets. A balance between the transparency of the rough set model and the accuracy of HIV estimation is achieved, at a great deal of computational effort.

## References

1. Bishop, C.M., 2006. Pattern Recognition and Machine Learning. Springer, Berlin, Germany.
2. Deja, A., Peszek, P., 2003. Applying rough set theory to multi stage medical diagnosing. Fundamenta Informaticae, 54, 387–408.
3. Department of Health, 2001. National HIV and syphilis sero-prevalence survey of women attending public antenatal clinics in South Africa. http://www.info.gov.za/otherdocs/2002/hivsurvey01.pdf.
4. Fee, E., Krieger, N., 1993. Understanding AIDS: historical interpretations and the limits of biomedical individualism. American Journal of Public Health, 83, 1477–1486.
5. Goh, C., Law, R., 2003. Incorporating the rough set theory into travel demand analysis. Tourism Management, 24, 511-517.
6. Golan, R.H., Ziarko, W., 1995. A methodology for stock market analysis utilizing rough set theory. In Proceedings of Computational Intelligence for Financial Engineering, New York, USA, 32–40.
7. Greco, S., Matarazzo, B., Slowinski, R., 2006. Rough membership and Bayesian confirmation measures for parameterized rough sets. Proceedings of SPIE - The International Society for Optical Engineering, 6104, 314-324.
8. Greco, S., Pawlak, Z., Slowinski, R., 2004. Can Bayesian confirmation measures be useful for rough set decision rules? Engineering Applications of Artificial Intelligence, 17 (4), 345-361.
9. Inuiguchi, M., Miyajima, T., 2006. Rough set based rule induction from two decision tables. European Journal of Operational Research, (in press).
10. Lasry, G., Zaric, S., Carter, M.W., 2007. Multi-level resource allocation for HIV prevention: A model for developing countries. European Journal of Operational Research, 180, 786-799.
11. Leke, B.B., 2007. Computational Intelligence for Modelling HIV. Ph.D. Thesis, School of Electrical & Information Engineering, University of the Witwatersrand, South Africa.
12. Leke, B.B., Marwala, T., Tettey, T., 2006. Autoencoder networks for HIV classification. Current Science, 91, 1467–1473.
13. Leke, B.B., Marwala, T., Tettey, T., 2007. Using inverse neural network for HIV adaptive control. International Journal of Computational Intelligence Research, 3, 11–15.
14. Leke, B.B., Marwala, T., Tim, T., Lagazio, M., 2006. Prediction of HIV status from demographic data using neural networks. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Taiwan, 2339-2344.
15. Marwala, T., 2007. Bayesian training of neural network using genetic programming. Pattern Recognition Letters, http://dx.doi.org/10.1016/j.patrec.2007.03.004 (in press).
16. Malve, S., Uzsoy, R., 2007. A genetic algorithm for minimizing maximum lateness on parallel identical batch processing machines with dynamic job arrivals and incompatible job families. Computers and Operations Research, 34, 3016-3028.
17. Metropolis, N., Rosenbluth, A.W., Rosenbluth, M.N., Teller, A.H., Teller, E., 1953. Equations of state calculations by fast computing machines. Journal of Chemical Physics, 21, 1087-1092.
18. Nishino, T., Nagamachi, M., Tanaka, H., 2006. Variable precision Bayesian rough set model and its application to human evaluation data. Proceedings of SPIE - The International Society for Optical Engineering, 6104, 294-303.
19. Ohrn, A., 1999. Discernibility and Rough Sets in Medicine: Tools and Applications. PhD Thesis, Department of Computer and Information Science, Norwegian University of Science and Technology.
20. Ohrn, A., Rowland, T., 2007. Rough sets: A knowledge discovery technique for multifactorial medical outcomes. American Journal of Physical Medicine and Rehabilitation (to appear).
21. Pawlak, Z., 1991. Rough Sets: Theoretical Aspects of Reasoning about Data. Kluwer Academic Publishers.
22. Peña, J., Létourneau, S., Famili, A., 1999. Application of rough set algorithms to prediction of aircraft component failure. In Proceedings of the Third International Symposium on Intelligent Data Analysis, Amsterdam.
23. Poundstone, K.E., Strathdee, S.A., Celentano, D.D., 2004. The social epidemiology of human immunodeficiency virus/acquired immunodeficiency syndrome. Epidemiologic Reviews, 26, 22–35.
24. Rowland, T., Ohno-Machado, L., Ohrn, A., 1998. Comparison of multiple prediction models for ambulation following spinal cord injury. Chute, 31, 528–532.
25. Slezak, D., Ziarko, W., 2005. The investigation of the Bayesian rough set model. International Journal of Approximate Reasoning, 40 (1-2), 81-91.
26. Tay, F.E.H., Shen, L., 2003. Fault diagnosis based on rough set theory. Engineering Applications of Artificial Intelligence, 16, 39-43.
learning and decision analysis. Rough sets are useful in the analysis of decisions in which there are inconsistencies. To cope with these inconsistencies, lower and upper approximations of decision classes are defined (Inuiguchi and Miyajima, 2006). Rough set theory is often viewed as competing with fuzzy set theory (FST), but it in fact complements it. One of the advantages of RST is that it does not require a priori knowledge about the data set, which is why it can capture relationships, such as those between the demographic variables and their respective HIV status, for which standard statistical methods are not sufficient. Greco et al. (2006) generalized the original idea of rough sets and introduced variable precision rough sets, based on the concepts of relative and absolute rough membership. The Bayesian framework is a tool that can be used to extend this absolute membership framework to a relative one. Nishino et al. (2006) proposed a rough set method for analyzing highly ambiguous human evaluation data, such as sensory and feeling data; it handles totally ambiguous and probabilistic human evaluation data using a probabilistic approximation based on the information gains of equivalence classes. Slezak and Ziarko (2005) proposed a rough set model concerned primarily with the algebraic properties of approximately defined sets and extended basic rough set theory to incorporate probabilistic information. This paper extends the rough set model to the probabilistic domain using a Bayesian framework, Markov Chain Monte Carlo simulation and the Metropolis algorithm. In order to achieve this, the rough set membership functions' granulizations are interpreted probabilistically. The proposed
27. Witlox, F., Tindemans, H., 2004. The application of rough set analysis in activity based modeling: Opportunities and constraints. Expert Systems with Applications, 27, 585-592.
Once the information table is obtained, the data is discretised into partitions as mentioned earlier. An information system can be understood as a pair Λ = (U, A), where U and A are finite, non-empty sets called the universe and the set of attributes, respectively (Deja and Peszek, 2003). With every attribute a ∈ A we associate a set V_a of its values, where V_a is called the value set of a:

$$a: U\rightarrow V_{a}\qquad(1)$$

Any subset B of A determines a binary relation I(B) on U, which is called an indiscernibility relation. The main concept of rough set theory is the indiscernibility relation (indiscernibility meaning indistinguishable from one another). Sets that are indiscernible are called elementary sets, and these are considered the building blocks of RST's knowledge of reality. A union of elementary sets is called a crisp set, while any other set is referred to as rough or vague. More formally, for a given information system Λ and any subset B ⊆ A, there is an associated equivalence relation I(B), called the B-indiscernibility relation, represented as:

$$(x,y)\in I(B)\ \text{iff}\ a(x)=a(y)\ \text{for every}\ a\in B\qquad(2)$$

RST offers a tool to deal with indiscernibility. It works as follows: for each concept/decision X, the greatest definable set contained in X and the least definable set containing X are computed. These two sets are called the lower and upper approximation, respectively. The sets of cases/objects with the same outcome variable are assembled together. This is done by looking at the "purity" of a particular object's attributes in relation to its outcome. In most cases it is not possible to define cases into crisp sets; in such instances lower and upper approximation sets are defined instead. The lower approximation is defined as the collection of cases whose equivalence classes are fully contained in the
set of cases we want to approximate (Ohrn and Rowland, 2006). The lower approximation of set X is denoted $\underline{B}X$ and is mathematically represented as:

$$\underline{B}X=\{x\in U : B(x)\subseteq X\}\qquad(3)$$

The upper approximation is defined as the collection of cases whose equivalence classes are at least partially contained in the set of cases we want to approximate. The upper approximation of set X is denoted $\overline{B}X$ and is mathematically represented as:

$$\overline{B}X=\{x\in U : B(x)\cap X\neq\emptyset\}\qquad(4)$$

It is through these lower and upper approximations that any rough set is defined. Lower and upper approximations are defined differently in the literature, but it follows that a set is crisp only when $\underline{B}X=\overline{B}X$. It must be noted that for most cases in RST, reducts are generated to enable us to discard functionally redundant information (Pawlak, 1991); in this paper the prior probability handles reducts.

## Rough Membership Function

The rough membership function $\eta_{A}^{X}: U\rightarrow[0,1]$, when applied to an object x, quantifies the degree of relative overlap between the set X and the indiscernibility set to which x belongs. It is a measure of the plausibility with which the object x belongs to the set X, and is defined as:

$$\eta_{A}^{X}(x)=\frac{|[x]_{B}\cap X|}{|[x]_{B}|}\qquad(5)$$

where $[x]_{B}$ is the elementary set containing x.
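The definitions above can be made concrete on a toy information table. The sketch below (with illustrative data, not the paper's survey records) computes the B-indiscernibility classes, the lower and upper approximations of equations 3 and 4, and the rough membership of equation 5:

```python
# Hedged sketch: elementary sets, lower/upper approximations, and rough
# membership on a TOY information table (illustrative data only).
from collections import defaultdict

# universe U: object id -> (attribute values, decision)
table = {
    1: ({"age": "low",  "educ": "high"}, "neg"),
    2: ({"age": "low",  "educ": "high"}, "pos"),  # indiscernible from object 1
    3: ({"age": "high", "educ": "low"},  "pos"),
    4: ({"age": "high", "educ": "low"},  "pos"),
    5: ({"age": "low",  "educ": "low"},  "neg"),
}
B = ("age", "educ")                                   # attribute subset B
X = {u for u, (_, d) in table.items() if d == "pos"}  # concept X to approximate

# elementary sets: objects with identical values on every attribute in B
classes = defaultdict(set)
for u, (attrs, _) in table.items():
    classes[tuple(attrs[a] for a in B)].add(u)

# equation (3): equivalence classes fully contained in X
lower = {u for c in classes.values() if c <= X for u in c}
# equation (4): equivalence classes that overlap X
upper = {u for c in classes.values() if c & X for u in c}

def membership(x):
    """Equation (5): |[x]_B intersect X| / |[x]_B| for object x."""
    cls = next(c for c in classes.values() if x in c)
    return len(cls & X) / len(cls)

print("lower:", sorted(lower), "upper:", sorted(upper))
print("membership of object 1:", membership(1))  # object 1 sits in a mixed class
```

Objects 3 and 4 form a pure positive class and land in the lower approximation; objects 1 and 2 are indiscernible but disagree on the decision, so they appear only in the upper approximation, and object 1's membership in X is 0.5.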
## Rough Set Accuracy

The accuracy of a rough set provides a measure of how closely the rough set approximates the target set. It is defined as the ratio of the number of objects which can be positively placed in X to the number of objects that can possibly be placed in X; in other words, the number of cases in the lower approximation divided by the number of cases in the upper approximation, with $0\leq\alpha_{P}(X)\leq 1$:

$$\alpha_{P}(X)=\frac{|\underline{B}X|}{|\overline{B}X|}\qquad(6)$$

## Rough Set Formulation

The process of modeling a rough set can be broken down into five stages. The first stage is to select the data, while the second stage involves pre-processing the data to ensure that it is ready for analysis: discretising it and removing unnecessary data (cleaning). If reducts are considered, the third stage is to use the cleaned data to generate reducts. A reduct is the most concise way in which we can discern object classes (Witlox and Tindemans, 2004). In other words, a reduct is the minimal subset of attributes that enables the same classification of elements of the universe as the whole set of attributes (Pawlak, 1991). To cope with inconsistencies, lower and upper approximations of decision classes are defined (Ohrn, 2006; Deja and Peszek, 2003). Stage four is where the rules are extracted or generated. The rules are normally determined based on condition attribute values (Goh and Law, 2003). Once the rules are extracted, they can be presented in an *if CONDITION(S)-then DECISION* format (Leke, 2007). The final, fifth stage involves testing the newly
created rules on a test set to estimate the prediction error of the rough set model. The equation representing the mapping between the inputs and the output using rough sets can be written as:

$$y=f(G,N,R)\qquad(7)$$

where y is the output, G is the granulization of the input space into high, low, medium etc., N is the number of rules and R is the set of rules. For a given granulization, the rough set model will give the optimal number of rules and the accuracy of prediction. Therefore, in rough set modeling there is always a trade-off between the degree of granulization of the input space (which affects the nature and size of the rules) and the prediction accuracy of the rough set model. The estimation of the level and nature of the granulization will be solved using a Bayesian framework, which is explained in the next section.

## Bayesian Training Of The Rough Set Model

The Bayesian framework can be written as (Marwala, 2007; Bishop, 2006):

$$P(M\mid D)=\frac{P(D\mid M)P(M)}{P(D)}\qquad(8)$$

where $M=\begin{bmatrix}G\\ N\\ R\end{bmatrix}$. Within the context of Bayesian rough set models, G is the granulization, R is the set of rough set rules, N is the number of rules, D is the data, which consists of input x and output y, and A is the accuracy of the rough set model's prediction. The parameter P(M | D) is the probability of the rough set model given the observed data, P(D | M) is the probability of the data given the assumed rough set model, also called the likelihood function, P(M) is the prior probability of the rough set model and P(D) is the probability of the data, also called the
evidence. The evidence can be treated as a normalization constant and is therefore ignored in this paper. The likelihood function may be estimated as follows:

$$P(D\mid M)=\frac{1}{z_{1}}\exp(-error)=\frac{1}{z_{1}}\exp\{A(N,R,G)-1\}\qquad(9)$$

Here $z_{1}$ is a normalization constant. The prior probability in this problem is linked to the concept of reducts, explained earlier: it encodes the prior knowledge that the best rough set models are the ones with the minimum number of rules (N). Therefore, the prior probability may be written as follows:

$$P(M)=\frac{1}{z_{2}}\exp\{-\lambda N\}\qquad(10)$$

where $z_{2}$ is a normalization constant and λ is a hyperparameter that scales the prior information to be in line with the magnitude of the likelihood function. The posterior probability of the model given the observed data is thus:

$$P(M\mid D)=\frac{1}{z}\exp\{A(N,R,G)-1-\lambda N\}\qquad(11)$$

where z is a normalization constant. Since the number of rules, and the rules themselves given the data, depend on the nature of the granulization of the input space, we sample in the granule space using a procedure called Markov Chain Monte Carlo simulation (Marwala, 2007; Bishop, 2006).

## Markov Chain Monte Carlo Simulation

The probability distribution in equation 11 may be sampled by randomly generating a succession of granule vectors and accepting or rejecting them, based on how probable they are, using the Metropolis algorithm. This process requires a generation
of large samples of granules for the input space, which in many cases is not computationally efficient. MCMC creates a chain of granules and accepts or rejects them using the Metropolis algorithm. The application of the Bayesian approach and MCMC to rough sets results in the probability distribution function of the granules, which in turn leads to the distribution of the rough set outputs. From these distribution functions the average prediction of the rough set model and the variance of that prediction can be calculated. The probability distribution of the rough set model represented by granules is mathematically described by equation 11. From equation 11, and by following the rules of probability theory, the distribution of the output parameter y is written as (Marwala, 2007):

$$p(y\mid x,D)=\int p(y\mid x,M)\,p(M\mid D)\,dM\qquad(12)$$

Equation 12 depends on equation 11 and is difficult to solve analytically due to the relatively high dimension of the granule space. Thus the integral in equation 12 may be approximated as follows:

$$\tilde{y}\cong\frac{1}{L}\sum_{i=R}^{R+L-1}F(M_{i})\qquad(13)$$

Here F is the mathematical model that gives the output given the input, ỹ is the average prediction of the Bayesian rough set model, R is the number of initial states that are discarded in the hope of reaching the stationary posterior distribution described in equation 11, and L is the number of retained states. In this paper, the MCMC method is implemented by sampling a stochastic process consisting of random variables {g1, g2, …, gn}, introducing random changes to the granule vector {g} and either accepting or rejecting the state according to the Metropolis et al. algorithm given the
# Comparing Robustness Of Pairwise And Multiclass Neural-Network Systems For Face Recognition

J. Uglov, V. Schetinin, C. Maple
Computing and Information System Department, University of Bedfordshire, Luton, UK

Abstract. Noise, corruptions and variations in face images can seriously hurt the performance of face recognition systems. To make such systems robust, multiclass neural-network classifiers capable of learning from noisy data have been suggested. However, on large face data sets such systems cannot provide robustness at a high level. In this paper we explore a pairwise neural-network system as an alternative approach to improving the robustness of face recognition. In our experiments this approach is shown to outperform the multiclass neural-network system in terms of predictive accuracy on face images corrupted by noise.

## 1. Introduction

Face recognition systems perform at a high level when they are robust to noise, corruptions and variations in face images [1]. To make face recognition systems robust, multiclass artificial neural networks (ANNs) capable of learning from noisy data have been suggested [1]. However, on large face data sets such neural-network systems cannot provide robustness at a high level [1]–[3]. To overcome this problem, pairwise classification systems have been proposed, see e.g. [3], [4].

In this paper we explore a pairwise neural-network system as an alternative approach to improving the robustness. In our experiments this approach is shown to outperform the multiclass neural-network system in terms of predictive accuracy on the face image data described in [5].

In section 2 we briefly describe face image representation and noise problems; in section 3 we describe a pairwise neural-network system proposed for face recognition. Section 4 describes our experiments, and finally section 5 concludes the paper.

## 2. Face Image Representation And Noise Problems

Following [1]–[3], we use principal component analysis (PCA) to represent face images as m-dimensional vectors of components. PCA is a common technique for data representation in face recognition systems. The first two principal components, which make the most important contribution in face recognition, can be used to visualise the scatter of patterns of different classes (faces). Such a visualisation therefore allows us to observe how noise can corrupt the boundaries of classes.

For example, Fig. 1 shows two graphs depicting examples of four classes whose centres of gravity are visually distinct. The left-side plot depicts examples taken from the original data, while the right-side plot depicts these examples containing
noise components drawn from a Gaussian density function with zero mean and standard deviation alpha = 0.5.

<image>

<image>

From this plot we can observe that the noise components corrupt the boundary of the given classes, and therefore the performance of a face recognition system can be affected. From these plots we can also observe that the boundaries between pairs of classes can remain almost the same. This inspires us to exploit such a classification scheme and implement a pairwise neural-network system for face recognition.

## 3. A Pairwise Neural-Network System

The idea behind pairwise classification is to use two-class ANNs, each learning to separate one pair of classes. Therefore, for C classes the pairwise system should include C(C − 1)/2 ANNs trained to solve two-class problems. For example, for classes 1, 2, and 3 depicted in Fig. 2, the number of two-class ANNs is equal to 3. In this figure the lines f1/2, f1/3 and f2/3 are the dividing hyperplanes learnt by the ANNs. We assume that these functions give positive values for examples of the classes listed first in their lower indexes (1, 1, and 2) and negative values for the classes listed second (2, 3, and 3).
<image>

Now we can combine the hyperplanes f1/2, f1/3 and f2/3 to build up new dividing hyperplanes g1, g2, and g3. The first hyperplane g1 combines the functions f1/2 and f1/3, so that g1 = f1/2 + f1/3. These functions are taken with weights of 1.0 because both f1/2 and f1/3 give positive output values on the examples of class 1. Likewise, the second and third hyperplanes are as follows: g2 = f2/3 − f1/2 and g3 = −f1/3 − f2/3.

In practice, each of the hyperplanes g1, …, gC can be implemented as a two-layer feedforward ANN with a given number of hidden neurons fully connected to the input nodes. We can then introduce an output neuron summing the outputs of the ANNs to make a final decision. For example, the pairwise neural-network system depicted in Fig. 3 consists of three neural networks performing the functions f1/2, f1/3, and f2/3. The three output neurons g1, g2, and g3 are connected to these networks with weights equal to (+1, +1), (−1, +1) and (−1, −1), respectively.

<image>

In general, a pairwise neural-network system consists of C(C − 1)/2 neural networks performing functions f1/2, …, fi/j, …, fC−1/C, and C output neurons g1, …, gC, where i < j = 2, …, C. The weight of output neuron gi connected to the hidden neuron fi/k should be equal to +1, and the weight connected to fk/i should be equal to −1.
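The combination rule above can be sketched as plain functions; this is a minimal illustration of the ±1 weighting scheme, not the authors' ANN implementation, and the pairwise decision functions f are assumed to be given:

```python
from itertools import combinations

def pairwise_decision(f, x, n_classes):
    """Combine pairwise decision functions into per-class scores g_i.
    f[(i, j)](x) is assumed positive for class i and negative for class j
    (i < j); g_i sums f_{i/j} with weight +1 when i is listed first in the
    index and -1 otherwise, and the class with the largest score wins."""
    g = [0.0] * n_classes
    for (i, j) in combinations(range(n_classes), 2):
        v = f[(i, j)](x)
        g[i] += v   # f_{i/j} votes +v for class i ...
        g[j] -= v   # ... and -v against class j
    return max(range(n_classes), key=lambda c: g[c])
```

With three classes (0-indexed) this reproduces g1 = f1/2 + f1/3, g2 = f2/3 − f1/2 and g3 = −f1/3 − f2/3 from the text.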
## 4. Experiments

The goal of our experiments is to compare the robustness of the proposed pairwise and standard multiclass neural-network systems on the Cambridge ORL face image data set [5] (in a full paper, the experiments will run on different face image data sets). To estimate the robustness, we add noise components to the data and then estimate the performance on the test data within 5-fold cross-validation. The performances of the pairwise (P) and multiclass (M) systems are listed in Table 1 and shown in Fig. 4.

Table 1: Performances of the pairwise (P) and multiclass (M) systems for different noise levels alpha. The performances are represented by the means and 2σ intervals.

| alpha   | 0.0    | 0.1    | 0.3    | 0.5    | 0.7    | 0.9    | 1.1    | 1.3    |
|---------|--------|--------|--------|--------|--------|--------|--------|--------|
| P, mean | 0.972  | 0.966  | 0.953  | 0.920  | 0.859  | 0.772  | 0.659  | 0.556  |
| P, 2σ   | ±0.004 | ±0.013 | ±0.017 | ±0.013 | ±0.018 | ±0.030 | ±0.028 | ±0.031 |
| M, mean | 0.952  | 0.951  | 0.932  | 0.898  | 0.802  | 0.678  | 0.557  | 0.419  |
| M, 2σ   | ±0.017 | ±0.016 | ±0.025 | ±0.016 | ±0.015 | ±0.052 | ±0.036 | ±0.050 |

<image>

From this table we can see that for alpha ranging between 0.0 and 1.3 the proposed pairwise system significantly outperforms the multiclass system. For alpha = 0.0 the improvement in performance is 2.0%, while for alpha = 1.1 the improvement reaches 10.2%.
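The summary statistics reported in Table 1 can be computed as sketched below; this assumes (as the table caption suggests) that each entry is the mean of the per-fold accuracies and a two-standard-deviation half-width:

```python
import statistics

def mean_2sigma(accuracies):
    """Mean and 2-sigma half-width of per-fold accuracies, as reported
    per noise level in Table 1."""
    m = statistics.mean(accuracies)
    s = statistics.stdev(accuracies)   # sample standard deviation over folds
    return m, 2 * s
```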
## 5. Conclusion

We have proposed a pairwise neural-network system for face recognition in order to reduce the negative effect of noise and corruptions in face images. Within such a classification scheme we expect that an improvement in performance can be achieved, based on our observation that the boundaries between pairs of classes remain almost the same while the noise level increases.

We have compared the performances of the proposed pairwise and standard multiclass neural-network systems on the face dataset [5]. Evaluating the mean values and standard deviations of the performances under different levels of noise in the data, we have found that the proposed pairwise system is superior to the multiclass neural-network system. Thus we conclude that the proposed pairwise system is capable of decreasing the negative effect of noise and corruptions in face images. Clearly this is a very desirable property for face recognition systems when robustness is of crucial importance.

## 6. References

1. S.Y. Kung, M.W. Mak and S.H. Lin. Biometric Authentication: A Machine Learning Approach. Pearson Education, 2005.
2. C. Liu and H. Wechsler. Robust coding schemes for indexing and retrieval from large face databases. IEEE Trans. Image Processing, 9(1), 132-137, 2000.
3. A.S. Tolba, A.H. El-Baz and A.A. El-Harby. Face Recognition: A Literature Review. IJSP, 2(2), 88-103, 2005.
4. T. Hastie and R. Tibshirani. Classification by pairwise coupling. Advances in NIPS, 10, 507-513, 1998.
5. F.S. Samaria. Face recognition using hidden Markov models. PhD thesis, University of Cambridge, 1994.
# Ensemble Learning For Free With Evolutionary Algorithms?

Christian Gagné∗
Informatique WGZ Inc., 819 avenue Monk, Québec (QC), G1S 3M9, Canada. christian.gagne@wgz.ca

## Abstract

Evolutionary Learning proceeds by evolving a population of classifiers, from which it generally returns (with some notable exceptions) the single best-of-run classifier as final result. Meanwhile, Ensemble Learning, one of the most efficient approaches in supervised Machine Learning over the last decade, proceeds by building a population of diverse classifiers. Ensemble Learning with Evolutionary Computation thus receives increasing attention. The Evolutionary Ensemble Learning (EEL) approach presented in this paper features two contributions. First, a new fitness function, inspired by co-evolution and enforcing classifier diversity, is presented. Further, a new selection criterion based on the classification margin is proposed. This criterion is used to extract the classifier ensemble from the final population only (Off-EEL) or incrementally along evolution (On-EEL). Experiments on a set of benchmark problems show that Off-EEL outperforms single-hypothesis evolutionary learning and state-of-the-art Boosting, and generates smaller classifier ensembles.

## Categories And Subject Descriptors

I.5.2 [Pattern Recognition]: Design Methodology—Classifier design and evaluation; I.2.8 [Artificial Intelligence]: Problem Solving, Control Methods, and Search—Heuristic methods

## General Terms

Algorithms

∗This work was mainly realized during a postdoctoral fellowship of Christian Gagné at the University of Lausanne.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.
To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. GECCO'07, July 7–11, 2007, London, England, United Kingdom. Copyright 2007 ACM 978-1-59593-697-4/07/0007 ...$5.00.

Michèle Sebag
Equipe TAO - CNRS UMR 8623 / INRIA Futurs, LRI, Bat. 490, Université Paris Sud, F-91405 Orsay Cedex, France. michele.sebag@lri.fr

Marc Schoenauer
Equipe TAO - INRIA Futurs / CNRS UMR 8623, LRI, Bat. 490, Université Paris Sud, F-91405 Orsay Cedex, France. marc.schoenauer@lri.fr

Marco Tomassini
Information Systems Institute, Université de Lausanne, CH-1015 Dorigny, Switzerland. marco.tomassini@unil.ch

## Keywords

Ensemble Learning, Evolutionary Computation

## 1. Introduction

Ensemble Learning, one of the main advances in Supervised Machine Learning since the early 90's, relies on: i) a weak learner, extracting hypotheses (aka classifiers) with error probability less than 1/2 − ε, ε > 0; ii) a diversification heuristic used to extract sufficiently diverse classifiers; iii) a voting mechanism, aggregating the diverse classifiers constructed [1, 8]. If the classifiers are sufficiently diverse and their errors are independent, then their majority vote will reach an arbitrarily low error rate on the training set as the number of classifiers increases [6]. Therefore, up to some restrictions on the classifier space [25], the generalization error will also be low¹.

The most innovative aspect of Ensemble Learning w.r.t. the Machine Learning literature concerns the diversity requirement, implemented through parallel or sequential heuristics. In Bagging, diversity is enforced by considering independent sub-samples of the training set, and/or using different learning parameters [1]. Boosting iteratively constructs a sequence of classifiers, where each classifier focuses on the examples misclassified by the previous ones [8].
Diversity is also a key feature of Evolutionary Computation (EC): in contrast with all other stochastic optimization approaches, evolutionary algorithms proceed by evolving a population of solutions, and the diversity thereof has been stressed as a key factor of success since the beginnings of EC. Deep similarities between Ensemble Learning and EC thus appear; in both cases, diversity is used to escape from local minima, where any single "best" solution is only too easily trapped.

¹In practice, the generalization error is estimated from the error on a test set, disjoint from the training set. The reader is referred to [4] for a comprehensive discussion about the comparative evaluation of learning algorithms.

Despite this similarity, Evolutionary Learning has most often (with some notable exceptions, see [14, 16, 18] among others) focused on single-hypothesis learning, where some single best-of-run hypothesis is returned as the solution. However, the evolutionary population itself could be used
as a pool for recruiting the elements of an ensemble, enabling "Ensemble Learning for Free". Previous work along this line will be described in Section 2, mostly based on using an evolutionary algorithm as weak learner [17], or using evolutionary diversity-enforcing heuristics [16, 18].

In this paper, the "Evolutionary Ensemble Learning for Free" claim is empirically examined along two directions. The first direction is that of classifier diversity; a new learning-oriented fitness function is proposed, inspired by the co-evolution framework [13] and generalizing the diversity-enforcing fitness proposed by [18]. The second direction is that of the selection of the ensemble classifiers within the evolutionary population(s). Selecting the best classifiers in a pool amounts to a feature selection problem, that is, a combinatorial optimization problem [12]. A greedy set-covering approach is used, built on a margin-based criterion inspired by Schapire et al. [23]. Finally, the paper presents two Evolutionary Ensemble Learning (EEL) approaches, called Off-EEL and On-EEL, respectively tackling the selection of the ensemble classifiers in the final population, or along evolution.

Paper structure is as follows. Section 2 reviews and discusses some work relevant to Evolutionary Ensemble Learning. Section 3 describes the two proposed approaches, Off-EEL and On-EEL, introducing the specific fitness function and the ensemble classifier selection procedure. Experimental results based on benchmark problems from the UCI repository are reported in Section 4. The paper concludes with some perspectives for further research, discussing the priorities for a tight coupling of Ensemble Learning with Evolutionary Optimization in terms of dynamic systems [22].

## 2. Related Work

Interestingly, some early approaches in Evolutionary Learning were rooted in Ensemble Learning ideas².
The Michigan approach [14] evolves a population made of rules, whereas the Pittsburgh approach evolves a population made of sets of rules. What is gained in flexibility and tractability in the Michigan approach is compensated by the difficulty of assessing a single rule, for the following reason. A rule usually only covers a part of the example space; gathering the best rules (e.g. the rules with highest accuracy) does not result in the best ruleset. Designing an efficient fitness function, such that a good-quality ruleset could be extracted from the final population, was found to be a tricky task.

In the last decade, Ensemble Learning has been explored within Evolutionary Learning, chiefly in the context of Genetic Programming (GP). A first trend, directly inspired by Bagging and Boosting, aims at reducing the fitness computation cost [7, 16] and/or dealing with datasets which do not fit in memory [24]. For instance, Iba [16] divided the GP population into several sub-populations which are evaluated on subsets of the training set. Folino et al. [7] likewise sampled the training set in a Bagging-like mode in the context of parallel cellular GP. Song et al. [24] used Boosting-like heuristics to deal with training sets that do not fit in memory; the training set is divided into folds, one of which is loaded in memory and periodically replaced; at each generation, small subsets are selected from the current fold to compute the fitness function, where the selection is nicely based on a mixture of uniform and Boosting-like distributions.

²Learning Classifier Systems (LCS, [14, 15]) are mostly devoted to Reinforcement Learning, as opposed to Supervised Machine Learning; therefore they will not be considered in this paper.

The use of Evolutionary Algorithms as weak learners within a standard Bagging or Boosting approach has also been investigated.
Boosting approaches for GP have been applied for instance to classification [21] or symbolic regression [17]: each run delivers a GP tree minimizing the weighted sum of the training errors, and the weights are computed as in standard Boosting [8]. While such ensembles of GP trees result, as expected, in a much lower variance of the performance, they do not fully exploit the population-based nature of GP, as independent runs are launched to learn successive classifiers.

Liu et al. [18] proposed a tight coupling between Evolutionary Algorithms and Ensemble Learning. They constructed an ensemble of Neural Networks, using a modified back-propagation algorithm to enforce the diversity of the networks; specifically, the back-propagation aims at both minimizing the training error and maximizing the negative correlation of the current network with respect to the current population. Further, the fitness associated with each network is the sum of the weights of all examples it correctly classifies, where the weight of each example is inversely proportional to the number of classifiers that correctly classify this example. While this approach nicely suggests that ensemble learning is a Multiple Objective Optimization (MOO) problem (minimize the error rate and maximize the diversity), it classically handles the MOO problem as a fixed weighted sum of the objectives.

The MOO perspective was further investigated by Chandra and Yao in the DIVACE system, a highly sophisticated system for the multi-level evolution of ensembles of classifiers [2, 3]. In [3], the top-level evolution simultaneously minimizes the error rate (accuracy) and maximizes the negative correlation (diversity). In [2], the negative-correlation-inspired criterion is replaced by a *pairwise failure crediting*; the difference concerns the misclassification of examples that are correctly classified by other classifiers.
Finally, the ensemble is constructed either by keeping all classifiers in the final population, or by clustering the final population (according to their phenotypic distance) and selecting a classifier in each cluster.

While the MOO perspective nicely captures the interplay of the accuracy and diversity goals within Ensemble Learning, the selection of the classifiers in the genetic pool as done in [2, 3] does not fully exploit the possibilities of evolutionary optimization, in two respects. On the one hand, it only considers the final population, which usually involves up to a few hundred classifiers, while learning ensembles commonly involve several thousand classifiers. On the other hand, clustering-based selection proceeds on the basis of the phenotypic distance between classifiers, considering again that all examples are equally important, while the higher stress put on harder examples is considered the source of the better Boosting efficiency [5].

## 3. Ensemble Learning For Free

After the above discussion, Evolutionary Ensemble Learning (EEL) involves two critical issues: i) how to enforce both the predictive accuracy and the diversity of the classifiers in the population, and across generations; ii) how to best select the ensemble classifiers, from either the final population
only, or all along evolution. Two EEL frameworks have been designed to study these interdependent issues. The first one, dubbed Offline Evolutionary Ensemble Learning (Off-EEL), constructs the ensemble from the final population only. The second one, called Online Evolutionary Ensemble Learning (On-EEL), gradually constructs the classifier ensemble as a selective archive of evolution, where some classifiers are added to the archive at each generation.

Both approaches combine a standard generational evolutionary algorithm with two interdependent components: a new diversity-enhancing fitness function, and a selection mechanism. The fitness function, presented in Section 3.1 and generalizing the fitness devised by Liu et al. [18], is inspired by co-evolution [13]. The selection process is used to extract a set of classifiers from either the final population (Off-EEL) or the current archive plus the current population (On-EEL), and proceeds by greedily maximizing the ensemble margin (Section 3.2).

Only binary or multi-class classification problems are considered in this paper. The decision of the classifier ensemble is the majority vote among the classifiers (ties being broken arbitrarily).

## 3.1 Diversity-Enforcing Fitness

Traditionally, Evolutionary Learning maximizes the number of correctly classified training examples (or, equivalently, minimizes the error rate). However, examples are not equally informative; therefore a rule correctly classifying a *hard* example (e.g. close to the frontiers of the target concept) is more interesting and should be rewarded more than a rule correctly classifying an example which is correctly classified by almost all rules. Co-evolutionary learning, pioneered by Hillis [13], nicely takes advantage of the above remark, gradually forging more and more difficult examples to enforce the discovery of high-quality solutions.
Boosting proceeds along the same lines, gradually putting the stress on the examples which have not been successfully predicted so far. A main difference between the two frameworks is that Boosting exploits a finite set of labelled examples, while co-evolutionary learning has an infinite supply of labelled examples (since it embeds the oracle). A second difference is that the difficulty of an example depends on the whole sequence of classifiers in Boosting, whereas it only depends on the current classifier population in co-evolution. In other words, Boosting is a memory-based process, while co-evolutionary learning is a memoryless one.

Both approaches thus suffer from opposite weaknesses. Being a memory-based process, Boosting can be misled by noisy examples; consistently misclassified, these examples eventually get heavy weights and thus destabilize the Boosting learning process. Quite the contrary, co-evolution can forget what has been learned during early stages, and specific heuristics, e.g. the so-called Hall of Fame (an archive of best-so-far individuals), are required to prevent co-evolution from cycling in the learning landscape [20].

Based on these ideas, the fitness of classifiers is defined in this work from a set of reference classifiers noted Q. The hardness of every training example x is measured by the number of classifiers in Q which misclassify x. The fitness of every classifier h is then measured by the cumulated hardness of the examples that are correctly classified by h. Three remarks can be made concerning this fitness function. Firstly, contrasting with standard co-evolution, there is no way classifiers can "unlearn" to classify the training examples, since the training set is fixed. Secondly, as in Boosting, the fitness of a classifier reflects its diversity with respect to the reference set.
Lastly, the classifier fitness function is highly multi-modal compared to the simple error rate: good classifiers might correctly classify many easy examples, or sufficiently many hard enough examples, or a few very hard examples.

Formally, let E = {(xi, yi), xi ∈ X, yi ∈ Y, i = 1 . . . n} denote the training set (referred to as the set of fitness cases in the GP context); each fitness case or example (xi, yi) is composed of an instance xi belonging to the instance space X and the associated label yi belonging to a finite set Y. Any classifier h is a function mapping the instance space X onto Y. The loss function ℓ is defined as ℓ : Y × Y → ℝ, where ℓ(y, y′) is the (real-valued) error cost of predicting label y instead of the true label y′.

The hardness or weight of every training example (xi, yi), noted w_i^Q, or wi when the reference set Q is clear from the context, is the average loss incurred by the reference classifiers on (xi, yi):

$$w_{i}=\frac{1}{|\mathcal{Q}|}\sum_{h\in\mathcal{Q}}\ell(h(\mathbf{x}_{i}),y_{i}).\tag{1}$$

The cumulated hardness fitness F is finally defined as follows: F(h) is the sum, over all training examples that are correctly classified by h, of their weights wi raised to the power γ. Parameter γ governs the importance of the weights wi (the cumulated hardness boils down to the number of correctly classified examples for γ = 0) and thus the diversity pressure:

$$\mathcal{F}(h)=\sum_{\begin{subarray}{c}i=1\ldots n\\ h(\mathbf{x}_{i})=y_{i}\end{subarray}}w_{i}^{\gamma}\tag{2}$$

Parameter γ can also be adjusted depending on the level of noise in the dataset. As noisy examples typically reach high weights, increasing the value of γ might lead to retaining spurious hypotheses which happen to correctly classify a few noisy examples.
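Equations 1 and 2 translate directly into code. The sketch below is a minimal illustration under the assumption that classifiers are plain prediction functions; the step loss is used as default:

```python
def hardness_weights(Q, examples, loss=lambda y, y_true: float(y != y_true)):
    """Equation 1: the weight of each example is the average loss incurred
    by the reference classifiers Q on that example."""
    return [sum(loss(h(x), y) for h in Q) / len(Q) for (x, y) in examples]

def cumulated_hardness_fitness(h, examples, weights, gamma=2.0):
    """Equation 2: sum of w_i^gamma over the examples correctly classified
    by h; gamma = 0 recovers the plain count of correct classifications."""
    return sum(w ** gamma for (x, y), w in zip(examples, weights) if h(x) == y)
```

With the step loss and γ = 1 this reproduces the fitness of Liu et al. [18]; the experiments of Section 4 use γ = 2.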
When ℓ is set to the step loss function (ℓ(y, y′) = 0 if y = y′, 1 otherwise) and γ is set to 1, the above fitness function is the same as the one used by Liu et al. [18]. The value of γ is set to 2 in the experiments (Section 4).

## 3.2 Ensemble Selection

As noted earlier, the selection of classifiers in a pool H = {h1, . . . , hT} in order to form an efficient ensemble is formally equivalent to a feature selection problem. The equivalence is seen by replacing the initial instance space X with the one defined from the classifier pool, where each instance xi is redescribed as the vector (h1(xi), . . . , hT(xi)). Feature selection algorithms [12] could thus be used for ensemble selection; unfortunately, feature selection is one of the most difficult Machine Learning problems. Therefore, a simple greedy selection process is used in this paper to select the classifiers in the diverse pools considered by the Off-EEL (Section 3.3) and On-EEL (Section 3.4) algorithms. The novelty is the selection criterion, generalizing the notion of margin [11, 23] to an ensemble of examples as follows.
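This excerpt does not reproduce the exact margin criterion, so the greedy selection can only be sketched under an assumption: the margin of an example is taken as the usual voting margin (fraction of votes for the true label minus the largest fraction of votes for any other label), and each greedy step adds the pool classifier that most increases the total margin.

```python
def ensemble_margin(ensemble, x, y, labels):
    """Voting margin of example (x, y): fraction of votes for the true label
    minus the largest fraction of votes for any other label."""
    votes = {c: 0 for c in labels}
    for h in ensemble:
        votes[h(x)] += 1
    others = max(votes[c] for c in labels if c != y)
    return (votes[y] - others) / len(ensemble)

def greedy_ensemble_selection(pool, examples, labels, max_size=None):
    """Greedily grow an ensemble: at each step, add the classifier from the
    pool that most increases the total margin over the training examples;
    stop when no classifier improves it."""
    selected, remaining = [], list(pool)
    best_total = float("-inf")
    while remaining and (max_size is None or len(selected) < max_size):
        scored = [
            (sum(ensemble_margin(selected + [h], x, y, labels)
                 for x, y in examples), h)
            for h in remaining
        ]
        total, best = max(scored, key=lambda t: t[0])
        if total <= best_total:
            break
        selected.append(best)
        remaining.remove(best)
        best_total = total
    return selected
```

The stopping rule makes this a set-covering-style heuristic: classifiers are added only while they improve the ensemble-wide margin.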
1. Let P1 be the first evolutionary population, and h∗ the classifier with minimal error rate on E.
2. L1 = Ensemble-Selection(P1, E, {h∗})
3. For t = 2 . . . T:
   (a) Evolve Pt−1 → Pt, using Lt−1 as reference set.
   (b) Lt = Ensemble-Selection(Pt, E, Lt−1)
4. Return LT.

The reference set used at each generation is thus the current classifier ensemble; like in Boosting, the goal is to find classifiers which overcome the errors of the past classifiers. While the ensemble selection algorithm is launched at every generation, it uses the biased current population as classifier pool.

In fact, On-EEL addresses a dynamic optimization problem; if the classifier ensemble significantly changes between one generation and the next, the fitness landscape will change accordingly, and several evolutionary generations might be needed to accommodate this change. On the other hand, as long as the current population does not perform well, the ensemble selection algorithm is unlikely to select further classifiers into the current ensemble; the fitness landscape thus remains stable. The population diversity does not directly result from the fitness function as in the Off-EEL case; rather, it relates to the dynamic aspects of the fitness function.

## 4. Experimental Setting

This section describes the experimental setting used to assess the EEL framework.

## 4.1 Datasets

Experiments are conducted on the six UCI datasets [19] presented in Table 1. The performance of each algorithm is measured using a standard stratified 10-fold cross-validation procedure. The dataset is partitioned into 10 folds with the same class distribution. Iteratively, all folds but the i-th one are used to train a classifier, and the error rate of this classifier on the remaining i-th fold is recorded. The performance of the algorithm is averaged over 10 runs for each fold, and over the 10 folds.
## 4.2 Classifier Search Space

As mentioned earlier, evolutionary ensemble learning can accommodate any type of classifier; Off-EEL and On-EEL could consider neural nets, genetic programs or decision lists as genotypic search space. Our experiments consider the most straightforward classifiers, namely separating hyperplanes, as these can easily be inspected and compared. Formally, let X = ℝᵈ be the instance space; a separating hyperplane classifier h is characterized as (w, b) ∈ ℝᵈ × ℝ with h(x) = ⟨w, x⟩ − b (⟨w, x⟩ denotes the scalar product of w and x). The search for a separating hyperplane is amenable to quadratic optimization, with:

$$\mathcal{F}(h)=\sum_{i=1,\dots,n}\left(h(\mathbf{x}_{i})-y_{i}\right)^{2}.\tag{6}$$

As the above optimization problem can be tackled using standard optimization algorithms, it provides a well-founded baseline for comparison. Specifically, the first goal of the experiments is thus to assess the merits of evolutionary ensemble learning against three other approaches. The first baseline algorithm, referred to as Least Mean Square (LMS), uses a stochastic gradient algorithm to determine the optimal separating hyperplane in the sense of the criterion given by Equation 6 (see pseudo-code in Figure 3). The second baseline algorithm is an elementary evolutionary algorithm, producing the best-of-run separating hyperplane minimizing the (training) error rate³. The third reference algorithm is the prototypical ensemble learning algorithm, namely AdaBoost with its default parameters [8]. AdaBoost uses the simple decision stump [23] as weak learner (more on this below).

The learning error is classically viewed as composed of a variance term and a bias term [1]. The bias term measures how far the target concept tc is from the classifier search space H, that is, from the best classifier h∗ in this search space.
The variance term measures how far away one can wander from h ∗ **, wrongly selecting other classifiers in** H (overfitting). The comparison of the first and second baseline algorithms gives some insight into the intrinsic difficulty of the problem. Stochastic gradient (LMS) will find the global optimum for criterion given by Equation 6, but this solution optimizes at best the training error. The comparison between the solutions respectively found by LMS and the simple evolutionary algorithm will thus reflect the learning variance term. Similarly, the comparison of the first baseline algorithm and AdaBoost gives some insight into how the ensemble improves on the base weak learner; this improvement can be interpreted in terms of variance as well as in terms of bias (since the majority vote of decision stumps allows for describing more complex regions than simple separating hyperplanes alone). ## 4.3 Experimental Setting The parameters for the LMS algorithm (see Figure 3) are as follows: the training rate, set to η(t**) = 1**/(n √t**), decreases** over the training epochs; the maximum number of epochs allowed is T **= 10000; the stopping criterion is when the** difference in the error rates over two consecutive epochs, is less that some threshold ǫ (ǫ = 10−7**). Importantly, LMS** requires a preliminary normalization of the dataset, (e.g. ∀i = 1 . . . n, xi ∈ [−1, 1]d**). The final result is the error on** the test set, averaged over 10 runs for each fold (because of the stochastic reordering of the training set) and averaged over 10 folds. The classical AdaBoost algorithm [8] uses simple decision stumps [23], and the number of Boosting iterations is limited to 2000. Decision stumps are simple binary classifiers that 3**For 3-classes problems, e.g.** bos or cmc**, the classifier is** characterized as two hyperplanes, respectively separating class 0 (resp. class 1) from the other two classes. 
In case of conflict (the example is simultaneously classified in class 0 by the first classifier and in class 1 by the second classifier), the tie is broken arbitrarily.
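The two-hyperplane scheme for 3-class problems just described can be sketched as follows; the function name and the random tie-breaking policy are illustrative assumptions, not the authors' code.

```python
import numpy as np

def two_hyperplane_predict(x, w0, b0, w1, b1, rng):
    """Hypothetical 3-class decision rule: hyperplane (w0, b0) separates
    class 0 from classes {1, 2}; (w1, b1) separates class 1 from the
    other two. Conflicts are broken arbitrarily, as in the footnote."""
    in0 = np.dot(w0, x) - b0 > 0   # first classifier claims "class 0"
    in1 = np.dot(w1, x) - b1 > 0   # second classifier claims "class 1"
    if in0 and in1:                # conflict: break the tie arbitrarily
        return rng.choice([0, 1])
    if in0:
        return 0
    if in1:
        return 1
    return 2                       # claimed by neither: remaining class
```
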
Table 1: UCI datasets used for the experimentations.

| Dataset | Size | # features | # classes | Application domain |
|---------|------|------------|-----------|--------------------|
| bcw | 683 | 9 | 2 | Wisconsin's breast cancer, 65% benign and 35% malignant. |
| bld | 345 | 6 | 2 | BUPA liver disorders, 58% with disorders and 42% without disorder. |
| bos | 508 | 13 | 3 | Boston housing, 34% with median value v < 18.77 K$, 33% with v ∈ ]18.77, 23.74], and 33% with v > 23.74. |
| cmc | 1473 | 9 | 3 | Contraceptive method choice, 43% not using contraception, 35% using short-term contraception, and 23% using long-term contraception. |
| pid | 768 | 8 | 2 | Pima indians diabetes, 65% tested negative and 35% tested positive for diabetes. |
| spa | 4601 | 57 | 2 | Junk e-mail classification, 61% tested non-junk and 39% tested junk. |

<image>

Figure 3: Pseudo-code of the LMS algorithm.

1. Initialize w = 0 and b = 0
2. For t = 1...T:
   (a) Shuffle the dataset E = {(xi, yi), i = 1...n}
   (b) For i = 1...n:
       - ai = ⟨w, xi⟩ − b
       - Δi = 2η(t)(yi − ai)
       - w = w + Δi·xi
       - b = b − Δi
   (c) Errt = √((1/n) Σ_{i=1...n} (ai − yi)²)   (RMS error)
   (d) If |Errt − Errt−1| < ε, stop

Decision stumps are simple binary classifiers that classify data according to a threshold value on one of the features of the data set. If the feature value of a given example is less (or greater) than the threshold, the example is assigned to a given class; otherwise it is assigned to the other class.
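The Figure 3 pseudo-code translates almost line-for-line into a runnable sketch (a minimal reimplementation under the stated settings, not the authors' code; function and variable names are assumptions):

```python
import numpy as np

def lms_train(X, y, T=10000, eps=1e-7, seed=0):
    """Sketch of the LMS loop of Figure 3: stochastic gradient descent on
    the squared-error criterion of Equation 6, with the decreasing
    training rate eta(t) = 1/(n*sqrt(t)) of the experimental setting."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    prev_err = np.inf
    for t in range(1, T + 1):
        eta = 1.0 / (n * np.sqrt(t))          # decreasing training rate
        for i in rng.permutation(n):          # shuffle the dataset
            a = w @ X[i] - b
            delta = 2.0 * eta * (y[i] - a)
            w += delta * X[i]
            b -= delta
        err = np.sqrt(np.mean((X @ w - b - y) ** 2))   # RMS error
        if abs(err - prev_err) < eps:         # stopping criterion
            return w, b
        prev_err = err
    return w, b
```

On a normalized dataset with labels in {−1, 1}, the sign of h(x) = ⟨w, x⟩ − b then gives the predicted class.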
Decision stumps are trained deterministically, by looping over all features and all feature thresholds for a given training dataset, selecting the feature, threshold, and comparison operation on the threshold (> or <) that maximize the classification accuracy on the training data set. Decision stumps are the simplest possible linear classifiers, but give good results in combination with AdaBoost.

The elementary evolutionary algorithm is a real-valued generational GA using SBX crossover, Gaussian mutations, and tournament selection. The search space is IR^{d+1} for binary classification problems, and IR^{2d+2} for ternary classification problems, where d is the number of attributes in the problem domain. The evolutionary parameters are detailed in Table 2. All experiments with the real-valued GA rely on the C++ framework Open BEAGLE [9, 10].

## 5. Results

This section reports on the experimental results obtained by Off-EEL and On-EEL, compared to the three baseline methods respectively noted LMS (optimal linear classifier), GA (genetically evolved linear classifier) and Boosting (ensemble of decision stumps), on the six UCI data sets described in Table 1. For each method and problem, the average test error (over 100 independent runs as described in Section 4) and the associated standard deviation are displayed in Table 3. The average computational effort of Off-EEL for a run ranges from 30 seconds (on problem bld) to 20 minutes (on problem spa), on AMD Athlon 1800+ computers with 1 GB of memory. For On-EEL, the average computational effort for a run ranges from 2 hours (on problem pid) to 24 hours (on problem spa), on the same computers.

With respect to the baseline algorithms, a first remark is that the LMS-based classifier is significantly outperformed by all other methods on all problems but one (pid).
This is explained by the fact that the criterion given by Equation 6 uselessly over-constrains the learning problem, replacing a set of linear inequalities with the minimization of a sum of quadratic terms. Similarly, the single-hypothesis evolutionary learning is dominated by all other methods on all problems but one (bcw). Boosting shows its acknowledged efficiency, as it is the best algorithm on two out of six problems (Off-EEL and Boosting are both best performers for the cmc problem).

Off-EEL is the best method for three out of six problems tested. Compared to AdaBoost, it generates ensembles with lower test error rates on four problems, with a tie for the cmc problem, and AdaBoost being the best on the spa problem. In all cases, the number of classifiers is lower, with an average between 235 and 335 classifiers for Off-EEL compared with more than 750 on all problems but bcw for Boosting. This is understandable given that the ensembles are built with Off-EEL starting from a population of 500 individuals. This raises the question of whether the evolutionary learning accuracy could be improved by considering larger population sizes. But it should not be forgotten that the decision stump classifiers making up the AdaBoost ensembles are significantly simpler than the evolved linear discriminants of Off-EEL. No clear conclusion can thus be made on the relative complexity of the ensembles generated by Off-EEL

Table 2: Parameters for the real-valued GA.

| Parameter | Value |
|-----------|-------|
| Population size | 500 |
| Termination criteria | 100000 fitness evaluations |
| Tournament size | 2 |
| Initialization range | [-1, 1] |
| SBX crossover prob. | 0.3 |
| SBX crossover n-value | n = 2 |
| Gaussian mutation prob. | 0.1 |
| Gaussian mutation std. dev. | σ = 0.05 |
compared to Boosting.

Table 3: Results on the UCI datasets based on 10-fold cross-validation, using 10 independent runs over each fold. Values are averages (standard deviations) over the 100 runs. Statistical tests are p-values of paired t-tests on the test error rate compared to that of the best method on the dataset (in bold).

| Measure | LMS | GA | Boosting | Off-EEL | On-EEL |
|---------|-----|----|----------|---------|--------|
| **bcw** | | | | | |
| Train error | 3.9% (0.2%) | 1.8% (0.2%) | 0.0% (0.0%) | 1.4% (0.2%) | 0.4% (0.4%) |
| Test error | 4.0% (1.6%) | **3.2% (1.7%)** | 5.3% (2.0%) | 3.4% (1.7%) | 3.5% (2.0%) |
| Test error p-value | 0.00 | - | 0.00 | 0.09 | 0.04 |
| Ensemble size | - | - | 291.6 (68.2) | 235.6 (66.8) | 116.3 (278.2) |
| **bld** | | | | | |
| Train error | 29.8% (0.9%) | 25.4% (1.2%) | 0.0% (0.0%) | 20.9% (1.5%) | 18.9% (2.0%) |
| Test error | 30.4% (6.6%) | 32.7% (6.6%) | 30.4% (5.4%) | **29.2% (7.4%)** | 29.5% (8.4%) |
| Test error p-value | 0.04 | 0.00 | 0.14 | - | 0.64 |
| Ensemble size | - | - | 1081.4 (166.1) | 301.0 (37.9) | 294.1 (154.2) |
| **bos** | | | | | |
| Train error | 32.2% (1.3%) | 23.4% (4.1%) | 0.0% (0.0%) | 16.7% (1.9%) | 20.9% (2.3%) |
| Test error | 34.0% (6.7%) | 30.7% (7.5%) | 26.9% (4.2%) | **22.7% (5.7%)** | 26.2% (7.2%) |
| Test error p-value | 0.00 | 0.00 | 0.00 | - | 0.00 |
| Ensemble size | - | - | 761.1 (40.8) | 303.8 (41.4) | 2960.9 (2109.3) |
| **cmc** | | | | | |
| Train error | 51.6% (0.4%) | 45.7% (1.4%) | 43.3% (0.7%) | 42.9% (1.2%) | 43.9% (1.4%) |
| Test error | 51.8% (2.5%) | 50.4% (3.9%) | 46.8% (2.9%) | **46.8% (3.9%)** | 47.7% (3.9%) |
| Test error p-value | 0.00 | 0.00 | 0.99 | - | 0.04 |
| Ensemble size | - | - | 4000.0 (0.0) | 326.4 (35.7) | 2707.7 (1696.1) |
| **pid** | | | | | |
| Train error | 22.0% (0.6%) | 20.2% (0.7%) | 0.6% (0.5%) | 19.8% (0.7%) | 20.0% (0.8%) |
| Test error | **22.8% (3.5%)** | 24.2% (3.9%) | 28.1% (5.0%) | 24.0% (4.0%) | 24.0% (3.9%) |
| Test error p-value | - | 0.00 | 0.00 | 0.00 | 0.00 |
| Ensemble size | - | - | 1978.1 (43.0) | 309.5 (37.6) | 1196.3 (765.7) |
| **spa** | | | | | |
| Train error | 11.1% (0.4%) | 7.9% (0.5%) | 1.4% (0.1%) | 6.1% (0.2%) | 7.6% (0.8%) |
| Test error | 11.3% (1.2%) | 9.0% (1.3%) | **5.7% (0.8%)** | 6.7% (1.2%) | 8.3% (1.4%) |
| Test error p-value | 0.00 | 0.00 | - | 0.00 | 0.00 |
| Ensemble size | - | - | 2000.0 (0.0) | 331.1 (28.4) | 6890.0 (2938.1) |

Despite its larger ensemble size, On-EEL is dominated by Off-EEL on all problems but pid, where both approaches generate identical test error rates. A tentative explanation stems from the nature of the two approaches: Off-EEL has a clear algorithm organized in two stages, classifier evolution with a diversity-enhancing fitness followed by ensemble construction, while On-EEL is more complex, with a succession of ensemble constructions and classifier evolutions with a diversity-enforcing measure taken relative to the current ensemble. The dynamics of On-EEL are hard to understand, but it can be speculated that the iterative construction of the ensemble (without individual removal) is prone to getting stuck in local optima. Indeed, the "construction path" taken to build the ensemble begins with a selection of some (supposedly poor) individuals at the beginning of the evolution. As these individuals cannot be removed from the ensemble, they significantly influence the choice of other individuals, biasing and possibly misleading the whole process.

## 6. Discussion and Perspectives

This paper has examined the "Evolutionary Ensemble Learning for Free" claim, based on the fact that, since Evolutionary Algorithms maintain a population of solutions, it comes naturally to use these populations as a pool for building classifier ensembles.
Two main issues have been studied, respectively concerned with enforcing the diversity of the population of classifiers, and with selecting the classifiers either in the final population or along evolution. The use of a co-evolution-inspired fitness function, generalizing [18], was found sufficient to generate diverse classifiers. As already noted, there is a great similarity between the co-evolution of programs and fitness cases [13] and the Boosting principles [8]; the common idea is that good classifiers are learned from good examples, while good examples are generalized by good classifiers. The difference between Boosting and co-evolution is that in Boosting, the training examples are not evolved; instead, their weights are updated. However, the uncontrolled growth of some weights, typically in the case of noisy examples, actually appears as the Achilles' heel of Boosting compared to Bagging. Basically, AdaBoost can be viewed as a dynamic system [22]; the possible instability or periodicity of this dynamic system has undesired consequences on the ensemble learning performance. The use of co-evolutionary ideas, even though the set of examples does not evolve, seems to increase the
stability of the learning process.

The two EEL frameworks investigated in this paper can be considered promising. Off-EEL constructs the ensembles with the best performances while requiring few modifications of a traditional evolutionary algorithm: a diversity-enhancing fitness and the construction of an ensemble from the final population. But the size of the ensembles generated suggests that bigger populations would lead to bigger and possibly better ensembles. For the sake of scalability, this suggests that the ensemble should be gradually constructed along evolution, instead of considering only the final population. This has been explored with On-EEL, with lower performance compared to Off-EEL. It is suggested that ensemble construction with On-EEL is prone to getting stuck in local minima, so some capability of removing individuals could be beneficial, at the risk of inducing a highly dynamic algorithm. Ultimately, the momentum and dynamics of EEL should be controlled by evolution itself, enforcing some trade-off between exploring new regions and preserving efficient optimization. This will be the subject of future research.

## Acknowledgments

This work was supported by postdoctoral fellowships from the ERCIM-SARIT (Europe), the Swiss National Science Foundation (Switzerland), and the FQRNT (Québec) to C. Gagné. The second and third authors gratefully acknowledge the support of the Pascal Network of Excellence IST-2002-506778.

## 7. References

[1] L. Breiman. Arcing classifiers. Annals of Statistics, 26(3):801-845, 1998.

[2] A. Chandra and X. Yao. Ensemble learning using multi-objective evolutionary algorithms. J. of Mathematical Modelling and Algorithms, 5(4):417-425, 2006.

[3] A. Chandra and X. Yao. Evolving hybrid ensembles of learning machines for better generalisation. Neurocomputing, 69:686-700, 2006.

[4] T. G. Dietterich. Approximate statistical tests for comparing supervised classification learning algorithms.
Neural Computation, 10:1895-1923, 1998.

[5] T. G. Dietterich. Ensemble methods in machine learning. In First Int. Workshop on Multiple Classifier Systems, pages 1-15, 2000.

[6] R. Esposito and L. Saitta. Monte Carlo theory as an explanation of Bagging and Boosting. In Proc. of the Int. Joint Conf. on Artificial Intelligence (IJCAI'03), pages 499-504, 2003.

[7] G. Folino, C. Pizzuti, and G. Spezzano. Ensemble techniques for parallel genetic programming based classifiers. In Proc. of the European Conf. on Genetic Programming (EuroGP'03), pages 59-69, 2003.

[8] Y. Freund and R. Schapire. Experiments with a new Boosting algorithm. In Proc. of the Int. Conf. on Machine Learning (ICML'96), pages 148-156, 1996.

[9] C. Gagné and M. Parizeau. Genericity in evolutionary computation software tools: Principles and case-study. Int. J. on Artificial Intelligence Tools, 15(2):173-194, 2006.

[10] C. Gagné and M. Parizeau. Open BEAGLE: An evolutionary computation framework in C++. http://beagle.gel.ulaval.ca, 2006.

[11] R. Gilad-Bachrach, A. Navot, and N. Tishby. Margin based feature selection - theory and algorithms. In Proc. of the Int. Conf. on Machine Learning (ICML'04), pages 43-50, 2004.

[12] I. Guyon, S. Gunn, M. Nikravesh, and L. Zadeh, editors. Feature Extraction: Foundations and Applications. Springer-Verlag, 2006.

[13] W. D. Hillis. Co-evolving parasites improve simulated evolution as an optimization procedure. Physica D, 42:228-234, 1990.

[14] J. Holland. Escaping brittleness: The possibilities of general-purpose learning algorithms applied to parallel rule-based systems. In Machine Learning, An Artificial Intelligence Approach, volume 2, pages 593-623. Morgan Kaufmann, 1986.

[15] J. Holmes, P. Lanzi, W. Stolzmann, and S. Wilson. Learning classifier systems: New models, successful applications. Information Processing Letters, 82(1):23-30, 2002.

[16] H. Iba.
Bagging, Boosting, and bloating in genetic programming. In Proc. of the Genetic and Evolutionary Computation Conference (GECCO'99), pages 1053-1060, 1999.

[17] M. Keijzer and V. Babovic. Genetic programming, ensemble methods, and the bias/variance tradeoff - introductory investigations. In Proc. of the European Conf. on Genetic Programming (EuroGP'00), pages 76-90, 2000.

[18] Y. Liu, X. Yao, and T. Higuchi. Evolutionary ensembles with negative correlation learning. IEEE Trans. on Evolutionary Computation, 4(4):380-387, 2000.

[19] D. Newman, S. Hettich, C. Blake, and C. Merz. UCI repository of machine learning databases. http://www.ics.uci.edu/~mlearn/MLRepository.html, 1998.

[20] J. Paredis. Coevolving cellular automata: Be aware of the Red Queen! In Proc. of the Int. Conf. on Genetic Algorithms (ICGA'97), pages 393-400, 1997.

[21] G. Paris, D. Robilliard, and C. Fonlupt. Applying Boosting techniques to genetic programming. In Artificial Evolution 2001, volume 2310 of LNCS, pages 267-278. Springer Verlag, 2001.

[22] C. Rudin, I. Daubechies, and R. E. Schapire. The dynamics of AdaBoost: Cyclic behavior and convergence of margins. J. of Machine Learning Research, 5:1557-1595, 2004.

[23] R. Schapire, Y. Freund, P. Bartlett, and W. Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. Annals of Statistics, 26(5):1651-1686, 1998.

[24] D. Song, M. I. Heywood, and A. N. Zincir-Heywood. Training genetic programming on half a million patterns: an example from anomaly detection. IEEE Trans. on Evolutionary Computation, 9(3):225-239, 2005.

[25] V. N. Vapnik. Statistical Learning Theory. Wiley, New York, NY (USA), 1998.
# Fault Classification In Cylinders Using Multilayer Perceptrons, Support Vector Machines And Gaussian Mixture Models

Tshilidzi Marwala^a, Unathi Mahola^a and Snehashish Chakraverty^b

^a School of Electrical and Information Engineering, University of the Witwatersrand, Private Bag x3, Wits 2050, South Africa. e-mail: t.marwala@ee.wits.ac.za

^b Central Building Research Institute, Roorkee-247 667, U.A., India. e-mail: sne_chak@yahoo.com

## Abstract

Gaussian mixture models (GMM) and support vector machines (SVM) are introduced to classify faults in a population of cylindrical shells. The proposed procedures are tested on a population of 20 cylindrical shells and their performance is compared to the procedure which uses multi-layer perceptrons (MLP). The modal properties extracted from vibration data are used to train the GMM, SVM and MLP. It is observed that the GMM produces 98% and the SVM 94% classification accuracy, while the MLP produces an 88% classification rate.

## 1. Introduction

Vibration data have been used with varying degrees of success to classify damage in structures [1]. The fault classification process involves various stages: data extraction, data processing, data analysis and fault classification. The data extraction process involves the choice of data to be extracted and the method of extraction. Data that have been used for fault classification include strain concentrations in structures, measured with strain gauges, and vibration data, measured with accelerometers [1]. In this paper vibration data processed using modal analysis are used for fault classification.

In the data processing stage the measured vibration data need to be processed, mainly because the measured vibration data, which are in the time domain, are difficult to use in raw form. Thus far the time-domain vibration data have been transformed to the modal domain, the frequency domain and the time-frequency domain [2,3].
In this paper the time-domain vibration data set is transformed into the modal domain, where it is represented as natural frequencies and mode shapes. The processed data need to be analysed, and the general trend has been to automate the analysis process and thus automate the fault classification process. To achieve this goal an intelligent pattern recognition process needs to be employed, and methods such as neural networks have been widely applied [1]. Many types of neural networks have been employed, including the multi-layer perceptron (MLP), radial basis functions (RBF) and Bayesian neural networks [4,5]. Recently, new pattern recognition methods called support vector machines (SVMs) and Gaussian mixture models (GMMs) have been proposed and found to be particularly suited to classification problems [6]. SVMs have been found to outperform neural networks [7]. One of the examples where the fault classification process summarized at the beginning of this paper has been implemented is fault classification in a population of nominally identical cylindrical
Materials & Structures, 10:540-547, 2001.

[10] S. Haykin. Neural networks. Prentice-Hall, Inc, New York, USA, 1995.

[11] G.E. Hinton. Learning translation invariant recognition in massively parallel networks. Proceedings PARLE Conference on Parallel Architectures and Languages, 1-13, 1987.

[12] M. Møller. A scaled conjugate gradient algorithm for fast supervised learning. Neural Networks, 6:525-533, 1993.

[13] K.R. Müller, S. Mika, G. Rätsch, K. Tsuda, B. Schölkopf. An introduction to kernel-based learning algorithms. IEEE Transactions on Neural Networks, 12:181-201, 2001.

[14] V. Vapnik. The Nature of Statistical Learning Theory. New York: Springer Verlag, 1995.

[15] C.A. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2:121-167, 1998.

[16] E. Habtemariam, T. Marwala, M. Lagazio. Artificial intelligence for conflict management. Proceedings of the IEEE International Joint Conference on Neural Networks, Montreal, Canada, 2583-2588, 2005.

[17] B. Schölkopf, A.J. Smola. A short introduction to learning with kernels. Proceedings of the Machine Learning Summer School, Springer, Berlin, 41-64, 2003.

[18] A. Dempster, N. Laird, D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39:1-38, 1977.

[19] Y-F. Wang, P.J. Kootsookos. Modeling of low shaft speed bearing faults for condition monitoring. Mechanical Systems and Signal Processing, 12:415-426, 1998.

[20] R. Bellman. Adaptive Control Processes: A Guided Tour. Princeton University Press, New Jersey, USA, 1961.

[21] I.T. Jollife. Principal Component Analysis. Springer-Verlag, New York, USA, 1986.

[22] D.J. Ewins. Modal Testing: Theory and Practice. Research Studies Press, Letchworth, UK, 1995.

[23] N.M.M. Maia, J.M.M. Silva. Theoretical and Experimental Modal Analysis. Research Studies Press, Letchworth, UK, 1997.
shells [2,3,4]. Fault identification in a population of nominally identical structures is particularly important in areas such as automated manufacturing on the assembly line. Thus far various forms of neural networks, such as MLP and Bayesian neural networks, have been successfully used to classify faults in structures [8]. Worden and Lane [9] used SVMs to identify damage in structures. However, SVMs have not been used for fault classification in a population of cylinders. Based on the successes of SVMs observed in other areas, we therefore propose in this paper SVMs and GMMs for classifying faults in a population of nominally identical cylindrical shells. This paper is organized as follows: neural networks, SVMs and GMMs are summarized, the experiment performed is described, and the results and conclusions are discussed.

## 2. Neural Networks

Neural networks are parameterised graphs that make probabilistic assumptions about data; in this paper these data are modal domain data and their respective classes of faults. Multi-layer perceptron neural networks are trained to give a relationship between the modal domain data and the fault classes. As mentioned earlier, there are several types of neural network procedures, such as the multi-layer perceptron, radial basis functions, Bayesian networks and recurrent networks [5]; in this paper the MLP is used. This network architecture contains hidden units and output units and has one hidden layer.
For the MLP, the relationship between the output y_k, representing the fault class, and x, representing the modal data, may be written as follows [5,10,11]:

$$y_k = f_{\text{outer}}\left(\sum_{j=1}^{M} w_{kj}^{(2)}\, f_{\text{inner}}\left(\sum_{i=1}^{d} w_{ji}^{(1)} x_i + w_{j0}^{(1)}\right) + w_{k0}^{(2)}\right) \qquad (1)$$

Here, $w_{ji}^{(1)}$ and $w_{kj}^{(2)}$ indicate weights in the first and second layer, respectively, going from input i to hidden unit j, M is the number of hidden units, d is the number of input units, while $w_{j0}^{(1)}$ and $w_{k0}^{(2)}$ indicate the biases for the hidden unit j and the output unit k. In this paper, the function f_outer(•) is logistic while f_inner is a hyperbolic tangent function. Training the neural network identifies the weights in equation 1; in this paper the scaled conjugate gradient method is used [12].

## 3. Support Vector Machines (SVMs)

According to [13], the classification problem can be formally stated as estimating a function f: R^N → {−1, 1}, based on input-output training data generated from an independently, identically distributed unknown probability distribution P(x, y), such that f will be able to classify previously unseen (x, y) pairs. Here x is the modal data while y is the fault class. The best such function is the one that minimizes the expected error (risk), which is given by

$$R[f] = \int l\left(f(\mathbf{x}), y\right)\, dP(\mathbf{x}, y) \qquad (2)$$

where l represents a loss function [13]. Since the underlying probability distribution P is unknown, equation 2 cannot be solved directly. The best we can do is find an upper bound for the risk function [14].
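Equation 1 corresponds to the following forward pass (a schematic sketch; the function name, weight shapes, and the separate bias vectors are illustrative assumptions):

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass of the one-hidden-layer MLP of equation 1:
    f_inner is a hyperbolic tangent, f_outer is logistic.
    W1 is M x d (input-to-hidden), W2 is K x M (hidden-to-output)."""
    hidden = np.tanh(W1 @ x + b1)                      # f_inner
    return 1.0 / (1.0 + np.exp(-(W2 @ hidden + b2)))   # f_outer

# With all weights zero, every output sits at the logistic midpoint 0.5.
```
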
However, assuming that we can only access the feature space through dot products, equation 7 is transformed into a dual optimization problem by introducing Lagrange multipliers αi, i = 1, 2, ..., n and using the minimisation, maximisation and saddle point properties of the optimal point [14,15,16]; the problem becomes

$$\max_{\alpha}\; \sum_{i=1}^{n}\alpha_{i}-\frac{1}{2}\sum_{i,j=1}^{n}\alpha_{i}\alpha_{j}\, y_{i} y_{j}\, k(\mathbf{x}_{i},\mathbf{x}_{j}) \qquad (8)$$

subject to

$$\sum_{i=1}^{n}\alpha_{i} y_{i}=0, \qquad \alpha_{i}\geq 0,\; i=1,\dots,n.$$

The Lagrange coefficients αi are obtained by solving equation 8, which in turn is used to solve for w, giving the non-linear decision function [12]

$$f(\mathbf{x})=\operatorname{sgn}\left(\sum_{i=1}^{n} y_{i}\alpha_{i}\left(\Phi(\mathbf{x})\cdot\Phi(\mathbf{x}_{i})\right)+b\right)=\operatorname{sgn}\left(\sum_{i=1}^{n} y_{i}\alpha_{i}\, k(\mathbf{x},\mathbf{x}_{i})+b\right) \qquad (9)$$

When the data are not linearly separable, slack variables ξi, i = 1, ..., n are introduced to relax the margin constraints as

$$y_{i}\left((\mathbf{w}\cdot\Phi(\mathbf{x}_{i}))+b\right)\geq 1-\xi_{i},\quad \xi_{i}\geq 0,\; i=1,\dots,n \qquad (10)$$

A trade-off is made between the VC dimension and the complexity term of equation 3, which gives the optimisation problem

$$\min_{\mathbf{w},b,\xi}\;\frac{1}{2}\left\|\mathbf{w}\right\|^{2}+C\sum_{i=1}^{n}\xi_{i} \qquad (11)$$

where C > 0 is a regularisation constant that determines the above-mentioned trade-off.
The dual optimisation problem is then given by [12]

$$\max_{\alpha}\;\sum_{i=1}^{n}\alpha_{i}-\frac{1}{2}\sum_{i,j=1}^{n}\alpha_{i}\alpha_{j}\, y_{i} y_{j}\, k(\mathbf{x}_{i},\mathbf{x}_{j}) \qquad (12)$$

subject to

$$\sum_{i=1}^{n}\alpha_{i} y_{i}=0, \qquad 0\leq\alpha_{i}\leq C,\; i=1,\dots,n.$$

A Karush-Kuhn-Tucker (KKT) condition, which states that only the αi's associated with the training vectors xi on or inside the margin area have non-zero values, is applied to the above optimisation problem to find the αi's, the threshold variable b, and the decision function f [17].

## 4. Gaussian Mixture Models (GMMs)

The GMM non-linear pattern classifier works by creating a maximum likelihood model for each fault case, given by [18]

$$\lambda=\{\mathbf{w},\boldsymbol{\mu},\boldsymbol{\Sigma}\} \qquad (13)$$

where w, µ, Σ are the weights, means and diagonal covariances of the features. Given a collection of training vectors, the parameters of this model are estimated by a number of algorithms, such as the Expectation-Maximization (EM) algorithm [18]. In this paper,
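Once the αi's and b have been obtained from the dual problem, the decision function of equation 9 is a kernel-weighted vote over the support vectors. A minimal sketch (the RBF kernel choice and all names are assumptions; the multipliers are taken as given rather than solved for):

```python
import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    """Illustrative RBF kernel k(x, z) = exp(-gamma * ||x - z||^2)."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

def svm_decision(x, support_X, support_y, alpha, b, kernel=rbf_kernel):
    """Decision function of equation 9: sign of the kernel expansion
    over the support vectors (alpha, b assumed to come from the dual
    problem of equation 12)."""
    s = sum(y_i * a_i * kernel(x, x_i)
            for x_i, y_i, a_i in zip(support_X, support_y, alpha))
    return np.sign(s + b)
```
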
the EM algorithm is used since it has reasonably fast computational time compared to other algorithms. The EM algorithm finds the optimum model parameters by iteratively refining the GMM parameters to increase the likelihood of the estimated model for the given fault modal vector. For the EM equations for training a GMM, the reader is referred to [19]. Fault detection or diagnosis using this classifier is then achieved by computing the likelihood of the unknown modal data under the different fault models. This likelihood is given by [18]

$$\hat{s}=\arg\max_{1\leq f\leq F}\;\sum_{k=1}^{K}\log p(\mathbf{x}_{k}\mid\lambda_{f}) \qquad (14)$$

where F represents the number of faults to be diagnosed, X = {x_1, x_2, ..., x_K} is the unknown D-dimensional fault modal data, and p(x_k | λ_f) is the mixture density function, given by [18]

$$p(\mathbf{x}\mid\lambda)=\sum_{i=1}^{M}w_{i}\,p_{i}(\mathbf{x}) \qquad (15)$$

with

$$p_{i}(\mathbf{x}_{k})=\frac{1}{(2\pi)^{D/2}\left|\Sigma_{i}\right|^{1/2}}\exp\left\{-\frac{1}{2}\left(\mathbf{x}_{k}-\mu_{i}\right)^{T}\Sigma_{i}^{-1}\left(\mathbf{x}_{k}-\mu_{i}\right)\right\} \qquad (16)$$

It should be noted that the mixture weights w_i satisfy the constraint $\sum_{i=1}^{M}w_{i}=1$.

## 5. Input Data

This section describes the inputs used to test the SVM, MLP and GMM. When modal analysis is used for fault classification, it is often found that more parameters are extracted from the vibration data than can possibly be used for MLP, SVM and GMM training. These data must therefore be reduced because of a phenomenon called the curse of dimensionality [18], which refers to the difficulties associated with density estimation in many dimensions.
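Equations 14-16 amount to scoring the unknown modal vectors under each fault model and keeping the best one. A minimal numpy sketch, assuming the per-fault model parameters (weights, means, diagonal variances) have already been estimated by EM; all names are illustrative:

```python
import numpy as np

def log_mixture_density(x, weights, means, variances):
    """Equations 15-16: diagonal-covariance Gaussian mixture density
    p(x | lambda) = sum_i w_i p_i(x), evaluated in log space."""
    D = x.shape[0]
    comps = []
    for w, mu, var in zip(weights, means, variances):
        quad = np.sum((x - mu) ** 2 / var)
        log_norm = -0.5 * (D * np.log(2 * np.pi) + np.sum(np.log(var)))
        comps.append(np.log(w) + log_norm - 0.5 * quad)
    return np.logaddexp.reduce(comps)

def diagnose(models, X_unknown):
    """Equation 14: select the fault model lambda_f maximizing the total
    log-likelihood of the unknown modal vectors x_1..x_K."""
    return max(models, key=lambda f: sum(
        log_mixture_density(x, *models[f]) for x in X_unknown))
```
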
However, this reduction process must be conducted such that the loss of essential information is minimized. The techniques implemented in this paper to reduce the dimension of the input data remove the parts of the data that do not contribute significantly to the dynamics of the system being analysed, or that are too sensitive to irrelevant parameters. To achieve this, we implement principal component analysis, which is discussed in the next section.

## 5.1 Principal Component Analysis

In this paper we use principal component analysis (PCA) [20,21] to reduce the input data to independent components. PCA orthogonalizes the components of the input vector so that they are uncorrelated with each other. In PCA, correlations and interactions among the variables in the data are summarised in terms of a small number of underlying factors.

## 6. Foundations of Dynamics

As indicated earlier, in this paper the modal properties, i.e. natural frequencies and mode shapes, are extracted from the measured vibration data and used for fault classification. For this reason the foundations of these parameters are described in this section. All elastic structures may be described in the time domain as [22]
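The PCA reduction described above can be sketched with an eigendecomposition of the sample covariance matrix (a generic sketch, not the authors' implementation; the function name is an assumption):

```python
import numpy as np

def pca_reduce(X, k):
    """Sketch of PCA dimensionality reduction: project the centered data
    onto the k eigenvectors of the covariance matrix with the largest
    eigenvalues, yielding uncorrelated components."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalues
    top = eigvecs[:, np.argsort(eigvals)[::-1][:k]]
    return Xc @ top
```

In the experiment of Section 7 this kind of projection is what reduces each 340-dimensional modal vector to 10 principal components.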
$$[M]\{X''\}+[C]\{X'\}+[K]\{X\}=\{F\} \qquad (17)$$

where [M], [C] and [K] are the mass, damping and stiffness matrices, respectively; {X}, {X′} and {X′′} are the displacement, velocity and acceleration vectors, respectively; and {F} is the applied force vector. If equation 17 is transformed into the modal domain to form an eigenvalue equation for the i-th mode, then [23]

$$\left(-\omega_{i}^{2}[M]+j\omega_{i}[C]+[K]\right)\{\phi\}_{i}=\{0\} \qquad (18)$$

where j = √−1, ωi is the i-th complex eigenvalue, with its imaginary part corresponding to the natural frequency, {0} is the null vector, and {φ}i is the i-th complex mode shape vector, with the real part corresponding to the normalized mode shape {φ}i. From equation 18 it is evident that changes in the mass and stiffness matrices cause changes in the modal properties, and thus the modal properties are damage indicators.

## 7. Example: Cylindrical Shells

## 7.1 Experimental Procedure

In this section the procedures using the GMM and SVM are experimentally validated and compared to the procedure using the MLP. The experiment is performed on a population of cylinders, which are supported by inserting a sponge resting on bubble-wrap, to simulate a 'free-free' environment (see Figure 2). The impulse hammer test is performed on each of the 20 steel seam-welded cylindrical shells. The impulse is applied at 19 different locations as indicated in Figure 2. More details on this experiment may be found in [4]. Each cylinder is divided into three equal substructures, and holes of 10-15 mm in diameter are introduced at the centers of the substructures to simulate faults.

<image>
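In the undamped special case ([C] = 0), equation 18 reduces to the generalized eigenvalue problem [K]{φ} = ω²[M]{φ}, so the natural frequencies follow from the eigenvalues of M⁻¹K. A small illustrative sketch (the 2-DOF matrices below are made-up values, not measured data):

```python
import numpy as np

def natural_frequencies(M, K):
    """Undamped case of equation 18: solve K*phi = omega^2 * M*phi and
    return the natural frequencies omega_i (rad/s) in ascending order."""
    eigvals = np.linalg.eigvals(np.linalg.solve(M, K))
    return np.sort(np.sqrt(np.real(eigvals)))
```

Reducing a stiffness entry of K (a simulated fault) shifts the returned frequencies, which is why modal properties serve as damage indicators.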
For one cylinder the first type of fault is a zero-fault scenario. This type of fault is given the identity [0 0 0], indicating that there are no faults in any of the three substructures. The second type of fault is a one-fault scenario, where a hole may be located in any of the three substructures. The three possible one-fault scenarios are [1 0 0], [0 1 0] and [0 0 1], indicating one hole in substructures 1, 2 or 3, respectively. The third type of fault is a two-fault scenario, where holes are located in two of the three substructures. The three possible two-fault scenarios are [1 1 0], [1 0 1] and [0 1 1]. The final type of fault is a three-fault scenario, where a hole is located in all three substructures, and the identity of this fault is [1 1 1]. There are 8 different fault types considered (including [0 0 0]). Each cylinder is measured three times under different boundary conditions, which are changed by changing the orientation of a rectangular sponge inserted inside the cylinder. The number of sets of measurements taken for the undamaged population is 60 (20 cylinders × 3 different boundary conditions). Of the 8 possible fault types, the two fault types [0 0 0] and [1 1 1] occur 60 times each, while each of the remaining types occurs 24 times. It should be noted that the numbers of one- and two-fault cases are each 72. This is because, as mentioned above, increasing the sizes of the holes in the substructures and taking vibration measurements generated additional one- and two-fault cases. Fault cases used to train and test the networks are shown in Table 1.

| Fault | [000] | [100] | [010] | [001] | [110] | [101] | [011] | [111] |
|--------------|---------|---------|---------|---------|---------|---------|---------|---------|
| Training set | 21 | 21 | 21 | 21 | 21 | 21 | 21 | 21 |
| Test set | 39 | 3 | 3 | 3 | 3 | 3 | 3 | 39 |

The impulse and response data are processed using the Fast Fourier Transform (FFT) to convert the time-domain impulse and response histories into the frequency domain.
The data in the frequency domain are used to calculate the frequency response functions (FRFs). From the FRFs, the modal properties are extracted using modal analysis [21]. The number of modal properties identified is 340 (17 modes × 19 measured mode-shape co-ordinates + 17 natural frequencies). PCA is used to reduce the dimension of the input data from 340 × 264 to 10 × 264, where 264 corresponds to the number of fault cases measured.

## 8. Results And Discussion

The measured data were used for MLP training, and the MLP architecture contained 10 input units, 8 hidden units and 3 output units. The scaled conjugate gradient method was used to train the MLP network [12]. The average time taken to train the MLP networks was 12 CPU seconds on a Pentium II computer. The results obtained are shown in Table 2, in which the actual fault cases are listed against the predicted fault cases. These results show that the MLP classifies fault cases with an accuracy of 88%. Table 1 showed that some fault cases are more numerous than others. In this case the measure of accuracy as a ratio of the sum of fault cases classified correctly divided
by the total number of cases can be misleading. This is the case when the fault cases classified incorrectly are those from the less numerous classes. To remedy this situation, a measure of accuracy called the geometrical accuracy (GA) is used, defined as:

$$\mathrm{GA}=\sqrt[n]{\prod_{i=1}^{n}\frac{c_{i}}{q_{i}}} \qquad (19)$$

where $c_i$ is the number of cases of the $i$th fault class classified correctly and $q_i$ is the total number of cases of the $i$th fault class. Using this measure, the MLP gives a GA of 0.7.

On training the SVM, there are different parameters that can be changed, namely the capacity, the ε-insensitivity, the amount of training input and the kernel function. Some of the kernel functions that can be used are linear, radial basis function, sigmoid and spline. In this paper the exponential radial basis function is used as the kernel. The training process took 45 CPU seconds and the capacity was set to infinity. The results obtained are shown in Table 3; they show that the SVM gives an accuracy of 94% and a GA of 0.92.

The GMM architecture, on the other hand, used a diagonal covariance matrix with 3 centres. The main advantage of using the diagonal covariance matrix is that it de-correlates the feature vectors. The training process took 45 CPU seconds. Table 4 shows that the GMM gives an accuracy of 98% and a GA of 0.95. As can be seen from Tables 2, 3 and 4, the GMM outperforms the SVM, which in turn outperforms the MLP.

Table 2.
**The confusion matrix obtained when the MLP network is used for fault classification**

| Actual \ Predicted | [000] | [100] | [010] | [001] | [110] | [101] | [011] | [111] |
|--------|-------|-------|-------|-------|-------|-------|-------|-------|
| [000] | 39 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| [100] | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 |
| [010] | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 |
| [001] | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 6 |
| [110] | 0 | 0 | 0 | 0 | 3 | 1 | 0 | 0 |
| [101] | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 |
| [011] | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 4 |
| [111] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 29 |

Table 3. **The confusion matrix obtained when the SVM network is used for fault classification**

| Actual \ Predicted | [000] | [100] | [010] | [001] | [110] | [101] | [011] | [111] |
|--------|-------|-------|-------|-------|-------|-------|-------|-------|
| [000] | 39 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| [100] | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 |
| [010] | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 |
| [001] | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 5 |
| [110] | 0 | 0 | 0 | 0 | 3 | 0 | 0 | 0 |
| [101] | 0 | 0 | 0 | 0 | 0 | 3 | 0 | 0 |
| [011] | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 1 |
| [111] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 33 |
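The plain accuracy and the GA of equation 19 can both be read off a confusion matrix like those in the tables. The sketch below is our own illustration on a hypothetical two-class matrix (function name and data are not from the paper); it shows how GA penalises a poorly classified rare class that plain accuracy hides.

```python
import numpy as np

def geometric_accuracy(confusion):
    """Geometrical accuracy (equation 19): the n-th root of the product
    of per-class recalls c_i / q_i, where c_i is the number of class-i
    cases classified correctly and q_i is the total for class i."""
    confusion = np.asarray(confusion, dtype=float)
    correct = np.diag(confusion)        # c_i
    totals = confusion.sum(axis=1)      # q_i
    recalls = correct / totals
    return recalls.prod() ** (1.0 / len(recalls))

# Hypothetical 2-class confusion matrix: the numerous class dominates
# the plain accuracy, while GA exposes the weak rare class.
cm = [[90, 10],
      [5, 5]]
plain = (90 + 5) / 110          # ~0.864
ga = geometric_accuracy(cm)     # sqrt(0.9 * 0.5) ~ 0.671
print(plain, ga)
```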
Table 4. **The confusion matrix obtained when the GMM network is used for fault classification**

| Actual \ Predicted | [000] | [100] | [010] | [001] | [110] | [101] | [011] | [111] |
|--------|-------|-------|-------|-------|-------|-------|-------|-------|
| [000] | 39 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| [100] | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 |
| [010] | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 |
| [001] | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 1 |
| [110] | 0 | 0 | 0 | 0 | 3 | 0 | 0 | 1 |
| [101] | 0 | 0 | 0 | 0 | 0 | 3 | 0 | 0 |
| [011] | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 2 |
| [111] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 35 |

## 9. Conclusions

In this paper the GMM and SVM were introduced to classify faults in a population of cylindrical shells, and compared to the MLP. The GMM was observed to give more accurate results than the SVM, which was in turn observed to give more accurate results than the MLP.

## References

[1] S.W. Doebling, C.R. Farrar, M.B. Prime, D.W. Shevitz. Damage identification and health monitoring of structural and mechanical systems from changes in their vibration characteristics: a literature review. *Los Alamos Technical Report LA-13070-MS*, Los Alamos National Laboratory, New Mexico, USA, 1996.

[2] T. Marwala. On fault identification using pseudo-modal-energies and modal properties. *American Institute of Aeronautics and Astronautics Journal*, 39:1608-1618, 2001.

[3] T. Marwala. Probabilistic fault identification using a committee of neural networks and vibration data. *American Institute of Aeronautics and Astronautics, Journal of Aircraft*, 38:138-146, 2001.

[4] T. Marwala. *Fault Identification Using Neural Networks and Vibration Data*. Ph.D. thesis, University of Cambridge, Cambridge, UK, 2001.

[5] C.M. Bishop. *Neural Networks for Pattern Recognition*. Oxford University Press, Oxford, UK, 1995.

[6] T. Joachims. Making large-scale SVM learning practical. In B. Scholkopf, C. J. C. Burges and A. J.
Smola, editors, *Advances in Kernel Methods - Support Vector Learning*, pages 169-184. MIT Press, Cambridge, MA, 1999.

[7] M.M. Pires and T. Marwala. American option pricing using multi-layer perceptron and support vector machines. *Proceedings of the IEEE Conference on Systems, Man and Cybernetics*, The Hague, 1279-1285, 2004.

[8] A. Iwasaki, A. Todoroki, Y. Shimamura, et al. An unsupervised statistical damage detection method for structural health monitoring (applied to detection of delamination of a composite beam). *Smart Materials and Structures*, 13:80-85, 2004.

[9] K. Worden, A.J. Lane. Damage identification using support vector machines. Smart
# Learning To Bluff

Evan Hurwitz and Tshilidzi Marwala

Abstract— The act of bluffing confounds game designers to this day. The very nature of bluffing is even open to debate, adding further complication to the process of creating intelligent virtual players that can bluff, and hence play, realistically. Through the use of intelligent, learning agents, and carefully designed agent outlooks, an agent can in fact learn to predict its opponents' reactions based not only on its own cards, but also on the actions of those around it. With this wider scope of understanding, an agent can learn to bluff its opponents, with the action representing not an "illogical" move, as bluffing is often viewed, but rather an act of maximising returns through effective statistical optimisation. By using a TD(λ) learning algorithm to continuously adapt neural network agent intelligence, agents have been shown to be able to learn to bluff without outside prompting, and even to learn to call each other's bluffs in free, competitive play.

## I. Introduction

WHILE many card games involve an element of bluffing, simulating and fully understanding bluffing remains one of the most elusive tasks presented to the game design engineer. The entire process of bluffing relies on performing a task that is unexpected, and is thus misinterpreted by one's opponents. For this reason, static rules are doomed to failure: once they become predictable, they cannot be misinterpreted. In order to create an artificially intelligent agent that can bluff, one must first create an agent that is capable of learning. The agent must be able to learn not only about the inherent nature of the game it is playing, but must also be capable of learning the trends emerging from its opponents' behaviour, since bluffing is only plausible when one can anticipate the opponents' reactions to one's own actions. Firstly, the game to be modelled will be detailed, with the reasoning for its choice explained.
The paper will then detail the system and agent architecture, which is of paramount importance since it not only ensures that the correct information is available to the agent, but also has a direct impact on the efficiency of the learning algorithms utilised. Once the system is fully illustrated, the actual learning of the agents is shown, with the appropriate findings detailed.

## II. Lerpa

The card game being modelled is the game of *Lerpa* [4]. While not a well-known game, its rules suit the purposes of this research exceptionally well, making it an ideal testbed application for intelligent-agent Multi-Agent Modelling (MAM). The rules of the game first need to be elaborated upon, in order to grasp the implications of the results obtained. Thus, the rules for *Lerpa* now follow.

The game of *Lerpa* is played with a standard deck of cards, with the exception that all of the 8s, 9s and 10s are removed from the deck. The cards are valued from ace (highest) down to 2 (lowest), with the exception that the 7 is valued higher than a king but lower than an ace, making it the second most valuable card in a suit. At the end of dealing the hand, the dealer has the choice of *dealing himself in*, which entails flipping his last card over, unseen up until this point, which then declares the *trump suit* [4]. Should he elect not to do this, he then flips the next card in the deck to determine the trump suit. Regardless, once trumps are determined, the players take it in turns, going clockwise from the dealer's left, to elect whether to play the hand (to *knock*) or to drop out of the hand, referred to as *folding* (if the dealer has *dealt himself in*, as described above, he is automatically required to play the hand). Once all players have chosen, those that have elected to play then play the hand, with the player to the dealer's left playing the first card.
Once this card has been played, players must then play in suit - in other words, if a heart is played, they must play a heart if they have one. If they have none of the required suit, they may play a trump, which will win the trick unless another player plays a higher trump. The highest card played will win the trick (with all trumps valued higher than any other card) and the winner of the trick will lead the first card in the next trick. At any point in a hand, if a player has the Ace of trumps and can legally play it, he is then required to do so [4]. The true risk in the game comes from the betting, which occurs as follows: At the beginning of the round, the dealer pays the table 3 of whatever the basic betting denomination is (referred to usually as 'chips'). At the end of the hand, the chips are divided up proportionately between the winners, i.e. if you win two tricks, you will receive two thirds of whatever is in the pot. However, if you stayed in, but did not win any tricks, you are said to have been *Lerpa'd*, and are then required to match whatever was in the pot for the next hand, effectively costing you the pot. It is in the evaluation of this risk that most of the true skill in *Lerpa* lies.
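The trick-resolution rules above can be sketched in code. The sketch below is our own illustration (the card representation and function names are not from the paper): any trump beats any non-trump, cards of the led suit compete on rank, and the 7 sits between the king and the ace.

```python
# Ranks from weakest to strongest; note the 7's position above the king.
RANKS = ["2", "3", "4", "5", "6", "J", "Q", "K", "7", "A"]

def trick_winner(cards, trump_suit):
    """cards: list of (rank, suit) in play order; returns the index of
    the winning card.  The led suit is the suit of the first card; any
    trump beats any non-trump, otherwise the highest card of the led
    suit wins (off-suit, non-trump cards can never win)."""
    led_suit = cards[0][1]

    def strength(card):
        rank, suit = card
        if suit == trump_suit:
            return (2, RANKS.index(rank))    # trumps beat everything else
        if suit == led_suit:
            return (1, RANKS.index(rank))    # in-suit cards compete on rank
        return (0, -1)                        # off-suit non-trump never wins

    return max(range(len(cards)), key=lambda i: strength(cards[i]))

# The 7 of the led suit beats the king but loses to the ace...
print(trick_winner([("K", "H"), ("7", "H"), ("A", "H")], trump_suit="S"))  # 2
# ...and a low trump beats any non-trump card.
print(trick_winner([("A", "H"), ("2", "S")], trump_suit="S"))              # 1
```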
## III. Lerpa MAM

As with any optimisation system, very careful consideration needs to be taken with regard to how the system is structured, since the implications of these decisions can often result in unintentional assumptions made by the system created. With this in mind, the Lerpa Multi-Agent System (MAS) has been designed to allow the maximum amount of freedom to the system, and to the agents within it, while also allowing for generalisation and swift convergence, so that the intelligent agents can interact unimpeded by human assumptions, intended or otherwise.

## A. System Overview

The game is, for this model, going to be played by four players. Each of these players will interact with each other indirectly, by interacting directly with the *table*, which is their shared environment, as depicted in Fig. 1.

<image>

Over the course of a single hand, an agent will be required to make three decisions, once at each interactive stage of the game. These three decision-making stages are:

1) To play the hand, or drop (*knock* or *fold*).
2) Which card to play first.
3) Which card to play second.

Since there is no decision to be made at the final card, the hand can be said to be effectively finished from the agent's perspective after it has played its second card (or indeed after the first decision, should the agent fold). Following the TD(λ) algorithm [5], each agent will update its own neural network at each stage, using its own predictions as a reward function, and only receiving a true reward after its final decision has been made. This decision-making process is illustrated in Fig. 2.

<image>

Each hand played will be viewed as an independent, stochastic event, and as such only information about the current hand will be available to the agent, who will have to draw on its own learned knowledge base for deductions, rather than on the particulars of previous hands.

## B. Agent AI Design

A number of decisions need to be made in order to implement the agent artificial intelligence (AI) effectively and efficiently. The type of learning to be implemented needs to be chosen, as well as the neural network architecture [7]. Special attention needs to be paid to the design of the inputs to the neural network, as these determine what the agent can 'see' at any given point. This will also determine what assumptions, if any, are implicitly made by the agent, and hence cannot be taken lightly. Lastly, this will determine the dimensionality of the network, which directly affects the learning rate of the network, and hence must obviously be minimized.

1) *Input Parameter Design:* In order to design the input stage of the agent's neural network, one must first determine all that the network may need to know at any given decision-making stage. All inputs, in order to optimise stability, are structured as binary-encoded inputs. When making its first decision, the agent needs to know its own cards, which agents have stayed in or folded, and which agents are still to decide [9]. It is necessary for the agent to be able to determine which specific agents have taken which specific actions, as this allows the agent to learn a particular opponent's characteristics, something impossible to do if it can only see the number of players in or out. Similarly, the agent's own cards must be specified fully, allowing the agent to draw its own conclusions about each card's relative value. It is also necessary to tell the agent which suit has been designated the trump suit, but a more elegant method has been found to handle that information, as will be seen shortly. Fig. 3 below illustrates the initial information required by the network.

<image>

The agent's hand needs to be explicitly described, and the obvious solution is to encode the cards exactly, i.e. four suits, and ten numbers in each suit, giving forty possibilities for each card.
A quick glimpse at the number of options available shows that a raw encoding style poses a sizeable problem of dimensionality, since an encoded hand could be one of 40³ possible hands (in actuality, only ⁴⁰P₃ hands could be selected, since cards cannot be repeated, but the raw encoding
scheme would in fact allow for repeated cards, and hence 40³ options would be available). The first thing to notice is that only a single deck of cards is being used, hence no card can ever be repeated in a hand. Acting on this principle, consistent ordering of the hand means that the base dimensionality of the hand is greatly reduced, since it is now combinations of cards that are represented, instead of permutations. The number of combinations now represented is ⁴⁰C₃. This seemingly small change from ⁿPᵣ to ⁿCᵣ reduces the dimensionality of the representation by a factor of r!, which in this case is a factor of 6. Furthermore, the representation of cards as belonging to discrete suits is not optimal either, since the game places no particular value on any suit by its own virtue, but rather by virtue of which suit is the trump suit. For this reason, an alternate encoding scheme has been determined, rating the 'suits' based upon the makeup of the agent's hand, rather than four arbitrary suits. The suits are encoded as belonging to one of the following groups, or new "suits":

- Trump suit
- Suit agent has multiple cards in (not trumps)
- Suit of agent's highest singleton
- Suit of agent's second-highest singleton
- Suit of agent's third-highest singleton

This allows for a much more efficient description of the agent's hand, greatly reducing the dimensionality of the inputs, and hence improving the learning rate of the agents. These five options are encoded in a binary format, for stability purposes, and hence three binary inputs are required to represent the suits. To represent the card's number, ten discrete values must be represented, requiring four binary inputs. Thus a card in an agent's hand is represented by seven binary inputs, as depicted in Fig. 4.

<image>

Next, the information required in order to make decisions two and three must be considered.
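The five-category suit re-encoding above can be sketched as follows. This is our own illustration (the card representation and names are not from the paper): each card's suit is relabelled relative to the hand, with the trump suit taking priority.

```python
from collections import Counter

# The five relative "suits" of the encoding scheme described above.
TRUMP, MULTIPLE, SINGLE_1, SINGLE_2, SINGLE_3 = range(5)

def encode_suits(hand, trump_suit):
    """Map each card's suit to one of the five relative 'suits'.
    hand: list of (value, suit), higher value = stronger card."""
    counts = Counter(suit for _, suit in hand)
    # Non-trump singleton suits, ordered by their single card's value.
    singletons = sorted(
        (s for s, c in counts.items() if c == 1 and s != trump_suit),
        key=lambda s: -max(v for v, suit in hand if suit == s))
    labels = {}
    for value, suit in hand:
        if suit == trump_suit:
            labels[(value, suit)] = TRUMP
        elif counts[suit] > 1:
            labels[(value, suit)] = MULTIPLE
        else:
            order = [SINGLE_1, SINGLE_2, SINGLE_3]
            labels[(value, suit)] = order[singletons.index(suit)]
    return labels

# A hand with one card in each of three suits, clubs being trumps:
hand = [(9, "H"), (4, "D"), (2, "C")]
print(encode_suits(hand, trump_suit="C"))
```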
For both of these decisions, the cards that have already been played, if any, are necessary to know in order to make an intelligent decision as to the correct next card to play. For the second decision, it is also plausible that knowledge of who has won a trick would be important. The most cards that can ever be played before a decision must be made is seven, and since the table after a card is played is used to evaluate and update the network, it is necessary to represent eight played cards. Once again, however, simply utilising the obvious encoding method is not necessarily the most efficient approach. The actual values of the cards played are not necessarily important, only their values relative to the cards in the agent's hand. As such, each value can be represented as one of the following, with respect to the cards of the same suit in the agent's hand:

- Higher than the card/cards in the agent's hand
- Higher than the agent's second-highest card
- Higher than the agent's third-highest card
- Lower than any of the agent's cards
- Member of a void suit (number is immaterial)

Also, another suit is now relevant for the representation of the played cards, namely a void suit - a suit in which the agent has no cards. Lastly, an input is necessary to handle the special case of the Ace of trumps, since its unique rules mean that strategies can be developed based on whether it has or has not been played. The now six suits available still only require three binary inputs to represent, and the six number groupings now reduce the value representations from four binary inputs to three, once again reducing the dimensionality of the input system. With all of these inputs specified, the agent now has available all of the information required to draw its own conclusions and create its own strategies, without human-imposed assumptions affecting its "thought" patterns.
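The relative-value classification of a played card can be sketched as follows (our own illustration; category and function names are not from the paper):

```python
# The five relative value categories for a played card, plus void suits.
HIGHER_THAN_ALL, HIGHER_THAN_2ND, HIGHER_THAN_3RD, LOWER, VOID = range(5)

def relative_value(card_value, own_values_in_suit):
    """Classify a played card relative to the agent's cards in that suit.
    own_values_in_suit: the agent's card values in the suit (empty list
    means the agent is void in that suit)."""
    mine = sorted(own_values_in_suit, reverse=True)
    if not mine:
        return VOID                      # number is immaterial in a void suit
    if card_value > mine[0]:
        return HIGHER_THAN_ALL
    if len(mine) > 1 and card_value > mine[1]:
        return HIGHER_THAN_2ND
    if len(mine) > 2 and card_value > mine[2]:
        return HIGHER_THAN_3RD
    return LOWER

# A played 8 against a hand holding 9, 5, 3 in that suit sits between
# the agent's highest and second-highest cards:
print(relative_value(8, [9, 5, 3]))   # HIGHER_THAN_2ND
print(relative_value(7, []))          # VOID
```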
2) *Network Architecture Design:* With the inputs now specified, the hidden and output layers need to be designed. For the output neurons, these need to represent the prediction P that the network is making [2]. A single hand has one of five possible outcomes, all of which need to be catered for. These possible outcomes are:

- The agent wins all three tricks, winning 3 chips.
- The agent wins two tricks, winning 2 chips.
- The agent wins one trick, winning 1 chip.
- The agent wins zero tricks, losing 3 chips.
- The agent elects to fold, winning no tricks, but losing no chips.

This can be seen as a set of options, namely [-3 0 1 2 3]. While it may seem tempting to output this as one continuous output, there are two compelling reasons for breaking these up into binary outputs. The first is to optimise stability, as elaborated upon in Section V. The second is that these are discrete events, and a continuous representation would cover the range between -3 and 0, which does not in fact exist. The binary outputs then specified are:

- P(O = 3)
- P(O = 2)
- P(O = 1)
- P(O = -3)

with a low probability on all four corresponding to folding, winning and losing no chips. Consequently, the agent's predicted return is:

$$P = 3A + 2B + C - 3D \qquad (1)$$

where

$$A = P(O = 3) \qquad (2)$$
$$B = P(O = 2) \qquad (3)$$

$$C = P(O = 1) \qquad (4)$$

$$D = P(O = -3) \qquad (5)$$

The internal structure of the neural network uses a standard sigmoidal activation function [7], which is suitable for stability purposes and still allows for the freedom expected of a neural network. The sigmoidal activation function varies between zero and one, rather than the often-used minus one and one, in order to optimise for stability [7]. Since a high degree of freedom is required, a high number of hidden neurons is required, and thus fifty have been used. This number was arrived at iteratively, trading off training speed against performance. The output neurons are linear, since they represent not binary events but rather continuous probabilities of particular binary outcomes.

3) *Agent Decision Making:* With its own predictor specified [2], the agent is now equipped to make decisions when playing. These decisions are made by predicting the return of the resultant situation arising from each legal choice it can make. An ε-greedy policy is then used to determine whether the agent will choose the most promising option, or whether it will explore a less appealing one. In this way, the agent trades off exploration against exploitation.

## IV. The Intelligent Model

With each agent implemented as described above, and interacting with the others as specified in Section III, we can now perform the desired task, namely that of utilising a multi-agent model to analyse the given game, and develop strategies that may "solve" the game under differing circumstances. Only once agents know how to play a certain hand can they then begin to outplay, and potentially bluff, each other.

## A. Agent Learning Verification

In order for the model to have any validity, one must establish that the agents do indeed learn as they were designed to do.
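The predicted return of equations (1)-(5) is a simple weighted sum over the four output probabilities. A minimal sketch (the probability values below are hypothetical, chosen only to contrast a safe line of play against a risky one):

```python
def predicted_return(p3, p2, p1, p_loss):
    """Expected chip return (equation 1) from the four output
    probabilities P(O=3), P(O=2), P(O=1) and P(O=-3).  When all four
    are near zero, the prediction approaches the fold outcome of 0."""
    return 3 * p3 + 2 * p2 + 1 * p1 - 3 * p_loss

# The agent evaluates each legal action by the predicted return of the
# resulting situation, e.g. a likely single trick versus a risky shot
# at all three:
safe = predicted_return(0.05, 0.15, 0.60, 0.20)    # 0.45
risky = predicted_return(0.30, 0.10, 0.10, 0.50)   # -0.3
print(safe, risky)
```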
In order to verify the learning of the agents, a single intelligent agent was created and placed at a table with three 'stupid' agents. These 'stupid' agents always stay in the game, and make a random choice whenever called upon to make a decision. The results show quite conclusively that the intelligent agent soon learns to consistently outperform its opponents, as shown in Fig. 5. The agents named Randy, Roderick and Ronald use random decision-making, while AIden has the TD(λ) AI system implemented [5]. The results have been averaged over 40 hands, in order to be more viewable and to smooth out random variation.

<image>

1) *Cowardice:* In the learning phase of the above-mentioned intelligent agent, an interesting and somewhat enlightening problem arises. When initially learning, the agent does not in fact continue to learn. Instead, the agent quickly determines that it is losing chips, and decides that it is better off not playing, and keeping its chips! This is illustrated in Fig. 6.

<image>

As can be seen, AIden [8] quickly decides that the risks are too great, and does not play in any hands initially. After forty hands, AIden decides to play a few hands, and when they go badly, gets scared off for good. This is a result of the penalising nature of the game: bad play can easily mean losing a full three chips, and since the surplus of lost chips is not carried over in this simulation, a bad player loses chips regularly. While insightful, a cowardly agent is not of any particular use, and hence the agent must be given enough 'courage' to play, and hence learn, the game. One option is to increase the value of ε in the ε-greedy policy, but this makes the agent far too much like a random player, without any intelligence. A more successful and sensible solution is to force the agent to play when it knows nothing, until such a stage as it seems prepared to play.
This was done by forcing AIden [8] to play the first 200 hands it had ever seen, and thereafter leave AIden to his own devices [8], the result of which has been shown already in Fig. 5.
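The ε-greedy policy with this forced-play 'courage' fix can be sketched as follows. The function and parameter names are our own; the 200-hand warm-up and the ε value of 0.01 come from the paper, and the assumption that the fold option sits at index 0 is illustrative.

```python
import random

def choose_action(values, hand_number, epsilon=0.01, forced_hands=200,
                  rng=random.Random(0)):
    """epsilon-greedy selection over predicted returns, with the
    'courage' fix: for the first forced_hands hands, the fold option
    (assumed here to be index 0) is excluded, so the agent must play.
    values: list of predicted returns, one per legal action."""
    candidates = list(range(len(values)))
    if hand_number < forced_hands:
        candidates = candidates[1:]          # forbid folding while learning
    if rng.random() < epsilon:
        return rng.choice(candidates)        # explore
    return max(candidates, key=lambda i: values[i])   # exploit

# Early on, the agent may not fold even if folding looks best:
print(choose_action([0.0, -0.5, -1.0], hand_number=10))    # 1, not 0
# After the warm-up, it is free to fold:
print(choose_action([0.0, -0.5, -1.0], hand_number=500))   # 0
```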
## B. Parameter Optimisation

A number of parameters need to be optimised in order to optimise the learning of the agents. These parameters are the learning rate α, the memory parameter λ and the exploration parameter ε. The multi-agent system provides a perfect environment for this testing, since four different parameter combinations can be tested competitively. By setting different agents to different combinations, and allowing them to play against each other for an extended period of time (number of hands), one can iteratively find the parameter combination that achieves the best results, and hence the optimum learning parameters [3]. Fig. 7 shows the results of one such test, illustrating a definite 'winner', whose parameters were then used for the rest of the multi-agent modelling. It is also worth noting that as soon as the dominant agent begins to lose, it adapts its play to remain competitive with its less effective opponents. This is evidenced at points 10 and 30 on the graph (game numbers 300 and 900, since the graph is averaged over 30 hands), where one can see the dominant agent begin to lose, and then begin to perform well once again.

<image>

Surprisingly enough, the parameters that yielded the most competitive results were α = 0.1, λ = 0.1 and ε = 0.01. While the ε value is not particularly surprising, the relatively low α and λ values are not exactly intuitive. What they amount to is a degree of temperance: higher values would mean learning a large amount from any given hand, effectively over-reacting to a hand the agent may have played well, only to fall afoul of bad luck.

## C. MAS Learning Patterns

With all of the agents learning in the same manner, it is noteworthy that the overall rewards they obtain are far better than those obtained by the random agents, and even than those obtained by the intelligent agent that was playing against the random agents [3]. A sample of these results is depicted in Fig. 8.
R1 to R3 are the random agents, while AI1 is the intelligent agent playing against the random agents. AI2 to AI5 depict intelligent agents playing against each other. As can be seen, the agents learn far better when playing against intelligent opponents, an attribute that is in fact mirrored in human competitive learning.

<image>

## D. Agent Adaptation

In order to ascertain whether the agents do in fact adapt to each other, the agents were given pre-dealt hands, and required to play them against each other repeatedly. The results of one such experiment, illustrated in Fig. 9, show how an agent learns from its own mistakes and, once certain of them, changes its play, adapting to gain a better return from the hand. The mistakes it sees are its low returns, returns of -3 to be precise. At one point, the winning player evidently decides to explore, giving some false hope to the losing agent, but then quickly continues to exploit his advantage. Eventually, at game \#25, the losing agent gives up, adapting his play to suit the losing situation in which he finds himself. Fig. 9 illustrates the progression of the agents and the adaptation described.

<image>

## E. Strategy Analysis

The agents have been shown to successfully learn to play the game, and to adapt to each other's play in order to maximise their own rewards. These agents form the pillars of the multi-agent model, which can now be used to analyse, and attempt to 'solve', the game. Since the game has a nontrivial degree of complexity, situations within the game are solved, with each situation considered a sub-game of the overall game. The first and most obvious type of analysis is a static analysis, in which all of the hands are pre-dealt. This system can be said to have stabilised when the results and the playout become constant, with all agents content to play the hand out in the same manner, each deciding nothing better can be
# Soft Constraint Abstraction Based On Semiring Homomorphism ∗

Sanjiang Li and Mingsheng Ying †

Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China

## Abstract

The framework of semiring-based constraint satisfaction problems (semiring CSPs), proposed by Bistarelli, Montanari and Rossi [3], is a very general framework for soft constraints. In this paper we propose an abstraction scheme for soft constraints that uses semiring homomorphisms. To find optimal solutions of the concrete problem, the idea is to first work in the abstract problem and find its optimal solutions, and then to use them to solve the concrete problem. In particular, we show that a mapping preserves optimal solutions if and only if it is an order-reflecting semiring homomorphism. Moreover, for a semiring homomorphism α and a problem P over S, if t is optimal in α(P), then there is an optimal solution t̄ of P such that t̄ has the same value as t in α(P).

Keywords: Abstraction; Constraint solving; Soft constraint satisfaction; Semiring homomorphism; Order-reflecting.

## 1 Introduction

In recent years there has been a growing interest in soft constraint satisfaction. Various extensions of the classical constraint satisfaction problems (CSPs) [10, 9] have been introduced in the literature, e.g., Fuzzy CSP [11, 5, 12], Probabilistic CSP [6], Weighted CSP [15, 7], Possibilistic CSP [13], and Valued CSP [14]. Roughly speaking, these extensions are just like classical CSPs, except that each assignment of values to variables in the constraints is associated with an element taken from a semiring. Furthermore, nearly all of these extensions, as well as classical CSPs, can be cast in the semiring-based constraint solving framework, called SCSP (for *Semiring CSP*), proposed by Bistarelli, Montanari and Rossi [3].

∗ Work partially supported by National Nature Science Foundation of China (60673105, 60621062, 60496321).

† lisanjiang@tsinghua.edu.cn (S.
Li), yingmsh@tsinghua.edu.cn (M. Ying)** arXiv:0705.0734v1 [cs.AI] 5 May 2007
The next lemma shows that α preserves optimal solutions only if it is a semiring homomorphism.

**Lemma 4.3.** *Let α be a mapping from c-semiring $S$ to c-semiring $\widetilde{S}$ such that $\alpha(0)=\widetilde{0}$ and $\alpha(1)=\widetilde{1}$. Suppose $\alpha : S \rightarrow \widetilde{S}$ preserves optimal solutions. Then α is a semiring homomorphism.*

*Proof.* By Lemma 4.1, we know α satisfies Equation 5. We first show that α is monotonic. Take $u, v \in S$ with $u \leq_S v$, and suppose $\alpha(u) \not\leq_{\widetilde{S}} \alpha(v)$. Then $\alpha(v)\,\widetilde{+}\,\alpha(v) = \alpha(v) <_{\widetilde{S}} \alpha(u)\,\widetilde{+}\,\alpha(v)$. By Equation 5, we have $v = v + v <_S u + v = v$. This is a contradiction, hence we have $\alpha(u) \leq_{\widetilde{S}} \alpha(v)$.

Next, for any $u, v \in S$, we show $\alpha(u+v) = \alpha(u)\,\widetilde{+}\,\alpha(v)$. Since α is monotonic, we have $\alpha(u+v) \geq_{\widetilde{S}} \alpha(u)\,\widetilde{+}\,\alpha(v)$. Suppose $\alpha(u+v)\,\widetilde{+}\,\alpha(u+v) = \alpha(u+v) >_{\widetilde{S}} \alpha(u)\,\widetilde{+}\,\alpha(v)$. By Equation 5 again, we have $(u+v)+(u+v) >_S u+v$, also a contradiction.

Finally, for $u, v \in S$, we show $\alpha(u \times v) = \alpha(u)\,\widetilde{\times}\,\alpha(v)$. Suppose not, and set $w = \alpha(u)\,\widetilde{\times}\,\alpha(v)\,\widetilde{+}\,\alpha(u \times v)$. Then we have either $\alpha(u)\,\widetilde{\times}\,\alpha(v) <_{\widetilde{S}} w$ or $\alpha(u \times v) <_{\widetilde{S}} w$. Since $\alpha(0)=\widetilde{0}$ and $\alpha(1)=\widetilde{1}$, these two inequalities can be rewritten respectively as
$$\alpha(u)\,\widetilde{\times}\,\alpha(v)\,\widetilde{+}\,\alpha(1)\,\widetilde{\times}\,\alpha(0) <_{\widetilde{S}} \alpha(u)\,\widetilde{\times}\,\alpha(v)\,\widetilde{+}\,\alpha(u \times v)\,\widetilde{\times}\,\alpha(1)$$
and
$$\alpha(1)\,\widetilde{\times}\,\alpha(0)\,\widetilde{+}\,\alpha(u \times v)\,\widetilde{\times}\,\alpha(1) <_{\widetilde{S}} \alpha(u)\,\widetilde{\times}\,\alpha(v)\,\widetilde{+}\,\alpha(u \times v)\,\widetilde{\times}\,\alpha(1).$$
By Equation 5 again, we have either $u \times v + 1 \times 0 <_S u \times v + (u \times v) \times 1$ or $1 \times 0 + (u \times v) \times 1 <_S u \times v + (u \times v) \times 1$. Both give rise to a contradiction. This ends the proof. $\square$

We now achieve our main result:

**Theorem 4.1.** *Let α be a mapping from c-semiring $S$ to c-semiring $\widetilde{S}$ such that $\alpha(0)=\widetilde{0}$ and $\alpha(1)=\widetilde{1}$. Then α preserves optimal solutions for all constraint systems if and only if α is an order-reflecting semiring homomorphism.*

*Proof.* The necessity part of the theorem follows from Lemmas 4.2 and 4.3. As for the sufficiency part, we need only show that, if α is an order-reflecting semiring homomorphism, then α satisfies Equation 5.
Suppose
$$\widetilde{\sum}_{i=1}^{n}\widetilde{\prod}_{j=1}^{m}\alpha(u_{ij}) <_{\widetilde{S}} \widetilde{\sum}_{i=1}^{n}\widetilde{\prod}_{j=1}^{m}\alpha(v_{ij}).$$
Since α commutes with $\sum$ and $\prod$, we clearly have
$$\alpha\Big(\sum_{i=1}^{n}\prod_{j=1}^{m}u_{ij}\Big)=\widetilde{\sum}_{i=1}^{n}\widetilde{\prod}_{j=1}^{m}\alpha(u_{ij})<_{\widetilde{S}}\widetilde{\sum}_{i=1}^{n}\widetilde{\prod}_{j=1}^{m}\alpha(v_{ij})=\alpha\Big(\sum_{i=1}^{n}\prod_{j=1}^{m}v_{ij}\Big).$$
Since α is order-reflecting, we immediately have
$$\sum_{i=1}^{n}\prod_{j=1}^{m}u_{ij} <_S \sum_{i=1}^{n}\prod_{j=1}^{m}v_{ij}.$$
This ends the proof. $\square$
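To make Theorem 4.1 concrete, the following Python sketch brute-force checks its two conditions, on a finite sample, for the "cut at 1" map from the fuzzy c-semiring $\langle [0,1], \max, \min, 0, 1\rangle$ to the Boolean c-semiring, and then verifies on a toy problem that every concrete optimum remains abstractly optimal. The sample points and the toy problem are illustrative assumptions, order-reflection is read in the simple elementwise form $\alpha(a) <_{\widetilde{S}} \alpha(b) \Rightarrow a <_S b$, and "preserves optimal solutions" is read as $Opt(P) \subseteq Opt(\alpha(P))$; none of these concrete choices are taken from the paper.

```python
from itertools import product

# Finite sample of the fuzzy c-semiring S = <[0,1], max, min, 0, 1> and the
# Boolean c-semiring S~ = <{False, True}, or, and, False, True>.
# The candidate map "cuts at 1": alpha(x) = (x == 1).
S_elems = [0.0, 0.3, 0.7, 1.0]
alpha = lambda x: x == 1.0

# Semiring homomorphism: alpha maps units to units and commutes with
# + (max vs. or) and x (min vs. and).
hom = (alpha(0.0) is False and alpha(1.0) is True and
       all(alpha(max(a, b)) == (alpha(a) or alpha(b)) and
           alpha(min(a, b)) == (alpha(a) and alpha(b))
           for a, b in product(S_elems, repeat=2)))

# Elementwise order-reflection: alpha(a) strictly below alpha(b) in the
# Boolean order (False < True) must force a < b in S.
refl = all(a < b for a, b in product(S_elems, repeat=2)
           if (not alpha(a)) and alpha(b))

# Toy problem: each tuple is listed with its constraint values; the
# combined value is min.  alpha(P) applies alpha entry-wise and combines
# with "and".  Every optimum of P should remain an optimum of alpha(P).
P = {'t1': [0.7, 1.0], 't2': [1.0, 1.0], 't3': [0.3, 0.7]}
conc = {t: min(vals) for t, vals in P.items()}
abst = {t: all(alpha(v) for v in vals) for t, vals in P.items()}
opt_conc = {t for t, v in conc.items() if v == max(conc.values())}
opt_abst = {t for t, v in abst.items() if v == max(abst.values())}

print(hom, refl, opt_conc <= opt_abst)
```

All three checks succeed for this map; the same harness can be pointed at any candidate α and finite sample.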
## 5 Computing Concrete Optimal Solutions From Abstract Ones

In the above section, we investigated conditions under which all optimal solutions of the concrete problem can be related *precisely* to those of the abstract problem. There are often situations where it suffices to find *some* optimal solutions, or simply a good approximation of the concrete optimal solutions. This section shows that, even without the order-reflecting condition, a semiring homomorphism can be used to find some optimal solutions of the concrete problem from abstract ones.

**Theorem 5.1.** *Let $\alpha : S \rightarrow \widetilde{S}$ be a semiring homomorphism. Given an SCSP $P$ over $S$, suppose $t \in Opt(\alpha(P))$ has value $v$ in $P$ and value $\widetilde{v}$ in $\alpha(P)$. Then there exists $\overline{t} \in Opt(P) \cap Opt(\alpha(P))$ with value $\overline{v} \geq_S v$ in $P$ and value $\widetilde{v}$ in $\alpha(P)$. Moreover, we have $\alpha(\overline{v}) = \alpha(v) = \widetilde{v}$.*

*Proof.* Suppose $P = \langle C, con \rangle$, $C = \{c_i\}_{i=1}^{m}$ and $c_i = \langle \mathrm{def}_i, con_i \rangle$. Set $con' = con \cup \bigcup_{j=1}^{m} con_j$ and $k = |con'|$. Suppose $t$ is an optimal solution of $\alpha(P)$, with semiring value $\widetilde{v}$ in $\alpha(P)$ and $v$ in $P$. By definition of solution, we have
$$v = Sol(P)(t) = \sum_{t'|_{con}=t}\ \prod_{j=1}^{m} \mathrm{def}_j(t'|_{con_j}).$$
Denote $T(t) = \{t' : t' \text{ is a } k\text{-tuple over } con' \text{ with } t'|_{con} = t\}$. Set $n = |T(t)|$, and write $T(t) = \{t_1, \cdots, t_n\}$. For each $1 \leq i \leq n$ and each $1 \leq j \leq m$, set $v_{ij} = \mathrm{def}_j(t_i|_{con_j})$. Then
$$v=\sum_{i=1}^{n}\prod_{j=1}^{m}v_{ij}, \qquad \widetilde{v}=\widetilde{\sum}_{i=1}^{n}\widetilde{\prod}_{j=1}^{m}\alpha(v_{ij}).$$
Since α preserves sums and products, we have
$$\alpha(v)=\alpha\Big(\sum_{i=1}^{n}\prod_{j=1}^{m}v_{ij}\Big)=\widetilde{\sum}_{i=1}^{n}\widetilde{\prod}_{j=1}^{m}\alpha(v_{ij})=\widetilde{v}.$$
Notice that if $t$ is also optimal in $P$, then we can choose $\overline{t} = t$. Suppose $t$ is not optimal in $P$.
Then there is a tuple $\overline{t}$ that is optimal in $P$, say with value $\overline{v} >_S v$. Denote $T(\overline{t}) = \{t' : t' \text{ is a } k\text{-tuple over } con' \text{ with } t'|_{con} = \overline{t}\}$. Clearly $|T(\overline{t})| = |T(t)| = n$. Write $T(\overline{t}) = \{\overline{t}_1, \cdots, \overline{t}_n\}$. For each $1 \leq i \leq n$ and each $1 \leq j \leq m$, set $u_{ij} = \mathrm{def}_j(\overline{t}_i|_{con_j})$. Then
$$\overline{v}=\sum_{i=1}^{n}\prod_{j=1}^{m}u_{ij}.$$
Now we show that $\overline{t}$ is also optimal in $\alpha(P)$. By $v <_S \overline{v}$, we have $\alpha(v) \leq_{\widetilde{S}} \alpha(\overline{v})$. Then
$$\widetilde{v}=\widetilde{\sum}_{i=1}^{n}\widetilde{\prod}_{j=1}^{m}\alpha(v_{ij})=\alpha\Big(\sum_{i=1}^{n}\prod_{j=1}^{m}v_{ij}\Big)=\alpha(v)\leq_{\widetilde{S}}\alpha(\overline{v})=\alpha\Big(\sum_{i=1}^{n}\prod_{j=1}^{m}u_{ij}\Big)=\widetilde{\sum}_{i=1}^{n}\widetilde{\prod}_{j=1}^{m}\alpha(u_{ij})=\widetilde{\overline{v}},$$
where the last term, $\widetilde{\overline{v}}$, is the value of $\overline{t}$ in $\alpha(P)$. On the other hand, since $t$ is optimal in $\alpha(P)$, we have $\widetilde{\overline{v}} \leq_{\widetilde{S}} \widetilde{v}$; hence $\widetilde{v} = \alpha(v) = \alpha(\overline{v}) = \widetilde{\overline{v}}$. That is, $\overline{t}$ is also optimal in $\alpha(P)$, with value $\widetilde{v}$. $\square$

**Remark 5.1.** If our aim is to find some, instead of all, optimal solutions of the concrete problem $P$, then by Theorem 5.1 we can first find all optimal solutions of the abstract problem $\alpha(P)$ and then compute their values in $P$: among these, the tuples that have maximal values in $P$ are optimal solutions of $P$. In this sense, this theorem is more desirable than Theorem 4.1, because we do not need the assumption that α is order-reflecting.

Theorem 5.1 can also be applied to find good approximations of the optimal solutions of $P$. Given an optimal solution $t \in Opt(\alpha(P))$ with value $\widetilde{v} \in \widetilde{S}$, by Theorem 5.1 there is an optimal solution $\overline{t} \in Opt(P)$ with value in the set $\{u \in S : \alpha(u) = \widetilde{v}\}$.

Note that Theorem 5.1 requires α to be a semiring homomorphism. This condition is still a little restrictive. Take the probabilistic semiring $S_{prob} = \langle [0,1], \max, \times, 0, 1 \rangle$ and the classical semiring $S_{CSP} = \langle \{T, F\}, \vee, \wedge, F, T \rangle$ as an example: there are no nontrivial homomorphisms between $S_{prob}$ and $S_{CSP}$. This is because $\alpha(a \times b) = \alpha(a) \wedge \alpha(b)$ requires $\alpha(a^n) = \alpha(a)$ for any $a \in [0,1]$ and any positive integer $n$, which implies $(\forall a > 0)\ \alpha(a) = 1$ or $(\forall a < 1)\ \alpha(a) = 0$. In the remainder of this section, we relax this condition.

**Definition 5.1 (quasi-homomorphism).**
A mapping ψ from semiring $\langle S, +, \times, 0, 1 \rangle$ to semiring $\langle \widetilde{S}, \widetilde{+}, \widetilde{\times}, \widetilde{0}, \widetilde{1} \rangle$ is said to be a *quasi-homomorphism* if for any $a, b \in S$:

- $\psi(0) = \widetilde{0}$ and $\psi(1) = \widetilde{1}$; and
- $\psi(a + b) = \psi(a)\,\widetilde{+}\,\psi(b)$; and
- $\psi(a \times b) \leq_{\widetilde{S}} \psi(a)\,\widetilde{\times}\,\psi(b)$.

The last condition is exactly the *local correctness* of $\widetilde{\times}$ with respect to $\times$ [1]. Clearly, each monotonic surjective mapping from $S_{prob}$ to $S_{CSP}$ is a quasi-homomorphism. The following theorem shows that a quasi-homomorphism is also useful.
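The claim that monotonic surjective maps from $S_{prob}$ to $S_{CSP}$ are quasi-homomorphisms is easy to check mechanically on a finite sample. The sketch below tests the three conditions of Definition 5.1 for an illustrative cut $\psi(a) = (a > 0.5)$; the threshold and the sample points are assumptions made for illustration, not values from the paper.

```python
from itertools import product

# S_prob = <[0,1], max, x, 0, 1> (sampled at a few points) and
# S_CSP = <{False, True}, or, and, False, True>.
# psi cuts at an arbitrary illustrative threshold 0.5.
sample = [0.0, 0.2, 0.5, 0.8, 1.0]
psi = lambda a: a > 0.5

# Condition 1: units map to units.
ok_units = (psi(0.0) is False) and (psi(1.0) is True)

# Condition 2: psi(a + b) = psi(a) +~ psi(b), where + is max and +~ is "or".
ok_add = all(psi(max(a, b)) == (psi(a) or psi(b))
             for a, b in product(sample, repeat=2))

# Condition 3 (local correctness): psi(a x b) <=~ psi(a) x~ psi(b),
# where x is numeric product, x~ is "and", and False <=~ True.
ok_mul = all((not psi(a * b)) or (psi(a) and psi(b))
             for a, b in product(sample, repeat=2))

print(ok_units and ok_add and ok_mul)
```

Note that ψ is a quasi-homomorphism but not a full homomorphism: for instance $\psi(0.7 \times 0.7) = \psi(0.49) = F$ while $\psi(0.7)\,\widetilde{\times}\,\psi(0.7) = T$, so the third condition holds only as a strict inequality there.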
Set $t = (d_2, d_2)$. Clearly, $t$ is an optimal solution of $\alpha(P)$ with value $\{q\}$ in $\alpha(P)$, and value $\emptyset$ in $P$. Notice that $\overline{t} = (d_1, d_1)$ is the unique optimal solution of $P$. Since $\alpha(\{a\}) = \{p\} \not\subseteq \{q\}$, there is no optimal solution $\hat{t}$ of $P$ such that $\alpha(\hat{t}) \subseteq \{q\}$.

## 6 Related Work

Our abstraction framework is closely related to the work of Bistarelli et al. [1] and de Givry et al. [4].

## 6.1 Galois Insertion-Based Abstraction

Bistarelli et al. [1] proposed a Galois insertion-based abstraction scheme for soft constraints. The questions investigated here were also studied in [1]. In particular, Theorems 27, 29, and 31 of [1] correspond to our Theorems 4.1, 5.2, and 5.1, respectively. We recall some basic notions concerning abstraction used in [1].

**Definition 6.1 (Galois insertion [8]).** Let $(C, \sqsubseteq)$ and $(A, \leq)$ be two posets (the concrete and the abstract domain). A *Galois connection* $\langle \alpha, \gamma \rangle : (C, \sqsubseteq) \rightleftarrows (A, \leq)$ is a pair of monotonic mappings $\alpha : C \rightarrow A$ and $\gamma : A \rightarrow C$ such that
$$(\forall x \in C)(\forall y \in A)\quad \alpha(x) \leq y \Leftrightarrow x \sqsubseteq \gamma(y). \tag{6}$$
In this case, we call γ the upper adjoint (of α), and α the lower adjoint (of γ). A Galois connection $\langle \alpha, \gamma \rangle : (C, \sqsubseteq) \rightleftarrows (A, \leq)$ is called a *Galois insertion* (of $A$ in $C$) if $\alpha \circ \gamma = id_A$.

**Definition 6.2 (abstraction).** A mapping $\alpha : S \rightarrow \widetilde{S}$ between two c-semirings is called an *abstraction* if

1. α has an upper adjoint γ such that $\langle \alpha, \gamma \rangle : S \rightleftarrows \widetilde{S}$ is a Galois insertion, and
2. $\widetilde{\times}$ is *locally correct* with respect to $\times$, i.e. $(\forall a, b \in S)\ \alpha(a \times b) \leq_{\widetilde{S}} \alpha(a)\,\widetilde{\times}\,\alpha(b)$.

Theorem 27 of [1] gives a sufficient condition for a Galois insertion to preserve optimal solutions. This condition, called *order-preserving*, is defined as follows:

**Definition 6.3 ([1]).** Given a Galois insertion $\langle \alpha, \gamma \rangle : S \rightleftarrows \widetilde{S}$, α is said to be *order-preserving* if for any two sets $I_1$ and $I_2$, we have
$$\widetilde{\prod}_{x\in I_{1}}\alpha(x)\leq_{\widetilde{S}}\widetilde{\prod}_{x\in I_{2}}\alpha(x)\ \Rightarrow\ \prod_{x\in I_{1}}x\leq_{S}\prod_{x\in I_{2}}x. \tag{7}$$

This notion plays an important role in [1]. In fact, several results ([1, Theorems 27, 39, 40, 42]) require this property. The next proposition, however, shows that this property is too restrictive, since an order-preserving Galois insertion is indeed a semiring isomorphism.
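Definitions 6.1 and 6.2 can likewise be checked mechanically on small finite chains. The sketch below verifies the Galois-insertion condition (6) and the local correctness of $\widetilde{\times}$ for a round-up abstraction between two sampled chains, taking $\times = \min$ as in the fuzzy semiring; the particular chains and levels are illustrative assumptions.

```python
from itertools import product
import bisect

# Concrete chain C (a sample of [0,1]) and abstract chain A = {0, 0.5, 1}.
# alpha rounds up to the nearest abstract level; gamma embeds A into C.
C = [0.0, 0.2, 0.4, 0.5, 0.7, 1.0]
A = [0.0, 0.5, 1.0]
alpha = lambda x: A[bisect.bisect_left(A, x)]  # least level >= x
gamma = lambda y: y                            # A is a sub-chain of C

# Definition 6.1: alpha(x) <= y  iff  x <= gamma(y), for all x in C, y in A.
galois = all((alpha(x) <= y) == (x <= gamma(y)) for x, y in product(C, A))

# Galois insertion: alpha . gamma is the identity on A.
insertion = all(alpha(gamma(y)) == y for y in A)

# Definition 6.2(2): local correctness of x~ (min on A) w.r.t. x (min on C):
# alpha(min(a, b)) <= min(alpha(a), alpha(b)).
local_correct = all(alpha(min(a, b)) <= min(alpha(a), alpha(b))
                    for a, b in product(C, repeat=2))

print(galois, insertion, local_correct)
```

All three checks pass here; since α is monotonic on a chain, local correctness in fact holds with equality for min.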