# Calculating Valid Domains for BDD-Based Interactive Configuration

Tarik Hadzic, Rune Møller Jensen, Henrik Reif Andersen

Computational Logic and Algorithms Group, IT University of Copenhagen, Denmark

tarik@itu.dk, rmj@itu.dk, hra@itu.dk

**Abstract.** In these notes we formally describe the functionality of calculating valid domains from the BDD representing the solution space of valid configurations. The formalization is largely based on the CLab [1] configuration framework.

## 1 Introduction

Interactive configuration problems are special applications of Constraint Satisfaction Problems (CSP) in which a software tool assists a user in interactively assigning values to variables. This software, called a configurator, assists the user by calculating and displaying the available, valid choices for each unassigned variable in what are called valid domains computations. Application areas include customizing physical products (such as PCs and cars) and services (such as airplane tickets and insurance). Three important features are required of a tool that implements interactive configuration: it should be *complete* (all valid configurations should be reachable through user interaction), *backtrack-free* (a user is never forced to change an earlier choice due to incompleteness in the logical deductions), and it should provide *real-time performance* (feedback should be fast enough to allow real-time interaction). The requirement of obtaining backtrack-freeness while maintaining completeness makes the problem of calculating valid domains NP-hard. The real-time performance requirement further demands that runtime calculations be bounded in polynomial time. According to user-interface design criteria, for a user to perceive interaction as real-time, the system response needs to arrive within about 250 milliseconds in practice [2].
Therefore, the current approaches that meet all three conditions use off-line precomputation to generate an efficient runtime data structure representing the solution space [3,4,5,6]. The challenge with this approach is that the solution space is almost always exponentially large and is NP-hard to compute. Despite the bad worst-case bounds, it has nevertheless turned out in real industrial applications that the data structures can often be kept small [7,5,4].

## 2 Interactive Configuration

The input *model* to an interactive configuration problem is a special kind of Constraint Satisfaction Problem (CSP) [8,9] where constraints are represented as propositional formulas:
The significance of this demand is that it guarantees the user a backtrack-free assignment process as long as he selects values from the valid domains. This reduces cognitive effort during the interaction and increases usability. At each step of the interaction, the configurator reports the valid domains to the user, based on the current partial assignment ρ resulting from his earlier choices. The user then picks an unassigned variable x_j ∈ X \ dom(ρ) and selects a value from the calculated valid domain, v_j ∈ D^ρ_j. The partial assignment is then extended to ρ ∪ {(x_j, v_j)} and a new interaction step is initiated.

## 3 BDD-Based Configuration

In [5,10] interactive configuration was delivered by dividing the computational effort into an *offline* and an *online* phase. First, in the offline phase, the authors compiled a BDD representing the solution space of all valid configurations, Sol = {ρ | ρ ⊨ F}. Then the functionality of *calculating valid domains* (**CVD**) was delivered online, by efficient algorithms executing during the interaction with the user. The benefit of this approach is that the BDD needs to be compiled only once and can be reused for multiple user sessions. The user interaction process is illustrated in Fig. 2.

    InCo(Sol, ρ)
    1: while |Sol_ρ| > 1
    2:   compute D^ρ = CVD(Sol, ρ)
    3:   report D^ρ to the user
    4:   the user chooses (x_i, v) for some x_i ∉ dom(ρ), v ∈ D^ρ_i
    5:   ρ ← ρ ∪ {(x_i, v)}
    6: return ρ

Fig. 2. The interactive configuration algorithm, working on a BDD representation of the solutions Sol, reaches a valid total configuration as an extension of the argument ρ.

An important requirement for online user interaction is a guaranteed real-time experience of the user-configurator interaction. Therefore, the algorithms executing in the online phase must be provably efficient in the size of the BDD representation. This is what we call the *real-time guarantee*.
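The interaction loop of Fig. 2 can be sketched over an explicit solution set. This is an illustrative toy only, under assumptions not in the paper: the real configurator computes CVD from the BDD rather than from an enumerated solution set, and the `choose` callback stands in for the user.

```python
def inco(sol, rho, choose):
    """Toy sketch of InCo: extend the partial assignment rho with user
    choices from valid domains until a single solution remains.  `sol` is
    a list of total assignments (dicts); `choose` stands in for the user."""
    def restrict(sol, rho):
        return [s for s in sol if all(s[x] == v for x, v in rho.items())]
    while len(restrict(sol, rho)) > 1:
        # valid domain of x: values appearing in some solution extending rho
        vd = {}
        for s in restrict(sol, rho):
            for x, v in s.items():
                if x not in rho:
                    vd.setdefault(x, set()).add(v)
        x, v = choose(vd)                  # the user picks from reported domains
        rho = {**rho, x: v}
    return restrict(sol, rho)[0]

sol = [{"x1": 0, "x2": 0}, {"x1": 0, "x2": 1}, {"x1": 1, "x2": 0}]
pick_first = lambda vd: (min(vd), min(vd[min(vd)]))
assert inco(sol, {}, pick_first) == {"x1": 0, "x2": 0}
```

Because every reported value belongs to at least one solution extending ρ, any sequence of choices terminates in a valid total configuration, which is exactly the backtrack-freeness property.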
As the **CVD** functionality is NP-hard and the online algorithms are polynomial in the size of the generated BDD, there is no hope of providing polynomial size guarantees for the worst-case BDD representation. However, it suffices that the BDD size is small enough for all the configuration instances occurring in practice [10].

## 3.1 Binary Decision Diagrams

A reduced ordered Binary Decision Diagram (BDD) is a rooted directed acyclic graph representing a Boolean function over a set of linearly ordered Boolean variables. It has one or two terminal nodes, labeled 1 or 0, and a set of variable nodes. Each variable node
is associated with a Boolean variable and has two outgoing edges, *low* and *high*. Given an assignment of the variables, the value of the Boolean function is determined by a path starting at the root node that recursively follows the high edge if the associated variable is true, and the low edge if it is false. The function value is *true* if the label of the reached terminal node is 1; otherwise it is *false*.

The graph is ordered such that all paths respect the ordering of the variables. A BDD is reduced such that no two distinct nodes u and v are associated with the same variable and the same low and high successors (Fig. 3a), and no variable node u has identical low and high successors (Fig. 3b). Due to these reductions, the number of nodes in a BDD is, for many functions encountered in practice, much smaller than the number of truth assignments of the function. Another advantage is that the reductions make BDDs canonical [11]. Large space savings can be obtained by representing a collection of BDDs in a single multi-rooted graph in which the sub-graphs of the BDDs are shared. Due to canonicity, two BDDs are identical if and only if they have the same root. Consequently, when using this representation, equivalence checking between two BDDs can be done in constant time. In addition, BDDs are easy to manipulate: any Boolean operation on two BDDs can be carried out in time proportional to the product of their sizes. The size of a BDD can depend critically on the variable ordering. Finding an optimal ordering is a co-NP-complete problem in itself [11], but a good heuristic is to place dependent variables close to each other in the ordering. For a comprehensive introduction to BDDs and *branching programs* in general, we refer the reader to Bryant's original paper [11] and the books [12,13].

## 3.2 Compiling the Configuration Model

Each of the finite domain variables x_i with domain D_i = {0, ..., |D_i| − 1} is encoded by k_i = ⌈log₂ |D_i|⌉ Boolean variables x_i^0, ..., x_i^{k_i−1}. Each j ∈ D_i corresponds to a binary encoding v_0 ... v_{k_i−1}, denoted v_0 ... v_{k_i−1} = enc(j). Conversely, every combination of Boolean values v_0 ... v_{k_i−1} represents some integer j ≤ 2^{k_i} − 1, denoted j = dec(v_0 ... v_{k_i−1}). Hence, the atomic proposition x_i = v is encoded as the Boolean expression x_i^0 = v_0 ∧ ... ∧ x_i^{k_i−1} = v_{k_i−1}. In addition, *domain constraints* are added to forbid those assignments to v_0 ... v_{k_i−1} that do not translate to a value in D_i, i.e. where dec(v_0 ... v_{k_i−1}) ≥ |D_i|.

Let the solution space Sol over the ordered set of variables x_0 < ... < x_{k−1} be represented by a Binary Decision Diagram B(V, E, X_b, R, var), where V is the set of nodes, E is the set of edges, and X_b = {0, 1, ..., |X_b| − 1} is an ordered set of variable indexes labelling every non-terminal node u with var(u) ≤ |X_b| − 1 and labelling the terminal nodes T_0, T_1 with index |X_b|. The set of variable indexes X_b is constructed by taking the union of the Boolean encoding variables ∪_{i=0}^{n−1} {x_i^0, ..., x_i^{k_i−1}} and ordering them in a natural layered way, i.e. x_{i1}^{j1} < x_{i2}^{j2} iff i_1 < i_2, or i_1 = i_2 and j_1 < j_2. Every directed edge e = (u_1, u_2) has a starting vertex u_1 = π_1(e) and an ending vertex u_2 = π_2(e). R denotes the root node of the BDD.

Example 2. The BDD representing the solution space of the T-shirt example introduced in Sect. 2 is shown in Fig. 4. In the T-shirt example there are three variables: x_1, x_2 and x_3, whose domain sizes are four, three and two, respectively. Each variable is represented by a vector of Boolean variables. In the figure, the Boolean vector for the variable x_i with domain D_i is (x_i^0, x_i^1, ..., x_i^{l_i−1}), where l_i = ⌈log₂ |D_i|⌉. For example, in the figure, variable x_2, which corresponds to the size of the T-shirt, is represented by the Boolean vector (x_2^0, x_2^1). In the BDD, any path from the root node to the terminal node 1 corresponds to one or more valid configurations.
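The enc/dec encoding and the domain constraints just described can be sketched directly; the helper names below are assumptions of this sketch, not CLab's API.

```python
import math

# Sketch of the log encoding: variable x_i with domain D_i = {0, ..., |D_i|-1}
# uses k_i = ceil(log2 |D_i|) Boolean variables.
def num_bits(domain_size):
    return max(1, math.ceil(math.log2(domain_size)))

def enc(j, k):
    """Binary encoding of j as k bits v_0 ... v_{k-1} (v_0 most significant)."""
    return [(j >> (k - 1 - b)) & 1 for b in range(k)]

def dec(bits):
    """Inverse mapping: bits back to the integer they encode."""
    j = 0
    for v in bits:
        j = (j << 1) | v
    return j

def violates_domain(bits, domain_size):
    """True iff the bit pattern falls outside D_i, i.e. dec(v) >= |D_i|,
    which is exactly what the domain constraints forbid."""
    return dec(bits) >= domain_size

k = num_bits(3)                       # |D_2| = 3 in the T-shirt example -> 2 bits
assert enc(2, k) == [1, 0] and dec([1, 0]) == 2
assert violates_domain([1, 1], 3)     # pattern 11 encodes 3, which is not in {0,1,2}
```

The last assertion shows why the domain constraint for x_2 is needed: two bits can encode four patterns, but only three are legal values.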
For example, the path from the root node to the terminal node 1 in which all the variables take low values represents the valid configuration (black, small, MIB). Another path, with x_1^0, x_1^1, and x_2^0 taking low values and x_2^1 taking the high value, represents two valid configurations: (black, medium, MIB) and (black, medium, STW). In this path the variable x_3^0 is a don't-care variable and hence can take both the low and the high value, which leads to two valid configurations. Any path from the root node to the terminal node 0 corresponds to invalid configurations. ♦

## 4 Calculating Valid Domains

Before showing the algorithms, let us first introduce the appropriate notation. If an index k ∈ X_b corresponds to the (j+1)-st Boolean variable x_i^j encoding the finite domain variable x_i, we define var_1(k) = i and var_2(k) = j to be the appropriate mappings. Now, given the BDD B(V, E, X_b, R, var), V_i denotes the set of all nodes u ∈ V that are labelled with a BDD variable encoding the finite domain variable x_i, i.e. V_i = {u ∈ V | var_1(u) = i}. We think of V_i as defining a layer in the BDD. We define In_i to be the set of nodes u ∈ V_i reachable by an edge originating from outside the V_i layer, i.e. In_i = {u ∈ V_i | ∃(u′, u) ∈ E. var_1(u′) < i}. For the root node R, labelled with i_0 = var_1(R), we define In_{i_0} = V_{i_0} = {R}. We assume that in the previous user assignment, the user fixed a value for a finite domain variable x = v, x ∈ X, extending the old partial assignment ρ_old to the current
assignment ρ = ρ_old ∪ {(x, v)}. For every variable x_i ∈ X, the old valid domains are denoted D^{ρ_old}_i, i = 0, ..., n − 1, and the old BDD B_{ρ_old} is reduced to the restricted BDD B_ρ(V, E, X_b, var). The **CVD** functionality is to calculate the valid domains D^ρ_i for the remaining unassigned variables x_i ∉ dom(ρ) by extracting values from the newly restricted BDD B_ρ(V, E, X_b, var). To simplify the following discussion, we will analyze the isolated execution of the CVD algorithms over a given BDD B(V, E, X_b, var). The task is to calculate the valid domains VD_i from the starting domains D_i. The user-configurator interaction can then be modelled as a sequence of these executions over restricted BDDs B_ρ, where the valid domains are D^ρ_i and the starting domains are D^{ρ_old}_i.

The **CVD** functionality is delivered by executing the two algorithms presented in Fig. 5 and Fig. 6. The first algorithm is based on the key idea that if there is an edge e = (u_1, u_2) crossing over V_j, i.e. var_1(u_1) < j < var_1(u_2), then we can include all the values from D_j in the valid domain: VD_j ← D_j. We refer to e as a *long edge* of length var_1(u_2) − var_1(u_1). Note that it skips var(u_2) − var(u_1) Boolean variables, and therefore compactly represents a part of the solution space of size 2^{var(u_2)−var(u_1)}. For the remaining variables x_i, whose valid domain was not copied by **CVD-Skipped**, we execute CVD(B, x_i) from Fig. 6. There, for each value j in the domain D_i we check whether it belongs to the valid domain VD_i. The key idea is that if j ∈ VD_i then there must be u ∈ In_i such that traversing the BDD from u with the binary encoding of j
    CVD-Skipped(B)
     1: for each i = 0 to n − 1
     2:   L[i] ← i + 1
     3: T ← TopologicalSort(B)
     4: for each k = 0 to |T| − 1
     5:   u1 ← T[k], i1 ← var1(u1)
     6:   for each u2 ∈ Adjacent[u1]
     7:     L[i1] ← max{L[i1], var1(u2)}
     8: S ← {}, s ← 0
     9: for i = 0 to n − 2
    10:   if i + 1 < L[s]
    11:     L[s] ← max{L[s], L[i + 1]}
    12:   else
    13:     if s + 1 < L[s] then S ← S ∪ {s}
    14:     s ← i + 1
    15: for each j ∈ S
    16:   for i = j to L[j]
    17:     VD_i ← D_i

Fig. 5. In lines 1-7 the L[i] array is created to record the longest edge e = (u_1, u_2) originating from the V_i layer, i.e. L[i] = max{var_1(u′) | ∃(u, u′) ∈ E. var_1(u) = i}. The execution time is dominated by TopologicalSort(B), which can be implemented as a depth-first search in O(|E| + |V|) = O(|E|) time. In lines 8-14, the overlapping long segments are merged in O(n) steps. Finally, in lines 15-17 the valid domains are copied in O(n) steps. Hence, the total running time is O(|E| + n).

    CVD(B, x_i)
    1: VD_i ← {}
    2: for each j = 0 to |D_i| − 1
    3:   for each k = 0 to |In_i| − 1
    4:     u ← In_i[k]
    5:     u′ ← Traverse(u, j)
    6:     if u′ ≠ T0
    7:       VD_i ← VD_i ∪ {j}
    8: return

Fig. 6. The classical CVD algorithm. enc(j) denotes the binary encoding of the number j into k_i values v_0, ..., v_{k_i−1}. If Traverse(u, j) from Fig. 7 ends in a node different from T_0, then j ∈ VD_i.
will lead to a node other than T_0, because then there is at least one satisfying path to T_1 allowing x_i = j.

    Traverse(u, j)
     1: i ← var1(u)
     2: v_0, ..., v_{k_i−1} ← enc(j)
     3: s ← var2(u)
     4: if Marked[u] = j return T0
     5: Marked[u] ← j
     6: while s ≤ k_i − 1
     7:   if var1(u) > i return u
     8:   if v_s = 0
     9:     u ← low(u)
    10:   else
    11:     u ← high(u)
    12:   if Marked[u] = j return T0
    13:   Marked[u] ← j
    14:   s ← var2(u)

Fig. 7. For a fixed u ∈ V with i = var_1(u), Traverse(u, j) iterates through V_i and returns the node in which the traversal ends.

When traversing with Traverse(u, j), we mark the already traversed nodes u_t with j, Marked[u_t] ← j, and prevent processing them again in future j-traversals Traverse(u′, j). Namely, if Traverse(u, j) reached the T_0 node through u_t, then any other traversal Traverse(u′, j) reaching u_t must also end up in T_0. Therefore, for every value j ∈ D_i, every node u ∈ V_i is traversed at most once, leading to a worst-case running time of O(|V_i| · |D_i|). Hence, the total running time for all variables is O(∑_{i=0}^{n−1} |V_i| · |D_i|). The total worst-case running time for the two **CVD** algorithms is therefore O(∑_{i=0}^{n−1} |V_i| · |D_i| + |E| + n) = O(∑_{i=0}^{n−1} |V_i| · |D_i| + n).

## References

1. Jensen, R.M.: CLab: A C++ library for fast backtrack-free interactive product configuration. http://www.itu.dk/people/rmj/clab/ (2007)
2. Raskin, J.: The Humane Interface. Addison Wesley (2000)
3. Amilhastre, J., Fargier, H., Marquis, P.: Consistency restoration and explanations in dynamic CSPs - application to configuration. Artificial Intelligence 1-2 (2002) 199-234. ftp://fpt.irit.fr/pub/IRIT/RPDMP/Configuration/
4. Madsen, J.N.: Methods for interactive constraint satisfaction. Master's thesis, Department of Computer Science, University of Copenhagen (2003)
5.
Hadzic, T., Subbarayan, S., Jensen, R.M., Andersen, H.R., Møller, J., Hulgaard, H.: Fast backtrack-free product configuration using a precompiled solution space representation. In: PETO Conference, DTU-tryk (2004) 131-138
6. Møller, J., Andersen, H.R., Hulgaard, H.: Product configuration over the internet. In: Proceedings of the 6th INFORMS Conference on Information Systems and Technology (2002)
7. Configit Software A/S. http://www.configit-software.com (online)
8. Tsang, E.: Foundations of Constraint Satisfaction. Academic Press (1993)
9. Dechter, R.: Constraint Processing. Morgan Kaufmann (2003)
10. Subbarayan, S., Jensen, R.M., Hadzic, T., Andersen, H.R., Hulgaard, H., Møller, J.: Comparing two implementations of a complete and backtrack-free interactive configurator. In: CP'04 CSPIA Workshop (2004) 97-111
11. Bryant, R.E.: Graph-based algorithms for Boolean function manipulation. IEEE Transactions on Computers C-35(8) (1986) 677-691
12. Meinel, C., Theobald, T.: Algorithms and Data Structures in VLSI Design. Springer (1998)
13. Wegener, I.: Branching Programs and Binary Decision Diagrams. Society for Industrial and Applied Mathematics (SIAM) (2000)
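To close, the two CVD algorithms of Figs. 5-7 can be condensed into one sketch. This is a toy under stated assumptions: the BDD is a plain dict, the long-edge test is applied per edge rather than via the paper's merged-segment bookkeeping, and it handles a single finite-domain variable per call.

```python
T0, T1 = "T0", "T1"

def cvd_skipped(n, layer_edges, domains):
    """Long-edge idea of CVD-Skipped: a variable j skipped by some edge,
    i.e. var1(u1) < j < var1(u2), keeps its full domain.  layer_edges lists
    the (var1(u1), var1(u2)) pair of each BDD edge; the paper instead merges
    overlapping segments after a topological sort, in O(|E| + n)."""
    skipped = set()
    for i1, i2 in layer_edges:
        skipped.update(range(i1 + 1, i2))
    return {j: set(domains[j]) for j in skipped if j < n}

def cvd(nodes, in_nodes, k, domain):
    """CVD with Traverse for one finite-domain variable encoded by the
    Boolean indexes 0..k-1.  nodes maps node id -> (index, low, high);
    marked[u] = j memoizes j-traversals that can only end in T0 (Fig. 7)."""
    marked = {}
    def traverse(u, j):
        bits = [(j >> (k - 1 - b)) & 1 for b in range(k)]   # enc(j)
        while u not in (T0, T1) and nodes[u][0] < k:
            if marked.get(u) == j:
                return T0            # an earlier j-traversal died through u
            marked[u] = j
            s, low, high = nodes[u]
            u = high if bits[s] else low
        return u
    return {j for j in domain if any(traverse(u, j) != T0 for u in in_nodes)}

# Toy layer: two Boolean variables encode one variable with domain {0,1,2};
# node B forces the second bit to 0, so only the values 0 and 2 survive.
nodes = {"A": (0, "B", "B"), "B": (1, T1, T0)}
assert cvd(nodes, ["A"], 2, {0, 1, 2}) == {0, 2}
# An edge jumping from layer 0 to layer 3 skips variables x1 and x2.
assert cvd_skipped(4, [(0, 3)], {1: {0, 1, 2}, 2: {0, 1}}) == {1: {0, 1, 2}, 2: {0, 1}}
```

The marking trick is safe because a traversal for value j is deterministic from any node onward, so a node through which one j-traversal reached T_0 can never lead another j-traversal to T_1.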
As mentioned before, the default sequence-weighting method used by the HMMER package is a high-quality algorithm. Therefore, we combine both the default HMMER M matrix in (3) and the structural matrix Ms in (4), as shown in (5).

$$\mathbf{M_{s}^{'}}=M\,M_{s}^{T}=\left(\begin{array}{ccc}w_{11}m_{11}&\ldots&w_{1L}m_{1L}\\ \vdots&\ddots&\vdots\\ w_{N1}m_{N1}&\ldots&w_{NL}m_{NL}\end{array}\right)\tag{5}$$

However, introducing weights affects the computation of the observed frequencies. More precisely, the observed frequency $c_j(\sigma)$ shown in (1) is now found through equation (6), where $s_{ij} = w_{ij}m_{ij}$ is the structural weight of residue $\sigma$, according to the $M_s'$ matrix.

$$c_{j}(\sigma)=\sum_{i}^{N}f(\sigma)\;\;\therefore\;\;f(\sigma)=\left\{\begin{array}{ll}s_{ij},&\mbox{if $\sigma$ is the amino-acid in position $ij$}\\ 0,&\mbox{otherwise}\end{array}\right.\tag{6}$$

In the same way, we apply equation (7) to determine the $c_{kl}$ shown in (2). If the k and l states are both M or I states, $c_{kl}$ is calculated as the arithmetic mean of $s_{ik}$ and $s_{il}$. If exactly one of the states is a D state, $c_{kl}$ is either $s_{ik}$, if l ∈ {D}, or $s_{il}$, if k ∈ {D}. Last, if both are D states, $c_{kl}$ is 1.

$$c_{kl}=\sum_{i}^{N}f_{kl}\;\;\therefore\;\;f_{kl}=\left\{\begin{array}{ll}\frac{s_{ik}+s_{il}}{2},&\mbox{if }k,l\in\{M,I\}\\ s_{ik},&\mbox{if }l\in\{D\}\mbox{ and }k\notin\{D\}\\ s_{il},&\mbox{if }k\in\{D\}\mbox{ and }l\notin\{D\}\\ 1,&\mbox{if }k,l\in\{D\}\end{array}\right.\tag{7}$$

## 2.3 The Ms Structural Weight Matrices

As explained above, our algorithm considers a number of different sources of structural information. Next, we describe how this information was obtained and used to build the Ms matrix.

### 2.3.1 Secondary Structure Elements

Secondary structure is often conserved among homologue proteins.
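Before turning to the individual sources of structural information, the weighted counts of equations (6) and (7) above can be sketched as follows. The helper names are assumptions of this sketch, not HMMER-STRUCT code; s_ij = w_ij * m_ij is the structural weight of the residue at row i, column j of the alignment.

```python
# Sketch of equations (6) and (7): weighted observed frequencies.
def weighted_count(column, s, sigma):
    """c_j(sigma): sum of s_ij over the rows whose residue in column j is sigma."""
    return sum(w for res, w in zip(column, s) if res == sigma)

def pair_count_term(s_ik, s_il, k_state, l_state):
    """f_kl from equation (7), the contribution of one sequence i."""
    k_del, l_del = k_state == "D", l_state == "D"
    if k_del and l_del:
        return 1.0                    # both deletions
    if l_del:
        return s_ik                   # only l is a deletion
    if k_del:
        return s_il                   # only k is a deletion
    return (s_ik + s_il) / 2.0        # k, l in {M, I}: arithmetic mean

column = ["A", "A", "G"]              # residues in column j of a toy alignment
s_col  = [2.0, 1.0, 4.0]              # structural weights s_ij
assert weighted_count(column, s_col, "A") == 3.0
assert pair_count_term(2.0, 4.0, "M", "I") == 3.0
assert pair_count_term(2.0, 4.0, "M", "D") == 2.0
```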
Indeed, *motifs* (Branden *et al*., 1991), consensus sequences in homologue proteins, usually include a combination of well-conserved secondary structure elements (Chakrabarti *et al*., 2004). In order to build an Ms matrix based on secondary structure elements, we need to identify the secondary structure elements in the original sequences. This is possible because we assume we have full structural data for the *training* sequences. In this work, we chose to use the SSTRUCT program, part of the widely used JOY package (Mizuguchi *et al*., 1998), to extract secondary structure elements from the PDB files. The SSTRUCT output is a character sequence in which the characters {L = loop, H = helix, C = sheet} match a secondary structure element to each residue, as shown in figure 2. Following Deane's work on the relative frequency of conserved regions (Deane *et al*., 2003), we mapped each SSTRUCT element as follows: L → 1, H → 2, and C → 4. Our mapping thus favours conservation in sheets and gives the default weight to loops. Although the active site of proteins can be found in loops, these regions often contain *indel* segments. Figure 2 shows an example of structural weight attributions for proteins in a partial alignment.

### 2.3.2 Solvent Inaccessibility

The hydrophobic interactions of nonpolar side chains in amino-acids are believed to contribute significantly to the stability of the tertiary structure of proteins. Hydrophobic amino-acids tend to cluster together, not as a result of attraction, but as a result of their repulsion by the hydrogen-bond water network in which the protein is dissolved. Therefore, these amino-acids will preferentially be located away from the surface of the molecule. Since they form the core of the protein, they tend to be more conserved and are thus more useful for identifying remote evolutionary relationships. We used the PSA (Lee *et al*., 1971) program to provide solvent inaccessibility information. PSA is part of the JOY package.
The Ms matrix was built by giving weight 3 to inaccessible residues and weight 1 to the others. The weights are based on (Chakrabarti *et al*., 2004), which demonstrated empirically that inaccessible amino-acids are three times more conserved than accessible amino-acids. This Ms matrix represents the structural weights that were used to build the model *pHMMAcc*, as shown in figure 1.

### 2.3.3 Packing Density

The tertiary structure of proteins stems from a very large number of atomic interactions. In regions where the interactions are stronger, residues tend to be packed together. It is well known that densely packed regions tend to be preserved, and hence that amino-acids belonging to those regions are usually more conserved than other amino-acids. TJ Ooi created a measure, called the Ooi number (Nishikawa *et al*., 1986), that estimates the amino-acid packing density. Essentially, the Ooi number counts, for a residue, the number of neighboring C-α atoms within a radius of 14 Å of the residue's own C-α. Although crude, this measure does give a good impression of which parts of the structure are buried and which are exposed on the surface. We again use the JOY package to obtain the Ooi number and estimate packing density. Figure 3 shows a stretch of JOY output, in which the numbers represent the Ooi measure for the Dehaloperoxidase protein in the Globins family (16wc PDB code). We used these numbers to build the structural weight matrix Ms. The structural weights were then used to build the model *pHMMOoi*, as shown in figure 1.

Fig. 3. Ooi measure for the Dehaloperoxidase protein of the Globins family (16wc PDB code); each number represents the number of neighboring amino-acids inside a radius of 14 Å.
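The Ooi number described above can be sketched directly from C-α coordinates. This is a toy reimplementation for illustration; in the paper the values come from JOY, and the coordinates below are invented.

```python
import math

def ooi_number(ca_coords, i, radius=14.0):
    """Sketch of the Ooi number: count the C-alpha atoms within `radius`
    angstroms of residue i's own C-alpha, excluding residue i itself."""
    center = ca_coords[i]
    return sum(1 for j, c in enumerate(ca_coords)
               if j != i and math.dist(center, c) <= radius)

# Toy chain: residue 1 is 5 A from residue 0 but 15 A from residue 2.
coords = [(0.0, 0.0, 0.0), (5.0, 0.0, 0.0), (20.0, 0.0, 0.0)]
assert ooi_number(coords, 0) == 1    # only the residue 5 A away is inside
assert ooi_number(coords, 2) == 0    # both neighbours are farther than 14 A
```

High counts mark buried, densely packed residues, which is why the number is used as a conservation weight.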
### 2.3.4 Homologous Core Structure

Structural similarity among proteins can provide valuable insights into their functionality. One way to find structural similarities is through three-dimensional alignment of proteins, also called *structural alignment*. The goal is to align two or more proteins by trying to overlap the three-dimensional coordinates of their atoms. When multiple homologue proteins are structurally aligned, we tend to observe a subset of coordinates whose spatial locations are better conserved across the structural alignment. This subset is called the homologous core structure (HCS) (Matsuo *et al*., 1999). According to the results reported by Gerstein *et al*. (1995), the HCS can be used to detect homologue proteins. Our goal was to estimate the HCS of a set of proteins. As a first approximation, we propose a method to extract it from the structural alignment by calculating how much aligned residues from different proteins tend to be close together. Following MAMMOTH, we represent residues through the coordinates of their C-α atoms. In other words, we assume that closeness between C-α atoms approximates overlap among amino-acids. To find out how close together amino-acids are, we use the Euclidean distance, shown in equation (8); it represents the shortest distance between two points in space.

$$de_{a,b}={\sqrt{(x_{a}-x_{b})^{2}+(y_{a}-y_{b})^{2}+(z_{a}-z_{b})^{2}}}\tag{8}$$

The degree of overlap between aligned residues in the structural alignment was calculated through the relative distance $di_j$ of equation (9). This distance is the average distance between the amino-acid in position ij and the other amino-acids in column j of the alignment.

$$di_{j}={\frac{\sum_{b=j}^{n-1}de_{(i,b),(i,b+1)}}{n-1}}\tag{9}$$

Finally, the relative distance was normalized according to (10), and it was used to determine the degree of overlap of each residue.
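The distance computations of equations (8) and (9) can be sketched as follows. Note this is an interpretation: the sketch follows the prose (average distance from residue ij to the other residues in column j), whose index pattern differs from the printed formula, and the input layout is an assumption.

```python
import math

def de(a, b):
    """Equation (8): Euclidean distance between two C-alpha coordinates."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def di(column_coords, i):
    """Relative distance of residue i in one alignment column, per the prose
    of equation (9): average distance to the other residues in the column."""
    others = [c for k, c in enumerate(column_coords) if k != i]
    return sum(de(column_coords[i], c) for c in others) / len(others)

# Toy column: three aligned C-alpha positions from three proteins.
col = [(0.0, 0.0, 0.0), (3.0, 4.0, 0.0), (0.0, 0.0, 5.0)]
assert de(col[0], col[1]) == 5.0
assert abs(di(col, 0) - 5.0) < 1e-9   # both neighbours are 5 A away
```

A small di_j means the column's residues overlap well in space, i.e. the position is a candidate member of the HCS.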
These measures were normalized using equation (10), where $d_{min}$ is the minimal distance and $O_{max_i}$ is the maximal Ooi measure for protein i.

$$m_{ij}={\frac{d_{min}*O_{max_{i}}}{di_{j}}}\tag{10}$$

After this step, we built the Ms matrix, where each matrix element $m_{ij}$ corresponds to the relative distance of amino-acid ij in the structural alignment. This matrix represents the structural weights that were used to build the model *pHMM3D*, shown in figure 1.

## 2.4 Library of Structural Models

In a second step, we join the models built from these matrices to form a library of structural models, aiming to build a single model that represents the structural patterns under different aspects. We used the hmmpfam HMMER tool to combine the models. Libraries of models have been used in a number of studies, such as (Bateman *et al*., 2004; Haft *et al*., 2003; Gough *et al*., 2001), and they are known to achieve better results than single models.

## 2.5 Test Procedure

The main concern of our study is to build pHMMs that can be helpful in remote homology detection. Therefore, our experiments considered proteins with identity below 30%. To do so, we used the SCOP database (Andreeva *et al*., 2004), and more specifically ASTRAL SCOP version 1.67 PDB40 (with 6600 protein sequences). ASTRAL SCOP is particularly interesting for our study because it describes structural and evolutionary relationships among proteins, such that none of the sequences in ASTRAL SCOP present more than 40% sequence identity. Thus, it is an excellent dataset for evaluating the performance of remote homology detection methods and has been widely used for this goal (Espadaler *et al*., 2005; Wistrand *et al*., 2005; Hou *et al*., 2004; Alexandrov *et al*., 2004). SCOP classifies all protein domains of known structure into a hierarchy with four levels: class, fold, super family and family.
In our study, we work at the super family level, which gathers families in such a way that a common evolutionary origin is not obvious from sequence identity, but probable from an analysis of structure and from functional features. We believe that this level best represents remote homologs. Moreover, we used cross-validation (Mitchell, 1997) to compare the different approaches. First, we divided the SCOP database by super family. Next, from ASTRAL PDB40, we chose those super families containing at least three families and at least 20 sequences. We eventually tested 39 super families, as listed in Table 1. This whittled the number of sequences used for model building down to 1137. Third, we implemented leave-one-family-out cross-validation. For any super family x having n families, we built n profiles, so that each profile P was built from the sequences in the remaining n − 1 families. Thus, the sequences of those n − 1 families form the training set for profile P. The test set for profile P consists of the remaining sequences (test positives) plus all other database sequences (test negatives).

Table 1. Superfamily SCOP ids

| a.1.1. | a.138.1. | a.25.1. | a.26.1. | a.3.1. | a.39.1. | a.4.1. | b.121.4. |
|--------|----------|---------|---------|--------|---------|--------|----------|
| b.18.1. | b.29.1. | b.36.1. | b.47.1. | b.55.1. | b.60.1. | b.6.1. | b.71.1. |
| b.82.1. | c.1.10. | c.23.1. | c.26.1. | c.36.1. | c.52.1. | c.55.1. | c.55.3. |
| c.67.1. | d.108.1. | d.14.1. | d.144.1. | d.15.1. | d.153.1. | d.169.1. | d.3.1. |
| d.58.7. | d.92.1. | g.3.11. | g.3.6. | g.3.7. | g.37.1. | g.39.1. | |

SCOP super families used in our experiments. We only considered super families with at least 20 proteins and three or more families.

In order to assess HMMER-STRUCT performance, we used the HMMER package. We did not compare with the SAM (Hughey *et al*., 1996) package.
First, because our goal was to evaluate whether structural properties can improve pHMMs, not to compare the two packages, and second, because a related previous study on the same dataset actually showed HMMER outperforming SAM (Bernardes *et al*., 2007). The same study also indicated better results in the "twilight zone" using structural alignment tools, such as MAMMOTH-mult and 3DCOFFEE. We used MAMMOTH in this study. Results were analyzed graphically by building ROC and Precision/Recall curves. ROC curves are a common measure of performance, widely used in bioinformatics applications. They are based on the relation between false positives (non-homologue proteins) and true positives (homologue proteins), and are obtained by varying a parameter that affects this relation. We further present Precision/Recall curves, as they give a good perspective on true positive, false positive and false negative hits. In both cases, the bigger the area under the curve (AUC), the more efficient the analyzed tool is. In both cases we used the minimal *e-value* required to accept a match as the parameter for building the curves, ranging e-values between 10^{-50} and 10. Finally, we used the paired two-tailed t-test to assess significance, and assumed that results with p ≤ 0.05 (i.e., 95% confidence) are significant.

## 3 Results

As a first step, we built a model for each structural property and evaluated it according to the methodology described in the Methods section. The ROC curves are presented in figure 4 and the Precision/Recall curves in figure 5. Both figures show all models, that is, *pHMM2D* (secondary structure model), *pHMMOi* (Ooi measure model), *pHMMAcc* (inaccessibility model) and *pHMM3D* (three-dimensional structure model), outperforming the HMMER model.
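The curve construction described in the test procedure (sweep the minimal e-value threshold, count hits at or below it) can be sketched as follows; the toy scores and labels are invented for illustration.

```python
def roc_points(evalues, labels, thresholds):
    """One (FPR, TPR) point per minimal e-value threshold: a hit is accepted
    whenever its e-value is at most the threshold.  labels marks true
    homologues (test positives) vs. non-homologues (test negatives)."""
    pos = sum(labels)
    neg = len(labels) - pos
    pts = []
    for t in thresholds:
        tp = sum(1 for e, y in zip(evalues, labels) if e <= t and y)
        fp = sum(1 for e, y in zip(evalues, labels) if e <= t and not y)
        pts.append((fp / neg, tp / pos))
    return pts

# Two homologues with strong e-values, two non-homologues with weak ones.
evalues = [1e-30, 1e-6, 1e-4, 5.0]
labels  = [True, True, False, False]
pts = roc_points(evalues, labels, [1e-10, 1e-3, 10])
assert pts == [(0.0, 0.5), (0.5, 1.0), (1.0, 1.0)]
```

The AUC of the resulting curve is the scalar the paper compares across models; loosening the threshold trades false positives for recall, exactly as described above.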
Table 3. HMMER-STRUCT paired t-test

| HMMER-STRUCT vs. | p-value |
|------------------|---------|
| HMMER | 10^{-4} |
| pHMMAcc | 10^{-4} |
| pHMMOi | 10^{-3} |
| pHMM2D | 10^{-3} |
| pHMM3D | 10^{-4} |

Paired two-tailed t-test comparing the performance of each HMMER-STRUCT component with the combined model.

## 4 Discussion

The accuracy of homology detection methods is essential for inferring the function of unknown-function proteins. However, improving accuracy becomes hard when the similarity between sequences is low. We proposed a method to improve pHMM sensitivity by adding structural properties at the model building stage. We showed that pHMMs trained according to this method are more sensitive than pHMMs trained from multiple sequence alignments, even if the alignment itself relied on structural properties. Our experiments demonstrated the best performance for *pHMM2D*, which used secondary structure properties, and for *pHMMOi*, which used residue packing density; both pHMMs present similar performance. We believe that the good results obtained with the *pHMMOi* model can be attributed to the fact that tight packing is important for protein stability, and they follow well-known results indicating that amino-acids located in the protein core are more conserved than amino-acids located at other sites (Privalov, 2000). In the same way, the *pHMM2D* model achieves good performance because secondary structure elements are responsible for maintaining the fold of homologue proteins. These elements form motifs and domains, which are related to protein function. Conserved sites may point to functionally and structurally important regions. These observations may explain the higher performance of models based on residue packing and on secondary structure properties. The *pHMMAcc* models, based on amino-acid inaccessibility, and the *pHMM3D* models, based on three-dimensional coordinates, did not perform as well. The *pHMMAcc* models did not achieve statistically significant results when compared with HMMER.
On the other hand, we observe that the inaccessibility property is largely explained by hydrophobic effects, as it is the amino-acids with hydrophobic side chains that move toward the protein core by packing together. Hydrophobicity is therefore already represented in the *pHMMOi* model, which achieved good performance. We believe the inaccessibility property is represented appropriately by the *pHMMOi* model, since amino-acids with high packing density are already inaccessible. Hence *pHMMOi* outperformed *pHMMAcc*, as *pHMMOi* carries more information than *pHMMAcc*.

The chief contribution of our method was achieved when all the models work together: the combined models performed significantly better than any single model. We believe this results from the fact that each trained pHMM represents a different structural property, so combining the models increases sensitivity by exploiting the different structural properties. Our method shows that structural information can be added during the training phase of a pHMM to improve sensitivity, without major changes to the usual pHMM methodology, and can be applied to recently discovered proteins for which there is little structural information.

## 5 Conclusion

The increasing number of studies involving pHMMs and the use of structural information has been quite remarkable (Hou *et al*., 2004; Alexandrov *et al*., 2004; Bystroff *et al*., 2000). Most of these approaches build structural models based on three-dimensional coordinates. In contrast, we present a novel methodology to train pHMMs based on structural alignment and other structural properties, using a set of homologue protein sequences. Our method builds five models from an aligned homologue sequence set.
Each model represents a different structural property, and the union of the models represents the structural context of the aligned proteins. The properties used were primary, secondary and tertiary structure, accessibility and residue packing. Note that previous attempts have already used secondary and tertiary structural properties to train pHMMs, though in quite a different way; accessibility and residue packing, however, were used for the first time in pHMM training, with good results in the latter case. In order to build each model, we developed a novel sequence-weighting algorithm based on structural weights attributed to each amino-acid. Traditional weighting algorithms give the same weight to every residue in a protein. Instead, we propose a method that gives a different weight to each amino-acid in a protein, according to structural properties that suggest it may be in a conserved region. Our results relied on prior work (Chakrabarti *et al*., 2004; Deane *et al*., 2003; Nishikawa *et al*., 1986) that suggested interesting properties and estimated their weights. Nowadays, the most popular approach to discovering the function of a newly found protein is through sequence similarity search. In fact, it is well known that structure is more conserved than sequence, and thus structural similarity can suggest functional similarity. On the other hand, structural data are sparse and usually not available for proteins of unknown function. Therefore, it is very important that methods that use structural properties to build models do not need to rely on structural information for a new protein. Our method makes use of structural properties only at the model-building stage, not at scoring time. Our results show that the use of structural properties can improve the sensitivity of remote homology methods. Moreover, the combination of different models (one for each property) outperforms the use of individual properties.
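The per-residue weighting idea can be sketched as follows. This is a minimal illustration, not the authors' published algorithm: the property scores and the normalisation rule are assumptions.

```python
# Sketch of per-residue structural weighting for pHMM training.
# Hypothetical property scores in [0, 1] per aligned position; the real
# method derives them from structural alignments (e.g. packing density).

def residue_weights(property_scores, base=1.0):
    """Scale a base sequence weight by each residue's structural score,
    normalised so the weights sum to the number of positions (keeping the
    total weight mass comparable to uniform weighting)."""
    total = sum(property_scores)
    n = len(property_scores)
    return [base * n * s / total for s in property_scores]

scores = [0.9, 0.2, 0.7, 0.2]        # e.g. packing density per position
weights = residue_weights(scores)
print(weights)                        # higher score -> higher weight
```

Positions with high structural scores thus contribute more pseudocount mass to the emission estimates than positions in variable regions.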
A number of future research directions present themselves. It would be interesting to include more models, such as one based on hydrogen-bond properties. It would also be interesting to apply our methodology to other remote homology tools, such as SAM (Hughey *et al*., 1996) and T-HMM (Qian *et al*., 2004). Ultimately, we believe that our work is a step towards the major challenge of finding the set of structural properties or features that precisely characterises membership of a superfamily.
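The combined-model idea discussed above, scoring a query against all five property models, can be sketched as follows. The max-score rule is an assumed, illustrative combination strategy, not necessarily the authors' published one, and the scores are invented:

```python
# Hypothetical combination of per-property pHMM log-odds scores for one
# query sequence. The combination rule (take the best score) is an
# assumption for illustration.

def combine_scores(per_model_scores):
    """Return the best (highest) log-odds score across the models."""
    return max(per_model_scores.values())

scores = {"pHMM1D": 12.3, "pHMMAcc": 10.1, "pHMMOi": 15.8,
          "pHMM2D": 14.9, "pHMM3D": 9.7}
print(combine_scores(scores))   # the query is scored by its best model
```

Because each model captures a different structural property, a query that is weak under one property can still be detected by another, which is consistent with the reported sensitivity gain of the combined model.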
## Acknowledgement

We are grateful to CNPq for financial support.

## References

Alexandrov,V., Gerstein,M. (2004) Using 3D Hidden Markov Models that explicitly represent spatial coordinates to model and compare protein structures, *BMC Bioinformatics*, 5, 110.
Altschul,F., Gish,W., Miller,W., Myers,E., Lipman,D. (1990) A basic local alignment search tool, *Journal of Molecular Biology*, 215, 403-410.
Altschul,S., Madden,T., Schaffer,A., Zhang,J., Zhang,Z., Miller,W., Lipman,D. (2000) PSI-BLAST searches using hidden Markov models of structural repeats: prediction of an unusual sliding DNA clamp and of beta-propellers in UV-damaged DNA-binding protein, *Nucleic Acids Research*, 28, 3570-3580.
Andreeva,A., Howorth,D., Brenner,S., Hubbard,T., Chothia,C., Murzin,A. (2004) SCOP database in 2004: refinements integrate structure and sequence family data, *Nucleic Acids Research*, 32, 226-229.
Attwood,T., Bradley,P., Flower,D., Gaulton,A., Maudling,N., Mitchell,A. (2005) Article title, *Bioinformatics*, 21, 3255-3263.
Bateman,A., Coin,L., Durbin,R., Finn,R., Hollich,V., Griffiths-Jones,S., Khanna,A., Marshall,M., Moxon,S., Sonnhammer,E., Studholme,D., Yeats,C., Eddy,S. (2004) The Pfam Protein Families Database, *Nucleic Acids Research*, 32, 138-141.
Bernardes,J., Dávila,A., Costa,V., Zaverucha,G. (2007) Improving model construction of profile HMMs for remote homology detection through structural alignment, *BMC Bioinformatics*, 8, 435:1-12.
Branden,C., Tooze,J. (1991) Introduction to Protein Structure, chapter Motifs of protein structure, *Garland Publishing*, 11-29.
Brown,M., Hughey,R., Krogh,A., Mian,I., Sjölander,K., Haussler,D. (1993) Using Dirichlet mixture priors to derive hidden Markov models for protein families, *Proc. of First Int. Conf. on Intelligent Systems for Molecular Biology*, 1, 47-55.
Bystroff,C., Baker,D.
(2000) HMMSTR: a hidden Markov model for local sequence-structure correlation in proteins, *Journal of Molecular Biology*, 301, 173-190.
Chakrabarti,S., Sowdhamini,R. (2004) Regions of minimal structural variation among members of protein domain superfamilies: application to remote homology detection and modelling using distant relationships, *FEBS*, 569, 31-36.
Deane,C., Pedersen,J., Lunter,G. (2003) Insertions and deletions in protein alignment, unpublished.
Eddy,S. (1996) Hidden Markov models, *Current Opinion in Structural Biology*, 6, 361-365.
Eddy,S. (1998) Profile hidden Markov models, *Bioinformatics*, 14, 755-763.
Espadaler,J., Aragues,R., Eswar,N., Marti-Renom,M., Querol,E., Aviles,F., Sali,A., Oliva,B. (2005) Detecting remotely related proteins by their interactions and sequence similarity, *Proceedings of the National Academy of Sciences*, 102, 7151-7156.
Gerstein,M., Sonnhammer,E., Chothia,C. (1994) Volume changes in protein evolution, *Journal of Molecular Biology*, 236, 1067-1078.
Gerstein,M., Altman,R. (1995) Average core structures and variability measures for protein families: application to the immunoglobulins, *Journal of Molecular Biology*, 251, 165-175.
Gough,J., Karplus,K., Hughey,R., Chothia,C. (2001) Assignment of homology to genome sequences using a library of hidden Markov models that represent all proteins of known structure, *Journal of Molecular Biology*, 313, 903-919.
Guyon,F., Tufféry,P. (2004) SA-Search: a web tool for protein structure mining based on a structural alphabet, *Nucleic Acids Research*, 32, 545-548.
Gribskov,M., McLachlan,A., Eisenberg,D. (1987) Profile analysis: detection of distantly related proteins, *Proceedings of the National Academy of Sciences*, 84, 4355-4358.
Haft,D., Selengut,J., White,O. (2003) The TIGRFAMs database of protein families, *Nucleic Acids Research*, 31, 371-373.
Berman,H., Westbrook,J., Feng,Z., Gilliland,G., Bhat,T., Weissig,H., Shindyalov,I., Bourne,P. (2000) The Protein Data Bank, *Nucleic Acids Research*, 28, 235-242.
Hou,Y., Hsu,W., Lee,M., Bystroff,C. (2004) Remote homolog detection using local sequence-structure correlations, *Journal of Molecular Biology*, 340, 385-395.
Hughey,R., Krogh,A. (1996) Hidden Markov models for sequence analysis: extension and analysis of the basic method, *Computer Applications in the Biosciences*, 12, 95-107.
Karplus,K., Karchin,R., Shackelford,G., Hughey,R. (2005) Calibrating E-values for hidden Markov models using reverse-sequence null models, *Bioinformatics*, 21, 4107-4115.
Krogh,A., Brown,M., Mian,I., Sjölander,K., Haussler,D. (1994) Hidden Markov models in computational biology: applications to protein modeling, *Journal of Molecular Biology*, 235, 1501-1531.
Lee,B., Richards,F. (1971) The interpretation of protein structures: estimation of static accessibility, *Journal of Molecular Biology*, 55, 379-400.
Matsuo,Y., Bryant,S. (1999) Identification of homolog core structures, *Proteins*, 35, 70-79.
Mitchell,T. (1997) Machine Learning, *McGraw-Hill*.
Mizuguchi,K., Deane,C., Johnson,M., Blundell,T., Overington,J. (1998) Article title, Journal Name, 14, 617-623.
Nishikawa,K., Ooi,T. (1986) Radial locations of amino acid residues in a globular protein: correlation with the sequence, *Journal of Biochemistry*, 100, 1043-1047.
Park,J., Karplus,K., Barrett,C., Hughey,R., Haussler,D., Hubbard,T., Chothia,C. (1998) Sequence comparisons using multiple sequences detect three times as many remote homologues as pairwise methods, *Journal of Molecular Biology*, 284, 1201-1210.
Pearson,W. (1985) Rapid and sensitive sequence comparison with FASTP and FASTA, *Methods in Enzymology*, 183, 63-98.
Privalov,P. (1996) Intermediate states in protein folding, *Journal Name*, 258, 707-725.
Qian,B., Goldstein,R. (2004) Performance of an iterated T-HMM for homology detection, *Bioinformatics*, 20, 2175-2180.
Wistrand,M., Sonnhammer,E. (2005) Improved profile HMM performance by assessment of critical algorithmic features in SAM and HMMER, *BMC Bioinformatics*, 6, 1-10.
Schölkopf,B., Burges,C., Smola,A. (1999) Advances in kernel methods: support vector learning, *MIT Press*.
## Bayesian Approach To Rough Set

Tshilidzi Marwala and Bodie Crossingham
University of the Witwatersrand
Private Bag x3, Wits, 2050, South Africa
e-mail: t.marwala@ee.wits.ac.za

This paper proposes an approach to training rough set models within a Bayesian framework, using the Markov Chain Monte Carlo (MCMC) method. The prior probabilities are constructed from the prior knowledge that good rough set models have fewer rules. Markov Chain Monte Carlo sampling is conducted by sampling in the rough set granule space, and the Metropolis algorithm is used as the acceptance criterion. The proposed method is tested on estimating the risk of HIV given demographic data. The results obtained show that the proposed approach is able to achieve an average accuracy of 58%, with the accuracy varying up to 66%. In addition, the Bayesian rough set gives the probabilities of the estimated HIV status as well as the linguistic rules describing how the demographic parameters drive the risk of HIV.

## Introduction

Rough set theory (RST) was introduced by Pawlak (1991) and is a mathematical tool for dealing with vagueness and uncertainty. It is of fundamental importance to artificial intelligence (AI) and cognitive science and is highly applicable to the tasks of machine
differences in posterior probabilities between two states that are in transition (Metropolis et al., 1953). This algorithm ensures that states with high probability form the majority of the Markov chain and is mathematically represented as:

$$\text{If } P(M_{n+1} \mid D) > P(M_n \mid D) \text{ then accept } M_{n+1}, \qquad (14)$$

$$\text{else accept if } \frac{P(M_{n+1} \mid D)}{P(M_n \mid D)} \geq \xi, \text{ where } \xi \in [0,1], \qquad (15)$$

else reject and randomly generate another model $M_{n+1}$.

## Experimental Investigation: Modelling Of HIV

The proposed method is applied to create a model that uses demographic characteristics to estimate the risk of HIV. In the last 20 years, over 60 million people have been infected with HIV (human immunodeficiency virus), and of those cases, 95% are in developing countries (Lasry et al., 2007). HIV has been identified as the cause of AIDS. Early studies on HIV/AIDS focused on individual characteristics and behaviours in determining HIV risk; Fee and Krieger (1993) refer to this as biomedical individualism. It has since been determined that the study of the distribution of health outcomes and their social determinants is of greater importance, and this is referred to as social epidemiology (Poundstone et al., 2004). This study uses individual characteristics as well as social and demographic factors in determining the risk of HIV, using rough set models formulated with a Bayesian approach and trained using a Monte Carlo method. Previously, computational intelligence techniques have been used extensively to analyze HIV: Leke et al. (2006, 2007) used autoencoder network classifiers, inverse neural networks, as well as conventional feed-forward neural networks to estimate HIV risk from demographic factors. Although good accuracy was achieved when using the
autoencoder method, it is disadvantageous due to its "black box" nature, i.e. it is not transparent. To improve transparency, Bayesian rough set theory (RST) is proposed to forecast and interpret the causal effects of HIV. Rough sets have been used in various biomedical and engineering applications (Ohrn, 1999; Peña et al., 1999; Tay and Shen, 2003; Golan and Ziarko, 1995), but in most applications RST is used primarily for prediction, and this paper proposes Bayesian rough set models for HIV prediction. Rowland et al. (1998) compared the use of RST and neural networks for the prediction of ambulation following spinal cord injury, and although the neural network method produced more accurate results, its "black box" nature makes it impractical for rule extraction problems. Poundstone et al. (2004) related demographic properties to the spread of HIV. In their work they justified the use of demographic properties to create a model to predict HIV from a given database, as is done in this study. In order to achieve good accuracy, the rough set partitions, or discretisation process, need to be well chosen; this is done by sampling through the granulization space and accepting the samples with high posterior probability using the Metropolis algorithm (Metropolis et al., 1953). The data set used in this paper was obtained from the South African antenatal sero-prevalence survey of 2001 (Department of Health, 2001). The data were obtained through questionnaires completed by pregnant women attending selected public clinics, conducted concurrently across all nine provinces in South Africa. The six demographic variables considered are: race, age of mother, education, gravidity, parity and age of father, with the outcome or decision being either HIV positive or negative. The HIV status is the decision, represented in binary form as either 0 or 1, with 0 representing HIV negative and 1 representing HIV positive. The input data was discretised into four partitions. This
number was chosen as it gave a good balance between computational efficiency and accuracy. The parents' ages are given and discretised accordingly; education is given as an integer, where 13 is the highest level of education, indicating tertiary education. Gravidity is defined as the number of times that a woman has been pregnant, whereas parity is defined as the number of times that she has given birth. It must be noted that multiple births during a pregnancy are indicated with a parity of one. Gravidity and parity also provide a good indication of the reproductive health of pregnant women in South Africa. The rough set models were trained by sampling in the input space and accepting or rejecting using the Metropolis algorithm (Metropolis et al., 1953). The sample input space and the

| LowA | MedA | MedB | HighB | MedC | … | HighD | LowE | HighE | Accuracy | Rules |
|-------|-------|------|-------|------|----|-------|-------|-------|----------|--------|
| 6.14 | 27.03 | 5.86 | 9.31 | 1.63 | … | 2.56 | 2.38 | 10.85 | 55.50 | 191.00 |
| 11.44 | 15.77 | 9.21 | 10.19 | 2.76 | … | 5.71 | 0.59 | 32.67 | 59.87 | 299.00 |
| 8.56 | 24.08 | 7.01 | 8.10 | 4.62 | … | 3.65 | 7.83 | 28.55 | 56.44 | 202.00 |
| 1.78 | 3.76 | 0.00 | 1.71 | 0.27 | … | 3.39 | 6.54 | 19.84 | 60.77 | 130.00 |
| 4.12 | 6.33 | 1.25 | 6.86 | 1.77 | … | 4.15 | 10.37 | 28.81 | 57.52 | 226.00 |
| 7.83 | 20.49 | 1.45 | 4.99 | 1.13 | … | 3.36 | 5.00 | 23.70 | 62.54 | 283.00 |
| 2.68 | 25.31 | 4.98 | 6.24 | 0.32 | … | 3.72 | 0.79 | 14.97 | 56.37 | 204.00 |

from a total of 13087. The input data was therefore the demographic characteristics explained earlier, and the output was the plausibility of HIV, with 1 representing 100%
plausibility that a person is HIV positive and -1 indicating 100% plausibility of HIV negative. When training the rough set models using Markov Chain Monte Carlo, 500 samples were accepted and retained, meaning 500 sets of rules, where each set contained between 50 and 550 rules with an average of 222 rules, as can be seen in Figure 1. 500 samples were retained because the simulation had converged to a stationary distribution. This figure must be interpreted in light of the fact that, in calculating the posterior probability, we used the knowledge that fewer rules are more desirable than many. Therefore, the Bayesian rough set framework is able to select the number of rules in addition to the partition sizes.
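The sampler just described can be sketched as follows. The `evaluate` function standing in for rough set rule induction is an assumption, as are λ and the proposal; only the accept/reject logic and the rule-count penalty mirror the text:

```python
import math, random

def log_posterior(accuracy, n_rules, lam=0.01):
    """Unnormalised log-posterior: reward model accuracy, penalise rule count."""
    return accuracy - 1.0 - lam * n_rules

def sample_models(evaluate, propose, init, burn_in=100, keep=500, seed=0):
    """Metropolis chain over granulizations; returns the retained states."""
    rng = random.Random(seed)
    state = init
    lp = log_posterior(*evaluate(state))
    retained = []
    for step in range(burn_in + keep):
        cand = propose(state, rng)
        c_lp = log_posterior(*evaluate(cand))
        # Accept better models outright; accept worse ones with
        # probability equal to the posterior ratio.
        if c_lp > lp or math.exp(c_lp - lp) >= rng.random():
            state, lp = cand, c_lp
        if step >= burn_in:          # discard burn-in samples
            retained.append(state)
    return retained
```

In practice `propose` would perturb the granule cut points and `evaluate` would retrain the rough set on the proposed granulization, returning its accuracy and number of rules; both are placeholders here.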
Lower Approximation Rules

1. If Race = African and Mothers Age = 23 and Education = 4 and Gravidity = 2 and Parity = 1 and Fathers Age = 20 Then HIV = Most Probably Positive
2. If Race = Asian and Mothers Age = 30 and Education = 13 and Gravidity = 1 and Parity = 1 and Fathers Age = 33 Then HIV = Most Probably Negative

Upper Approximation Rules

1. If Race = Coloured and Mothers Age = 33 and Education = 7 and Gravidity = 1 and Parity = 1 and Fathers Age = 30 Then HIV = Positive with plausibility = 0.33333
2. If Race = White and Mothers Age = 20 and Education = 5 and Gravidity = 2 and Parity = 1 and Fathers Age = 20 Then HIV = Positive with plausibility = 0.06666

## Conclusion

Rough sets were formulated using a Bayesian framework and then trained using the Markov Chain Monte Carlo method. The Bayesian framework is found to offer probabilistic interpretations of rough sets. A balance between the transparency of the rough set model and the accuracy of HIV estimation is achieved, at the cost of a great deal of computational effort.

## References

1. Bishop, C.M., 2006. Pattern Recognition and Machine Learning. Springer, Berlin, Germany.
2. Deja, A., Peszek, P., 2003. Applying rough set theory to multi stage medical diagnosing. Fundamenta Informaticae, 54, 387–408.
3. Department of Health, 2001. National HIV and syphilis sero-prevalence survey of women attending public antenatal clinics in South Africa. http://www.info.gov.za/otherdocs/2002/hivsurvey01.pdf.
4. Fee, E., Krieger, N., 1993. Understanding AIDS: historical interpretations and the limits of biomedical individualism. American Journal of Public Health, 83, 1477–1486.
5. Goh, C., Law, R., 2003. Incorporating the rough set theory into travel demand analysis. Tourism Management, 24, 511–517.
6. Golan, R.H., Ziarko, W., 1995. A methodology for stock market analysis utilizing rough set theory. In Proceedings of Computational Intelligence for Financial Engineering, New York, USA, 32–40.
7. Greco, S., Matarazzo, B., Slowinski, R., 2006. Rough membership and Bayesian confirmation measures for parameterized rough sets. Proceedings of SPIE - The International Society for Optical Engineering, 6104, 314–324.
8. Greco, S., Pawlak, Z., Slowinski, R., 2004. Can Bayesian confirmation measures be useful for rough set decision rules? Engineering Applications of Artificial Intelligence, 17 (4), 345–361.
9. Inuiguchi, M., Miyajima, T., 2006. Rough set based rule induction from two decision tables. European Journal of Operational Research (in press).
10. Lasry, G., Zaric, S., Carter, M.W., 2007. Multi-level resource allocation for HIV prevention: A model for developing countries. European Journal of Operational Research, 180, 786–799.
11. Leke, B.B., 2007. Computational Intelligence for Modelling HIV. Ph.D. Thesis, School of Electrical & Information Engineering, University of the Witwatersrand, South Africa.
12. Leke, B.B., Marwala, T., Tettey, T., 2006. Autoencoder networks for HIV classification. Current Science, 91, 1467–1473.
13. Leke, B.B., Marwala, T., Tettey, T., 2007. Using inverse neural networks for HIV adaptive control. International Journal of Computational Intelligence Research, 3, 11–15.
14. Leke, B.B., Marwala, T., Tim, T., Lagazio, M., 2006. Prediction of HIV status from demographic data using neural networks. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Taiwan, 2339–2344.
15. Marwala, T., 2007. Bayesian training of neural networks using genetic programming. Pattern Recognition Letters, http://dx.doi.org/10.1016/j.patrec.2007.03.004 (in press).
16. Malve, S., Uzsoy, R., 2007. A genetic algorithm for minimizing maximum lateness on parallel identical batch processing machines with dynamic job arrivals and incompatible job families. Computers and Operations Research, 34, 3016–3028.
17. Metropolis, N., Rosenbluth, A.W., Rosenbluth, M.N., Teller, A.H., Teller, E., 1953. Equations of state calculations by fast computing machines. Journal of Chemical Physics, 21, 1087–1092.
18. Nishino, T., Nagamachi, M., Tanaka, H., 2006. Variable precision Bayesian rough set model and its application to human evaluation data. Proceedings of SPIE - The International Society for Optical Engineering, 6104, 294–303.
19. Ohrn, A., 1999. Discernibility and Rough Sets in Medicine: Tools and Applications. PhD Thesis, Department of Computer and Information Science, Norwegian University of Science and Technology.
20. Ohrn, A., Rowland, T., 2007. Rough sets: A knowledge discovery technique for multifactorial medical outcomes. American Journal of Physical Medicine and Rehabilitation (to appear).
21. Pawlak, Z., 1991. Rough Sets: Theoretical Aspects of Reasoning about Data. Kluwer Academic Publishers.
22. Peña, J., Létourneau, S., Famili, A., 1999. Application of rough set algorithms to prediction of aircraft component failure. In Proceedings of the Third International Symposium on Intelligent Data Analysis, Amsterdam.
23. Poundstone, K.E., Strathdee, S.A., Celentano, D.D., 2004. The social epidemiology of human immunodeficiency virus/acquired immunodeficiency syndrome. Epidemiologic Reviews, 26, 22–35.
24. Rowland, T., Ohno-Machado, L., Ohrn, A., 1998. Comparison of multiple prediction models for ambulation following spinal cord injury. Chute, 31, 528–532.
25. Slezak, D., Ziarko, W., 2005. The investigation of the Bayesian rough set model. International Journal of Approximate Reasoning, 40 (1-2), 81–91.
26. Tay, F.E.H., Shen, L., 2003. Fault diagnosis based on rough set theory. Engineering Applications of Artificial Intelligence, 16, 39–43.
learning and decision analysis. Rough sets are useful in the analysis of decisions in which there are inconsistencies; to cope with these inconsistencies, lower and upper approximations of decision classes are defined (Inuiguchi and Miyajima, 2006). Rough set theory is often seen as competing with fuzzy set theory (FST), but it in fact complements it. One of the advantages of RST is that it does not require a priori knowledge about the data set, which is why statistical methods are not sufficient for determining the relationships that exist in complex cases such as between the demographic variables and their respective HIV status. Greco et al. (2006) generalized the original idea of rough sets and introduced variable precision rough sets, based on the concept of relative and absolute rough membership. The Bayesian framework is a tool that can be used to extend this absolute membership framework to a relative one. Nishino et al. (2006) proposed a rough set method to analyze human evaluation data with much ambiguity, such as sensory and feeling data, handling totally ambiguous and probabilistic human evaluation data using a probabilistic approximation based on the information gains of equivalence classes. Slezak and Ziarko (2005) proposed a rough set model concerned primarily with the algebraic properties of approximately defined sets, and extended the basic rough set theory to incorporate probabilistic information. This paper extends the rough set model to the probabilistic domain using a Bayesian framework, Markov Chain Monte Carlo simulation and the Metropolis algorithm. In order to achieve this, the rough set membership functions' granulizations are interpreted probabilistically. The proposed
27. Witlox, F., Tindemans, H., 2004. The application of rough set analysis in activity based modeling: Opportunities and constraints. Expert Systems with Applications, 27, 585–592.
Once the information table is obtained, the data is discretised into partitions as mentioned earlier. An information system can be understood as a pair Λ = (U, A), where U and A are finite, non-empty sets called the universe and the set of attributes, respectively (Deja and Peszek, 2003). With every attribute a ∈ A we associate a set V_a of its values, where V_a is called the value set of a:

$$a : U \to V_a \qquad (1)$$

Any subset B of A determines a binary relation I(B) on U, which is called an indiscernibility relation. The main concept of rough set theory is the indiscernibility relation (indiscernibility meaning indistinguishable from one another). Sets that are indiscernible are called elementary sets, and these are considered the building blocks of RST's knowledge of reality. A union of elementary sets is called a crisp set, while any other sets are referred to as rough or vague. More formally, for a given information system Λ and any subset B ⊆ A, there is an associated equivalence relation I(B), called the B-indiscernibility relation, represented as:

$$(x, y) \in I(B) \iff a(x) = a(y) \text{ for every } a \in B \qquad (2)$$

RST offers a tool to deal with indiscernibility. For each concept/decision X, the greatest definable set contained in X and the least definable set containing X are computed; these two sets are called the lower and upper approximation respectively. The sets of cases/objects with the same outcome variable are assembled together. This is done by looking at the "purity" of a particular object's attributes in relation to its outcome. In most cases it is not possible to define cases into crisp sets; in such instances lower and upper approximation sets are defined instead. The lower approximation is defined as the collection of cases whose equivalence classes are fully contained in the
set of cases we want to approximate (Ohrn and Rowland, 2006). The lower approximation of set X is denoted $\underline{B}X$ and is mathematically represented as:

$$\underline{B}X = \{x \in U : B(x) \subseteq X\} \qquad (3)$$

The upper approximation is defined as the collection of cases whose equivalence classes are at least partially contained in the set of cases we want to approximate. The upper approximation of set X is denoted $\overline{B}X$ and is mathematically represented as:

$$\overline{B}X = \{x \in U : B(x) \cap X \neq \emptyset\} \qquad (4)$$

It is through these lower and upper approximations that any rough set is defined. Lower and upper approximations are defined differently across the literature, but it follows that a crisp set is only defined for $\underline{B}X = \overline{B}X$. It must be noted that in most cases in RST, reducts are generated to enable us to discard functionally redundant information (Pawlak, 1991); in this paper the prior probability handles reducts.

## Rough Membership Function

The rough membership function is a function $\eta_A^X : U \to [0, 1]$ that, when applied to an object x, quantifies the degree of relative overlap between the set X and the indiscernibility set to which x belongs. It is a measure of the plausibility with which an object x belongs to the set X, and is defined as:

$$\eta_A^X(x) = \frac{|[x]_B \cap X|}{|[x]_B|} \qquad (5)$$

where $[x]_B$ is the elementary set (equivalence class) to which x belongs.
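Equations 2–5 can be illustrated on a toy information system; the universe and attribute values below are invented for illustration:

```python
# Toy illustration of indiscernibility classes (eq. 2), lower/upper
# approximations (eqs. 3-4) and the rough membership function (eq. 5).
from collections import defaultdict

def equivalence_classes(universe, attrs, B):
    """Group objects that agree on every attribute in B (indiscernibility)."""
    classes = defaultdict(set)
    for x in universe:
        classes[tuple(attrs[x][a] for a in B)].add(x)
    return list(classes.values())

def approximations(universe, attrs, X, B):
    lower, upper = set(), set()
    for eq in equivalence_classes(universe, attrs, B):
        if eq <= X:        # eq. 3: class fully contained in X
            lower |= eq
        if eq & X:         # eq. 4: class overlaps X
            upper |= eq
    return lower, upper

def membership(x, X, universe, attrs, B):
    """Eq. 5: overlap of x's equivalence class with X, relative to class size."""
    for eq in equivalence_classes(universe, attrs, B):
        if x in eq:
            return len(eq & X) / len(eq)

U = {1, 2, 3, 4, 5}
attrs = {1: {"a": 0}, 2: {"a": 0}, 3: {"a": 1}, 4: {"a": 1}, 5: {"a": 2}}
X = {1, 2, 3}                       # target concept
low, up = approximations(U, attrs, X, ["a"])
print(low, up)                      # {1, 2} {1, 2, 3, 4}
print(membership(3, X, U, attrs, ["a"]))   # 0.5
```

Here X is rough: object 3 is indiscernible from object 4 (both have a = 1), so it falls in the upper but not the lower approximation, and its membership in X is 1/2.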
## Rough Set Accuracy

The accuracy of a rough set provides a measure of how closely the rough set approximates the target set. It is defined as the ratio of the number of objects that can be positively placed in X to the number of objects that can possibly be placed in X; in other words, the number of cases in the lower approximation divided by the number of cases in the upper approximation, with $0 \leq \alpha_p(X) \leq 1$:

$$\alpha_p(X) = \frac{|\underline{B}X|}{|\overline{B}X|} \qquad (6)$$

## Rough Set Formulation

The process of modeling the rough set can be broken down into five stages. The first stage is to select the data. The second stage involves pre-processing the data to ensure that it is ready for analysis: discretizing the data and removing unnecessary data (cleaning the data). If reducts were considered, the third stage would be to use the cleaned data to generate reducts. A reduct is the most concise way in which we can discern object classes (Witlox and Tindemans, 2004); in other words, a reduct is the minimal subset of attributes that enables the same classification of elements of the universe as the whole set of attributes (Pawlak, 1991). To cope with inconsistencies, lower and upper approximations of decision classes are defined (Ohrn, 2006; Deja and Peszek, 2003). Stage four is where the rules are extracted or generated; the rules are normally determined based on the condition attribute values (Goh and Law, 2003). Once the rules are extracted, they can be presented in an *if CONDITION(S)-then DECISION* format (Leke, 2007). The final, fifth stage involves testing the newly
created rules on a test set to estimate the prediction error of the rough set model. The equation representing the mapping between the inputs and the output using rough sets can be written as

$$y = f(G, N, R) \qquad (7)$$

where y is the output, G is the granulization of the input space (into high, low, medium, etc.), N is the number of rules and R is the rule set. For a given granulization, the rough set model will give the optimal number of rules and the accuracy of prediction. Therefore, in rough set modeling there is always a trade-off between the degree of granulization of the input space (which affects the nature and size of the rules) and the prediction accuracy of the rough set model. The estimation of the level and nature of the granulization will therefore be solved using a Bayesian framework, which is explained in the next section.

## Bayesian Training of the Rough Set Model

The Bayesian framework can be written as (Marwala, 2007; Bishop, 2006):

$$P(M \mid D) = \frac{P(D \mid M) P(M)}{P(D)}, \quad \text{where } M = \begin{bmatrix} G \\ N \\ R \end{bmatrix} \qquad (8)$$

Within the context of Bayesian rough set models, G is the granulization, R the rough set rules, N the number of rules, D the data, consisting of input x and output y, and A the accuracy of the rough set model's prediction. The parameter $P(M \mid D)$ is the probability of the rough set model given the observed data, $P(D \mid M)$ is the probability of the data given the assumed rough set model, also called the likelihood function, $P(M)$ is the prior probability of the rough set model and $P(D)$ is the probability of the data, also called the
evidence. The evidence can be treated as a normalization constant and is therefore ignored in this paper. The likelihood function may be estimated as follows:

$$P(D \mid M) = \frac{1}{z_1} \exp(-error) = \frac{1}{z_1} \exp\{A(N, R, G) - 1\} \qquad (9)$$

Here $z_1$ is a normalization constant. The prior probability in this problem is linked to the concept of reducts, explained earlier: it encodes the prior knowledge that the best rough set models are the ones with the minimum number of rules (N). Therefore, the prior probability may be written as follows:

$$P(M) = \frac{1}{z_2} \exp\{-\lambda N\} \qquad (10)$$

where $z_2$ is a normalization constant and λ is a hyperparameter that scales the prior information to be in line with the magnitude of the likelihood function. The posterior probability of the model given the observed data is thus:

$$P(M \mid D) = \frac{1}{z} \exp\{A(N, R, G) - 1 - \lambda N\} \qquad (11)$$

where z is a normalization constant. Since the number of rules and the rules themselves given the data depend on the nature of the granulization of the input space, we shall sample in the granule space using a procedure called Markov Chain Monte Carlo simulation (Marwala, 2007; Bishop, 2006).

## Markov Chain Monte Carlo Simulation

One manner in which the probability distribution in equation 11 may be sampled is to randomly generate a succession of granule vectors, accepting or rejecting them based on how probable they are using the Metropolis algorithm. This process requires the generation
of large samples of granules for the input space, which in many cases is not computationally efficient. MCMC instead creates a chain of granules and accepts or rejects them using the Metropolis algorithm. The application of the Bayesian approach and MCMC to rough sets results in a probability distribution function over the granules, which in turn leads to a distribution over the rough set outputs. From these distribution functions, the average prediction of the rough set model and the variance of that prediction can be calculated. The probability distribution of the rough set model represented by granules is mathematically described by equation 11. From equation 11, and by following the rules of probability theory, the distribution of the output parameter y is written as (Marwala, 2007):

$$p(y \mid x, D) = \int p(y \mid x, M)\, p(M \mid D)\, dM \qquad (12)$$

Equation 12 depends on equation 11 and is difficult to solve analytically due to the relatively high dimension of the granule space. Thus the integral in equation 12 may be approximated as follows:

$$\tilde{y} \cong \frac{1}{L} \sum_{i=R}^{R+L-1} F(M_i) \qquad (13)$$

Here F is the mathematical model that gives the output given the input, ỹ is the average prediction of the Bayesian rough set model, R is the number of initial states that are discarded (in the hope of reaching the stationary posterior distribution described in equation 11), and L is the number of retained states. In this paper, the MCMC method is implemented by sampling a stochastic process consisting of random variables {g₁, g₂, …, gₙ}, introducing random changes to the granule vector {g} and either accepting or rejecting the state according to the Metropolis et al. (1953) algorithm, given the
# Comparing Robustness Of Pairwise And Multiclass Neural-Network Systems For Face Recognition

J. Uglov, V. Schetinin, C. Maple

Computing and Information System Department, University of Bedfordshire, Luton, UK

Abstract. Noise, corruptions and variations in face images can seriously hurt the performance of face recognition systems. To make such systems robust, multiclass neural-network classifiers capable of learning from noisy data have been suggested. However, on large face data sets such systems cannot provide robustness at a high level. In this paper we explore a pairwise neural-network system as an alternative approach to improving the robustness of face recognition. In our experiments this approach is shown to outperform the multiclass neural-network system in terms of predictive accuracy on face images corrupted by noise.

## 1. Introduction

Face recognition systems achieve a high level of performance when they are robust to noise, corruptions and variations in face images [1]. To make face recognition systems robust, multiclass artificial neural networks (ANNs) capable of learning from noisy data have been suggested [1]. However, on large face data sets such neural-network systems cannot provide robustness at a high level [1]-[3]. To overcome this problem, pairwise classification systems have been proposed, see e.g. [3], [4]. In this paper we explore a pairwise neural-network system as an alternative approach to improving the robustness. In our experiments this approach is shown to outperform the multiclass neural-network system in terms of predictive accuracy on the face image data described in [5]. In section 2 we briefly describe face image representation and noise problems; in section 3 we describe a pairwise neural-network system proposed for face recognition. Section 4 describes our experiments, and finally section 5 concludes the paper.

## 2. Face Image Representation And Noise Problems

Following [1]-[3], we use principal component analysis (PCA) to represent face images as m-dimensional vectors of components. PCA is a common technique for data representation in face recognition systems. The first two principal components, which make the most important contribution to face recognition, can be used to visualise the scatter of patterns of different classes (faces). Such a visualisation therefore allows us to observe how noise can corrupt the boundaries of classes. For example, Fig. 1 shows two graphs depicting examples of four classes whose centres of gravity are visually distinct. The left-side plot depicts the examples taken from the original data, while the right-side plot depicts these examples containing
noise components drawn from a Gaussian density function with zero mean and standard deviation alpha = 0.5.

<image> <image>

From this plot we can observe that the noise components corrupt the boundaries of the given classes, and therefore the performance of a face recognition system can be affected. From these plots we can also observe that the boundaries between pairs of classes remain almost the same. This inspires us to exploit such a classification scheme and implement a pairwise neural-network system for face recognition.

## 3. A Pairwise Neural-Network System

The idea behind pairwise classification is to use two-class ANNs, each learning to classify one of the possible pairs of classes. Therefore, for C classes the pairwise system includes C(C - 1)/2 ANNs learnt to solve two-class problems. For example, for classes 1, 2, and 3 depicted in Fig. 2, the number of two-class ANNs is equal to 3. In this figure the lines f1/2, f1/3 and f2/3 are the dividing hyperplanes learnt by the ANNs. We assume these functions give positive values for examples of the classes standing first in the lower indexes (1, 1, and 2) and negative values for the classes standing second (2, 3, and 3).
<image>

Now we can combine the hyperplanes f1/2, f1/3 and f2/3 to build new dividing hyperplanes g1, g2, and g3. The first hyperplane g1 combines the functions f1/2 and f1/3, so that g1 = f1/2 + f1/3. These functions are taken with weights of 1.0 because both f1/2 and f1/3 give positive output values on the examples of class 1. Likewise, the second and third hyperplanes are g2 = f2/3 - f1/2 and g3 = - f1/3 - f2/3. In practice each of the hyperplanes g1, …, gC can be implemented as a two-layer feedforward ANN with a given number of hidden neurons fully connected to the input nodes. Then we can introduce an output neuron summing all outputs of the ANNs to make a final decision. For example, the pairwise neural-network system depicted in Fig. 3 consists of three neural networks performing the functions f1/2, f1/3, and f2/3. The three output neurons g1, g2, and g3 are connected to these networks with weights equal to (+1, +1), (–1, +1) and (–1, –1), respectively.

<image>

In general, a pairwise neural-network system consists of C(C - 1)/2 neural networks performing functions f1/2, …, fi/j, …, fC-1/C, and C output neurons g1, …, gC, where i < j = 2, …, C. We can see that the weights of output neuron gi connected to the hidden neurons fi/k and fk/i should be equal to +1 and –1, respectively.
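The combination scheme above (weights of +1 and –1 from the pairwise discriminants to the per-class scores gi) can be sketched as follows; the function names are illustrative and classes are indexed from 0 rather than 1.

```python
def pairwise_scores(f, x, C):
    """Combine the C(C-1)/2 pairwise discriminants f[(i, j)]
    (positive on class i, negative on class j, i < j) into
    per-class scores g_i with weights +1 / -1."""
    g = [0.0] * C
    for (i, j), fij in f.items():
        v = fij(x)
        g[i] += v   # weight +1: f_{i/j} is positive on class i
        g[j] -= v   # weight -1: the negative side belongs to class j
    return g

def classify(f, x, C):
    """Final decision: the class whose combined score g_i is largest."""
    g = pairwise_scores(f, x, C)
    return max(range(C), key=lambda i: g[i])
```

With C = 3 this reproduces g1 = f1/2 + f1/3, g2 = f2/3 - f1/2 and g3 = - f1/3 - f2/3 (shifted to 0-based indices).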
## 4. Experiments

The goal of our experiments is to compare the robustness of the proposed pairwise and standard multiclass neural-network systems on the Cambridge ORL face image data set [5] (in a full paper, the experiments will run on different face image data sets). To estimate the robustness, we add noise components to the data and then estimate the performance on the test data within 5-fold cross-validation. The performances of the pairwise and multiclass systems are listed in Table 1 and shown in Fig. 4.

Table 1: Performances of the pairwise (P) and multiclass (M) systems for noise levels alpha. The performances are represented by the means and 2σ intervals.

| alpha   | 0.0    | 0.1    | 0.3    | 0.5    | 0.7    | 0.9    | 1.1    | 1.3    |
|---------|--------|--------|--------|--------|--------|--------|--------|--------|
| P, mean | 0.972  | 0.966  | 0.953  | 0.920  | 0.859  | 0.772  | 0.659  | 0.556  |
| P, 2σ   | ±0.004 | ±0.013 | ±0.017 | ±0.013 | ±0.018 | ±0.030 | ±0.028 | ±0.031 |
| M, mean | 0.952  | 0.951  | 0.932  | 0.898  | 0.802  | 0.678  | 0.557  | 0.419  |
| M, 2σ   | ±0.017 | ±0.016 | ±0.025 | ±0.016 | ±0.015 | ±0.052 | ±0.036 | ±0.050 |

<image>

From this table we can see that for alpha ranging between 0.0 and 1.3 the proposed pairwise system significantly outperforms the multiclass system. For alpha = 0.0 the improvement in performance is 2.0%, while for alpha = 1.1 the improvement becomes 10.2%.
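The robustness protocol above (zero-mean Gaussian noise of level alpha added to the PCA components, performance summarized by mean and 2σ interval over folds) can be sketched as follows; this is an assumption-level sketch, not the authors' experimental code.

```python
import random

def add_noise(X, alpha, rng=random):
    """Corrupt PCA component vectors with zero-mean Gaussian noise
    of standard deviation alpha, as in the robustness experiments."""
    return [[x + rng.gauss(0.0, alpha) for x in row] for row in X]

def mean_and_2sigma(accs):
    """Summarize per-fold accuracies as mean and 2*sigma interval,
    matching the rows of Table 1."""
    m = sum(accs) / len(accs)
    var = sum((a - m) ** 2 for a in accs) / len(accs)
    return m, 2 * var ** 0.5
```

Each entry of Table 1 would then come from running one system on `add_noise(X, alpha)` within the cross-validation loop and summarizing the fold accuracies with `mean_and_2sigma`.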
## 5. Conclusion

We have proposed a pairwise neural-network system for face recognition in order to reduce the negative effect of noise and corruptions in face images. Within such a classification scheme we expect that an improvement in performance can be achieved on the basis of our observation that the boundaries between pairs of classes remain almost the same while the noise level increases. We have compared the performances of the proposed pairwise and standard multiclass neural-network systems on the face dataset [5]. Evaluating the mean values and standard deviations of the performances under different levels of noise in the data, we have found that the proposed pairwise system is superior to the multiclass neural-network system. Thus we conclude that the proposed pairwise system is capable of decreasing the negative effect of noise and corruptions in face images. Clearly this is a very desirable property for face recognition systems when robustness is of crucial importance.

## 6. References

1. S.Y. Kung, M.W. Mak and S.H. Lin. Biometric Authentication: A Machine Learning Approach. Pearson Education, 2005.
2. C. Liu and H. Wechsler. Robust coding scheme for indexing and retrieval from large face databases. IEEE Trans. Image Processing, 9(1), 132-137, 2000.
3. A.S. Tolba, A.H. El-Baz and A.A. El-Harby. Face Recognition: A Literature Review. IJSP, 2(2), 88-103, 2005.
4. T. Hastie and R. Tibshirani. Classification by pairwise coupling. Advances in NIPS, 10, 507-513, 1998.
5. F.S. Samaria. Face recognition using hidden Markov models. PhD thesis, University of Cambridge, 1994.
# Ensemble Learning For Free With Evolutionary Algorithms?

Christian Gagné ∗ Informatique WGZ Inc., 819 avenue Monk, Québec (QC), G1S 3M9, Canada. christian.gagne@wgz.ca

## Abstract

Evolutionary Learning proceeds by evolving a population of classifiers, from which it generally returns (with some notable exceptions) the single best-of-run classifier as final result. Meanwhile, Ensemble Learning, one of the most efficient approaches in supervised Machine Learning over the last decade, proceeds by building a population of diverse classifiers. Ensemble Learning with Evolutionary Computation thus receives increasing attention. The Evolutionary Ensemble Learning (EEL) approach presented in this paper features two contributions. First, a new fitness function, inspired by co-evolution and enforcing classifier diversity, is presented. Further, a new selection criterion based on the classification margin is proposed. This criterion is used to extract the classifier ensemble from the final population only (Off-EEL) or incrementally along evolution (On-EEL). Experiments on a set of benchmark problems show that Off-EEL outperforms single-hypothesis evolutionary learning and state-of-the-art Boosting, and generates smaller classifier ensembles.

## Categories And Subject Descriptors

I.5.2 [Pattern Recognition]: Design Methodology—Classifier design and evaluation; I.2.8 [Artificial Intelligence]: Problem Solving, Control Methods, and Search—Heuristic methods

## General Terms

Algorithms

∗ This work has been mainly realized during a postdoctoral fellowship of Christian Gagné at the University of Lausanne.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.
To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. GECCO'07, July 7–11, 2007, London, England, United Kingdom. Copyright 2007 ACM 978-1-59593-697-4/07/0007 ...$5.00.

Michèle Sebag, Equipe TAO – CNRS UMR 8623 / INRIA Futurs, LRI, Bât. 490, Université Paris Sud, F-91405 Orsay Cedex, France. michele.sebag@lri.fr

Marc Schoenauer, Equipe TAO – INRIA Futurs / CNRS UMR 8623, LRI, Bât. 490, Université Paris Sud, F-91405 Orsay Cedex, France. marc.schoenauer@lri.fr

Marco Tomassini, Information Systems Institute, Université de Lausanne, CH-1015 Dorigny, Switzerland. marco.tomassini@unil.ch

## Keywords

Ensemble Learning, Evolutionary Computation

## 1. Introduction

Ensemble Learning, one of the main advances in Supervised Machine Learning since the early 90's, relies on: i) a weak learner (extracting hypotheses, aka classifiers, with error probability less than 1/2 − ǫ, ǫ > 0); ii) a diversification heuristics used to extract sufficiently diverse classifiers; iii) a voting mechanism, aggregating the diverse classifiers constructed [1, 8]. If the classifiers are sufficiently diverse and their errors are independent, then their majority vote will reach an arbitrarily low error rate on the training set as the number of classifiers increases [6]. Therefore, up to some restrictions on the classifier space [25], the generalization error will also be low¹. The most innovative aspect of Ensemble Learning w.r.t. the Machine Learning literature concerns the diversity requirement, implemented through parallel or sequential heuristics. In Bagging, diversity is enforced by considering independent sub-samples of the training set, and/or using different learning parameters [1]. Boosting iteratively constructs a sequence of classifiers, where each classifier focuses on the examples misclassified by the previous ones [8].
Diversity is also a key feature of Evolutionary Computation (EC): in contrast with all other stochastic optimization approaches, evolutionary algorithms proceed by evolving a population of solutions, and their diversity has been stressed as a key factor of success since the beginnings of EC. Deep similarities between Ensemble Learning and EC thus appear; in both cases, diversity is used to escape from local minima, where any single "best" solution is only too easily trapped. Despite this similarity, Evolutionary Learning has most often (with some notable exceptions, see [14, 16, 18] among others) focused on single-hypothesis learning, where some single best-of-run hypothesis is returned as the solution.

¹ In practice, the generalization error is estimated from the error on a test set, disjoint from the training set. The reader is referred to [4] for a comprehensive discussion about the comparative evaluation of learning algorithms.

However, the evolutionary population itself could be used
as a pool for recruiting the elements of an ensemble, enabling "Ensemble Learning for Free". Previous work along this line will be described in Section 2, mostly based on using an evolutionary algorithm as weak learner [17], or using evolutionary diversity-enforcing heuristics [16, 18]. In this paper, the "Evolutionary Ensemble Learning For Free" claim is empirically examined along two directions. The first direction is that of classifier diversity; a new learning-oriented fitness function is proposed, inspired by the co-evolution framework [13] and generalizing the diversity-enforcing fitness proposed by [18]. The second direction is that of the selection of the ensemble classifiers within the evolutionary population(s). Selecting the best classifiers in a pool amounts to a feature selection problem, that is, a combinatorial optimization problem [12]. A greedy set-covering approach is used, built on a margin-based criterion inspired by Schapire et al. [23]. Finally, the paper presents two Evolutionary Ensemble Learning (EEL) approaches, called Off-EEL and On-EEL, respectively tackling the selection of the ensemble classifiers in the final population, or along evolution. The paper structure is as follows. Section 2 reviews and discusses some work relevant to Evolutionary Ensemble Learning. Section 3 describes the two proposed approaches Off-EEL and On-EEL, introducing the specific fitness function and the ensemble classifier selection procedure. Experimental results based on benchmark problems from the UCI repository are reported in Section 4. The paper concludes with some perspectives for further research, discussing the priorities for a tight coupling of Ensemble Learning with Evolutionary Optimization in terms of dynamic systems [22].

## 2. Related Work

Interestingly, some early approaches in Evolutionary Learning were rooted in Ensemble Learning ideas².
The Michigan approach [14] evolves a population made of rules, whereas the Pittsburgh approach evolves a population made of sets of rules. What is gained in flexibility and tractability in the Michigan approach is compensated by the difficulty of assessing a single rule, for the following reason. A rule usually only covers a part of the example space; gathering the best rules (e.g. the rules with highest accuracy) does not result in the best ruleset. Designing an efficient fitness function, such that a good quality ruleset could be extracted from the final population, was found to be a tricky task. In the last decade, Ensemble Learning has been explored within Evolutionary Learning, chiefly in the context of Genetic Programming (GP). A first trend directly inspired from Bagging and Boosting aims at reducing the fitness computation cost [7, 16] and/or dealing with datasets which do not fit in memory [24]. For instance, Iba [16] divided the GP population into several sub-populations which are evaluated on subsets of the training set. Folino et al. [7] likewise sampled the training set in a Bagging-like mode in the context of parallel cellular GP. Song et al. [24] used Boosting-like heuristics to deal with training sets that do not fit in memory; the training set is divided into folds, one of which is loaded in memory and periodically replaced; at each generation, small subsets are selected from the current fold to compute the fitness function, where the selection is nicely based on a mixture of uniform and Boosting-like distributions. The use of Evolutionary Algorithms as weak learners within a standard Bagging or Boosting approach has also been investigated.

² Learning Classifier Systems (LCS, [14, 15]) are mostly devoted to Reinforcement Learning, as opposed to Supervised Machine Learning; therefore they will not be considered in the paper.
Boosting approaches for GP have been applied for instance to classification [21] or symbolic regression [17]: each run delivers a GP tree minimizing the weighted sum of the training errors, and the weights are computed as in standard Boosting [8]. While such ensembles of GP trees result, as expected, in a much lower variance of the performance, they do not fully exploit the population-based nature of GP, as independent runs are launched to learn successive classifiers. Liu et al. [18] proposed a tight coupling between Evolutionary Algorithms and Ensemble Learning. They constructed an ensemble of Neural Networks, using a modified back-propagation algorithm to enforce the diversity of the networks; specifically, the back-propagation aims at both minimizing the training error and maximizing the negative correlation of the current network with respect to the current population. Further, the fitness associated to each network is the sum of the weights of all examples it correctly classifies, where the weight of each example is inversely proportional to the number of classifiers that correctly classify this example. While this approach nicely suggests that ensemble learning is a Multiple Objective Optimization (MOO) problem (minimize the error rate and maximize the diversity), it classically handles the MOO problem as a fixed weighted sum of the objectives. The MOO perspective was further investigated by Chandra and Yao in the DIVACE system, a highly sophisticated system for the multi-level evolution of ensembles of classifiers [2, 3]. In [3], the top-level evolution simultaneously minimizes the error rate (accuracy) and maximizes the negative correlation (diversity). In [2], the negative correlation-inspired criterion is replaced by a pairwise failure crediting; the difference concerns the misclassification of examples that are correctly classified by other classifiers.
Finally, the ensemble is constructed either by keeping all classifiers in the final population, or by clustering the final population (according to their phenotypic distance) and selecting a classifier in each cluster. While the MOO perspective nicely captures the interplay of the accuracy and diversity goals within Ensemble Learning, the selection of the classifiers in the genetic pool as done in [2, 3] does not fully exploit the possibilities of evolutionary optimization, in two respects. On the one hand, it only considers the final population, which usually involves up to a few hundred classifiers, while learning ensembles commonly involve some thousand classifiers. On the other hand, clustering-based selection proceeds on the basis of the phenotypic distance between classifiers, considering again that all examples are equally important, while the higher stress put on harder examples is considered to be the source of Boosting's better efficiency [5].

## 3. Ensemble Learning For Free

Following the above discussion, Evolutionary Ensemble Learning (EEL) involves two critical issues: i) how to enforce both the predictive accuracy and the diversity of the classifiers in the population, and across generations; ii) how to best select the ensemble classifiers, from either the final population
only or all along evolution. Two EEL frameworks have been designed to study these interdependent issues. The first one, dubbed Offline Evolutionary Ensemble Learning (Off-EEL), constructs the ensemble from the final population only. The second one, called Online Evolutionary Ensemble Learning (On-EEL), gradually constructs the classifier ensemble as a selective archive of evolution, where some classifiers are added to the archive at each generation. Both approaches combine a standard generational evolutionary algorithm with two interdependent components: a new diversity-enhancing fitness function, and a selection mechanism. The fitness function, presented in Section 3.1 and generalizing the fitness devised by Liu et al. [18], is inspired by co-evolution [13]. The selection process is used to extract a set of classifiers from either the final population (Off-EEL) or the current archive plus the current population (On-EEL), and proceeds by greedily maximizing the ensemble margin (Section 3.2). Only binary or multi-class classification problems are considered in this paper. The decision of the classifier ensemble is the majority vote among the classifiers (ties being arbitrarily broken).

## 3.1 Diversity-Enforcing Fitness

Traditionally, Evolutionary Learning maximizes the number of correctly classified training examples (or equivalently minimizes the error rate). However, examples are not equally informative; therefore a rule correctly classifying a hard example (e.g. close to the frontiers of the target concept) is more interesting and should be more rewarded than a rule correctly classifying an example which is correctly classified by almost all rules. Co-evolutionary learning, first pioneered by Hillis [13], nicely takes advantage of the above remark, gradually forging more and more difficult examples to enforce the discovery of high-quality solutions.
Boosting proceeds along the same lines, gradually putting the stress on the examples which have not been successfully predicted so far. A main difference between both frameworks is that Boosting exploits a finite set of labelled examples, while co-evolutionary learning has an infinite supply of labelled examples (since it embeds the oracle). A second difference is that the difficulty of an example depends on the whole sequence of classifiers in Boosting, whereas it only depends on the current classifier population in co-evolution. In other words, Boosting is a memory-based process, while co-evolutionary learning is a memoryless one. Both approaches thus suffer from opposite weaknesses. Being a memory-based process, Boosting can be misled by noisy examples; consistently misclassified, these examples eventually get heavy weights and thus destabilize the Boosting learning process. Quite the contrary, co-evolution can forget what has been learned during early stages, and specific heuristics, e.g. the so-called Hall-of-Fame, an archive of best-so-far individuals, are required to prevent co-evolution from cycling in the learning landscape [20]. Based on these ideas, the fitness of classifiers is defined in this work from a set of reference classifiers noted Q. The hardness of every training example x is measured by the number of classifiers in Q which misclassify x. The fitness of every classifier h is then measured by the cumulated hardness of the examples that are correctly classified by h. Three remarks can be made concerning this fitness function. Firstly, contrasting with standard co-evolution, there is no way classifiers can "unlearn" to classify the training examples, since the training set is fixed. Secondly, as in Boosting, the fitness of a classifier reflects its diversity with respect to the reference set.
Lastly, the classifier fitness function is highly multi-modal compared to the simple error rate: good classifiers might correctly classify many easy examples, or sufficiently many hard enough examples, or a few very hard examples. Formally, let $E = \{(\mathbf{x}_i, y_i),\ \mathbf{x}_i \in \mathcal{X},\ y_i \in Y,\ i = 1 \ldots n\}$ denote the training set (referred to as the set of fitness cases in the GP context); each fitness case or example $(\mathbf{x}_i, y_i)$ is composed of an instance $\mathbf{x}_i$ belonging to the instance space $\mathcal{X}$ and the associated label $y_i$ belonging to a finite set $Y$. Any classifier $h$ is a function mapping the instance space $\mathcal{X}$ onto $Y$. The loss function $\ell$ is defined as $\ell : Y \times Y \to \mathbb{R}$, where $\ell(y, y')$ is the (real-valued) error cost of predicting label $y$ instead of the true label $y'$. The hardness or weight of every training example $(\mathbf{x}_i, y_i)$, noted $w_i^{\mathcal{Q}}$, or $w_i$ when the reference set $\mathcal{Q}$ is clear from the context, is the average loss incurred by the reference classifiers on $(\mathbf{x}_i, y_i)$:

$$w_{i}={\frac{1}{|{\mathcal{Q}}|}}\sum_{h\in{\mathcal{Q}}}\ell(h(\mathbf{x}_{i}),y_{i}). \tag{1}$$

The cumulated hardness fitness $\mathcal{F}$ is finally defined as follows: $\mathcal{F}(h)$ is the sum, over all training examples that are correctly classified by $h$, of their weight $w_i$ raised to the power $\gamma$. Parameter $\gamma$ governs the importance of the weights $w_i$ (the cumulated hardness boils down to the number of correctly classified examples for $\gamma = 0$) and thus the diversity pressure.

$$\mathcal{F}(h)=\sum_{\begin{subarray}{c}i=1\ldots n\\ h(\mathbf{x}_{i})=y_{i}\end{subarray}}w_{i}^{\gamma}\tag{2}$$

Parameter $\gamma$ can also be adjusted depending on the level of noise in the dataset. As noisy examples typically reach high weights, increasing the value of $\gamma$ might lead to retaining spurious hypotheses, which happen to correctly classify a few noisy examples.
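Equations 1 and 2 can be sketched as follows; classifiers and the loss are plain callables, and the helper names are illustrative rather than taken from the paper.

```python
def hardness_weights(Q, examples, loss):
    """Equation 1: the weight of each example is the average loss of
    the reference classifiers Q on it."""
    return [sum(loss(h(x), y) for h in Q) / len(Q) for x, y in examples]

def cumulated_hardness(h, examples, w, gamma=2.0):
    """Equation 2: fitness = sum of w_i**gamma over the examples that
    h classifies correctly (gamma = 2 in the paper's experiments)."""
    return sum(wi ** gamma for (x, y), wi in zip(examples, w) if h(x) == y)

# step loss: 0 if the prediction matches the true label, 1 otherwise
step_loss = lambda y, y_true: 0.0 if y == y_true else 1.0
```

With `gamma=0.0` the fitness reduces to the number of correctly classified examples, as stated in the text.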
When $\ell$ is set to the step loss function ($\ell(y, y') = 0$ if $y = y'$, 1 otherwise) and $\gamma$ is set to 1, the above fitness function is the same as the one used by Liu et al. [18]. The value of $\gamma$ is set to 2 in the experiments (Section 4).

## 3.2 Ensemble Selection

As noted earlier, the selection of classifiers in a pool $\mathcal{H} = \{h_1, \ldots, h_T\}$ in order to form an efficient ensemble is formally equivalent to a feature selection problem. The equivalence is seen by replacing the initial instance space $\mathcal{X}$ with the one defined from the classifier pool, where each instance $\mathbf{x}_i$ is redescribed as the vector $(h_1(\mathbf{x}_i), \ldots, h_T(\mathbf{x}_i))$. Feature selection algorithms [12] could thus be used for ensemble selection; unfortunately, feature selection is one of the most difficult Machine Learning problems. Therefore, a simple greedy selection process is used in this paper to select the classifiers in the diverse pools considered by the Off-EEL (Section 3.3) and On-EEL (Section 3.4) algorithms. The novelty is the selection criterion, generalizing the notion of margin [11, 23] to an ensemble of examples as follows.
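A greedy margin-maximizing selection in the spirit described above can be sketched as follows. The exact margin criterion of the paper is not reproduced here; as an assumption, this sketch scores an ensemble by the summed vote margin (votes for the true class minus the best vote for any other class) over the examples.

```python
def majority_margin(ensemble, x, y):
    """Vote margin of example (x, y): fraction of ensemble votes for
    the true class minus the largest fraction for any other class."""
    votes = {}
    for h in ensemble:
        c = h(x)
        votes[c] = votes.get(c, 0) + 1
    others = [v for c, v in votes.items() if c != y]
    return (votes.get(y, 0) - (max(others) if others else 0)) / len(ensemble)

def greedy_selection(pool, examples, max_size=50):
    """Greedy set-covering-style selection: repeatedly add the pool
    classifier that most increases the summed margin; stop when no
    addition improves it (or the size cap is reached)."""
    ensemble = []
    score = lambda ens: sum(majority_margin(ens, x, y) for x, y in examples)
    current = float("-inf")
    while len(ensemble) < max_size:
        best = max(pool, key=lambda h: score(ensemble + [h]))
        s = score(ensemble + [best])
        if s <= current:  # no strict improvement: stop growing
            break
        ensemble.append(best)
        current = s
    return ensemble
```

The `max_size` cap is an added safeguard, not part of the paper's procedure.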
1. Let $P_1$ be the first evolutionary population, and $h^*$ the classifier with minimal error rate on $E$.
2. $L_1$ = Ensemble-Selection($P_1$, $E$, $\{h^*\}$)
3. For $t = 2 \ldots T$:
   (a) Evolve $P_{t-1} \rightarrow P_t$, using $L_{t-1}$ as reference set.
   (b) $L_t$ = Ensemble-Selection($P_t$, $E$, $L_{t-1}$)
4. Return $L_T$.

The reference set is the current classifier ensemble; like in Boosting, the goal is to find classifiers which overcome the errors of the past classifiers. While the ensemble selection algorithm is launched at every generation, it uses the biased current population as classifier pool. In fact, On-EEL addresses a dynamic optimization problem; if the classifier ensemble significantly changes between one generation and the next, the fitness landscape will change accordingly and several evolutionary generations might be needed to accommodate this change. On the other hand, as long as the current population does not perform well, the ensemble selection algorithm is unlikely to select further classifiers into the current ensemble; the fitness landscape thus remains stable. The population diversity does not directly result from the fitness function as in the Off-EEL case; rather, it relates to the dynamic aspects of the fitness function.

## 4. Experimental Setting

This section describes the experimental setting used to assess the EEL framework.

## 4.1 Datasets

Experiments are conducted on the six UCI datasets [19] presented in Table 1. The performance of each algorithm is measured using a standard stratified 10-fold cross-validation procedure. The dataset is partitioned into 10 folds with the same class distribution. Iteratively, all folds but the i-th one are used to train a classifier, and the error rate of this classifier on the remaining i-th fold is recorded. The performance of the algorithm is averaged over 10 runs for each fold, and over the 10 folds.
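The On-EEL loop above can be sketched as a skeleton; `evolve` and `ensemble_selection` are placeholders for the paper's evolutionary step and greedy margin-based selection, not concrete implementations.

```python
def on_eel(evolve, ensemble_selection, init_pop, E, best, T):
    """On-EEL skeleton: the archive L_t is grown at each generation
    by selecting from the current population, and L_{t-1} serves as
    the reference set Q that biases the next evolutionary step."""
    P = init_pop
    L = ensemble_selection(P, E, [best])   # L_1 from P_1 and h*
    for t in range(2, T + 1):
        P = evolve(P, L)                   # evolve P_{t-1} -> P_t
        L = ensemble_selection(P, E, L)    # L_t from P_t and L_{t-1}
    return L                               # L_T, the final ensemble
```

The skeleton makes the dynamic-optimization aspect visible: the selection output feeds back into the fitness landscape of the next generation.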
## 4.2 Classifier Search Space

As mentioned earlier, evolutionary ensemble learning can accommodate any type of classifier; Off-EEL and On-EEL could consider neural nets, genetic programs or decision lists as genotypic search space. Our experiments consider the most straightforward classifiers, namely separating hyperplanes, as these can easily be inspected and compared. Formally, let $\mathcal{X} = \mathbb{R}^d$ be the instance space; a separating hyperplane classifier $h$ is characterized as $(\mathbf{w}, b) \in \mathbb{R}^d \times \mathbb{R}$ with $h(\mathbf{x}) = \langle \mathbf{w}, \mathbf{x}\rangle - b$ ($\langle \mathbf{w}, \mathbf{x}\rangle$ denotes the scalar product of $\mathbf{w}$ and $\mathbf{x}$). The search for a separating hyperplane is amenable to quadratic optimization, with:

$$\mathcal{F}(h)=\sum_{i=1,\dots,n}\left(h(\mathbf{x}_{i})-y_{i}\right)^{2}. \tag{6}$$

As the above optimization problem can be tackled using standard optimization algorithms, it provides a well-founded baseline for comparison. Specifically, the first goal of the experiments is thus to assess the merits of evolutionary ensemble learning against three other approaches. The first baseline algorithm, referred to as Least Mean Square (LMS), uses a stochastic gradient algorithm to determine the optimal separating hyperplane in the sense of the criterion given by Equation 6 (see pseudo-code in Figure 3). The second baseline algorithm is an elementary evolutionary algorithm, producing the best-of-run separating hyperplane minimizing the (training) error rate³. The third reference algorithm is the prototypical ensemble learning algorithm, namely AdaBoost with its default parameters [8]. AdaBoost uses simple decision stumps [23] as weak learner (more on this below). The learning error is classically viewed as composed of a variance term and a bias term [1]. The bias term measures how far the target concept $tc$ is from the classifier search space $\mathcal{H}$, that is, from the best classifier $h^*$ in this search space.
The variance term measures how far away one can wander from $h^*$, wrongly selecting other classifiers in $\mathcal{H}$ (overfitting). The comparison of the first and second baseline algorithms gives some insight into the intrinsic difficulty of the problem. Stochastic gradient (LMS) will find the global optimum for the criterion given by Equation 6, but this solution optimizes at best the training error. The comparison between the solutions respectively found by LMS and the simple evolutionary algorithm will thus reflect the learning variance term. Similarly, the comparison of the first baseline algorithm and AdaBoost gives some insight into how the ensemble improves on the base weak learner; this improvement can be interpreted in terms of variance as well as in terms of bias (since the majority vote of decision stumps allows for describing more complex regions than simple separating hyperplanes alone).

## 4.3 Experimental Setting

The parameters for the LMS algorithm (see Figure 3) are as follows: the training rate, set to $\eta(t) = 1/(n\sqrt{t})$, decreases over the training epochs; the maximum number of epochs allowed is $T = 10000$; the stopping criterion is when the difference in the error rates over two consecutive epochs is less than some threshold $\epsilon$ ($\epsilon = 10^{-7}$). Importantly, LMS requires a preliminary normalization of the dataset (e.g. $\forall i = 1 \ldots n,\ \mathbf{x}_i \in [-1, 1]^d$). The final result is the error on the test set, averaged over 10 runs for each fold (because of the stochastic reordering of the training set) and averaged over 10 folds. The classical AdaBoost algorithm [8] uses simple decision stumps [23], and the number of Boosting iterations is limited to 2000. Decision stumps are simple binary classifiers that

³ For 3-class problems, e.g. bos or cmc, the classifier is characterized as two hyperplanes, respectively separating class 0 (resp. class 1) from the other two classes.
In case of conflict (the example is simultaneously classified in class 0 by the first classifier and in class 1 by the second classifier), the tie is broken arbitrarily.
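The LMS baseline described above can be sketched as plain Python. This is a sketch, not the authors' code: the learning-rate schedule η(t) = 1/(n√t), epoch limit and stopping threshold follow the experimental setting in the text, while the sign convention inside the gradient step and the toy dataset in the usage example are illustrative assumptions.

```python
import math
import random

def lms_train(X, y, T=10000, eps=1e-7, seed=0):
    """Stochastic-gradient LMS fit of a separating hyperplane h(x) = <w, x> - b.

    Minimizes sum_i (h(x_i) - y_i)^2 with learning rate eta(t) = 1/(n*sqrt(t)),
    shuffling the dataset at each epoch, as in the LMS baseline pseudo-code.
    """
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    prev_err = float("inf")
    for t in range(1, T + 1):
        eta = 1.0 / (n * math.sqrt(t))
        order = list(range(n))
        rng.shuffle(order)  # stochastic reordering of the training set
        sq = 0.0
        for i in order:
            a = sum(wj * xj for wj, xj in zip(w, X[i])) - b
            delta = 2.0 * eta * (y[i] - a)  # assumed descent direction
            w = [wj + delta * xj for wj, xj in zip(w, X[i])]
            b = b - delta
            sq += (a - y[i]) ** 2
        err = math.sqrt(sq / n)  # RMS error over the epoch
        if abs(err - prev_err) < eps:
            break
        prev_err = err
    return w, b
```

For instance, `lms_train([[-1.0], [-0.8], [0.8], [1.0]], [-1, -1, 1, 1])` yields a hyperplane whose sign separates the two made-up classes.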
Table 1: UCI datasets used for the experimentations.

| Dataset | Size | # features | # classes | Application domain |
|---------|------|------------|-----------|--------------------|
| bcw | 683 | 9 | 2 | Wisconsin's breast cancer, 65 % benign and 35 % malignant. |
| bld | 345 | 6 | 2 | BUPA liver disorders, 58 % with disorders and 42 % without disorder. |
| bos | 508 | 13 | 3 | Boston housing, 34 % with median value v < 18.77 K$, 33 % with v ∈ ]18.77, 23.74], and 33 % with v > 23.74. |
| cmc | 1473 | 9 | 3 | Contraceptive method choice, 43 % not using contraception, 35 % using short-term contraception, and 23 % using long-term contraception. |
| pid | 768 | 8 | 2 | Pima indians diabetes, 65 % tested negative and 35 % tested positive for diabetes. |
| spa | 4601 | 57 | 2 | Junk e-mail classification, 61 % tested non-junk and 39 % tested junk. |

<image>

Figure 3: Pseudo-code of the LMS baseline algorithm.

1. Initialize w = 0 and b = 0
2. For t = 1 . . . T:
   (a) Shuffle the dataset E = {(xᵢ, yᵢ), i = 1 . . . n}
   (b) For i = 1 . . . n:
       aᵢ = <w, xᵢ> − b
       ∆ᵢ = 2η(t)(yᵢ − aᵢ)
       w = w + ∆ᵢxᵢ
       b = b − ∆ᵢ
   (c) Errₜ = √( (1/n) Σᵢ₌₁...ₙ (aᵢ − yᵢ)² )  (RMS error)
   (d) If |Errₜ − Errₜ₋₁| < ε, stop

classify data according to a threshold value on one of the features of the data set. If the feature value of a given data point is less (or greater) than the threshold, the point is assigned to a given class; otherwise it is assigned to the other class.
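The threshold rule just described can be sketched as follows; the exhaustive search over features, thresholds and comparison directions mirrors the deterministic stump training described in the text, while the representation of a stump as a `(feature, threshold, direction)` triple and the candidate-threshold set are illustrative assumptions.

```python
def train_stump(X, y):
    """Pick the (feature, threshold, direction) triple maximizing training accuracy.

    A stump predicts +1 when direction * (x[feature] - threshold) > 0, else -1.
    Candidate thresholds are the observed feature values (an assumption)."""
    n, d = len(X), len(X[0])
    best, best_acc = (0, 0.0, 1), -1.0
    for f in range(d):
        for thr in sorted({x[f] for x in X}):
            for direction in (+1, -1):
                acc = sum(
                    1 for x, yy in zip(X, y)
                    if (1 if direction * (x[f] - thr) > 0 else -1) == yy
                ) / n
                if acc > best_acc:
                    best_acc, best = acc, (f, thr, direction)
    return best

def stump_predict(stump, x):
    f, thr, direction = stump
    return 1 if direction * (x[f] - thr) > 0 else -1
```

On a toy set where the first feature separates the classes, the learned stump thresholds that feature and ignores the constant one.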
Decision stumps are trained deterministically, by looping over all features and all feature thresholds for a given training dataset, selecting the feature, threshold, and comparison operation on the threshold (> or <) that maximize the classification accuracy on the training data set. Decision stumps are the simplest possible linear classifiers, but generate good results in combination with AdaBoost. The elementary evolutionary algorithm is a real-valued generational GA using SBX crossover, Gaussian mutations, and tournament selection. The search space is IR^{d+1} for binary classification problems, and IR^{2d+2} for ternary classification problems, where d is the number of attributes in the problem domain. The evolutionary parameters are detailed in Table 2. All experiments with the real-valued GA rely on the C++ framework Open BEAGLE [9, 10].

## 5. Results

This section reports on the experimental results obtained by Off-EEL and On-EEL, compared to the three baseline methods respectively noted LMS (optimal linear classifier), GA (genetically evolved linear classifier) and Boosting (ensemble of decision stumps), on the six UCI data sets described in Table 1. For each method and problem, the average test error (over 100 independent runs as described in Section 4) and the associated standard deviation are displayed in Table 3. The average computational effort of Off-EEL for a run ranges from 30 seconds (on problem bld) to 20 minutes (on problem spa), on AMD Athlon 1800+ computers with 1 GB of memory. For On-EEL, the average computational effort for a run ranges from 2 hours (on problem pid) to 24 hours (on problem spa), on the same computers. With respect to the baseline algorithms, a first remark is that the LMS-based classifier is significantly outperformed by all other methods, on all problems but one (pid).
This is explained by the fact that the criterion given by Equation 6 uselessly over-constrains the learning problem, replacing a set of linear inequalities with the minimization of a sum of quadratic terms. Similarly, the single-hypothesis evolutionary learning is dominated by all other methods on all problems but one (bcw). Boosting shows its acknowledged efficiency, as it is the best algorithm on two out of six problems (Off-EEL and Boosting are both best performers for the cmc problem). Off-EEL is the best method for three out of six problems tested. Compared to AdaBoost, it generates ensembles with a lower test error rate on four problems, with a tie for the cmc problem, and AdaBoost being the best on the spa problem. In all cases, the number of classifiers is lower, with an average between 235 and 335 classifiers for Off-EEL, compared with more than 750 on all problems but bcw for Boosting. This is understandable given that the ensembles are built with Off-EEL starting from a population of 500 individuals. This raises the question of whether the evolutionary learning accuracy could be improved by considering larger population sizes. But it should not be forgotten that the decision stump classifiers making up the AdaBoost ensembles are significantly simpler than the evolved linear discriminants of Off-EEL. No clear conclusion can thus be made on the relative complexity of the ensembles generated by Off-EEL

Table 2: Parameters for the real-valued GA.

| Parameter | Value |
|-----------|-------|
| Population size | 500 |
| Termination criteria | 100000 fitness evaluations |
| Tournament size | 2 |
| Initialization range | [-1,1] |
| SBX crossover prob. | 0.3 |
| SBX crossover n-value | n = 2 |
| Gaussian mutation prob. | 0.1 |
| Gaussian mutation std. dev. | σ = 0.05 |
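An ensemble of separating hyperplanes produces its prediction by a vote over the base classifiers. A minimal sketch follows; the unweighted ±1 vote and the tie-break toward +1 are assumptions for illustration (the text only states that ties are broken arbitrarily), and the hyperplanes in the usage example are made up.

```python
def hyperplane(w, b):
    """Separating-hyperplane classifier h(x) = sign(<w, x> - b), as a closure."""
    return lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) - b > 0 else -1

def ensemble_vote(classifiers, x):
    """Unweighted majority vote over +/-1 base classifiers.

    Breaking ties toward +1 is an arbitrary choice, mirroring the arbitrary
    tie-breaking mentioned in the text."""
    return 1 if sum(h(x) for h in classifiers) >= 0 else -1
```

For example, with three made-up hyperplanes of which two classify `x = [0.3]` as positive, the vote returns +1.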
| Measure | LMS | GA | Boosting | Off-EEL | On-EEL |
|---------|-----|----|----------|---------|--------|
| **bcw** | | | | | |
| Train error | 3.9% (0.2%) | 1.8% (0.2%) | 0.0% (0.0%) | 1.4% (0.2%) | 0.4% (0.4%) |
| Test error | 4.0% (1.6%) | 3.2% (1.7%) | 5.3% (2.0%) | 3.4% (1.7%) | 3.5% (2.0%) |
| Test error p-value | 0.00 | - | 0.00 | 0.09 | 0.04 |
| Ensemble size | - | - | 291.6 (68.2) | 235.6 (66.8) | 116.3 (278.2) |
| **bld** | | | | | |
| Train error | 29.8% (0.9%) | 25.4% (1.2%) | 0.0% (0.0%) | 20.9% (1.5%) | 18.9% (2.0%) |
| Test error | 30.4% (6.6%) | 32.7% (6.6%) | 30.4% (5.4%) | 29.2% (7.4%) | 29.5% (8.4%) |
| Test error p-value | 0.04 | 0.00 | 0.14 | - | 0.64 |
| Ensemble size | - | - | 1081.4 (166.1) | 301.0 (37.9) | 294.1 (154.2) |
| **bos** | | | | | |
| Train error | 32.2% (1.3%) | 23.4% (4.1%) | 0.0% (0.0%) | 16.7% (1.9%) | 20.9% (2.3%) |
| Test error | 34.0% (6.7%) | 30.7% (7.5%) | 26.9% (4.2%) | 22.7% (5.7%) | 26.2% (7.2%) |
| Test error p-value | 0.00 | 0.00 | 0.00 | - | 0.00 |
| Ensemble size | - | - | 761.1 (40.8) | 303.8 (41.4) | 2960.9 (2109.3) |
| **cmc** | | | | | |
| Train error | 51.6% (0.4%) | 45.7% (1.4%) | 43.3% (0.7%) | 42.9% (1.2%) | 43.9% (1.4%) |
| Test error | 51.8% (2.5%) | 50.4% (3.9%) | 46.8% (2.9%) | 46.8% (3.9%) | 47.7% (3.9%) |
| Test error p-value | 0.00 | 0.00 | 0.99 | - | 0.04 |
| Ensemble size | - | - | 4000.0 (0.0) | 326.4 (35.7) | 2707.7 (1696.1) |
| **pid** | | | | | |
| Train error | 22.0% (0.6%) | 20.2% (0.7%) | 0.6% (0.5%) | 19.8% (0.7%) | 20.0% (0.8%) |
| Test error | 22.8% (3.5%) | 24.2% (3.9%) | 28.1% (5.0%) | 24.0% (4.0%) | 24.0% (3.9%) |
| Test error p-value | - | 0.00 | 0.00 | 0.00 | 0.00 |
| Ensemble size | - | - | 1978.1 (43.0) | 309.5 (37.6) | 1196.3 (765.7) |
| **spa** | | | | | |
| Train error | 11.1% (0.4%) | 7.9% (0.5%) | 1.4% (0.1%) | 6.1% (0.2%) | 7.6% (0.8%) |
| Test error | 11.3% (1.2%) | 9.0% (1.3%) | 5.7% (0.8%) | 6.7% (1.2%) | 8.3% (1.4%) |
| Test error p-value | 0.00 | 0.00 | - | 0.00 | 0.00 |
| Ensemble size | - | - | 2000.0 (0.0) | 331.1 (28.4) | 6890.0 (2938.1) |

Table 3: Results on the UCI datasets based on 10-fold cross-validation, using 10 independent runs over each fold. Values are averages (standard deviations) over the 100 runs. Statistical tests are p-values of paired t-tests on the test error rate compared to that of the best method on the dataset.

compared to Boosting. Despite its larger ensemble size, On-EEL is dominated by Off-EEL on all problems but pid, where both approaches generate identical test error rates. A tentative explanation stems from the nature of the two approaches: Off-EEL has a clear algorithm organized in two stages, classifier evolution with a diversity-enhancing fitness followed by ensemble construction, while On-EEL is more complex, with a succession of ensemble construction and classifier evolution steps, with a diversity-enforcing measure taken relative to the current ensemble. The dynamics of On-EEL is hard to understand, but it can be speculated that the iterative construction of the ensemble (without individual removal) is prone to be stuck in local optima. Indeed, the "construction path" taken to build the ensemble begins with a selection of some (supposedly poor) individuals at the beginning of the evolution. As these individuals cannot be removed from the ensemble, they significantly influence the choice of other individuals, biasing and possibly misleading the whole process.

## 6. Discussion And Perspectives

This paper has examined the "Evolutionary Ensemble Learning for Free" claim, based on the fact that, since Evolutionary Algorithms maintain a population of solutions, it comes naturally to use these populations as a pool for building classifier ensembles.
Two main issues have been studied, respectively concerned with enforcing the diversity of the population of classifiers, and with selecting the classifiers either in the final population or along evolution. The use of a co-evolution-inspired fitness function, generalizing [18], was found sufficient to generate diverse classifiers. As already noted, there is a great similarity between the co-evolution of programs and fitness cases [13] and the Boosting principles [8]; the common idea is that good classifiers are learned from good examples, while good examples are generalized by good classifiers. The difference between Boosting and co-evolution is that in Boosting, the training examples are not evolved; instead, their weights are updated. However, the uncontrolled growth of some weights, typically in the case of noisy examples, actually appears as the Achilles' heel of Boosting compared to Bagging. Basically, AdaBoost can be viewed as a dynamic system [22]; the possible instability or periodicity of this dynamic system has undesired consequences on the ensemble learning performance. The use of co-evolutionary ideas, even though the set of examples does not evolve, seems to increase the
stability of the learning process. The two EEL frameworks investigated in this paper can be considered as promising. Off-EEL constructs ensembles with the best performances while needing few modifications over a traditional evolutionary algorithm, namely a diversity-enhancing fitness and the construction of an ensemble from the final population. But the size of the ensembles generated suggests that a bigger population would lead to bigger and possibly better ensembles. For the sake of scalability, this suggests that the ensemble should be gradually constructed along evolution, instead of considering only the final population. This has been explored with On-EEL, with lower performance compared to Off-EEL. It is suggested that ensemble construction with On-EEL is prone to be stuck in local minima, so some capability of removing individuals could be beneficial, at the risk of inducing a highly dynamic algorithm. Ultimately, the momentum and dynamics of EEL should be controlled by evolution itself, enforcing some trade-off between exploring new regions and preserving efficient optimization. This will be the subject of future research.

## Acknowledgments

This work was supported by postdoctoral fellowships from the ERCIM-SARIT (Europe), the Swiss National Science Foundation (Switzerland), and the FQRNT (Québec) to C. Gagné. The second and third authors gratefully acknowledge the support of the Pascal Network of Excellence IST-2002-506778.

## 7. References

[1] L. Breiman. Arcing classifiers. Annals of Statistics, 26(3):801–845, 1998.
[2] A. Chandra and X. Yao. Ensemble learning using multi-objective evolutionary algorithms. J. of Mathematical Modelling and Algorithms, 5(4):417–425, 2006.
[3] A. Chandra and X. Yao. Evolving hybrid ensembles of learning machines for better generalisation. Neurocomputing, 69:686–700, 2006.
[4] T. G. Dietterich. Approximate statistical tests for comparing supervised classification learning algorithms.
Neural Computation, 10:1895–1923, 1998.
[5] T. G. Dietterich. Ensemble methods in machine learning. In First Int. Workshop on Multiple Classifier Systems, pages 1–15, 2000.
[6] R. Esposito and L. Saitta. Monte Carlo theory as an explanation of Bagging and Boosting. In Proc. of the Int. Joint Conf. on Artificial Intelligence (IJCAI'03), pages 499–504, 2003.
[7] G. Folino, C. Pizzuti, and G. Spezzano. Ensemble techniques for parallel genetic programming based classifiers. In Proc. of the European Conf. on Genetic Programming (EuroGP'03), pages 59–69, 2003.
[8] Y. Freund and R. Schapire. Experiments with a new Boosting algorithm. In Proc. of the Int. Conf. on Machine Learning (ICML'96), pages 148–156, 1996.
[9] C. Gagné and M. Parizeau. Genericity in evolutionary computation software tools: Principles and case-study. Int. J. on Artificial Intelligence Tools, 15(2):173–194, 2006.
[10] C. Gagné and M. Parizeau. Open BEAGLE: An evolutionary computation framework in C++. http://beagle.gel.ulaval.ca, 2006.
[11] R. Gilad-Bachrach, A. Navot, and N. Tishby. Margin based feature selection - theory and algorithms. In Proc. of the Int. Conf. on Machine Learning (ICML'04), pages 43–50, 2004.
[12] I. Guyon, S. Gunn, M. Nikravesh, and L. Zadeh, editors. Feature Extraction: Foundations And Applications. Springer-Verlag, 2006.
[13] W. D. Hillis. Co-evolving parasites improve simulated evolution as an optimization procedure. Physica D, 42:228–234, 1990.
[14] J. Holland. Escaping brittleness: The possibilities of general-purpose learning algorithms applied to parallel rule-based systems. In Machine Learning, An Artificial Intelligence Approach, volume 2, pages 593–623. Morgan Kaufmann, 1986.
[15] J. Holmes, P. Lanzi, W. Stolzmann, and S. Wilson. Learning classifier systems: New models, successful applications. Information Processing Letters, 82(1):23–30, 2002.
[16] H. Iba.
Bagging, Boosting, and bloating in genetic programming. In Proc. of the Genetic and Evolutionary Computation Conference (GECCO'99), pages 1053–1060, 1999.
[17] M. Keijzer and V. Babovic. Genetic programming, ensemble methods, and the bias/variance tradeoff - introductory investigations. In Proc. of the European Conf. on Genetic Programming (EuroGP'00), pages 76–90, 2000.
[18] Y. Liu, X. Yao, and T. Higuchi. Evolutionary ensembles with negative correlation learning. IEEE Trans. on Evolutionary Computation, 4(4):380–387, 2000.
[19] D. Newman, S. Hettich, C. Blake, and C. Merz. UCI repository of machine learning databases. http://www.ics.uci.edu/~mlearn/MLRepository.html, 1998.
[20] J. Paredis. Coevolving cellular automata: Be aware of the Red Queen! In Proc. of the Int. Conf. on Genetic Algorithms (ICGA'97), pages 393–400, 1997.
[21] G. Paris, D. Robilliard, and C. Fonlupt. Applying Boosting techniques to genetic programming. In Artificial Evolution 2001, volume 2310 of LNCS, pages 267–278. Springer Verlag, 2001.
[22] C. Rudin, I. Daubechies, and R. E. Schapire. The dynamics of AdaBoost: Cyclic behavior and convergence of margins. J. of Machine Learning Research, 5:1557–1595, 2004.
[23] R. Schapire, Y. Freund, P. Bartlett, and W. Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. Annals of Statistics, 26(5):1651–1686, 1998.
[24] D. Song, M. I. Heywood, and A. N. Zincir-Heywood. Training genetic programming on half a million patterns: an example from anomaly detection. IEEE Trans. on Evolutionary Computation, 9(3):225–239, 2005.
[25] V. N. Vapnik. Statistical Learning Theory. Wiley, New York, NY (USA), 1998.
# Fault Classification In Cylinders Using Multilayer Perceptrons, Support Vector Machines And Gaussian Mixture Models

Tshilidzi Marwala^a, Unathi Mahola^a and Snehashish Chakraverty^b

^a School of Electrical and Information Engineering, University of the Witwatersrand, Private Bag x 3, Wits 2050, South Africa. e-mail: t.marwala@ee.wits.ac.za
^b Central Building Research Institute, Roorkee-247 667, U.A., India. e-mail: sne_chak@yahoo.com

## Abstract

Gaussian mixture models (GMM) and support vector machines (SVM) are introduced to classify faults in a population of cylindrical shells. The proposed procedures are tested on a population of 20 cylindrical shells and their performance is compared to the procedure that uses multi-layer perceptrons (MLP). The modal properties extracted from vibration data are used to train the GMM, SVM and MLP. It is observed that the GMM gives 98 % and the SVM 94 % classification accuracy, while the MLP gives an 88 % classification rate.

## 1. Introduction

Vibration data have been used with varying degrees of success to classify damage in structures [1]. The fault classification process involves various stages: data extraction, data processing, data analysis and fault classification. The data extraction process involves the choice of data to be extracted and the method of extraction. Data that have been used for fault classification include strain concentrations in structures and vibration data, where strain gauges and accelerometers are used respectively [1]. In this paper vibration data processed using modal analysis are used for fault classification. In the data processing stage the measured vibration data need to be processed, mainly because the measured vibration data, which are in the time domain, are difficult to use in raw form. Thus far the time-domain vibration data have been transformed to the modal domain, the frequency domain and the time-frequency domain [2,3].
In this paper the time-domain vibration data set is transformed into the modal domain, where it is represented as natural frequencies and mode shapes. The processed data need to be analysed, and the general trend has been to automate the analysis process and thus automate the fault classification process. To achieve this goal an intelligent pattern recognition process needs to be employed, and methods such as neural networks have been widely applied [1]. There are many types of neural networks that have been employed, including the multi-layer perceptron (MLP), radial basis function (RBF) and Bayesian neural networks [4,5]. Recently, new pattern recognition methods called support vector machines (SVMs) and Gaussian mixture models (GMMs) have been proposed and found to be particularly suited to classification problems [6]. SVMs have been found to outperform neural networks [7]. One of the examples where the fault classification process summarized at the beginning of this paper has been implemented is fault classification in a population of nominally identical cylindrical
Materials & Structures, 10:540–547, 2001.
[10] S. Haykin. Neural Networks. Prentice-Hall, Inc, New York, USA, 1995.
[11] G.E. Hinton. Learning translation invariant recognition in massively parallel networks. Proceedings PARLE Conference on Parallel Architectures and Languages, 1–13, 1987.
[12] M. Møller. A scaled conjugate gradient algorithm for fast-supervised learning. Neural Networks, 6:525–533, 1993.
[13] K.R. Müller, S. Mika, G. Rätsch, K. Tsuda, B. Schölkopf. An introduction to kernel-based learning algorithms. IEEE Transactions on Neural Networks, 12:181–201, 2001.
[14] V. Vapnik. The Nature of Statistical Learning Theory. New York: Springer Verlag, 1995.
[15] C.A. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2:121–167, 1998.
[16] E. Habtemariam, T. Marwala, M. Lagazio. Artificial intelligence for conflict management. Proceedings of the IEEE International Joint Conference on Neural Networks, Montreal, Canada, 2583–2588, 2005.
[17] B. Schölkopf, A.J. Smola. A short introduction to learning with kernels. Proceedings of the Machine Learning Summer School, Springer, Berlin, 41–64, 2003.
[18] A. Dempster, N. Laird, D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39:1–38, 1977.
[19] Y-F. Wang, P.J. Kootsookos. Modeling of low shaft speed bearing faults for condition monitoring. Mechanical Systems and Signal Processing, 12:415–426, 1998.
[20] R. Bellman. Adaptive Control Processes: A Guided Tour. Princeton University Press, New Jersey, USA, 1961.
[21] I.T. Jollife. Principal Component Analysis. Springer-Verlag, New York, USA, 1986.
[22] D.J. Ewins. Modal Testing: Theory and Practice. Research Studies Press, Letchworth, UK, 1995.
[23] N.M.M. Maia, J.M.M. Silva. Theoretical and Experimental Modal Analysis. Research Studies Press, Letchworth, UK, 1997.
shells [2,3,4]. The fault identification process in a population of nominally identical structures is particularly important in areas such as automated manufacturing in the assembly line. Thus far various forms of neural networks such as MLP and Bayesian neural networks have been successfully used to classify faults in structures [8]. Worden and Lane [9] used SVMs to identify damage in structures. However, SVMs have not been used for fault classification in a population of cylinders. Based on the successes of SVMs observed in other areas, we therefore propose in this paper SVMs and GMMs for classifying faults in a population of nominally identical cylindrical shells. This paper is organized as follows: neural networks, SVMs and GMMs are summarized, the experiment performed is described, and the results and conclusions are discussed.

## 2. Neural Networks

Neural networks are parameterised graphs that make probabilistic assumptions about data; in this paper these data are modal domain data and their respective classes of faults. Multi-layer perceptron neural networks are trained to give a relationship between the modal domain data and the fault classes. As mentioned earlier, there are several types of neural network procedures, such as the multi-layer perceptron, radial basis function, Bayesian networks and recurrent networks [5]; in this paper the MLP is used. This network architecture contains hidden units and output units and has one hidden layer.
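The one-hidden-layer MLP just described can be sketched as a forward pass; hyperbolic-tangent hidden units and logistic outputs follow the text, while the weight values in the usage example are illustrative placeholders, not trained parameters.

```python
import math

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass of a one-hidden-layer MLP: tanh hidden units, logistic outputs.

    W1 is an M x d first-layer weight matrix with hidden biases b1;
    W2 is a K x M second-layer weight matrix with output biases b2."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return [1.0 / (1.0 + math.exp(-(sum(w * h for w, h in zip(row, hidden)) + b)))
            for row, b in zip(W2, b2)]
```

With all weights at zero the logistic output is exactly 0.5; training (e.g. by scaled conjugate gradient, as in the paper) would adjust the weights to map modal data to fault classes.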
For the MLP the relationship between the output y_k, representing the fault class, and x, representing the modal data, may be written as follows [5,10,11]:

$$y_{k}=f_{\mathrm{outer}}\left(\sum_{j=1}^{M}w_{kj}^{(2)}\,f_{\mathrm{inner}}\left(\sum_{i=1}^{d}w_{ji}^{(1)}x_{i}+w_{j0}^{(1)}\right)+w_{k0}^{(2)}\right)\qquad(1)$$

Here, $w_{ji}^{(1)}$ and $w_{kj}^{(2)}$ indicate weights in the first and second layer, respectively, going from input i to hidden unit j, M is the number of hidden units, d is the number of input units, while $w_{j0}^{(1)}$ and $w_{k0}^{(2)}$ indicate the biases for the hidden unit j and the output unit k. In this paper, the function f_outer(•) is logistic while f_inner is a hyperbolic tangent function. Training the neural network identifies the weights in equation 1, and in this paper the scaled conjugate gradient method is used [12].

## 3. Support Vector Machines (SVMs)

According to [13], the classification problem can be formally stated as estimating a function f: R^N → {−1, 1} based on input-output training data generated from an independently, identically distributed unknown probability distribution P(x, y), such that f will be able to classify previously unseen (x, y) pairs. Here x is the modal data while y is the fault class. The best such function is the one that minimizes the expected error (risk), which is given by

$$R[f]=\int l\big(f(\mathbf{x}),y\big)\,dP(\mathbf{x},y)\qquad(2)$$

where l represents a loss function [13]. Since the underlying probability distribution P is unknown, equation 2 cannot be solved directly. The best we can do is find an upper bound for the risk function [14], which is given by
However, assuming that we can only access the feature space using dot products, equation 7 is transformed into a dual optimization problem. Introducing Lagrangian multipliers αᵢ, i = 1, 2, ..., n, and using the minimisation, maximisation and saddle point property of the optimal point [14,15,16], the problem becomes

$$\max_{\alpha}\;\sum_{i=1}^{n}\alpha_{i}-\frac{1}{2}\sum_{i,j=1}^{n}\alpha_{i}\alpha_{j}y_{i}y_{j}k(x_{i},x_{j})\qquad(8)$$

subject to

$$\sum_{i=1}^{n}\alpha_{i}y_{i}=0,\qquad\alpha_{i}\geq0,\;i=1,\dots,n.$$

The Lagrangian coefficients αᵢ are obtained by solving equation 8, which in turn is used to solve for w to give the non-linear decision function [12]

$$f(x)=\mathrm{sgn}\left(\sum_{i=1}^{n}y_{i}\alpha_{i}\big(\Phi(x)\cdot\Phi(x_{i})\big)+b\right)=\mathrm{sgn}\left(\sum_{i=1}^{n}y_{i}\alpha_{i}k(x,x_{i})+b\right)\qquad(9)$$

In the case when the data are not linearly separable, slack variables ξᵢ, i = 1,..., n are introduced to relax the constraints of the margin as

$$y_{i}\big((\mathbf{w}\cdot\Phi(x_{i}))+b\big)\geq1-\xi_{i},\qquad\xi_{i}\geq0,\;i=1,\dots,n.\qquad(10)$$

A trade-off is made between the VC dimension and the complexity term of equation 3, which gives the optimisation problem

$$\min_{\mathbf{w},b,\xi}\;\frac{1}{2}\left\|\mathbf{w}\right\|^{2}+C\sum_{i=1}^{n}\xi_{i}\qquad(11)$$

where C > 0 is a regularisation constant that determines the above-mentioned trade-off.
The dual optimisation problem is then given by [12]

$$\max_{\alpha}\;\sum_{i=1}^{n}\alpha_{i}-\frac{1}{2}\sum_{i,j=1}^{n}\alpha_{i}\alpha_{j}y_{i}y_{j}k(x_{i},x_{j})\qquad(12)$$

subject to

$$\sum_{i=1}^{n}\alpha_{i}y_{i}=0,\qquad0\leq\alpha_{i}\leq C,\;i=1,\dots,n.$$

A Karush-Kuhn-Tucker (KKT) condition, which says that only the αᵢ's associated with the training values xᵢ on or inside the *margin* area have non-zero values, is applied to the above optimisation problem to find the αᵢ's, the threshold variable b, and the decision function f [17].

## 4. Gaussian Mixture Models (GMMs)

The GMM non-linear pattern classifier also works by creating a maximum likelihood model for each fault case, given by [18]

$$\lambda=\{w,\mu,\Sigma\}\qquad(13)$$

where w, μ, Σ are the weights, means and diagonal covariances of the features. Given a collection of training vectors, the parameters of this model are estimated by a number of algorithms such as the Expectation-Maximization (EM) algorithm [18]. In this paper,
the EM algorithm is used since it has a reasonably fast computational time when compared to other algorithms. The EM algorithm finds the optimum model parameters by iteratively refining the GMM parameters to increase the likelihood of the estimated model for the given fault feature modal vector. For the EM equations for training a GMM, the reader is referred to [19]. Fault detection or diagnosis using this classifier is then achieved by computing the likelihood of the unknown modal data under the different fault models. This likelihood is given by [18]

$$\hat{s}=\arg\max_{1\leq f\leq F}\sum_{k=1}^{K}\log p(\mathbf{x}_{k}\,|\,\lambda_{f})\qquad(14)$$

where F represents the number of faults to be diagnosed, X = {x₁, x₂, ..., x_K} is the unknown D-dimensional fault modal data, and p(x_k | λ_f) is the mixture density function given by [18]

$$p(\mathbf{x}\,|\,\lambda)=\sum_{i=1}^{M}w_{i}\,p_{i}(\mathbf{x})\qquad(15)$$

with

$$p_{i}(\mathbf{x}_{k})=\frac{1}{(2\pi)^{D/2}\left|\Sigma_{i}\right|^{1/2}}\exp\Big\{-\frac{1}{2}\left(\mathbf{x}_{k}-\mu_{i}\right)^{T}\Sigma_{i}^{-1}\left(\mathbf{x}_{k}-\mu_{i}\right)\Big\}\qquad(16)$$

It should be noted that the mixture weights wᵢ satisfy the constraint $\sum_{i=1}^{M}w_{i}=1$.

## 5. Input Data

This section describes the inputs that are used to test the SVM, MLP and GMM. When modal analysis is used for fault classification it is often found that there are more parameters extracted from the vibration data than can possibly be used for MLP, SVM and GMM training. These data must therefore be reduced because of a phenomenon called the curse of dimensionality [20], which refers to the difficulties associated with the feasibility of density estimation in many dimensions.
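The GMM-based diagnosis of Equations 14–16 amounts to scoring the observed modal vectors under each fault model and picking the most likely fault. A sketch with diagonal covariances follows; the two single-component models in the usage example are made-up placeholders, not fitted parameters.

```python
import math

def log_gauss_diag(x, mean, var):
    """Log of a diagonal-covariance Gaussian density (cf. Equation 16)."""
    return sum(-0.5 * (math.log(2.0 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

def gmm_loglik(x, model):
    """log p(x | lambda) for a mixture given as [(w_i, mu_i, var_i), ...] (Equation 15)."""
    return math.log(sum(w * math.exp(log_gauss_diag(x, mu, var))
                        for w, mu, var in model))

def classify_fault(X, models):
    """Equation 14: index of the fault model maximizing sum_k log p(x_k | lambda_f)."""
    return max(range(len(models)),
               key=lambda f: sum(gmm_loglik(x, models[f]) for x in X))
```

With two made-up one-component models centred at 0 and 5, observations near 0 are assigned to the first fault model and observations near 5 to the second.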
However, this reduction process must be conducted such that the loss of essential information is minimized. The techniques implemented in this paper to reduce the dimension of the input data remove parts of the data that do not contribute significantly to the dynamics of the system being analysed, or that are too sensitive to irrelevant parameters. To achieve this, we implement principal component analysis, which is discussed in the next section.

## 5.1 Principal Component Analysis

In this paper we use principal component analysis (PCA) [20,21] to reduce the input data to independent components. PCA orthogonalizes the components of the input vector so that they are uncorrelated with each other. In PCA, correlations and interactions among variables in the data are summarised in terms of a small number of underlying factors.

## 6. Foundations Of Dynamics

As indicated earlier, in this paper modal properties, i.e. natural frequencies and mode shapes, are extracted from the measured vibration data and used for fault classification. For this reason the foundation of these parameters is described in this section. All elastic structures may be described in the time domain as [22]
$$[M]\{\ddot{X}\}+[C]\{\dot{X}\}+[K]\{X\}=\{F\}\qquad(17)$$

where [M], [C] and [K] are the mass, damping and stiffness matrices respectively; {X}, {Ẋ} and {Ẍ} are the displacement, velocity and acceleration vectors, respectively; and {F} is the applied force vector. If equation 17 is transformed into the modal domain to form an eigenvalue equation for the i-th mode, then [23]

$$\big(-\omega_{i}^{2}[M]+j\omega_{i}[C]+[K]\big)\{\phi\}_{i}=\{0\}\qquad(18)$$

where j = √−1, ωᵢ is the i-th complex eigenvalue, with its imaginary part corresponding to the natural frequency, {0} is the null vector, and {φ}ᵢ is the i-th complex mode shape vector, with the real part corresponding to the normalized mode shape. From equation 18 it is evident that changes in the mass and stiffness matrices cause changes in the modal properties, and thus modal properties are damage indicators.

## 7. Example: Cylindrical Shells

## 7.1 Experimental Procedure

In this section the procedures using the GMM and SVM are experimentally validated and compared to the procedure using the MLP. The experiment is performed on a population of cylinders, which are supported by inserting a sponge rested on a bubble-wrap, to simulate a 'free-free' environment [see Figure 2]. The impulse hammer test is performed on each of the 20 steel seam-welded cylindrical shells. The impulse is applied at 19 different locations as indicated in Figure 2. More details on this experiment may be found in [4]. Each cylinder is divided into three equal substructures, and holes of 10-15 mm in diameter are introduced at the centers of the substructures to simulate faults.

<image>
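The damage-indicator argument of Equation 18 can be illustrated on a made-up undamped two-degree-of-freedom chain: with [C] = 0, the squared natural frequencies are the roots of det([K] − ω²[M]) = 0, and reducing a stiffness entry (mimicking a hole) shifts them downward. All numbers below are illustrative, not measurements from the cylinders.

```python
import math

def natural_frequencies_2dof(m1, m2, k1, k2):
    """Undamped natural frequencies of a 2-DOF chain (masses m1, m2; springs k1, k2).

    Solves det(K - w^2 M) = 0 for M = diag(m1, m2) and
    K = [[k1 + k2, -k2], [-k2, k2]], i.e. the quadratic in w^2:
    m1*m2*w^4 - (m1*k2 + m2*(k1 + k2))*w^2 + k1*k2 = 0."""
    a = m1 * m2
    b = -(m1 * k2 + m2 * (k1 + k2))
    c = k1 * k2
    disc = math.sqrt(b * b - 4.0 * a * c)
    w2 = sorted([(-b - disc) / (2.0 * a), (-b + disc) / (2.0 * a)])
    return [math.sqrt(v) for v in w2]
```

Comparing `natural_frequencies_2dof(1.0, 1.0, 2.0, 2.0)` with a "damaged" `natural_frequencies_2dof(1.0, 1.0, 1.0, 2.0)` shows both natural frequencies drop when k1 is reduced, which is the sense in which modal properties indicate damage.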
For one cylinder the first type of fault is a zero-fault scenario, given the identity [0 0 0], indicating that there are no faults in any of the three substructures. The second type of fault is a one-fault scenario, where a hole may be located in any one of the three substructures; the three possible one-fault scenarios are [1 0 0], [0 1 0] and [0 0 1], indicating one hole in substructure 1, 2 or 3, respectively. The third type of fault is a two-fault scenario, where holes are located in two of the three substructures; the three possible two-fault scenarios are [1 1 0], [1 0 1] and [0 1 1]. The final type of fault is a three-fault scenario, where a hole is located in each of the three substructures, with identity [1 1 1]. In total, 8 different fault cases are considered (including [0 0 0]).

Each cylinder is measured three times under different boundary conditions, obtained by changing the orientation of a rectangular sponge inserted inside the cylinder. The number of sets of measurements taken for the undamaged population is therefore 60 (20 cylinders × 3 boundary conditions). Of the 8 possible fault types, [0 0 0] and [1 1 1] each occur 60 times, while each of the remaining types occurs 24 times; the one- and two-fault categories therefore each contain 72 cases. This is because, as mentioned above, increasing the sizes of the holes in the substructures and taking further vibration measurements generated additional one- and two-fault cases. The fault cases used to train and test the networks are shown in Table 1.

| Fault | [000] | [100] | [010] | [001] | [110] | [101] | [011] | [111] |
|--------------|-------|-------|-------|-------|-------|-------|-------|-------|
| Training set | 21 | 21 | 21 | 21 | 21 | 21 | 21 | 21 |
| Test set | 39 | 3 | 3 | 3 | 3 | 3 | 3 | 39 |

The impulse and response data are processed using the Fast Fourier Transform (FFT) to convert the time-domain impulse and response histories into the frequency domain.
The data in the frequency domain are used to calculate the frequency response functions (FRFs), and from the FRFs the modal properties are extracted using modal analysis [21]. The number of modal properties identified is 340 (17 modes × 19 measured mode-shape co-ordinates + 17 natural frequencies). PCA is used to reduce the input data from 340×264 modal properties to 10×264, where 264 is the number of fault cases measured.

## 8. Results And Discussion

The measured data were used for MLP training; the MLP architecture contained 10 input units, 8 hidden units and 3 output units, and the scaled conjugate gradient method was used for training [12]. The average time taken to train the MLP networks was 12 CPU seconds on the Pentium II computer. The results obtained are shown in Table 2, which lists the actual fault cases against the predicted fault cases. These results show that the MLP classifies fault cases with an accuracy of 88%. In Table 1 it was shown that some fault cases are more numerous than others. In this case the measure of accuracy as a ratio of the sum of fault cases classified correctly divided
by the total number of cases can be misleading. This is the case if the fault cases classified incorrectly are those from the less numerous classes. To remedy this situation, a measure of accuracy called the geometrical accuracy (GA) is used, defined as:

$$\mathrm{GA} = \sqrt[n]{\prod_{i=1}^{n} \frac{c_i}{q_i}} \quad (19)$$

where $c_i$ is the number of cases of the $i$th fault class classified correctly and $q_i$ is the total number of cases in the $i$th fault class. Using this measure the MLP gives an accuracy of 0.7.

On training the SVM, several parameters can be varied, namely the capacity, the ε-insensitivity, the amount of training input, and the function to be used for the kernel. Some of the functions that can be used are linear, radial basis function, sigmoid and spline; in this paper the exponential radial basis function is used as the kernel. The training process took 45 CPU seconds and the capacity was set to infinity. The results obtained are shown in Table 3; they show that the SVM gives an accuracy of 94% and a GA of 0.92.

The GMM architecture, on the other hand, used a diagonal covariance matrix with 3 centres. The main advantage of using the diagonal covariance matrix is that it de-correlates the feature vectors. The training process took 45 CPU seconds. Table 4 shows that the GMM gives an accuracy of 98% and a GA of 0.95. As can be seen from Tables 2, 3 and 4, the GMM outperforms the SVM, which in turn outperforms the MLP.

Table 2.
The confusion matrix obtained when the MLP network is used for fault classification

| Actual \ Predicted | [000] | [100] | [010] | [001] | [110] | [101] | [011] | [111] |
|--------------------|-------|-------|-------|-------|-------|-------|-------|-------|
| [000] | 39 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| [100] | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 |
| [010] | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 |
| [001] | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 6 |
| [110] | 0 | 0 | 0 | 0 | 3 | 1 | 0 | 0 |
| [101] | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 |
| [011] | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 4 |
| [111] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 29 |

Table 3. The confusion matrix obtained when the SVM network is used for fault classification

| Actual \ Predicted | [000] | [100] | [010] | [001] | [110] | [101] | [011] | [111] |
|--------------------|-------|-------|-------|-------|-------|-------|-------|-------|
| [000] | 39 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| [100] | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 |
| [010] | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 |
| [001] | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 5 |
| [110] | 0 | 0 | 0 | 0 | 3 | 0 | 0 | 0 |
| [101] | 0 | 0 | 0 | 0 | 0 | 3 | 0 | 0 |
| [011] | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 1 |
| [111] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 33 |
Table 4. The confusion matrix obtained when the GMM network is used for fault classification

| Actual \ Predicted | [000] | [100] | [010] | [001] | [110] | [101] | [011] | [111] |
|--------------------|-------|-------|-------|-------|-------|-------|-------|-------|
| [000] | 39 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| [100] | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 |
| [010] | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 |
| [001] | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 1 |
| [110] | 0 | 0 | 0 | 0 | 3 | 0 | 0 | 1 |
| [101] | 0 | 0 | 0 | 0 | 0 | 3 | 0 | 0 |
| [011] | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 2 |
| [111] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 35 |

## 9. Conclusions

In this paper GMM and SVM were introduced to classify faults in a population of cylindrical shells and compared to the MLP. The GMM was observed to give more accurate results than the SVM, which was in turn observed to give more accurate results than the MLP.

## References

[1] S.W. Doebling, C.R. Farrar, M.B. Prime, D.W. Shevitz. Damage identification and health monitoring of structural and mechanical systems from changes in their vibration characteristics: a literature review. *Los Alamos Technical Report LA-13070-MS*, Los Alamos National Laboratory, New Mexico, USA, 1996.
[2] T. Marwala. On fault identification using pseudo-modal-energies and modal properties. *American Institute of Aeronautics and Astronautics Journal*, 39:1608-1618, 2001.
[3] T. Marwala. Probabilistic fault identification using a committee of neural networks and vibration data. *American Institute of Aeronautics and Astronautics, Journal of Aircraft*, 38:138-146, 2001.
[4] T. Marwala. *Fault Identification Using Neural Networks and Vibration Data*. Ph.D. Thesis, University of Cambridge, Cambridge, UK, 2001.
[5] C.M. Bishop. *Neural Networks for Pattern Recognition*. Oxford University Press, Oxford, UK, 1995.
[6] T. Joachims. Making large-scale SVM learning practical. In B. Schölkopf, C.J.C. Burges and A.J.
Smola, editors, *Advances in Kernel Methods - Support Vector Learning*, 169-184, MIT Press, Cambridge, MA, 1999.
[7] M.M. Pires and T. Marwala. American option pricing using multi-layer perceptron and support vector machines. *Proceedings of the IEEE Conference on Systems, Man and Cybernetics*, The Hague, 1279-1285, 2004.
[8] A. Iwasaki, A. Todoroki, Y. Shimamura, et al. An unsupervised statistical damage detection method for structural health monitoring (applied to detection of delamination of a composite beam). *Smart Materials and Structures*, 13:80-85, 2004.
[9] K. Worden, A.J. Lane. Damage identification using support vector machines. Smart
# Learning To Bluff

Evan Hurwitz and Tshilidzi Marwala

Abstract— The act of bluffing confounds game designers to this day. The very nature of bluffing is even open for debate, adding further complication to the process of creating intelligent virtual players that can bluff, and hence play, realistically. Through the use of intelligent, learning agents, and carefully designed agent outlooks, an agent can in fact learn to predict its opponents' reactions based not only on its own cards, but on the actions of those around it. With this wider scope of understanding, an agent can learn to bluff its opponents, with the action representing not an "illogical" action, as bluffing is often viewed, but rather an act of maximising returns through effective statistical optimisation. By using a TD(λ) learning algorithm to continuously adapt neural network agent intelligence, agents have been shown to be able to learn to bluff without outside prompting, and even to learn to call each other's bluffs in free, competitive play.

## I. Introduction

While many card games involve an element of bluffing, simulating and fully understanding bluffing remains one of the most elusive tasks presented to the game design engineer. The entire process of bluffing relies on performing a task that is unexpected, and is thus misinterpreted by one's opponents. For this reason, static rules are doomed to failure: once they become predictable, they cannot be misinterpreted. In order to create an artificially intelligent agent that can bluff, one must first create an agent that is capable of learning. The agent must be able to learn not only about the inherent nature of the game it is playing, but must also be capable of learning the trends emerging from its opponents' behaviour, since bluffing is only plausible when one can anticipate the opponents' reactions to one's own actions. Firstly the game to be modelled will be detailed, with the reasoning for its choice explained.
The paper will then detail the system and agent architecture, which is of paramount importance since it not only ensures that the correct information is available to the agent, but also has a direct impact on the efficiency of the learning algorithms utilised. Once the system is fully illustrated, the actual learning of the agents is shown, with the appropriate findings detailed.

## II. Lerpa

The card game being modelled is the game of *Lerpa* [4]. While not a well-known game, its rules suit the purposes of this research exceptionally well, making it an ideal testbed application for intelligent-agent Multi-Agent Modelling (MAM). The rules of the game first need to be elaborated upon, in order to grasp the implications of the results obtained; they now follow.

The game of *Lerpa* is played with a standard deck of cards, with the exception that all of the 8s, 9s and 10s are removed from the deck. The cards are valued from ace (highest) down to 2 (lowest), with the exception that the 7 is valued higher than a king but lower than an ace, making it the second most valuable card in a suit. At the end of dealing the hand, the dealer has the choice of *dealing himself in*, which entails flipping his last card over, unseen up until this point, which then declares which suit is the *trump suit* [4]. Should he elect not to do this, he flips the next card in the deck to determine the trump suit. Once trumps are determined, the players take it in turns, going clockwise from the dealer's left, to elect whether to play the hand (to *knock*) or to drop out of the hand, referred to as *folding* (if the dealer has *dealt himself in*, as described above, he is automatically required to play the hand). Once all players have chosen, those that have elected to play then play the hand, with the player to the dealer's left playing the first card.
Once this card has been played, players must follow suit - in other words, if a heart is played, they must play a heart if they have one. If they have none of the required suit, they may play a trump, which will win the trick unless another player plays a higher trump. The highest card played wins the trick (with all trumps valued higher than any other card), and the winner of the trick leads the first card of the next trick. At any point in a hand, if a player has the ace of trumps and can legally play it, he is required to do so [4]. The true risk in the game comes from the betting, which occurs as follows. At the beginning of the round, the dealer pays the table 3 of whatever the basic betting denomination is (usually referred to as 'chips'). At the end of the hand, the chips are divided up proportionately between the winners, i.e. if a player wins two tricks, he receives two thirds of whatever is in the pot. However, a player who stayed in but did not win any tricks is said to have been *Lerpa'd*, and is then required to match whatever was in the pot for the next hand, effectively costing him the pot. It is in the evaluation of this risk that most of the true skill in *Lerpa* lies.
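The betting rules above can be summarised in a short sketch. The function and player names below are our own, purely for illustration; the payout and penalty rules are exactly as stated in the text.

```python
def settle_hand(pot, tricks_won):
    """Split the pot in proportion to tricks won (3 tricks per hand).

    tricks_won maps each player who elected to play to the number of
    tricks won.  A player who stayed in but won nothing is Lerpa'd and
    must match the pot for the next hand.
    """
    payouts, penalties = {}, {}
    for player, tricks in tricks_won.items():
        payouts[player] = pot * tricks / 3
        if tricks == 0:
            penalties[player] = pot  # Lerpa'd: pays the pot next hand
    return payouts, penalties

# a 3-chip pot, three players stayed in, Carol won no tricks
payouts, penalties = settle_hand(3, {"Alice": 2, "Bob": 1, "Carol": 0})
```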
## III. Lerpa MAM

As with any optimisation system, very careful consideration needs to be given to how the system is structured, since the implications of these decisions can often result in unintentional assumptions made by the system created. With this in mind, the Lerpa Multi-Agent System (MAS) has been designed to allow the maximum amount of freedom to the system, and the agents within it, while also allowing for generalisation and swift convergence, in order to let the intelligent agents interact unimpeded by human assumptions, intended or otherwise.

## A. System Overview

The game is, for this model, going to be played by four players. Each of these players interacts with the others indirectly, by interacting directly with the *table*, which is their shared environment, as depicted in Fig. 1.

<image>

Over the course of a single hand, an agent is required to make three decisions, once at each interactive stage of the game. These three decision-making stages are:

1) To play the hand, or drop (*knock* or *fold*).
2) Which card to play first.
3) Which card to play second.

Since there is no decision to be made at the final card, the hand can be said to be effectively finished from the agent's perspective after it has played its second card (or indeed after the first decision, should the agent fold). Following the TD(λ) algorithm [5], each agent updates its own neural network at each stage, using its own predictions as a reward function, and only receives a true reward after its final decision has been made. This decision-making process is illustrated in Fig. 2.

<image>

Each hand played is viewed as an independent, stochastic event, and as such only information about the current hand is available to the agent, which has to draw on its own learned knowledge base to draw deductions, not on previous hands.

## B. Agent AI Design
A number of decisions need to be made in order to implement the agent's artificial intelligence (AI) effectively and efficiently. The type of learning to be implemented needs to be chosen, as well as the neural network architecture [7]. Special attention needs to be paid to the design of the inputs to the neural network, as these determine what the agent can 'see' at any given point. This also determines what assumptions, if any, are implicitly made by the agent, and hence cannot be taken lightly. Lastly, it determines the dimensionality of the network, which directly affects the learning rate of the network, and hence must be minimised.

1) *Input Parameter Design:* In order to design the input stage of the agent's neural network, one must first determine all that the network may need to know at any given decision-making stage. All inputs, in order to optimise stability, are structured as binary-encoded inputs. When making its first decision, the agent needs to know its own cards, which agents have stayed in or folded, and which agents are still to decide [9]. It is necessary for the agent to be able to determine which specific agents have taken which specific actions, as this allows the agent to learn a particular opponent's characteristics, something impossible to do if it can only see the number of players in or out. Similarly, the agent's own cards must be specified fully, allowing the agent to draw its own conclusions about each card's relative value. It is also necessary to tell the agent which suit has been designated the trump suit, but a more elegant method has been found to handle that information, as will be seen shortly. Fig. 3 below illustrates the initial information required by the network.

<image>

The agent's hand needs to be explicitly described, and the obvious solution is to encode the cards exactly, i.e. four suits, and ten numbers in each suit, giving forty possibilities for each card.
A quick glance at the number of options available shows that a raw encoding style presents a sizeable problem of dimensionality, since an encoded hand can be one of 40³ possible hands (in actuality, only ⁴⁰P₃ hands could be selected, since cards cannot be repeated, but the raw encoding
scheme would in fact allow for repeated cards, and hence 40³ options would be available). The first thing to notice is that only a single deck of cards is being used, hence no card can ever be repeated in a hand. Acting on this principle, a consistent ordering of the hand greatly reduces the base dimensionality of the hand, since it is now combinations of cards that are represented, instead of permutations. The number of combinations now represented is ⁴⁰C₃. This seemingly small change from nPr to nCr reduces the dimensionality of the representation by a factor of r!, which in this case is a factor of 6.

Furthermore, the representation of cards as belonging to discrete suits is not optimal either, since the game places no particular value on any suit by its own virtue, but rather by virtue of which suit is the trump suit. For this reason, an alternate encoding scheme has been determined, rating the 'suits' based upon the makeup of the agent's hand, rather than four arbitrary suits. The suits are encoded as belonging to one of the following groups, or new "suits":

- Trump suit
- Suit agent has multiple cards in (not trumps)
- Suit of agent's highest singleton
- Suit of agent's second-highest singleton
- Suit of agent's third-highest singleton

This allows for a much more efficient description of the agent's hand, greatly improving the dimensionality of the inputs, and hence the learning rate of the agents. These five options are encoded in binary format, for stability purposes, and hence three binary inputs are required to represent the suits. To represent the card's number, ten discrete values must be represented, requiring four binary inputs. Thus a card in an agent's hand is represented by seven binary inputs, as depicted in Fig. 4.

<image>

Next, the information required in order to make decisions two and three must be considered.
For both of these decisions, the cards that have already been played, if any, must be known in order to make an intelligent decision as to the correct next card to play. For the second decision, knowledge of who has won a trick is also plausibly important. The most cards that can ever be played before a decision must be made is seven, and since the table after a card is played is used to evaluate and update the network, eight played cards need to be representable. Once again, however, simply utilising the obvious encoding method is not necessarily the most efficient approach. The actual values of the cards played are not necessarily important, only their values relative to the cards in the agent's hand. As such, a played card can be represented as one of the following, with respect to the cards of the same suit in the agent's hand:

- Higher than the card/cards in the agent's hand
- Higher than the agent's second-highest card
- Higher than the agent's third-highest card
- Lower than any of the agent's cards
- Member of a void suit (number is immaterial)

Also, another suit is now relevant for representing the played cards, namely a void suit - a suit in which the agent has no cards. Lastly, a number grouping is necessary to handle the special case of the ace of trumps, since its unique rules mean that strategies can develop based on whether it has or has not been played. The now six suits available still require only three binary inputs to represent, and the six number groupings reduce the value representation from four binary inputs to three, once again reducing the dimensionality of the input system. With all of these inputs specified, the agent now has available all of the information required to draw its own conclusions and create its own strategies, without human-imposed assumptions affecting its "thought" patterns.
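The relative-value grouping described above can be sketched as a small classification function. The category names below are our own labels for the five groupings, and higher numbers are assumed to denote stronger cards.

```python
def relative_value(card_value, hand_values_in_suit):
    """Classify a played card relative to the agent's cards of the
    same suit, as used for the network's played-card inputs."""
    if not hand_values_in_suit:
        return "void-suit"          # agent holds no cards of this suit
    ranked = sorted(hand_values_in_suit, reverse=True)
    if card_value > ranked[0]:
        return "above-all"
    if len(ranked) > 1 and card_value > ranked[1]:
        return "above-second-highest"
    if len(ranked) > 2 and card_value > ranked[2]:
        return "above-third-highest"
    return "below-all"
```

With only six such groupings (these five plus the ace-of-trumps special case), a played card's value fits in three binary inputs instead of four.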
2) *Network Architecture Design:* With the inputs now specified, the hidden and output layers need to be designed. The output neurons need to represent the prediction P that the network is making [2]. A single hand has one of five possible outcomes, all of which need to be catered for. These possible outcomes are:

- The agent wins all three tricks, winning 3 chips.
- The agent wins two tricks, winning 2 chips.
- The agent wins one trick, winning 1 chip.
- The agent wins zero tricks, losing 3 chips.
- The agent elects to fold, winning no tricks, but losing no chips.

This can be seen as a set of options, namely [-3 0 1 2 3]. While it may seem tempting to represent this as one continuous output, there are two compelling reasons for breaking it up into binary outputs. The first is to optimise stability, as elaborated upon in Section five. The second is that these are discrete events, and a continuous representation would cover the range [-3 0], which does not in fact exist. The binary outputs then specified are:

- P(O = 3)
- P(O = 2)
- P(O = 1)
- P(O = -3)

with a low probability on all four catering for folding, i.e. winning and losing no chips. Consequently, the agent's predicted return is:

P = 3A + 2B + C − 3D (1)

where

A = P(O = 3) (2)
B = P(O = 2) (3)

C = P(O = 1) (4)

D = P(O = −3) (5)

The internal structure of the neural network uses a standard sigmoidal activation function [7], which is suitable for stability and still allows for the freedom expected of a neural network. The sigmoidal activation function varies between zero and one, rather than the often-used minus one and one, in order to optimise for stability [7]. Since a high degree of freedom is required, a large number of hidden neurons is needed, and fifty have been used. This number was arrived at iteratively, trading off training speed against performance. The output neurons are linear functions, since they represent not binary effects but rather continuous probabilities of particular binary outcomes.

3) *Agent Decision Making:* With its own predictor specified [2], the agent is now equipped to make decisions when playing. These decisions are made by predicting the return of the resultant situation arising from each legal choice it can make. An ε-greedy policy is then used to determine whether the agent will choose the most promising option, or whether it will explore the result of a less appealing option. In this way, the agent trades off exploration against exploitation.

## IV. The Intelligent Model

With each agent implemented as described above, and interacting with the others as specified in Section III, we can now perform the desired task, namely that of utilising a multi-agent model to analyse the given game, and to develop strategies that may "solve" the game under differing circumstances. Only once agents know how to play a certain hand can they begin to outplay, and potentially bluff, each other.

## A. Agent Learning Verification

In order for the model to have any validity, one must establish that the agents do indeed learn as they were designed to do.
In order to verify the learning of the agents, a single intelligent agent was created and placed at a table with three 'stupid' agents. These 'stupid' agents always stay in the game, and make a random choice whenever called upon to make a decision. The results show quite conclusively that the intelligent agent soon learns to consistently outperform its opponents, as shown in Fig. 5. The agents named Randy, Roderick and Ronald use random decision-making, while AIden has the TD(λ) AI system implemented [5]. The results have been averaged over 40 hands, in order to be more viewable and to smooth out random fluctuations.

<image>

1) *Cowardice:* In the learning phase of the above-mentioned intelligent agent, an interesting and somewhat enlightening problem arises. When initially learning, the agent does not in fact continue to learn. Instead, the agent quickly determines that it is losing chips, and decides that it is better off not playing, and keeping its chips! This is illustrated in Fig. 6.

<image>

As can be seen, AIden [8] quickly decides that the risks are too great, and does not play in any hands initially. After forty hands, AIden decides to play a few hands, and when they go badly, is scared off for good. This is a result of the penalising nature of the game, since bad play can easily mean losing a full three chips, and since the surplus of lost chips is not carried over in this simulation, a bad player loses chips regularly. While insightful, a cowardly agent is not of any particular use, and hence the agent must be given enough 'courage' to play, and hence learn, the game. In order to do this, one option is to increase the value of ε for the ε-greedy policy, but this makes the agent far too much like a random player without any intelligence. A more successful and sensible solution is to force the agent to play when it knows nothing, until such a stage as it seems prepared to play.
This was done by forcing AIden [8] to play the first 200 hands it had ever seen, thereafter leaving AIden to his own devices, the result of which has already been shown in Fig. 5.
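The learning machinery described in this section can be condensed into a minimal, tabular sketch: equation (1) collapses the four outcome probabilities into a predicted return, and a TD(λ)-style update moves each stage's prediction toward the next one, with the final prediction moved toward the realised chip reward. The tabular value function, state names and episode loop below are our own simplifications; the paper uses a neural network in place of the table.

```python
def predicted_return(p3, p2, p1, pm3):
    # Equation (1): P = 3A + 2B + C - 3D
    return 3 * p3 + 2 * p2 + 1 * p1 - 3 * pm3

def td_lambda_episode(V, states, reward, alpha=0.1, lam=0.1):
    """One hand: each prediction chases the next prediction, and the
    last chases the true reward; eligibility traces (decayed by lam)
    spread the credit back through earlier decisions."""
    trace = {}
    for i, s in enumerate(states):
        target = reward if i == len(states) - 1 else V.get(states[i + 1], 0.0)
        delta = target - V.get(s, 0.0)
        trace[s] = trace.get(s, 0.0) + 1.0
        for k in list(trace):
            V[k] = V.get(k, 0.0) + alpha * delta * trace[k]
            trace[k] *= lam
    return V

# repeatedly replaying one hypothetical 2-chip hand drives all three
# stage predictions toward the final reward
V = {}
for _ in range(500):
    td_lambda_episode(V, ["knock", "card1", "card2"], reward=2.0)
```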
## B. Parameter Optimisation

A number of parameters need to be optimised in order to optimise the learning of the agents. These parameters are the learning rate α, the memory parameter λ and the exploration parameter ε. The multi-agent system provides a perfect environment for this testing, since four different parameter combinations can be tested competitively. By setting different agents to different combinations, and allowing them to play against each other for an extended period of time (number of hands), one can iteratively find the parameter combinations that achieve the best results, and which are hence the optimum learning parameters [3]. Fig. 7 shows the results of one such test, illustrating a definite 'winner', whose parameters were then used for the rest of the multi-agent modelling. It is also worth noting that as soon as the dominant agent begins to lose, it adapts its play to remain competitive with its less effective opponents. This is evidenced at points 10 and 30 on the graph (games number 300 and 900, since the graph is averaged over 30 hands), where one can see the dominant agent begin to lose, and then begin to perform well once again.

<image>

Surprisingly enough, the parameters that yielded the most competitive results were α = 0.1, λ = 0.1 and ε = 0.01. While the ε value is not particularly surprising, the relatively low α and λ values are not exactly intuitive. What they amount to is a degree of temperance, since higher values would mean learning a large amount from any given hand, effectively over-reacting when the agent may have played well and simply fallen afoul of bad luck.

## C. MAS Learning Patterns

With all of the agents learning in the same manner, it is noteworthy that the overall rewards they obtain are far better than those obtained by the random agents, and even by the intelligent agent that was playing against the random agents [3]. A sample of these results is depicted in Fig. 8.
R1 to R3 are the random agents, while AI1 is the intelligent agent playing against the random agents. AI2 to AI5 depict intelligent agents playing against each other. As can be seen, the agents learn far better when playing against intelligent opponents, an attribute that is in fact mirrored in human competitive learning.

<image>

## D. Agent Adaptation

In order to ascertain whether the agents do in fact adapt to each other, the agents were given pre-dealt hands, and were required to play them against each other repeatedly. The results of one such experiment, illustrated in Fig. 9, show how an agent learns from its own mistakes, and once certain of them, changes its play, adapting to gain a better return from the hand. The mistakes it sees are its low returns - returns of -3, to be precise. At one point, the winning player evidently decides to explore, giving some false hope to the losing agent, but then quickly continues to exploit his advantage. Eventually, at game #25, the losing agent gives up, adapting his play to suit the losing situation in which he finds himself. Fig. 9 illustrates the progression of the agents and the adaptation described.

<image>

## E. Strategy Analysis

The agents have been shown to successfully learn to play the game, and to adapt to each other's play in order to maximise their own rewards. These agents form the pillars of the multi-agent model, which can now be used to analyse, and attempt to 'solve', the game. Since the game has a non-trivial degree of complexity, situations within the game are to be solved, considering each situation a sub-game of the overall game. The first and most obvious type of analysis is a static analysis, in which all of the hands are pre-dealt. This system can be said to have stabilised when the results and the play-out become constant, with all agents content to play the hand out in the same manner, each deciding nothing better can be
# Soft Constraint Abstraction Based On Semiring Homomorphism ∗

Sanjiang Li and Mingsheng Ying †
Department of Computer Science and Technology
Tsinghua University, Beijing 100084, China

## Abstract

The semiring-based constraint satisfaction problems (semiring CSPs), proposed by Bistarelli, Montanari and Rossi [3], form a very general framework for soft constraints. In this paper we propose an abstraction scheme for soft constraints that uses semiring homomorphisms. To find optimal solutions of the concrete problem, the idea is first to work in the abstract problem and find its optimal solutions, and then to use them to solve the concrete problem. In particular, we show that a mapping preserves optimal solutions if and only if it is an order-reflecting semiring homomorphism. Moreover, for a semiring homomorphism α and a problem P over S, if t is optimal in α(P), then there is an optimal solution t̄ of P such that t̄ has the same value as t in α(P).

Keywords: Abstraction; Constraint solving; Soft constraint satisfaction; Semiring homomorphism; Order-reflecting.

## 1 Introduction

In recent years there has been a growing interest in soft constraint satisfaction. Various extensions of the classical constraint satisfaction problems (CSPs) [10, 9] have been introduced in the literature, e.g., Fuzzy CSP [11, 5, 12], Probabilistic CSP [6], Weighted CSP [15, 7], Possibilistic CSP [13], and Valued CSP [14]. Roughly speaking, these extensions are just like classical CSPs, except that each assignment of values to variables in the constraints is associated with an element taken from a semiring. Furthermore, nearly all of these extensions, as well as classical CSPs, can be cast in the semiring-based constraint solving framework, called SCSP (for *Semiring CSP*), proposed by Bistarelli, Montanari and Rossi [3].

∗ Work partially supported by National Nature Science Foundation of China (60673105, 60621062, 60496321).
† lisanjiang@tsinghua.edu.cn (S.
Li), yingmsh@tsinghua.edu.cn (M. Ying)** arXiv:0705.0734v1 [cs.AI] 5 May 2007
The next lemma shows that $\alpha$ preserves optimal solutions only if it is a semiring homomorphism.

Lemma 4.3. *Let $\alpha$ be a mapping from c-semiring $S$ to c-semiring $\widetilde{S}$ such that $\alpha(\mathbf{0}) = \widetilde{\mathbf{0}}$ and $\alpha(\mathbf{1}) = \widetilde{\mathbf{1}}$. Suppose $\alpha : S \rightarrow \widetilde{S}$ preserves optimal solutions. Then $\alpha$ is a semiring homomorphism.*

Proof. By Lemma 4.1, we know $\alpha$ satisfies Equation 5. We first show that $\alpha$ is monotonic. Take $u, v \in S$ with $u \leq_S v$. Suppose $\alpha(u) \not\leq_{\widetilde{S}} \alpha(v)$. Then $\alpha(v) \mathbin{\widetilde{+}} \alpha(v) = \alpha(v) <_{\widetilde{S}} \alpha(u) \mathbin{\widetilde{+}} \alpha(v)$. By Equation 5, we have $v = v + v <_S u + v = v$, a contradiction; hence $\alpha(u) \leq_{\widetilde{S}} \alpha(v)$.

Next, for any $u, v \in S$, we show $\alpha(u + v) = \alpha(u) \mathbin{\widetilde{+}} \alpha(v)$. Since $\alpha$ is monotonic, we have $\alpha(u + v) \geq_{\widetilde{S}} \alpha(u) \mathbin{\widetilde{+}} \alpha(v)$. Suppose $\alpha(u + v) \mathbin{\widetilde{+}} \alpha(u + v) = \alpha(u + v) >_{\widetilde{S}} \alpha(u) \mathbin{\widetilde{+}} \alpha(v)$. By Equation 5 again, we have $(u + v) + (u + v) >_S u + v$, also a contradiction.

Finally, for $u, v \in S$, we show $\alpha(u \times v) = \alpha(u) \mathbin{\widetilde{\times}} \alpha(v)$. Suppose not, and set $w = \alpha(u) \mathbin{\widetilde{\times}} \alpha(v) \mathbin{\widetilde{+}} \alpha(u \times v)$. Then we have either $\alpha(u) \mathbin{\widetilde{\times}} \alpha(v) <_{\widetilde{S}} w$ or $\alpha(u \times v) <_{\widetilde{S}} w$. Since $\alpha(\mathbf{0}) = \widetilde{\mathbf{0}}$ and $\alpha(\mathbf{1}) = \widetilde{\mathbf{1}}$, these two inequalities can be rewritten respectively as
$$\alpha(u) \mathbin{\widetilde{\times}} \alpha(v) \mathbin{\widetilde{+}} \alpha(\mathbf{1}) \mathbin{\widetilde{\times}} \alpha(\mathbf{0}) <_{\widetilde{S}} \alpha(u) \mathbin{\widetilde{\times}} \alpha(v) \mathbin{\widetilde{+}} \alpha(u \times v) \mathbin{\widetilde{\times}} \alpha(\mathbf{1})$$
and
$$\alpha(\mathbf{1}) \mathbin{\widetilde{\times}} \alpha(\mathbf{0}) \mathbin{\widetilde{+}} \alpha(u \times v) \mathbin{\widetilde{\times}} \alpha(\mathbf{1}) <_{\widetilde{S}} \alpha(u) \mathbin{\widetilde{\times}} \alpha(v) \mathbin{\widetilde{+}} \alpha(u \times v) \mathbin{\widetilde{\times}} \alpha(\mathbf{1}).$$
By Equation 5 again, we have either $u \times v + \mathbf{1} \times \mathbf{0} <_S u \times v + (u \times v) \times \mathbf{1}$ or $\mathbf{1} \times \mathbf{0} + (u \times v) \times \mathbf{1} <_S u \times v + (u \times v) \times \mathbf{1}$. Both give rise to a contradiction. This ends the proof.

We now arrive at our main result:

Theorem 4.1. *Let $\alpha$ be a mapping from c-semiring $S$ to c-semiring $\widetilde{S}$ such that $\alpha(\mathbf{0}) = \widetilde{\mathbf{0}}$ and $\alpha(\mathbf{1}) = \widetilde{\mathbf{1}}$. Then $\alpha$ preserves optimal solutions for all constraint systems if and only if $\alpha$ is an order-reflecting semiring homomorphism.*

Proof. The necessity part of the theorem follows from Lemmas 4.2 and 4.3. As for the sufficiency part, we need only show that, if $\alpha$ is an order-reflecting semiring homomorphism, then $\alpha$ satisfies Equation 5.
Suppose
$$\widetilde{\sum}_{i=1}^{n}\widetilde{\prod}_{j=1}^{m}\alpha(u_{ij}) <_{\widetilde{S}} \widetilde{\sum}_{i=1}^{n}\widetilde{\prod}_{j=1}^{m}\alpha(v_{ij}).$$
Since $\alpha$ commutes with $\sum$ and $\prod$, we clearly have
$$\alpha\Big(\sum_{i=1}^{n}\prod_{j=1}^{m}u_{ij}\Big) = \widetilde{\sum}_{i=1}^{n}\widetilde{\prod}_{j=1}^{m}\alpha(u_{ij}) <_{\widetilde{S}} \widetilde{\sum}_{i=1}^{n}\widetilde{\prod}_{j=1}^{m}\alpha(v_{ij}) = \alpha\Big(\sum_{i=1}^{n}\prod_{j=1}^{m}v_{ij}\Big).$$
By order-reflection, it follows immediately that
$$\sum_{i=1}^{n}\prod_{j=1}^{m}u_{ij} <_{S} \sum_{i=1}^{n}\prod_{j=1}^{m}v_{ij}.$$
This ends the proof. □
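The two conditions in Theorem 4.1 can be checked mechanically on small finite c-semirings. The sketch below is illustrative only: the fuzzy, boolean, and powerset semirings and the threshold maps are our own examples, not taken from the paper; the powerset example shows why order-reflection can fail once the concrete order has incomparable elements.

```python
# Brute-force check of "semiring homomorphism" and "order-reflecting"
# (Theorem 4.1's condition) on small finite c-semirings.
from itertools import product

class CSemiring:
    def __init__(self, elems, plus, times, zero, one):
        self.elems, self.plus, self.times = elems, plus, times
        self.zero, self.one = zero, one

    def leq(self, a, b):
        # a <=_S b  iff  a + b == b  (the canonical c-semiring order)
        return self.plus(a, b) == b

def is_homomorphism(alpha, S, T):
    ok = alpha(S.zero) == T.zero and alpha(S.one) == T.one
    for a, b in product(S.elems, repeat=2):
        ok &= alpha(S.plus(a, b)) == T.plus(alpha(a), alpha(b))
        ok &= alpha(S.times(a, b)) == T.times(alpha(a), alpha(b))
    return ok

def is_order_reflecting(alpha, S, T):
    # alpha(a) <_T alpha(b) must imply a <_S b
    for a, b in product(S.elems, repeat=2):
        strict_T = T.leq(alpha(a), alpha(b)) and alpha(a) != alpha(b)
        strict_S = S.leq(a, b) and a != b
        if strict_T and not strict_S:
            return False
    return True

# A three-element fuzzy chain and the boolean c-semiring.
fuzzy = CSemiring([0.0, 0.5, 1.0], max, min, 0.0, 1.0)
boolean = CSemiring([False, True], lambda a, b: a or b,
                    lambda a, b: a and b, False, True)

alpha = lambda a: a >= 0.5   # threshold abstraction on a chain
print(is_homomorphism(alpha, fuzzy, boolean))       # True
print(is_order_reflecting(alpha, fuzzy, boolean))   # True: the chain is totally ordered

# A powerset c-semiring <2^{p,q}, union, intersection, {}, {p,q}>.
P = [frozenset(s) for s in ([], ['p'], ['q'], ['p', 'q'])]
powerset = CSemiring(P, lambda a, b: a | b, lambda a, b: a & b,
                     frozenset(), frozenset(['p', 'q']))
alpha2 = lambda X: 'p' in X
print(is_homomorphism(alpha2, powerset, boolean))      # True
print(is_order_reflecting(alpha2, powerset, boolean))  # False: alpha2({q}) < alpha2({p}) but {q} is not below {p}
```

The powerset case illustrates the obstruction: incomparable concrete values can become strictly comparable after abstraction, so strict abstract inequalities no longer reflect back.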
## 5 Computing Concrete Optimal Solutions From Abstract Ones

In the above section we investigated conditions under which all optimal solutions of the concrete problem can be related *precisely* to those of the abstract problem. There are often situations where it suffices to find *some* optimal solutions, or simply a good approximation of the concrete optimal solutions. This section shows that, even without the order-reflecting condition, a semiring homomorphism can be used to find some optimal solutions of the concrete problem from abstract ones.

Theorem 5.1. *Let $\alpha : S \rightarrow \widetilde{S}$ be a semiring homomorphism. Given an SCSP $P$ over $S$, suppose $t \in Opt(\alpha(P))$ has value $v$ in $P$ and value $\widetilde{v}$ in $\alpha(P)$. Then there exists $\bar{t} \in Opt(P) \cap Opt(\alpha(P))$ with value $\bar{v} \geq_S v$ in $P$ and value $\widetilde{v}$ in $\alpha(P)$. Moreover, we have $\alpha(\bar{v}) = \alpha(v) = \widetilde{v}$.*

Proof. Suppose $P = \langle C, con \rangle$, $C = \{c_i\}_{i=1}^{m}$, and $c_i = \langle def_i, con_i \rangle$. Set $\overline{con} = con \cup \bigcup_{j=1}^{m} con_j$ and $k = |\overline{con}|$. Suppose $t$ is an optimal solution of $\alpha(P)$, with semiring value $\widetilde{v}$ in $\alpha(P)$ and $v$ in $P$. By the definition of solution, we have
$$v = Sol(P)(t) = \sum_{t'|^{\overline{con}}_{con} = t}\ \prod_{j=1}^{m} def_j\big(t'|^{\overline{con}}_{con_j}\big).$$
Denote $T(t) = \{t' : t' \text{ is a } k\text{-tuple with } t'|^{\overline{con}}_{con} = t\}$. Set $n = |T(t)|$ and write $T(t) = \{t_1, \cdots, t_n\}$. For each $1 \leq i \leq n$ and each $1 \leq j \leq m$, set $v_{ij} = def_j(t_i|^{\overline{con}}_{con_j})$. Then
$$v = \sum_{i=1}^{n}\prod_{j=1}^{m} v_{ij}, \qquad \widetilde{v} = \widetilde{\sum}_{i=1}^{n}\widetilde{\prod}_{j=1}^{m}\alpha(v_{ij}).$$
Since $\alpha$ preserves sums and products, we have
$$\alpha(v) = \alpha\Big(\sum_{i=1}^{n}\prod_{j=1}^{m} v_{ij}\Big) = \widetilde{\sum}_{i=1}^{n}\widetilde{\prod}_{j=1}^{m}\alpha(v_{ij}) = \widetilde{v}.$$
Notice that if $t$ is also optimal in $P$, then we can choose $\bar{t} = t$. Suppose $t$ is not optimal in $P$.
Then there is a tuple $\bar{t}$ that is optimal in $P$, say with value $\bar{v} >_S v$. Denote $T(\bar{t}) = \{t' : t' \text{ is a } k\text{-tuple with } t'|^{\overline{con}}_{con} = \bar{t}\}$. Clearly $|T(\bar{t})| = |T(t)| = n$. Write $T(\bar{t}) = \{\bar{t}_1, \cdots, \bar{t}_n\}$. For each $1 \leq i \leq n$ and each $1 \leq j \leq m$, set $u_{ij} = def_j(\bar{t}_i|^{\overline{con}}_{con_j})$. Then
$$\bar{v} = \sum_{i=1}^{n}\prod_{j=1}^{m} u_{ij}.$$
Now we show $\alpha(\bar{v}) \leq_{\widetilde{S}} \widetilde{v}$. By $v <_S \bar{v}$, we have $\alpha(v) \leq_{\widetilde{S}} \alpha(\bar{v})$. Then
$$\widetilde{v} = \widetilde{\sum}_{i=1}^{n}\widetilde{\prod}_{j=1}^{m}\alpha(v_{ij}) = \alpha\Big(\sum_{i=1}^{n}\prod_{j=1}^{m} v_{ij}\Big) = \alpha(v) \leq_{\widetilde{S}} \alpha(\bar{v}) = \alpha\Big(\sum_{i=1}^{n}\prod_{j=1}^{m} u_{ij}\Big) = \widetilde{\sum}_{i=1}^{n}\widetilde{\prod}_{j=1}^{m}\alpha(u_{ij}) = \widetilde{\bar{v}},$$
where the last term, $\widetilde{\bar{v}}$, is the value of $\bar{t}$ in $\alpha(P)$. Now since $t$ is optimal in $\alpha(P)$, we have $\widetilde{v} = \alpha(v) = \alpha(\bar{v}) = \widetilde{\bar{v}}$. That is, $\bar{t}$ is also optimal in $\alpha(P)$, with value $\widetilde{v}$.

Remark 5.1. If our aim is to find some, instead of all, optimal solutions of the concrete problem $P$, then by Theorem 5.1 we can first find all optimal solutions of the abstract problem $\alpha(P)$ and then compute their values in $P$; the tuples that have maximal values in $P$ are optimal solutions of $P$. In this sense, this theorem is more useful than Theorem 4.1, because we do not need the assumption that $\alpha$ is order-reflecting.

Theorem 5.1 can also be applied to find good approximations of the optimal solutions of $P$. Given an optimal solution $t \in Opt(\alpha(P))$ with value $\widetilde{v} \in \widetilde{S}$, by Theorem 5.1 there is an optimal solution $\bar{t} \in Opt(P)$ with value in the set $\{u \in S : \alpha(u) = \widetilde{v}\}$.

Note that Theorem 5.1 requires $\alpha$ to be a semiring homomorphism. This condition is still a little restrictive. Take the probabilistic semiring $S_{prob} = \langle [0,1], \max, \times, 0, 1 \rangle$ and the classical semiring $S_{CSP} = \langle \{T, F\}, \vee, \wedge, F, T \rangle$ as an example: there are no nontrivial homomorphisms between $S_{prob}$ and $S_{CSP}$. This is because $\alpha(a \times b) = \alpha(a) \wedge \alpha(b)$ requires $\alpha(a^n) = \alpha(a)$ for any $a \in [0,1]$ and any positive integer $n$, which implies $(\forall a > 0)\ \alpha(a) = T$ or $(\forall a < 1)\ \alpha(a) = F$. In the remainder of this section, we relax this condition.

Definition 5.1 (quasi-homomorphism).
A mapping $\psi$ from semiring $\langle S, +, \times, \mathbf{0}, \mathbf{1} \rangle$ to semiring $\langle \widetilde{S}, \widetilde{+}, \widetilde{\times}, \widetilde{\mathbf{0}}, \widetilde{\mathbf{1}} \rangle$ is said to be a *quasi-homomorphism* if for any $a, b \in S$:

- $\psi(\mathbf{0}) = \widetilde{\mathbf{0}}$ and $\psi(\mathbf{1}) = \widetilde{\mathbf{1}}$; and
- $\psi(a + b) = \psi(a) \mathbin{\widetilde{+}} \psi(b)$; and
- $\psi(a \times b) \leq_{\widetilde{S}} \psi(a) \mathbin{\widetilde{\times}} \psi(b)$.

The last condition is exactly the *local correctness* of $\widetilde{\times}$ w.r.t. $\times$ [1]. Clearly, each monotonic surjective mapping from $S_{prob}$ to $S_{CSP}$ is a quasi-homomorphism. The following theorem shows that a quasi-homomorphism is also useful.
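Definition 5.1 can be exercised numerically. The following sketch checks the three quasi-homomorphism conditions for a monotonic surjective threshold map from (a finite sample of) $S_{prob}$ to $S_{CSP}$; the sample points and the threshold 0.5 are our own illustrative assumptions. Note that sums (max) stay inside the sample, while products only need the one-sided local-correctness inequality, so closure of the sample under multiplication is not required.

```python
# Checking Definition 5.1 for psi(a) = (a >= 0.5), a monotonic surjective map
# from the probabilistic semiring <[0,1], max, *, 0, 1> to <{F,T}, or, and, F, T>.
S_prob_sample = [0.0, 0.2, 0.5, 0.8, 1.0]   # finite sample of [0,1]
psi = lambda a: a >= 0.5                     # monotonic, surjective threshold

ok = psi(0.0) is False and psi(1.0) is True  # psi(0) = 0~, psi(1) = 1~
for a in S_prob_sample:
    for b in S_prob_sample:
        ok &= psi(max(a, b)) == (psi(a) or psi(b))   # preserves + exactly
        # local correctness: psi(a*b) <=_{S_CSP} psi(a) and psi(b),
        # i.e. psi(a*b) = T forces both psi(a) = T and psi(b) = T
        ok &= (not psi(a * b)) or (psi(a) and psi(b))
print(ok)  # True: psi is a quasi-homomorphism

# ...but not a full homomorphism: products are not preserved exactly.
a, b = 0.5, 0.8
print(psi(a * b), psi(a) and psi(b))  # False True
```

The failing pair at the end (0.5 × 0.8 = 0.4 falls below the threshold while both factors are above it) is exactly why only the inequality, not equality, can hold for products.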
Set $t = (d_2, d_2)$. Clearly $t$ is an optimal solution of $\alpha(P)$ with value $\{q\}$ in $\alpha(P)$, and value $\emptyset$ in $P$. Notice that $\bar{t} = (d_1, d_1)$ is the unique optimal solution of $P$. Since $\alpha(\{a\}) = \{p\} \not\subseteq \{q\}$, there is no optimal solution $\hat{t}$ of $P$ such that $\alpha(\hat{t}) \subseteq \{q\}$.

## 6 Related Work

Our abstraction framework is closely related to the work of Bistarelli et al. [1] and de Givry et al. [4].

## 6.1 Galois Insertion-Based Abstraction

Bistarelli et al. [1] proposed a Galois insertion-based abstraction scheme for soft constraints. The questions investigated here were also studied in [1]; in particular, Theorems 27, 29, and 31 of [1] correspond to our Theorems 4.1, 5.2, and 5.1, respectively. We recall some basic notions concerning abstraction used in [1].

Definition 6.1 (Galois insertion [8]). Let $(C, \sqsubseteq)$ and $(A, \leq)$ be two posets (the concrete and the abstract domain). A *Galois connection* $\langle \alpha, \gamma \rangle : (C, \sqsubseteq) \rightleftarrows (A, \leq)$ is a pair of monotonic mappings $\alpha : C \rightarrow A$ and $\gamma : A \rightarrow C$ such that
$$(\forall x \in C)(\forall y \in A)\quad \alpha(x) \leq y \Leftrightarrow x \sqsubseteq \gamma(y). \tag{6}$$
In this case, we call $\gamma$ the upper adjoint (of $\alpha$), and $\alpha$ the lower adjoint (of $\gamma$). A Galois connection $\langle \alpha, \gamma \rangle : (C, \sqsubseteq) \rightleftarrows (A, \leq)$ is called a *Galois insertion* (of $A$ in $C$) if $\alpha \circ \gamma = id_A$.

Definition 6.2 (abstraction). A mapping $\alpha : S \rightarrow \widetilde{S}$ between two c-semirings is called an *abstraction* if

1. $\alpha$ has an upper adjoint $\gamma$ such that $\langle \alpha, \gamma \rangle : S \rightleftarrows \widetilde{S}$ is a Galois insertion, and
2. $\widetilde{\times}$ is *locally correct* with respect to $\times$, i.e., $(\forall a, b \in S)\ \alpha(a \times b) \leq_{\widetilde{S}} \alpha(a) \mathbin{\widetilde{\times}} \alpha(b)$.

Theorem 27 of [1] gives a sufficient condition for a Galois insertion to preserve optimal solutions. This condition, called *order-preserving*, is defined as follows:

Definition 6.3 ([1]). Given a Galois insertion $\langle \alpha, \gamma \rangle : S \rightleftarrows \widetilde{S}$, $\alpha$ is said to be *order-preserving* if for any two sets $I_1$ and $I_2$, we have
$$\widetilde{\prod}_{x \in I_1}\alpha(x) \leq_{\widetilde{S}} \widetilde{\prod}_{x \in I_2}\alpha(x) \ \Rightarrow\ \prod_{x \in I_1} x \leq_S \prod_{x \in I_2} x. \tag{7}$$

This notion plays an important role in [1]; indeed, several results ([1, Theorems 27, 39, 40, 42]) require this property. The next proposition, however, shows that this property is too restrictive, since an order-preserving Galois insertion is in fact a semiring isomorphism.
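The adjunction condition (6) and the insertion condition $\alpha \circ \gamma = id_A$ are both finitely checkable. The sketch below verifies them for a pair between two small chains; the chains and the mappings are our own illustrative assumptions, not the paper's example.

```python
# Verifying the Galois-connection condition (6) and alpha . gamma = id
# for a pair of maps between two small totally ordered domains.
from itertools import product

C = [0, 1, 2, 3, 4]   # concrete chain, usual <=
A = [0, 1, 2]         # abstract chain, usual <=

alpha = lambda x: min(x, 2)                 # collapse 2, 3, 4 to the abstract top
gamma = lambda y: {0: 0, 1: 1, 2: 4}[y]     # upper adjoint: largest preimage

# (6): alpha(x) <= y  <=>  x <= gamma(y), for all x in C, y in A
galois = all((alpha(x) <= y) == (x <= gamma(y)) for x, y in product(C, A))
insertion = all(alpha(gamma(y)) == y for y in A)
print(galois, insertion)   # True True

# Choosing a gamma that is not the largest preimage breaks (6):
gamma_bad = lambda y: {0: 0, 1: 1, 2: 3}[y]
print(all((alpha(x) <= y) == (x <= gamma_bad(y)) for x, y in product(C, A)))  # False
```

The failing variant shows the characteristic property of the upper adjoint: $\gamma(y)$ must be the greatest concrete element abstracted below $y$, otherwise the equivalence in (6) breaks (here at $x = 4$, $y = 2$).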
## References

[1] S. Bistarelli, P. Codognet, F. Rossi, Abstracting soft constraints: framework, properties, examples, *Artificial Intelligence* 139 (2002) 175-211.

[2] S. Bistarelli, H. Fargier, U. Montanari, F. Rossi, T. Schiex, G. Verfaillie, Semiring-based CSPs and valued CSPs: basic properties and comparison, in: *Over-Constrained Systems*, Lecture Notes in Computer Science, Vol. 1106, Springer, Berlin, 1996, pp. 111-150.

[3] S. Bistarelli, U. Montanari, F. Rossi, Semiring-based constraint solving and optimization, *Journal of the ACM* 44 (2) (1997) 201-236.

[4] S. de Givry, G. Verfaillie, T. Schiex, Bounding the optimum of constraint optimization problems, in: G. Smolka (Ed.), *Proc. CP-97*, Lecture Notes in Computer Science, Vol. 1330, Springer, Berlin, 1997, pp. 405-419.

[5] D. Dubois, H. Fargier, H. Prade, The calculus of fuzzy restrictions as a basis for flexible constraint satisfaction, in: *Proc. IEEE International Conference on Fuzzy Systems*, IEEE, 1993, pp. 1131-1136.

[6] H. Fargier, J. Lang, Uncertainty in constraint satisfaction problems: a probabilistic approach, in: *Proc. European Conference on Symbolic and Qualitative Approaches to Reasoning and Uncertainty (ECSQARU)*, Lecture Notes in Computer Science, Vol. 747, Springer, Berlin, 1993, pp. 97-104.

[7] E.C. Freuder, R.J. Wallace, Partial constraint satisfaction, *Artificial Intelligence* 58 (1992) 21-70.

[8] G. Gierz, K.H. Hofmann, K. Keimel, J.D. Lawson, M.W. Mislove, D.S. Scott, *A Compendium of Continuous Lattices*, Springer, Berlin, 1980.

[9] A.K. Mackworth, Constraint satisfaction, in: S.C. Shapiro (Ed.), *Encyclopedia of AI*, Vol. 1, 2nd edition, Wiley, New York, 1992, pp. 285-293.

[10] U. Montanari, Networks of constraints: fundamental properties and applications to picture processing, *Information Sciences* 7 (1974) 95-132.

[11] A. Rosenfeld, R. Hummel, S. Zucker, Scene labeling by relaxation operations, *IEEE Transactions on Systems, Man and Cybernetics* 6 (6) (1976).

[12] Zs. Ruttkay, Fuzzy constraint satisfaction, in: *Proc. 3rd IEEE International Conference on Fuzzy Systems*, 1994, pp. 1263-1268.

[13] T. Schiex, Possibilistic constraint satisfaction problems, or "How to handle soft constraints?", in: *Proc. UAI-92*, 1992, pp. 269-275.

[14] T. Schiex, H. Fargier, G. Verfaillie, Valued constraint satisfaction problems: hard and easy problems, in: *Proc. IJCAI-95*, Montreal, Quebec, Morgan Kaufmann, San Mateo, CA, 1995, pp. 631-639.

[15] L. Shapiro, R. Haralick, Structural descriptions and inexact matching, *IEEE Transactions on Pattern Analysis and Machine Intelligence* 3 (1981) 504-519.
Compared with classical CSPs, SCSPs are usually more difficult to process and to solve. This is mainly caused by the complexity of the underlying semiring structure, so working on a simplified version of the given problem is worthwhile. Given a concrete SCSP, the idea is to obtain an abstract one by changing the semiring values of the constraints without changing the structure of the problem. Once the abstracted version of a given problem is available, one can first process the abstracted version and then bring the information obtained back to the original problem. The main objective is to find an optimal solution, or a reasonable estimate of one, for the original problem.

The translation from a concrete problem to its abstracted version is established via a mapping between the two semirings. More concretely, suppose $P$ is an SCSP over $S$, and $\widetilde{S}$ is another semiring (possibly simpler than $S$). Given a mapping $\alpha : S \rightarrow \widetilde{S}$, we can translate the concrete problem $P$ into another problem, $\alpha(P)$, over $\widetilde{S}$ in a natural way. We then ask: when is an optimal solution of the concrete problem $P$ also optimal in the abstract problem $\alpha(P)$? And, given an optimal solution of $\alpha(P)$, when and how can we find a reasonable estimate of an optimal solution of $P$? The answers to these questions determine how much useful information about the concrete problem can be obtained by solving the abstract one.

This paper is devoted to the investigation of the above questions. These questions were first studied by Bistarelli, Codognet and Rossi [1], who established a Galois insertion-based abstraction framework for soft constraint problems. In particular, they showed [1, Theorem 27] that if $\alpha$ is an *order-preserving* Galois insertion, then optimal solutions of the concrete problem are also optimal in the abstract problem.
This sufficient condition, however, turns out to be equivalent to saying that $\alpha$ is a semiring isomorphism (see Proposition 6.1), and is hence too restrictive. Theorem 29 of [1] concerns computing bounds that approximate an optimal solution of the concrete problem. The statement of this theorem as given there is incorrect, since a counter-example (see Soft Problem 4 in this paper) shows that the result holds only conditionally.

This paper shows that semiring homomorphisms play an important role in soft constraint abstraction. More precisely, we show (Theorem 4.1) that a mapping preserves optimal solutions if and only if it is an *order-reflecting* semiring homomorphism, where a mapping $\alpha : S \rightarrow \widetilde{S}$ is *order-reflecting* if for any $a, b \in S$, $\alpha(a) <_{\widetilde{S}} \alpha(b)$ implies $a <_S b$. Moreover, for a semiring homomorphism $\alpha$ and a problem $P$ over $S$, if $t$ is optimal in $\alpha(P)$, then there is an optimal solution $\bar{t}$ of $P$ such that $\bar{t}$ has the same value as $t$ in $\alpha(P)$ (see Theorem 5.1).

This paper is organized as follows. First, in Section 2 we give a summary of the theory of soft constraints. The notion of *$\alpha$-translation* of semiring CSPs is introduced in Section 3, where we show that $\alpha$ preserves problem ordering if and only if $\alpha$ is a semiring homomorphism. Section 4 discusses when a translation $\alpha$ preserves optimal solutions, i.e., when all optimal solutions of the concrete problem are also optimal in the abstract problem. In Section 5, we discuss, given
Definition 2.5 (constraints [3]). Given a constraint system $CS = \langle S, D, V \rangle$, where $S = \langle S, +, \times, \mathbf{0}, \mathbf{1} \rangle$, a *constraint* over $CS$ is a pair $\langle def, con \rangle$ where

- $con$ is a finite subset of $V$, called the *type* of the constraint;
- $def : D^k \rightarrow S$ is called the *value* of the constraint, where $k = |con|$ is the cardinality of $con$.

In the above definition, if $def : D^k \rightarrow S$ is the maximal constant function, namely $def(t) = \mathbf{1}$ for each $k$-tuple $t$, we call $\langle def, con \rangle$ the *trivial* constraint with type $con$.

Definition 2.6 (constraint ordering [3]). For two constraints $c_1 = \langle def_1, con \rangle$ and $c_2 = \langle def_2, con \rangle$ with type $con$ over $CS = \langle S, D, V \rangle$, we say $c_1$ is *constraint below* $c_2$, noted $c_1 \sqsubseteq_S c_2$, if for all $|con|$-tuples $t$, $def_1(t) \leq_S def_2(t)$.

This relation can be extended to sets of constraints in an obvious way. Given two (possibly infinite) sets of constraints $C_1$ and $C_2$, assuming that both contain no two constraints of the same type, we say $C_1$ is *constraint below* $C_2$, noted $C_1 \sqsubseteq_S C_2$, if for each type $con \subseteq V$ one of the following two conditions holds:

(1) there exist two constraints $c_1$ and $c_2$ with type $con$ in $C_1$ and $C_2$ respectively, such that $c_1 \sqsubseteq_S c_2$;

(2) $C_2$ contains no constraint of type $con$, or $C_2$ contains the trivial constraint of type $con$.

Two sets of constraints $C_1$ and $C_2$ are called (constraint) *equal* if $C_1 \sqsubseteq_S C_2$ and $C_2 \sqsubseteq_S C_1$; in this case, we write $C_1 = C_2$. This definition is in accordance with the basic requirement that adding a trivial constraint to a set of constraints $C$ should not change the meaning of $C$.

Definition 2.7 (soft constraint problem [3]). Given a constraint system $CS = \langle S, D, V \rangle$, a *soft constraint satisfaction problem* (SCSP) over $CS$ is a pair $\langle C, con \rangle$, where $C$ is a finite set of constraints and $con$, the type of the problem, is a finite subset of $V$. We assume that no two constraints with the same type appear in $C$.
Naturally, given two SCSPs $P_1 = \langle C_1, con \rangle$ and $P_2 = \langle C_2, con \rangle$, we say $P_1$ is *constraint below* $P_2$, noted $P_1 \sqsubseteq_S P_2$, if $C_1 \sqsubseteq_S C_2$. Also, $P_1$ and $P_2$ are said to be (constraint) *equal* if $C_1$ and $C_2$ are constraint equal; in this case, we also write $P_1 = P_2$. We call this the *constraint ordering* on sets of SCSPs with type $con$ over $CS$. Clearly, two SCSPs are constraint equal if and only if they differ only in trivial constraints.

To give a formal description of the solution of an SCSP, we need two additional concepts.

Definition 2.8 (combination [3]). Given a finite set of constraints $C = \{\langle def_i, con_i \rangle : i = 1, \cdots, n\}$, their *combination* $\bigotimes C$ is the constraint $\langle def, con \rangle$ defined by $con = \bigcup_{i=1}^{n} con_i$ and $def(t) = \prod_{i=1}^{n} def_i(t|^{con}_{con_i})$, where by $t|^{X}_{Y}$ we mean the projection of tuple $t$, which is defined over the set of variables $X$, onto the set of variables $Y \subseteq X$.
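Definition 2.8 can be made concrete with a small executable sketch. The dict-based encoding of constraints, the two-element domain, and the fuzzy semiring $\langle [0,1], \max, \min, 0, 1 \rangle$ below are our own illustrative assumptions.

```python
# Combination (Definition 2.8): constraints are pairs (con, def) where def maps
# tuples over con to semiring values; here times = min (fuzzy semiring).
from itertools import product

D = [0, 1]   # the shared variable domain

def project(t, src_vars, dst_vars):
    """t|^{src}_{dst}: restrict a tuple over src_vars to dst_vars."""
    pos = {v: i for i, v in enumerate(src_vars)}
    return tuple(t[pos[v]] for v in dst_vars)

def combine(constraints, times=min):
    """The combination: type is the union of types, value is the product of values."""
    con = tuple(sorted({v for c_vars, _ in constraints for v in c_vars}))
    table = {}
    for t in product(D, repeat=len(con)):
        vals = [c_def[project(t, con, c_vars)] for c_vars, c_def in constraints]
        value = vals[0]
        for v in vals[1:]:
            value = times(value, v)
        table[t] = value
    return con, table

c1 = (('x', 'y'), {(0, 0): 1.0, (0, 1): 0.5, (1, 0): 0.2, (1, 1): 0.0})
c2 = (('y',),     {(0,): 0.6, (1,): 1.0})
con, table = combine([c1, c2])
print(con)            # ('x', 'y')
print(table[(0, 0)])  # min(1.0, 0.6) = 0.6
```

Each tuple over the union type is projected down to each constraint's own type before its value is looked up, exactly mirroring $def(t) = \prod_i def_i(t|^{con}_{con_i})$.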
Definition 2.9 (projection [3]). Given a constraint $c = \langle def, con \rangle$ and a subset $I$ of $V$, the *projection* of $c$ over $I$, denoted $c \Downarrow_I$, is the constraint $\langle def', con' \rangle$ where $con' = con \cap I$ and $def'(t') = \sum\{def(t) : t|^{con}_{con \cap I} = t'\}$. In particular, if $I = \emptyset$, then $c \Downarrow_\emptyset : \{\varepsilon\} \rightarrow S$ maps the 0-tuple $\varepsilon$ to $\sum\{def(t) : t \text{ is a tuple with type } con\}$, which is the sum of the values associated to all $|con|$-tuples.

Now the concept of solution can be defined as the projection of the combination of all constraints over the type of the problem.

Definition 2.10 (solution and optimal solution). The *solution* of an SCSP $P = \langle C, con \rangle$ is a constraint of type $con$ defined as:
$$Sol(P) = \big(c^* \times \bigotimes C\big) \Downarrow_{con} \tag{1}$$
where $c^*$ is the maximal (trivial) constraint with type $con$. Writing $Sol(P) = \langle def, con \rangle$, a $|con|$-tuple $t$ is an *optimal solution* of $P$ if $def(t)$ is maximal, that is, there is no $t'$ such that $def(t') >_S def(t)$. We write $Opt(P)$ for the set of optimal solutions of $P$. For any $|con|$-tuple $t$, we also write $Sol(P)(t)$ for $def(t)$.

## 3 Translation And Semiring Homomorphism

Let $S = \langle S, +, \times, \mathbf{0}, \mathbf{1} \rangle$ and $\widetilde{S} = \langle \widetilde{S}, \widetilde{+}, \widetilde{\times}, \widetilde{\mathbf{0}}, \widetilde{\mathbf{1}} \rangle$ be two c-semirings, and let $\alpha : S \rightarrow \widetilde{S}$ be an arbitrary mapping from $S$ to $\widetilde{S}$. Also let $D$ be a nonempty finite set and let $V$ be an ordered set of variables. Fix a type $con \subseteq V$. We now investigate the relation between problems over $S$ and those over $\widetilde{S}$.

Definition 3.1 (translation). Let $P = \langle C, con \rangle$ be an SCSP over $S$ where $C = \{c_1, \cdots, c_n\}$, $c_i = \langle def_i, con_i \rangle$, and $def_i : D^{|con_i|} \rightarrow S$. By applying $\alpha$ to each constraint respectively, we get an SCSP $\langle \widetilde{C}, con \rangle$ over $\widetilde{S}$, called the *$\alpha$-translated problem* of $P$, which is defined by $\widetilde{C} = \{\widetilde{c}_1, \cdots, \widetilde{c}_n\}$, $\widetilde{c}_i = \langle \widetilde{def}_i, con_i \rangle$, and $\widetilde{def}_i = \alpha \circ def_i : D^{|con_i|} \rightarrow \widetilde{S}$.

<image>

We write $\alpha(P)$ for the $\alpha$-translated problem of $P$. Without loss of generality, in what follows we assume $\alpha(\mathbf{0}) = \widetilde{\mathbf{0}}$ and $\alpha(\mathbf{1}) = \widetilde{\mathbf{1}}$.
We say $\alpha$ *preserves problem ordering* if for any two SCSPs $P, Q$ over $S$, we have
$$Sol(P) \sqsubseteq_S Sol(Q) \ \Rightarrow\ Sol(\alpha(P)) \sqsubseteq_{\widetilde{S}} Sol(\alpha(Q)). \tag{2}$$
The following theorem then characterizes when $\alpha$ preserves problem ordering.
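Definitions 2.10 and 3.1 can be combined in one runnable sketch: compute $Sol(P)$ for a tiny fuzzy SCSP, then $\alpha$-translate it to the boolean semiring and solve again. The dict-based constraint encoding, the problem instance, and the threshold $\alpha$ are our own illustrative assumptions.

```python
# Solution (Definition 2.10) and alpha-translation (Definition 3.1) of a tiny SCSP.
from itertools import product

D = [0, 1]

def project(t, src_vars, dst_vars):
    pos = {v: i for i, v in enumerate(src_vars)}
    return tuple(t[pos[v]] for v in dst_vars)

def solution(constraints, con, plus=max, times=min):
    """Sol(P): combine all constraints, then project onto the problem type con."""
    all_vars = tuple(sorted({v for c_vars, _ in constraints for v in c_vars}
                            | set(con)))
    sol = {}
    for t in product(D, repeat=len(con)):
        best = None
        for full in product(D, repeat=len(all_vars)):
            if project(full, all_vars, con) != t:
                continue
            val = None
            for c_vars, c_def in constraints:
                v = c_def[project(full, all_vars, c_vars)]
                val = v if val is None else times(val, v)   # combination
            best = val if best is None else plus(best, val)  # projection sum
        sol[t] = best
    return sol

def translate(constraints, alpha):
    """alpha(P): apply alpha to every constraint value, keeping the structure."""
    return [(c_vars, {t: alpha(v) for t, v in c_def.items()})
            for c_vars, c_def in constraints]

P = [(('x', 'y'), {(0, 0): 1.0, (0, 1): 0.5, (1, 0): 0.2, (1, 1): 0.0}),
     (('y',),     {(0,): 0.6, (1,): 1.0})]
print(solution(P, ('x',)))                    # {(0,): 0.6, (1,): 0.2}
abstract = translate(P, lambda v: v >= 0.5)   # boolean semiring <{F,T}, or, and>
print(solution(abstract, ('x',),
               plus=lambda a, b: a or b, times=lambda a, b: a and b))
```

In this instance the optimal solution $(x = 0)$ of the concrete fuzzy problem is also the unique solution surviving in the translated boolean problem, as Theorem 5.1 leads one to expect for a homomorphic $\alpha$.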
Bayesian Approach to Neuro-Rough Models for Modelling HIV

Tshilidzi Marwala and Bodie Crossingham
University of the Witwatersrand
Private Bag X3, Wits, 2050
South Africa
e-mail: t.marwala@ee.wits.ac.za

This paper proposes a new neuro-rough model for modelling the risk of HIV from demographic data. The model is formulated using a Bayesian framework and trained using the Markov chain Monte Carlo method and the Metropolis criterion. When the model was tested on estimating the risk of HIV infection from demographic data, it gave an accuracy of 62%, as opposed to 58% obtained from a Bayesian rough set model trained using Markov chain Monte Carlo and 62% obtained from a Bayesian multi-layered perceptron (MLP) model trained using hybrid Monte Carlo. The proposed model is thus able to combine the accuracy of the Bayesian MLP model with the transparency of the Bayesian rough set model.

Keywords: Neuro-rough model, multi-layered perceptron, Bayesian, HIV modelling

## Introduction

The role of machine learning is to be able to make predictions given a set of inputs. However, another role is to extract the rules that govern interrelationships within the data. Machine learning tools such as neural networks are quite good at making predictions
The biases in equation 12 may be absorbed into the weights by including extra input variables set permanently to 1, making $x_0 = 1$ and $z_0 = 1$, to give:
$$y_k = f_{\mathrm{outer}}\Bigg(\sum_{j=0}^{M}\gamma_{kj}(G_x, R, N_r)\, f_{\mathrm{inner}}\Bigg(\sum_{i=0}^{d} w_{ji}^{(1)} x_i\Bigg)\Bigg)\tag{13}$$
The function $f_{\mathrm{outer}}(\cdot)$ may be logistic, linear, or sigmoid, while $f_{\mathrm{inner}}$ is a hyperbolic tangent function. The equation may be represented schematically by Figure 1.

<image>
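A forward pass through equation 13 can be sketched numerically. All shapes and values below are our own illustrative assumptions (in particular, the $\gamma_{kj}$ weights here are plain numbers standing in for the rough-set-dependent coefficients $\gamma_{kj}(G_x, R, N_r)$), not the trained model.

```python
# Numerical sketch of equation 13: tanh inner units, logistic outer unit,
# with the biases absorbed via x_0 = 1 and z_0 = 1.
import math

def forward(x, W, gamma):
    """y_k = f_outer( sum_j gamma_kj * f_inner( sum_i w_ji^(1) x_i ) )."""
    hidden = [math.tanh(sum(w_ji * x_i for w_ji, x_i in zip(row, x)))
              for row in W]
    hidden[0] = 1.0   # absorbed hidden bias: z_0 = 1
    return [1.0 / (1.0 + math.exp(-sum(g_kj * h for g_kj, h in zip(g_row, hidden))))
            for g_row in gamma]

x = [1.0, 0.2, -0.4]          # x_0 = 1 absorbs the input bias; d = 2
W = [[0.0, 0.0, 0.0],         # (M+1) x (d+1) first-layer weights (made up)
     [0.1, -0.3, 0.5],
     [0.4, 0.2, -0.1]]
gamma = [[0.3, -0.2, 0.7]]    # one output k, gamma_kj stand-ins
y = forward(x, W, gamma)
print(len(y), 0.0 < y[0] < 1.0)   # 1 True: logistic output lies in (0, 1)
```

The logistic outer function keeps the output in $(0,1)$, which is what lets it be read as a plausibility of HIV-positive status later in the paper.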
## Bayesian Training Of Rough Sets

The Bayesian framework can be written as (Marwala, 2007a,b; Bishop, 2006):
$$P(M \mid D) = \frac{P(D \mid M)\, P(M)}{P(D)} \quad \text{where} \quad M = \begin{cases} w \\ G_x \\ N_r \\ R \end{cases}\tag{14}$$
The parameter $P(M \mid D)$ is the probability of the rough set model given the observed data, $P(D \mid M)$ is the probability of the data given the assumed rough set model, also called the likelihood function, $P(M)$ is the prior probability of the rough set model, and $P(D)$ is the probability of the data, also called the evidence. The evidence can be treated as a normalisation constant. The likelihood function and the resulting error may be estimated as follows:
$$P(D \mid M) = \frac{1}{z_1}\exp(-error) = \frac{1}{z_1}\exp\big\{A(w, N_r, R, G_x) - 1\big\}\tag{15}$$
$$error = \sum_{i}^{L}\sum_{k}^{K}\Bigg(t_{ik} - f_{\mathrm{outer}}\Bigg(\sum_{j=0}^{M}\gamma_{kj}(G_x, R)\, f_{\mathrm{inner}}\Bigg(\sum_{i=0}^{d} w_{ji}^{(1)} x_i\Bigg)\Bigg)\Bigg)^2\tag{16}$$
Here $z_1$ is the normalisation constant, $L$ is the number of outputs, and $K$ is the number of training examples. The prior probability in this problem is linked to the concept of reducts, explained earlier: it encodes the prior knowledge that the best rough set model is the one with the minimum number of rules ($N_r$) and that the best network is the one whose weights are of the same order of magnitude. Therefore, the prior probability may be written as follows:
$$P(M) = \frac{1}{z_2}\exp\Big\{-\alpha N_r - \beta \sum w^2\Big\}\tag{17}$$
From equation 18 and by following the rules of probability theory, the distribution of the output parameter $y$ is written as (Marwala, 2007b):
$$p(y \mid x, D) = \int p(y \mid x, M)\, p(M \mid D)\, dM\tag{19}$$
Equation 19 depends on equation 18 and is difficult to solve analytically due to the relatively high dimension of the combined granule and weight space. Thus the integral in equation 19 may be approximated as follows:
$$\tilde{y} \cong \frac{1}{L}\sum_{i=Z}^{Z+L-1} F(M_i)\tag{20}$$
Here $F$ is a mathematical model that gives the output given the input, $\tilde{y}$ is the average prediction of the Bayesian neuro-rough set model over the samples $M_i$, $Z$ is the number of initial states that are discarded in the hope of reaching the stationary posterior distribution described in equation 18, and $L$ is the number of retained states. In this paper, the MCMC method is implemented by sampling a stochastic process consisting of random variables $\{gw_1, gw_2, \ldots, gw_n\}$: random changes are introduced to the granule-weight vector $\{gw\}$, and each proposed state is either accepted or rejected according to the Metropolis et al. algorithm, given the difference in posterior probability between the two states in transition (Metropolis et al., 1953). This algorithm ensures that states with high probability form the majority of the Markov chain, and is mathematically represented as:
$$\text{If } P(M_{n+1} \mid D) > P(M_n \mid D), \text{ then accept } M_{n+1};\tag{21}$$
$$\text{else accept } M_{n+1} \text{ if } \frac{P(M_{n+1} \mid D)}{P(M_n \mid D)} > \xi, \text{ where } \xi \in (0, 1) \text{ is sampled uniformly};\tag{22}$$
else reject and randomly generate another model $M_{n+1}$.
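The acceptance rule in equations 21-22 can be sketched on a toy one-dimensional posterior instead of the full granule-weight space. The Gaussian-shaped posterior, the step size, and the burn-in length below are our own illustrative assumptions; a rejected proposal here retains the current state, which is the standard Metropolis variant of the rule.

```python
# Toy Metropolis sampler following the acceptance rule of equations 21-22.
import math
import random

random.seed(0)

def posterior(m):
    # Unnormalised toy stand-in for P(M | D): a Gaussian bump at m = 0.
    return math.exp(-0.5 * m * m)

samples, m = [], 0.0
for _ in range(2000):
    m_new = m + random.uniform(-0.5, 0.5)   # random change to the current state
    p_old, p_new = posterior(m), posterior(m_new)
    # eq. 21: accept outright if more probable;
    # eq. 22: otherwise accept with probability p_new / p_old (compared to xi)
    if p_new > p_old or random.random() < p_new / p_old:
        m = m_new
    samples.append(m)

Z = 500                      # discard Z initial states, as in equation 20
kept = samples[Z:]
mean = sum(kept) / len(kept)
print(len(kept))             # 1500 retained states
print(abs(mean) < 1.0)       # True: the chain centres near the posterior mode
```

Because the evidence $P(D)$ cancels in the ratio $P(M_{n+1} \mid D)/P(M_n \mid D)$, only the unnormalised posterior is ever needed, which is what makes the scheme practical for the granule-weight space.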
Basically, the steps described above may be summarised as follows:

Step 1: Randomly generate the granule-weight vector $\{gw\}_n$
Step 2: Calculate the posterior probability $p_n$ using equation 18 and vector $\{gw\}_n$
Step 3: Introduce random changes to vector $\{gw\}_n$ to form vector $\{gw\}_{n+1}$
Step 4: Calculate the posterior probability $p_{n+1}$ using equation 18 and vector $\{gw\}_{n+1}$
Step 5: Accept or reject vector $\{gw\}_{n+1}$ using equations 21 and 22
Step 6: Go to step 3 and repeat the process until enough samples of the distribution in equation 18 have been obtained

## Experimental Investigation: Modelling Of HIV

The proposed method is applied to create a model that uses demographic characteristics to estimate the risk of HIV. In the last 20 years, over 60 million people have been infected with HIV, and of those cases, 95% are in developing countries (Lasry et al., 2007). HIV has been identified as the cause of AIDS. Early studies on HIV/AIDS focused on individual characteristics and behaviours in determining HIV risk, which Fee and Krieger (1993) refer to as biomedical individualism. It has since been argued that studying the distribution of health outcomes and their social determinants is of more importance, and this is referred to as social epidemiology (Poundstone et al., 2004). This study uses individual characteristics as well as social and demographic factors to determine the risk of HIV, using neuro-rough models formulated with a Bayesian approach and trained using the Markov chain Monte Carlo method. Computational intelligence techniques have previously been used extensively to analyse HIV: Leke et al. (2006, 2007) used autoencoder network classifiers, inverse neural networks, as
well as conventional feed-forward neural networks to estimate HIV risk from demographic factors. Although good accuracy was achieved with the autoencoder method, it is disadvantageous due to its "black box" nature, that is, it is not transparent. To improve transparency, Bayesian rough set theory (RST) was proposed to forecast and interpret the causal effects of HIV (Marwala and Crossingham, 2007), achieving good accuracy and identifying relevant rules that govern the relationships between demographic characteristics and HIV. Rough sets have been used in various biomedical and engineering applications (Ohrn, 1999; Peña et al., 1999; Tay and Shen, 2003; Golan and Ziarko, 1995), but in most applications RST is used primarily for prediction. Rowland et al. (1998) compared RST and neural networks for the prediction of ambulation following spinal cord injury; although the neural network method produced more accurate results, its "black box" nature makes it impractical for rule extraction problems. Poundstone et al. (2004) related demographic properties to the spread of HIV; in their work they justified the use of demographic properties to create a model that predicts HIV from a given database.

The data set used in this paper was obtained from the South African antenatal sero-prevalence survey of 2001 (Department of Health, 2001). The data was obtained through questionnaires completed by pregnant women attending selected public clinics, conducted concurrently across all nine provinces in South Africa. The six demographic variables considered are: race, age of mother, education, gravidity, parity, and age of father, with the outcome or decision being either HIV positive or negative. The HIV status is the decision, represented in binary form as either 0 or 1, with 0 representing HIV negative and 1 representing HIV positive. The input data was discretised into four partitions. This number was chosen as it gave a good balance
between computational efficiency and accuracy. The parents' ages are given and discretised accordingly; education is given as an integer, where 13 is the highest level, indicating tertiary education. Gravidity is defined as the number of times a woman has been pregnant, whereas parity is defined as the number of times she has given birth. It must be noted that multiple births during one pregnancy are indicated with a parity of one. Gravidity and parity also provide a good indication of the reproductive health of pregnant women in South Africa. The neuro-rough models were trained by sampling in the granule and weight space and accepting or rejecting samples using the Metropolis et al. algorithm (1953).

As with many surveys, there were incomplete entries, and such cases were removed from the data set. The second irregularity was false information, for example an instance where gravidity (number of pregnancies) was zero and parity (number of births) was at least one, which is impossible because for a woman to have given birth she must necessarily have been pregnant. Such cases were also removed. Only 12945 cases remained from a total of 13087. The input data was therefore the demographic characteristics explained earlier, and the output was the plausibility of HIV, with 1 representing 100% plausibility that a person is HIV positive and -1 indicating 100% plausibility of HIV negative. The neuro-rough model constructed had 7 inputs, 5 hidden nodes, a hyperbolic tangent function in the inner layer ($f_{\mathrm{inner}}$) and a logistic function in the outer layer ($f_{\mathrm{outer}}$). When training the neuro-rough models using Markov chain Monte Carlo, 500 samples were accepted and retained, meaning 500 sets of rules and weights, where each set contained between 50 and 550 rules with an average of 88
<image>

The average accuracy achieved was 62%. This can be compared with the accuracy of a Bayesian multi-layered perceptron trained using hybrid Monte Carlo by Tim and Marwala (2006), which gave an accuracy of 62%, and Bayesian rough sets by Marwala and Crossingham (2007), which gave an accuracy of 58%, all on the same database. The results show that incorporating rough sets into the multi-layered perceptron neural network to form the neuro-rough model does not compromise the results obtained from a traditional Bayesian neural network, while adding a dimension of rules. When the interaction between the neural network and the rough set components was analysed through the average magnitudes of the weights ($w$) and the
magnitudes of the γ **which is the output of the rough set model, it was found that the** rough set model contributes (average γ **of 0.43) to the neuro-rough set model less than** the neural network component (average w **of 0.49). The receiver operator characteristics** (ROC) curve was also generated and the area under the curve was calculated to be 0.59 and is shown in Figure 4. This shows that the neuro-rough model proposed is able to estimate the HIV status. <image> Training neuro-rough model with Bayesian framework allows us to determine how confident we are on the HIV status we predict. For example, Figure 5 shows that the average HIV status predicted is 0.8 indicating that a person is probably HIV positive.
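The area under the ROC curve reported above can be computed directly from model scores via the equivalent Mann-Whitney statistic: the probability that a randomly chosen positive case scores above a randomly chosen negative case, counting ties as half. The labels and scores below are made-up illustrative numbers, not the study's data.

```python
def auc(labels, scores):
    """Area under the ROC curve via the Wilcoxon/Mann-Whitney statistic."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Count positive-negative pairs ranked correctly (ties count 0.5).
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example of an imperfect classifier (illustrative numbers only).
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.4, 0.35, 0.6, 0.3, 0.1]
area = auc(labels, scores)  # 7 of 9 pairs ranked correctly
```

A value near 0.5 indicates chance-level ranking; the model's 0.59 sits modestly above that.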
given input parameters but are not sufficiently transparent to allow the extraction of the linguistic rules that govern their predictions. Consequently, they are called 'black-box' tools because they do not give a transparent view of the rules that govern the relationships that make predictions possible. Rough set theory (RST) was introduced by Pawlak (1991) and is a mathematical tool that deals with vagueness and uncertainty and is based on a set of rules expressed in terms of linguistic variables. Rough sets are of fundamental importance to computational intelligence and cognitive science and are highly applicable to the tasks of machine learning and decision analysis, especially the analysis of decisions in which there are inconsistencies. Because they are rule-based, rough sets are very transparent, but their predictions are not as accurate as those of other machine learning tools such as neural networks, and they are most certainly not universal approximators. It can thus be concluded that in machine learning there is always a trade-off between prediction accuracy and transparency. This paper proposes a combined architecture that takes elements from both rough sets and multi-layered perceptron neural networks. It is therefore postulated that this architecture will give a balanced view of the data in terms of both transparency and accuracy. Rough sets are based on lower and upper approximations of decision classes (Inuiguchi and Miyajima, 2006). RST is often perceived as competing with fuzzy set theory (FST), but it in fact complements it. One of the advantages of RST is that it does not require a priori knowledge about the data set, and it is for this reason that statistical methods are not
The variance of the distribution shown, which is computed from the 500 retained samples, gives us some measure of the spread of the probability distribution of that prediction. This in essence indicates that the Bayesian formulation allows us to interpret the predictions of neuro-rough models in probabilistic terms, as they can be viewed as a probability distribution.

<image>

## Rule Extraction

Once the Bayesian neuro-rough model was applied to the HIV data, unique distinguishable cases and indiscernible cases were extracted. The data set of 12945 cases represents only 452 unique cases out of a possible 4096 unique attribute combinations. The lower approximation cases are rules that always hold (definite cases), while the upper approximation cases can only be stated with a certain plausibility. An example of a rule extracted using the approach in this paper is: if Race = AF and Mother's Age = Young and Education = Secondary and Gravidity = Low and Parity = Low and Father's Age = Young, then $\gamma = -0.75$, meaning probably HIV negative. This demonstrates that
the Bayesian neuro-rough model allows us to extract rules that can be represented in linguistic terms.

## Conclusion

A neuro-rough model was formulated in a Bayesian framework and trained using the Markov Chain Monte Carlo method. The model is able to balance the transparency of the rough set model with the accuracy of neural networks. When implemented for HIV estimation it gives 62% accuracy, compared to 62% for Bayesian multi-layered networks trained using hybrid Monte Carlo and 59% for Bayesian rough set models.

## References

1. Bishop, C.M. 2006 *Pattern Recognition and Machine Learning*. Springer, Berlin, Germany.
2. Department of Health, 2001 *National HIV and syphilis sero-prevalence survey of women attending public antenatal clinics in South Africa*. http://www.info.gov.za/otherdocs/2002/hivsurvey01.pdf.
3. Deja, A. & Peszek, P. 2003 Applying rough set theory to multi stage medical diagnosing. *Fundamenta Informaticae*, 54, 387–408.
4. Fee, E. & Krieger, N. 1993 Understanding AIDS: Historical interpretations and the limits of biomedical individualism. *American Journal of Public Health*, 83, 1477–1486.
14. Marwala, T. 2007b *Computational Intelligence for Modelling Complex Systems*. Research India Publishers (in press).
15. Marwala, T. & Crossingham, B. 2007 Bayesian approach to rough set. *arXiv* 0704.3433.
16. Leke, B.B., Marwala, T., Tim, T. & Lagazio, M. 2006 Prediction of HIV status from demographic data using neural networks. In *Proceedings of the IEEE International Conference on Systems, Man and Cybernetics*, Taiwan, pp. 2339–2444.
17. Leke, B.B., Marwala, T. & Tettey, T. 2007 Using inverse neural network for HIV adaptive control. *International Journal of Computational Intelligence Research*, 3, 11–15.
18. Metropolis, N., Rosenbluth, A.W., Rosenbluth, M.N., Teller, A.H. & Teller, E. 1953 Equations of state calculations by fast computing machines. *Journal of Chemical Physics*, 21, 1087–1092.
19. Nishino, T., Nagamachi, M. & Tanaka, H. 2006 Variable precision Bayesian rough set model and its application to human evaluation data. *Proceedings of SPIE - The International Society for Optical Engineering*, 6104, 294–303.
20. Ohrn, A. 1999 Discernibility and rough sets in medicine: tools and applications. *PhD Thesis*, Department of Computer and Information Science, Norwegian University of Science and Technology.
21. Ohrn, A. & Rowland, T. 2000 Rough sets: A knowledge discovery technique for multifactorial medical outcomes. *American Journal of Physical Medicine and Rehabilitation*, 79, 100–108.
22. Pawlak, Z. 1991 *Rough Sets: Theoretical Aspects of Reasoning about Data*. Kluwer Academic Publishers.
23. Peña, J., Létourneau, S. & Famili, A. 1999 Application of rough sets algorithms to prediction of aircraft component failure. In *Proceedings of the Third International Symposium on Intelligent Data Analysis*, Amsterdam.
24. Poundstone, K.E., Strathdee, S.A. & Celentano, D.D. 2004 The social epidemiology of human immunodeficiency virus/acquired immunodeficiency syndrome. *Epidemiologic Reviews*, 26, 22–35.
25. Rowland, T., Ohno-Machado, L. & Ohrn, A. 1998 Comparison of multiple prediction models for ambulation following spinal cord injury. *In Chute*, 31, 528–532.
26. Slezak, D. & Ziarko, W. 2005 The investigation of the Bayesian rough set model. *International Journal of Approximate Reasoning*, 40 (1–2), 81–91.
27. Tim, T.N. & Marwala, T. 2006 Computational intelligence methods for risk assessment of HIV. In *Imaging the Future Medicine, Proceedings of the IFMBE*, Vol. 14, pp. 3581–3585, Springer-Verlag, Berlin Heidelberg. Eds. Sun I. Kim and Tae Suk Sah.
28. Tay, F.E.H. & Shen, L. 2003 Fault diagnosis based on rough set theory. *Engineering Applications of Artificial Intelligence*, 16, 39–43.
29. Witlox, F. & Tindemans, H. 2004 The application of rough sets analysis in activity based modelling: opportunities and constraints. *Expert Systems with Applications*, 27, 585–592.
## Rough Set Theory

Rough set theory deals with the approximation of sets that are difficult to describe with the available information (Ohrn and Rowland, 2000). It deals predominantly with the classification of imprecise, uncertain or incomplete information. Some concepts that are fundamental to RST are given in the next few sections. The data are represented using an information table; an example for the HIV data set, up to the i-th object, is given in Table 1:

|        | Race | Mother's Age | Education | Gravidity | Parity | Father's Age | HIV Status |
|--------|------|--------------|-----------|-----------|--------|--------------|------------|
| Obj(1) | 1    | 32           | 1         | 1         | 2      | 34           | 0          |
| Obj(2) | 2    | 27           | 13        | 2         | 1      | 28           | 1          |
| Obj(3) | 2    | 25           | 8         | 2         | 0      | 23           | 1          |
| ...    | .    | .            | .         | .         | .      | .            | .          |
| Obj(i) | 3    | 27           | 4         | 3         | 1      | 22           | 0          |

Once the information table is obtained, the data are discretised into partitions as mentioned earlier. An information system can be understood as a pair Λ = (U, A), where U and A are finite, non-empty sets called the universe and the set of attributes, respectively (Deja
and Peszek, 2003). With every attribute $a \in A$ we associate a set $V_a$ of its values, called the value set of a:

$$a: U \to V_{a} \tag{1}$$

Any subset B of A determines a binary relation I(B) on U, which is called an indiscernibility relation. The indiscernibility relation (indiscernibility meaning being indistinguishable from one another) is the main concept of rough set theory. Sets that are indiscernible are called elementary sets, and these are considered the building blocks of RST's knowledge of reality. A union of elementary sets is called a crisp set, while any other set is referred to as rough or vague. More formally, for a given information system Λ and any subset B ⊆ A, there is an associated equivalence relation I(B), called the B-indiscernibility relation, defined as:

$$(x, y) \in I(B) \;\text{ iff }\; a(x) = a(y) \;\text{ for every } a \in B \tag{2}$$

RST offers a tool to deal with indiscernibility. For each concept/decision X, the greatest definable set contained in X and the least definable set containing X are computed. These two sets are called the lower and upper approximation, respectively. The sets of cases/objects with the same outcome variable are assembled together. This is done by looking at the "purity" of a particular object's attributes in relation to its outcome. In most cases it is not possible to define cases as crisp sets; in such instances lower and upper approximation sets are defined instead. The lower approximation is the collection of cases whose equivalence classes are fully contained in the set of cases we want to approximate (Ohrn and Rowland, 2000). The lower approximation of a set X is denoted $\underline{B}X$ and is represented mathematically as:

$$\underline{B}X = \{x \in U : B(x) \subseteq X\} \tag{3}$$
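Equations 2 and 3 translate directly into code. The sketch below partitions objects by B-indiscernibility and computes the lower and upper approximations; the four-object table and attribute names are hypothetical, not the HIV data.

```python
from collections import defaultdict

def equivalence_classes(objects, B):
    """Partition object ids by the B-indiscernibility relation I(B):
    x and y are indiscernible iff a(x) == a(y) for every attribute a in B."""
    classes = defaultdict(set)
    for obj_id, attrs in objects.items():
        key = tuple(attrs[a] for a in B)
        classes[key].add(obj_id)
    return list(classes.values())

def lower_approximation(objects, B, X):
    """B-lower approximation of X: union of classes fully contained in X."""
    inside = [c for c in equivalence_classes(objects, B) if c <= X]
    return set().union(*inside) if inside else set()

def upper_approximation(objects, B, X):
    """B-upper approximation of X: union of classes that intersect X."""
    touching = [c for c in equivalence_classes(objects, B) if c & X]
    return set().union(*touching) if touching else set()

# Hypothetical mini information table in the spirit of Table 1.
objects = {
    1: {"Race": 1, "Gravidity": 1},
    2: {"Race": 2, "Gravidity": 2},
    3: {"Race": 2, "Gravidity": 2},
    4: {"Race": 3, "Gravidity": 3},
}
B = ["Race", "Gravidity"]
X = {1, 2, 4}  # e.g. a decision class
lower = lower_approximation(objects, B, X)  # objects 2 and 3 are indiscernible
upper = upper_approximation(objects, B, X)
```

Here objects 2 and 3 share attribute values but only object 2 lies in X, so their class is excluded from the lower approximation and included in the upper one.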
the number of cases in the upper approximation (where $0 \le \alpha_{p}(X) \le 1$), and can be written as:

$$\alpha_{p}(X)=\frac{\left|\underline{B}X\right|}{\left|\overline{B}X\right|}\tag{6}$$

## Rough Sets Formulation

The process of modeling the rough set can be broken down into five stages. The first stage is to select the data, while the second stage involves pre-processing the data to ensure that it is ready for analysis; this involves discretising the data and removing unnecessary data (cleaning the data). If reducts are considered, the third stage is to use the cleaned data to generate reducts. A reduct is the most concise way in which we can discern object classes (Witlox and Tindemans, 2004). In other words, a reduct is a minimal subset of attributes that enables the same classification of elements of the universe as the whole set of attributes (Pawlak, 1991). To cope with inconsistencies, lower and upper approximations of decision classes are defined (Ohrn, 1999; Deja and Peszek, 2003). Stage four is where the rules are extracted or generated. The rules are normally determined based on condition attribute values (Goh and Law, 2003). Once the rules are extracted, they can be presented in an if CONDITION(S)-then DECISION format (Leke, 2007). The final, fifth stage involves testing the newly created rules on a test set to estimate the prediction error of the rough set model. The mapping from the inputs x to the output γ using a rough set can be written as:

$$\gamma = f(G_{x}, N_{r}, R)\tag{7}$$
where γ is the output, $G_x$ is the granulisation of the input space into high, low, medium, etc., $N_r$ is the number of rules and R is the rules. For a given granulisation, the rough set model will give the optimal number and nature of rules and the accuracy of prediction. Therefore, in rough set modeling there is always a trade-off between the degree of granulisation of the input space (which affects the nature and size of the rules) and the prediction accuracy of the rough set model.

## Multi-Layer Perceptron Model

The other component of the neuro-rough model is the multi-layered perceptron network. This network architecture contains hidden units and output units and has one hidden layer. The bias parameters in the first layer are weights from an extra input having a fixed value of $x_0 = 1$. The bias parameters in the second layer are weights from an extra hidden unit, with the activation fixed at $z_0 = 1$. The model is able to take into account the intrinsic dimensionality of the data. The output of the j-th hidden unit is obtained by calculating the weighted linear combination of the d input values to give (Bishop, 2006; Marwala, 2001):

$$a_{j}=\sum_{i=1}^{d}w_{ji}^{(1)}x_{i}+w_{j0}^{(1)}\tag{8}$$

Here, $w_{ji}^{(1)}$ indicates a weight in the first layer, going from input i to hidden unit j, while $w_{j0}^{(1)}$ indicates the bias for hidden unit j. The activation of hidden unit j is obtained by transforming the output $a_j$ in equation 8 into $z_j$ as follows:

$$z_{j}=f_{inner}(a_{j})$$

The $f_{inner}$ function represents the activation function of the inner layer; functions such as the hyperbolic tangent may be used (Bishop, 2006; Marwala, 2001; Marwala,
2007a). The output of the second layer is obtained by transforming the activations of the hidden layer using the second-layer weights. Given the output of the hidden layer $z_j$, the output of unit k may be written as:

$$a_{k}=\sum_{j=1}^{M}w_{kj}^{(2)}\,z_{j}+w_{k0}^{(2)}\tag{9}$$

Similarly, equation 9 may be transformed into the output units by using some activation function as follows:

$$y_{k}=f_{outer}(a_{k})\tag{10}$$

If equations 8, 9 and 10 are combined, it is possible to relate the input x to the output y by a two-layered non-linear mathematical expression that may be written as follows (Bishop, 1995; Haykin, 1995; Hinton, 1987):

$$y_{k}=f_{outer}\left(\sum_{j=1}^{M}w_{kj}^{(2)}f_{inner}\left(\sum_{i=1}^{d}w_{ji}^{(1)}x_{i}+w_{j0}^{(1)}\right)+w_{k0}^{(2)}\right)\tag{11}$$

Models of this form can approximate any continuous function to arbitrary accuracy if the number of hidden units M is sufficiently large. The MLP may be expanded by considering several layers, but it has been demonstrated by the Universal Approximation Theorem (Haykin, 1999) that a two-layered architecture is adequate for the multi-layer perceptron.

## Neuro-Rough Model

If equations 7 and 11 are combined, it is possible to relate the input x to the output y by a two-layered non-linear mathematical expression that may be written as follows:

$$y_{k}=f_{outer}\left(\sum_{j=1}^{M}\gamma_{kj}(G_{x},R,N_{r})\,f_{inner}\left(\sum_{i=1}^{d}w_{ji}^{(1)}x_{i}+w_{j0}^{(1)}\right)+w_{k0}^{(2)}\right)\tag{12}$$
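As a concrete illustration, the MLP forward pass of equation 11 can be written out directly for $f_{inner} = \tanh$ and a logistic $f_{outer}$, matching the activations used in the model above (the neuro-rough variant of equation 12 would replace the second-layer weights by the rough-set outputs $\gamma_{kj}$). The tiny weight values below are arbitrary placeholders, not trained parameters.

```python
import math

def mlp_forward(x, W1, b1, W2, b2):
    """Two-layer MLP forward pass (equation 11), with tanh inner
    activation and logistic outer activation. W1 is M x d, W2 is K x M."""
    # Hidden activations z_j = tanh(sum_i w_ji x_i + w_j0).
    z = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    # Outputs y_k = logistic(sum_j w_kj z_j + w_k0).
    return [1.0 / (1.0 + math.exp(-(sum(w * zj for w, zj in zip(row, z)) + b)))
            for row, b in zip(W2, b2)]

# Tiny illustrative network: d = 2 inputs, M = 2 hidden units, K = 1 output.
W1 = [[0.5, -0.25], [0.1, 0.3]]
b1 = [0.0, 0.1]
W2 = [[1.0, -1.0]]
b2 = [0.0]
y = mlp_forward([1.0, 2.0], W1, b1, W2, b2)
```

The logistic output keeps $y_k$ in (0, 1), which is convenient when the target is a plausibility score.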
# Artificial Neural Networks And Support Vector Machines For Water Demand Time Series Forecasting

Ishmael S. Msiza, Fulufhelo V. Nelwamondo and Tshilidzi Marwala

Abstract - Water plays a pivotal role in many physical processes, and most importantly in sustaining human life, animal life and plant life. Water supply entities therefore have the responsibility to supply clean and safe water at the rate required by the consumer. It is therefore necessary to implement mechanisms and systems that can be employed to predict both short-term and long-term water demands. The growing field of computational intelligence techniques has been proposed as an efficient tool in the modelling of dynamic phenomena. The primary objective of this paper is to compare the efficiency of two computational intelligence techniques in water demand forecasting. The techniques under comparison are Artificial Neural Networks (ANNs) and Support Vector Machines (SVMs). In this study it was observed that the ANNs perform better than the SVMs. This performance is measured against the generalisation ability of the two.

Keywords: Water Demand Forecasting, Artificial Neural Networks, Support Vector Machines, Artificial Neural Genius, Support Vector Genius, Overall Genius

## I. Introduction

The modeling of water resource variables is a very broad field that includes the modeling of water quality, water demand and water reticulation networks, to mention but a few. This paper is focused on the modeling of only one water resource variable, water demand, and the study is restricted to South Africa's Gauteng Province. The Republic of South Africa has of late been experiencing a situation whereby the demand for water is much higher than the rate at which water is being supplied [1]. This is attributable to a number of factors, such as the average annual rainfall of 497 mm, which is well below the world average of 860 mm [2].
However, most of the factors that contribute towards the water demand exceeding the water supply are due to human interventions. These include population growth and the economic expansion of the South African citizens, especially in the Gauteng Province. The more affluent people become, the more water they will use [3], and the more the population grows, the greater the demand for water. The province of Gauteng is of particular interest because of its status as the industrial powerhouse of South Africa; it houses and provides employment to almost a quarter of the South African population, some 9 million people [4]. The Gauteng Province consumes about 86% of the total water supply provided by a bulk supplier called Rand Water. With the current population growth rate of 3.13% per annum [4], the water demand in this province is definitely set to increase. Another factor that has a major influence on the demand for water is the issue of HIV/AIDS. An increase in HIV/AIDS-related deaths can have a negative effect on the population growth rate. This therefore implies that the population growth rate will not always be positive, but can at times be negative. An approach that can be employed to offset the effects of these population dynamics is to develop two models, one with the effects of HIV/AIDS neglected and another one with these effects taken into account. This will result in a reliable model because the actual water demand will lie inside the envelope formed by these two extremes.

## II. Literature Inspection

The modelling of water resource variables is a very active field of study and there definitely still is a lot of work to be done. In the initial stages, modelling of water resource variables was done using traditional statistical models. In recent years, modern techniques have been proposed as efficient modelling tools.
There is a large pool of these techniques, and hence there is always a need to investigate which technique is the most efficient for a particular application. Gamal El-Din *et al.* [5] used artificial neural networks to model wastewater treatment processes. This was a comparative study between conventional deterministic models and artificial neural networks. They observed that, in addition to the information contained in the conventional models, neural networks contained a great deal of additional information with regard to the system being modelled. Jain *et al.* [6] used artificial neural networks to model the short-term water demand at the Indian Institute of Technology (IIT) in Kanpur, India. Six neural network models, five regression models and two time series models were developed and compared. The neural network models generally displayed better performance when measured against the other models. Maier *et al.* [7] conducted a study reviewing 43 research papers that employed neural networks in the prediction and forecasting of water resources variables. They observed that neural network models generally work well and that their use in the study of water is on the increase due to their ability to handle large amounts of non-linear, non-parametric data. Khan and Coulibaly [8] conducted a comparative study between support vector machines, artificial neural networks and the traditional seasonal autoregressive (SAR) model in the forecasting of lake water levels. They observed that the support vector machine is generally comparable with the other two models, but when it comes to long-term forecasting, the support vector machine displays better performance. Mukherjee *et al.* [9] conducted a study to predict chaotic time series using support vector machines.
The performance of support vector machines stood out when compared to other approximation methods such as polynomial and rational approximation, local polynomial techniques and artificial neural networks. Other forecasting applications that employed support vector machines include the work of Mohandes *et al.* in the prediction of wind speed [10]. They observed that the performance of the support vector machines is comparable to that of artificial neural networks. All of these studies confirm that there is a need to compare the performance of various approximation
techniques. The study that led to this paper carries some element of novelty since it is the first to carry out water demand forecasting using computational intelligence techniques in the Republic of South Africa.

## III. Theoretical Foundation

A. Water Scarcity

The scarcity of water in the Republic of South Africa is also reaching new heights, especially in the Gauteng Province. In order to offset the effects of this scarcity, Rand Water has introduced the idea of supplementary water schemes. Since 1974, the water in the Vaal River has been supplemented through the inter-basin transfer of water from the Tugela River in the Kwa-Zulu Natal Province. This is what came to be known as the Tugela-Vaal Transfer Scheme [11]. Another transfer scheme takes water from the Orange River in Lesotho to supplement the Vaal Dam. This came to be known as the Lesotho Highlands Water Project [12]. The development of supplementary water schemes is indicative of the fact that the issue of water scarcity in the Republic of South Africa is a serious one. This therefore implies that there is an urgent need for the development of tools that will assist in the effective management of water resources, and artificial neural networks have a significant role to play to that effect.

B. Regression Approximation

Unlike conventional software development techniques, in which programs are written by hand, learning methodology uses examples to synthesize programs. The particular case where the examples are input-output pairs is called supervised learning. There are different types of learning problems: binary classification, multi-class classification and regression [13]. Binary classification is a problem with binary (1 or 0; true or false; LOW or HIGH) outputs. Multi-class classification is a problem with a finite number of outputs, and regression is a problem with real-valued outputs.
Water demand forecasting can be regarded as a regression problem because the water time series has a non-linear nature, and hence the output of the predicting model has to be a real value depicting the amount of water that will be needed on a specified date.

C. The Theory of Artificial Neural Networks in Regression

Artificial Neural Networks (ANNs) are mathematical models that can be employed in the modeling of complex systems. They can be used for both classification and regression problems. ANNs consist of three layers, namely the input layer, the hidden layer and the output layer. The input layer represents the model inputs and the output layer represents the model outputs. The hidden layer consists of nodes that, during optimization, attempt to functionally map the model inputs to the model outputs. There are numerous ANN architectures, but this study focuses on only two. These are the multi-layer perceptron and the radial basis function.

1) The multi-layer perceptron (MLP)

Networks that have more than one layer of adaptive weights are known as multi-layer perceptrons. A multi-layer perceptron has three layers of units taking values in the range (0 to 1). Each layer is fed by the previous layers, and hence it is also called a Jump Connection Network (JCN) [14]. MLPs can have any number of layers of weighted connections, but networks with only two layers of weighted connections are capable of approximating just about any functional mapping [15].
The MLP is mathematically represented by:

$$y_{k}=f_{outer}\left[\sum_{j=1}^{M}w_{kj}^{(2)}f_{inner}\left[\sum_{i=1}^{d}w_{ji}^{(1)}x_{i}+w_{j0}^{(1)}\right]+w_{k0}^{(2)}\right]\tag{1}$$

where $y_k$ represents the k-th output, $f_{outer}$ represents the output layer transfer function, $f_{inner}$ represents the hidden layer transfer function, w represents the weights and biases, and the superscript (i) denotes the i-th layer.

2) The radial basis function (RBF)

In this class of neural networks, the activation of a hidden unit is determined by the distance between the input vector and a prototype vector [15]. The internal representation of the hidden units of the RBF network leads to a two-stage training procedure. The first stage is concerned with the determination of the centres of the network using unsupervised methods. The second stage is concerned with the determination of the final-layer weights. The RBF networks provide a basis function (an interpolation function) which passes through each and every data point. A simple representation of the RBF network is depicted in figure 2. The RBF is mathematically represented by:

$$y_{k}\left(x\right)=\sum_{j=0}^{M}w_{kj}\,\phi_{j}\left(x\right)\tag{2}$$

where $y_k$ represents the k-th output, w represents the weights and biases, and $\phi_j$ represents the basis (activation) functions of the hidden layer.

D. The Theory of Support Vector Machines in Regression

Like ANNs, support vector machines (SVMs) can be used for both classification and regression problems. A support vector machine (SVM) is a classifier derived from statistical learning theory and was first introduced by Vapnik *et al.* [16] in COLT-92. In regression problems, a non-linear function is learned by a linear learning machine in a kernel-induced feature space, while the capacity of the system is controlled by a parameter that does not depend on the dimensionality of the space [13].
The process of employing SVMs in regression problems is referred to as Support Vector Regression (SVR). In SVR, the basic idea is to map the input space x to a high-dimensional feature space $\Phi(x)$ in a non-linear manner. This relationship is depicted in (3), where b is the threshold:

$$f(x) = (w \cdot \Phi(x)) + b \tag{3}$$

Both b and the weight vector w are estimated by minimizing the sum of the empirical risk and a complexity term. In (4) below, the first term denotes the empirical risk, and the second term denotes the complexity term.
$$R_{reg}\left[f\right]=R_{emp}\left[f\right]+\lambda\left\|w\right\|^{2}=\sum_{i=1}^{Z}C\left(f(x_{i})-y_{i}\right)+\lambda\left\|w\right\|^{2}\tag{4}$$

where Z denotes the size of the sample, $C(\cdot)$ is a cost function and λ is the regularization constant.

## IV. Structured Methodology

This part of the paper describes the structured methodology employed in order to reach the optimum results of the study. The roadmap of this methodology is depicted in fig. 1 below.

<image> (Fig. 1: roadmap of the comparative study)

The first stage of the methodology is to manipulate the data used in the study, followed by the initialization of the model parameters. This stage is followed by two experiments that run in parallel, one for support vector regression and one for the artificial neural networks. A performance analysis is executed on both sides, and that is followed by the determination of the Support Vector Genius (SVG) and the Artificial Neural Genius (ANG). The SVG is the SVM model that outperforms all the other SVM models in the SVR experiment. The ANG is the ANN architecture that outperforms all the other models in the ANN experiment. The SVG and the ANG are thereafter compared in order to establish the Overall Genius of the study.

## V. Experimental Setup

A) Data Manipulation

The data used in this study consist of the previous daily water demands and the annual estimated population size of the Gauteng Province. The data are manipulated in two forms, namely normalization and division. The population figures depict an increasing trend, as shown in table I, but the water demand figures are of arbitrary complexity, as depicted in table II.
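The regularised risk of equation 4 can be evaluated directly once a model's predictions are available. The sketch below assumes an absolute-error cost function for $C(\cdot)$; the prediction, target and weight values are illustrative placeholders only.

```python
def regularised_risk(predictions, targets, w, lam, cost=abs):
    """Regularised risk of equation 4: the empirical risk (a cost function
    applied to the residuals, |.| by default) plus lambda * ||w||^2."""
    emp = sum(cost(p - t) for p, t in zip(predictions, targets))
    return emp + lam * sum(wi * wi for wi in w)

# Illustrative numbers: residuals of 0.5 and 1.0, small weight vector.
risk = regularised_risk([1.0, 2.0], [1.5, 1.0], w=[0.3, -0.4], lam=0.1)
```

The λ term penalises large weights, trading empirical fit against model complexity, which is the balance SVR training optimises.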
TABLE I
POPULATION ESTIMATES

| Year | Mid-year Population Estimate |
|------|------------------------------|
| 1994 | 7 830 904 |
| 1995 | 7 992 219 |
| 1996 | 8 156 857 |
| 1997 | 8 324 886 |
| 1998 | 8 496 376 |

TABLE II
WATER DEMAND FIGURES

| Date        | Demand (Mega Liters) |
|-------------|----------------------|
| 04-Jan-1997 | 1 849.95 |
| 05-Jan-1997 | 2 137.14 |
| 06-Jan-1997 | 1 982.94 |
| 07-Jan-1997 | 2 188.65 |
| 08-Jan-1997 | 2 254.14 |

1) Data Normalization

In order to simplify the task of the network, the data were scaled or normalized by making use of (5):

$${\widetilde{x}}={\frac{x-x_{MIN}}{x_{MAX}-x_{MIN}}}\tag{5}$$

where $\widetilde{x}$ is the scaled data point, x is the original data point, and $x_{MIN}$ and $x_{MAX}$ are the minimum and maximum values in the data set, respectively. This is done in order to ensure that the minimum value in the data set is scaled to zero, and that the maximum value is scaled to one.

2) Data Division

The water figures obtained from Rand Water's database were comprised of 3 474 data points, from the 4th of January 1997 to the 9th of July 2006. There was only one data point missing, on the 25th of March 1999. The effects of this missing data point were removed by discarding it from the database. Consequently, the data bank remained with a sum of 3 473 data points. In order to employ the cross-validation technique, the data bank was divided into three independent data sets. These are the training set, the validation set and the testing set. The distribution and sum of these data sets is depicted in table III below.

TABLE III
THE DISTRIBUTION AND THE SUM OF DATA POINTS IN EACH DATA SET

| Data Set | Distribution | Total |
|----------------|--------------|-------|
| Training Set   | 294 × 5 | 1 470 |
| Validation Set | 201 × 5 | 1 005 |
| Testing Set    | 199 × 5 | 995 |

B) Model Initialization

This section deals with the issue of the number of model inputs.
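The min-max scaling of equation 5 above is a one-liner in practice; here it is applied to the five demand figures listed in table II.

```python
def normalise(data):
    """Min-max scaling of equation 5: maps the minimum to 0, maximum to 1."""
    lo, hi = min(data), max(data)
    return [(x - lo) / (hi - lo) for x in data]

# The five daily demand figures (Mega Liters) from table II.
demand = [1849.95, 2137.14, 1982.94, 2188.65, 2254.14]
scaled = normalise(demand)
```

Scaling all inputs to [0, 1] keeps features on comparable ranges, which speeds up gradient-based network training.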
A short investigation had to be carried out, and this was done from the ANN perspective. Initially the model was given a total of two inputs, followed by three, four, five and six inputs. A five-input network reflected the least amount of training error and was hence adopted. The
It is evident from table V above that the model with the best approximation is the one with a linear kernel function. This is due to the fact that it has 100% accuracy and a 3.94% validation error. It is therefore regarded as the Support Vector Genius (SVG).

C) Determination of the ANG

The ANN experiment has two architectures to investigate, and in turn, these architectures have many different activation functions. For the sake of simplicity, the experiments on the two architectures are separated and the results are compared.

1) The MLP experiment and results

The MLP network is trained using three different output unit activation functions and three different training algorithms. The activation functions are 'linear', 'logistic' and 'softmax'. The three different training algorithms are the scaled conjugate gradient (SCG), conjugate gradient (Conjgrad) and quasi-Newton (Quasinew) [17]. The softmax activation function gives a straight-line approximation and hence its results are redundant. The experiment is therefore conducted with the other two activation functions and the three different optimization algorithms. The MLP ANN configurations are labelled as depicted in table VI. After the optimization of each of the network configurations listed in table VI, the validation error analysis and accuracy check is executed using (6), and the results are shown in table VII.

TABLE VI
LABELLING OF THE MLP ANNs ACCORDING TO THE ACTIVATION FUNCTION AND THE NUMBER OF HIDDEN UNITS

| ANN Label | Function | Units | Algorithm |
|-----------|----------|-------|-----------|
| AZ1  | Linear   | 9  | SCG |
| AZ2  | Linear   | 10 | SCG |
| AZ3  | Linear   | 9  | Conjgrad |
| AZ4  | Linear   | 10 | Conjgrad |
| AZ5  | Linear   | 9  | Quasinew |
| AZ6  | Linear   | 10 | Quasinew |
| AZ7  | Logistic | 9  | SCG |
| AZ8  | Logistic | 10 | SCG |
| AZ9  | Logistic | 9  | Conjgrad |
| AZ10 | Logistic | 10 | Conjgrad |
| AZ11 | Logistic | 9  | Quasinew |
| AZ12 | Logistic | 10 | Quasinew |

Both AZ2 and AZ11 have an accuracy of 99%. However, AZ2 has a validation error that is less than that of AZ11. This therefore implies that the MLP ANN with the most suitable functional mapping is AZ2.
AZ2 is a network with a linear output activation function, ten hidden units and the scaled conjugate gradient optimization algorithm.

TABLE VII
THE RESULTS OBTAINED FROM THE VALIDATION OF THE MLP CONFIGURATIONS

| ANN | Error | Accuracy | Elapsed time |
|------|-------|----------|--------------|
| AZ1 | 23% | 38% | 76.954s |
| AZ2 | 6% | 99% | 81.360s |
| AZ3 | 32% | 5% | 184.109s |
| AZ4 | 10% | 87% | 156.828s |
| AZ5 | 63% | 0% | 73.594s |
| AZ6 | 35% | 7% | 20.875s |
| AZ7 | 15% | 73% | 96.703s |
| AZ8 | 6% | 97% | 20.281s |
| AZ9 | 9% | 93% | 90.781s |
| AZ10 | 18% | 59% | 154.984s |
| AZ11 | 7% | 99% | 76.515s |
| AZ12 | 9% | 96% | 146.968s |

2) The RBF experiment and results

The RBF network is trained in a manner that assesses the effects of three different activation functions. First, a network with Gaussian activations (Gaussian) is created and a two-stage training approach is used: a small number of iterations of the Expectation-Maximization (EM) algorithm [17] positions the centres of the network, and the pseudo-inverse of the design matrix then finds the second-layer weights. The second network has thin-plate-spline (TPS) activation functions and uses the centres from the previous network to calculate its second-layer weights. The third network has $r^4 \log r$ (R4logr) activation functions. The combinations of these activation functions and the number of hidden units in the RBF network are labelled as in Table VIII. As before, after the optimization of each RBF network, the error analysis (validation error) and accuracy check are executed using (4) and (5) respectively.

TABLE VIII
LABELLING OF THE RBF ANNS ACCORDING TO THE ACTIVATION FUNCTIONS AND THE NUMBER OF HIDDEN UNITS

| ANN Label | Function | Units |
|-----------|----------|-------|
| AX1 | Gaussian | 9 |
| AX2 | Gaussian | 10 |
| AX3 | TPS | 9 |
| AX4 | TPS | 10 |
| AX5 | R4logr | 9 |
| AX6 | R4logr | 10 |

TABLE IX
THE RESULTS OBTAINED FROM THE RBF VALIDATION FOR THE DIFFERENT ACTIVATION FUNCTIONS

| ANN | Error | Accuracy | Elapsed Time |
|------|-------|----------|--------------|
| AX1 | 28% | 37% | 12.969s |
| AX2 | 15% | 71% | 9.671s |
| AX3 | 3.7% | 100% | 12.969s |
| AX4 | 3.6% | 100% | 9.671s |
| AX5 | 4.2% | 100% | 12.969s |
| AX6 | 3.6% | 100% | 9.671s |

Table IX shows that the configurations AX3, AX4, AX5 and AX6 all reach 100% accuracy. To select the best among them, the smallest validation error is sought; both AX4 and AX6 share the smallest validation error, so the error obtained during training is used to break the tie:

AX4 training error = 2.4651%
AX6 training error = 2.4272%

The difference is small, but AX6 has the smallest training error and hence the most optimal functional mapping. This implies that the best RBF is AX6. Comparing AZ2 and AX6, it is apparent that AX6 provides the better approximation of the target values of the validation set, since it has 100% accuracy and a validation error of 3.6%. As a result, it is regarded as the Artificial Neural Genius (ANG).

## VII. Discussion of Results

The object of this section is to compare the SVG and the ANG in order to determine the OG. This is done by employing the third data set, the testing set, to assess the generalisation ability of the two geniuses:

| | Error | Accuracy |
|------|----------|----------|
| SVG | 5.46519% | 100% |
| ANG | 2.95995% | 100% |

It is apparent from this analysis that the ANG has better generalization ability than the SVG, which implies that artificial neural networks are the better approximation tools for this particular study. The functional mapping of the ANG is plotted in Fig. 2.

A further development would be to scrutinize the generalisation ability of this ANG even further. This can be done by determining the common trends in the water demand data: a model would be developed for each trend, and the theory of Hidden Markov Models [18] would be employed to determine whether a predicted value belongs to a particular date or season by examining the model of that season.

## VIII. Conclusions

Two machine learning techniques have been investigated in this study: artificial neural networks (ANNs) and support vector machines (SVMs). The approach adopted was to conduct two parallel experiments, one for the ANNs and one for the SVMs. The ANN experiment encapsulated two architectures, the multi-layer perceptron (MLP) and the radial basis function (RBF).
The results from the two architectures were compared to arrive at the Artificial Neural Genius (ANG). The SVM experiment comprised many models with different kernel functions, some of which had additional arguments such as the degree and the scale. These models were compared against each other in order to determine the Support Vector Genius (SVG). The performance criteria used to determine the genius of each experiment were the validation error and the accuracy in approximating the target values of the validation data set. The two geniuses were then compared against each other in order to determine the overall genius (OG); the performance parameter used here was the generalisation ability of each genius. The ANG proved to outperform the SVG.

## Acknowledgement

The authors thank Rand Water's Thomas Phetla for taking his time to make the data available. The financial support of the South African National Research Foundation is hereby acknowledged.

## References

[1] Media release by the South African Department of Water Affairs and Forestry, "Water Shortage a reality for South Africa," 18 Jan 2005.
[2] R. Turner, K. van den Bergh, T. Soderqvist, A. Barendregt, J. van der Straaten, E. Maltby and E. van Ierland, "Ecological-economic analysis of wetlands: scientific integration for management and policy," *Ecological Economics*, pp. 7–23, Jan 2000.
[3] Water Issues Study Group, School of Oriental and African Studies (SOAS), "Water Demand Management (WDM): A Case Study from South Africa," *Technical Report*, 18 Jan 1999.
[4] Water Services: National Information System, "National Profile: Population and Growth," South African Department of Water Affairs and Forestry, April 2006.
[5] A. Gamal El-Din, D. W. Smith and M. Gamal El-Din, "Application of artificial neural networks in wastewater treatment," *J. Environ. Eng. Sci.*, pp. 81–95, Jan 2004.
[6] A. Jain, A. K. Varshney and U. C. Joshi,
"Short-Term Water Demand Forecast Modelling at IIT Kanpur Using Artificial Neural Networks," *Water Resources Management*, vol. 15, no. 1, pp. 299–321, Aug 2001.
[7] H. R. Maier and G. C. Dandy, "Neural networks for the prediction and forecasting of water resources variables: a review of modelling issues and applications," *Environmental Modelling & Software*, pp. 101–124, Jan 2000.
[8] M. S. Khan and P. Coulibaly, "Application of Support Vector Machine in Lake Water Level Prediction," *J. Hydrologic Engrg.*, vol. 11, no. 3, pp. 199–205, Jun 2006.
[9] S. Mukherjee, E. Osuna and F. Girosi, "Nonlinear Prediction of Chaotic Time Series Using Support Vector Machines," *IEEE NNSP'97*, pp. 24–26.
[10] M. A. Mohandes, T. O. Halawani, S. Rehman and A. A. Hussain, "Support vector machines for wind speed prediction," *Renewable Energy*, vol. 29, no. 6, pp. 939–947.
[11] Rand Water, "100 Years of Excellence," Jan 2006.
[12] Lesotho Highlands Water Project, "Analysis of the Minimum Degradation, Treaty, Design Limitation and Fourth Scenarios for Phase 1 Development," *Contract LHDA 678*, June 2002.
[13] N. Cristianini and J. Shawe-Taylor, *An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods*, first edition, 2000.
[14] M. Bosque, *Understanding 99% of Artificial Neural Networks*, Writers Club Press, first edition, 2002.
[15] C. M. Bishop, *Neural Networks for Pattern Recognition*, Oxford University Press, first edition, 1995.
[16] B. E. Boser, I. M. Guyon and V. N. Vapnik, "A training algorithm for optimal margin classifiers," *Annual Workshop on Computational Learning Theory*, pp. 144–152, Jan 1992.
[17] I. T. Nabney, *Algorithms for Pattern Recognition*, Springer, second edition, 2003.
[18] L. R. Rabiner, "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition," *Proceedings of the IEEE*, vol. 77, no. 2, pp. 1–30, Oct 1988.
# Fuzzy ARTMAP and Neural Network Approach to Online Processing of Inputs with Missing Values

## F. V. Nelwamondo* and T. Marwala*

\* School of Electrical and Information Engineering, University of the Witwatersrand, Johannesburg, Private Bag 3, Wits, 2050, South Africa.

Abstract: An ensemble-based approach for dealing with missing data, without predicting or imputing the missing values, is proposed. This technique is suitable for online operation of neural networks and, as a result, is used for online condition monitoring. The proposed technique is tested on both classification and regression problems. An ensemble of Fuzzy-ARTMAPs is used for classification, whereas an ensemble of multi-layer perceptrons is used for the regression problem. Results obtained using this ensemble-based technique are compared to those obtained using a combination of auto-associative neural networks and genetic algorithms, and the findings show that this method can perform up to 9% better on regression problems. Another advantage of the proposed technique is that it eliminates the need for finding the best estimate of the data, and hence saves time.

Key words: Autoencoder neural networks, Fuzzy-ARTMAP, Genetic algorithms, Missing data, MLP

## 1. Introduction

Real-time processing applications that are highly dependent on newly arriving data often suffer from the problem of missing data. In cases where decisions have to be made using computational intelligence techniques, missing data become a hindering factor. The biggest challenge is that most computational intelligence techniques, such as neural networks, are not able to process input data with missing values and hence cannot perform classification or regression when some input data are missing. Various heuristics for missing data have, however, been proposed in the literature [1]. The simplest method, known as 'listwise deletion', simply deletes instances with missing values [1].
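As a concrete illustration (using our own toy data), listwise deletion simply drops every incomplete row:

```python
# A minimal sketch of 'listwise deletion': any instance (row) containing a
# missing value (None) is discarded outright. The readings are made up.
data = [
    [0.9, 1.2, 3.4],
    [0.7, None, 3.1],   # dropped: second sensor reading missing
    [1.1, 1.0, None],   # dropped: third sensor reading missing
    [0.8, 1.4, 2.9],
]
complete = [row for row in data if None not in row]
print(len(complete))    # → 2: half of this small data set is lost
```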
The major disadvantage of this method is the dramatic loss of information in the data set. There is also well-documented evidence showing that ignoring and deleting cases with missing entries is not an effective strategy [1-2]. Other common techniques are imputation methods based on statistical procedures such as mean substitution, imputing the most dominant variable in the database, hot-deck imputation and many more. Some of the best imputation techniques include the Expectation Maximization (EM) algorithm [3], as well as neural networks coupled with optimisation algorithms such as genetic algorithms, as used in [4] and [5]. Imputation techniques, where missing data are replaced by estimates, are increasingly becoming popular, and a great deal of research has been done to find more accurate ways of approximating these estimates. Among others, Abdella and Marwala [4] used neural networks together with Genetic Algorithms (GA) to approximate missing data, and Gabrys [6] used neuro-fuzzy techniques in the presence of missing data for pattern recognition problems.

The other challenge in this work is that online condition monitoring uses time-series data, and there is often limited time between readings, depending on how frequently the sensor is sampled. In classification and regression tasks, all decisions concerning how to proceed must be taken during this finite time period. Methods using optimisation techniques may take long to converge to a reliable estimate, depending entirely on the complexity of the objective function being optimised. This calls for better techniques to deal with the missing data problem. We argue in this paper that it is not always necessary to have the actual missing data predicted; differently said, it is not in all cases that the decision depends on *all* actual values.
Therefore, a vast amount of computational resources can be wasted in attempts to predict the missing values, when the ultimate result could have been achieved without them. In light of this challenge, this paper investigates a condition monitoring problem where computational intelligence techniques are used to classify and regress in the presence of missing data, without the actual prediction of the missing values. A novel approach is presented in which no attempt is made to recover the missing values, for both regression and classification problems. An ensemble of fuzzy-ARTMAP classifiers is proposed for classification in the presence of missing data. The algorithm is further extended to a regression application, where Multi-Layer Perceptron (MLP) networks are used in an attempt to obtain the correct output with limited input variables. The proposed method is
compared to a technique that combines neural networks with a Genetic Algorithm (GA) to approximate the missing data.

## 2. Missing Data Theory

According to Little and Rubin [1], missing data fall into three basic categories: 'Missing at Random' (MAR), 'Missing Completely at Random' (MCAR) and 'Missing Not at Random' (MNAR). MAR is also known as the ignorable case [3]. The probability of a datum d from a sensor S being missing at random depends on other measured variables from other sensors. A simple example of MAR is when sensor T is only read if the sensor S reading is above a certain threshold: if the value read from sensor S is below the threshold, there is no need to read sensor T, and readings from T will be declared missing at random. MCAR, on the other hand, refers to a condition where the probability of S values being missing is independent of any observed data; the missing value depends neither on the previous state of the sensor nor on any reading from any other sensor. Lastly, MNAR occurs when data are neither MAR nor MCAR; it is also referred to as the non-ignorable case [1, 3], as the missing observation is dependent on the outcome of interest. A detailed description of missing data theory can be found in [3]. In this paper, we shall assume that data are MAR.

## 3. Background

## 3.1 Neural Networks: Multi-Layer Perceptrons

Neural networks may be viewed as systems that learn the complex input-output relationship from any given data. The training process involves presenting the network with inputs and corresponding outputs, a process termed *supervised learning*. There are various types of neural networks, but we shall only discuss the MLP, since it is used in this study. MLPs are feed-forward neural networks with an architecture comprising an input layer, hidden layers and an output layer. Each layer is formed from smaller units known as neurons.
Neurons receive the input signals $\vec{x}$ and propagate them forward through the network, which maps the complex relationship between the inputs and the output. The first step in approximating the weight parameters of the model is finding an appropriate architecture for the MLP, where the architecture is characterized by the number of hidden units, the type of activation function, and the number of input and output variables. The second step estimates the weight parameters using the training set [7]. Training estimates the weight vector $\vec{W}$ that ensures the output is as close to the target vector as possible. This paper implements the autoencoder neural network, as discussed below.

Autoencoder neural networks: Autoencoders, also known as auto-associative neural networks, are neural networks trained to recall the input space. Thompson *et al.* [8] distinguish two primary features of an autoencoder network: the auto-associative nature of the network, and the presence of a bottleneck in the hidden layers of the network, resulting in a butterfly-like structure. In cases where it is necessary to recall the input, autoencoders are preferred due to their remarkable ability to learn certain linear and non-linear inter-relationships, such as correlation and covariance, inherent in the input space. Autoencoders project the input onto a smaller set by *intensively squashing* it into smaller details. The optimal number of hidden nodes of the autoencoder, though dependent on the type of application, must be smaller than the number of input nodes [8]. Autoencoders have been used in various applications, including the treatment of the missing data problem, by a number of researchers including [4] and [9]. In this paper, auto-encoders are constructed using MLP networks and trained using back-propagation. The structure of an autoencoder constructed using an MLP network is shown in Figure 1.
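To make the bottleneck idea concrete, the following minimal sketch (our own construction, not the authors' network) trains a tiny single-hidden-layer autoencoder with plain batch gradient descent on correlated data; the data, sizes and learning rate are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated 4-feature data: the last two features are noisy copies of the
# first two, so a 2-unit bottleneck can capture most of the structure.
z = rng.normal(size=(200, 2))
X = np.hstack([z, z + 0.05 * rng.normal(size=(200, 2))])

n_in, n_hid = X.shape[1], 2          # bottleneck < input layer, as in [8]
W1 = 0.1 * rng.normal(size=(n_in, n_hid))
W2 = 0.1 * rng.normal(size=(n_hid, n_in))

def forward(X):
    H = np.tanh(X @ W1)              # encoder (squash through bottleneck)
    return H, H @ W2                 # decoder (linear recall of the input)

_, out0 = forward(X)
err0 = np.mean((X - out0) ** 2)      # initial reconstruction error

lr = 0.05
for _ in range(500):                 # plain batch gradient descent
    H, out = forward(X)
    d_out = 2 * (out - X) / len(X)
    d_hid = (d_out @ W2.T) * (1 - H ** 2)
    W2 -= lr * H.T @ d_out
    W1 -= lr * X.T @ d_hid

_, out1 = forward(X)
err1 = np.mean((X - out1) ** 2)
print(err0, err1)                    # reconstruction error should drop
```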
The problem of identifying the weights in the hidden layers is solved by maximizing the posterior probability of the weights $\vec{W}$ given the training data $D$, using Bayes' rule [8]:

$$p(\vec{W}\,|\,D)=\frac{P(D\,|\,\vec{W})\,P(\vec{W})}{P(D)}\tag{1}$$

where $D$ is the training data, $P(\vec{W}\,|\,D)$ is the posterior probability, $P(D\,|\,\vec{W})$ is the likelihood term that measures how well the weights fit the data, and $P(\vec{W})$ is the prior over the weights; together they balance fitting the data well against overly complex models.
Considering that the method proposed here uses an autoencoder, one would expect the input to be very similar to the output for a well-chosen architecture. This is, however, only expected on a data set similar to the problem space from which the inter-correlations have been captured. The difference between the target and the actual output is used as the error:

$$\vec{\varepsilon}=\vec{x}-f(\vec{W},\vec{x})\tag{3}$$

where $\vec{x}$ and $\vec{W}$ are the input and weight vectors respectively. To ensure the error function is always positive, its square is used:

$$\varepsilon=\left(\vec{x}-f(\vec{W},\vec{x})\right)^{2}\tag{4}$$

Since the input vector consists of both known entries, $X_k$, and unknown entries, $X_u$, the error function can be written as:

$$\varepsilon=\left(\begin{Bmatrix}X_k\\X_u\end{Bmatrix}-f\left(\begin{Bmatrix}X_k\\X_u\end{Bmatrix},\vec{W}\right)\right)^{2}\tag{5}$$

and this equation is used as the objective function that is minimized using the GA.

## 5. Proposed Method: Ensemble-Based Technique for Missing Data

The algorithm proposed here uses an ensemble of neural networks to perform both classification and regression in the presence of missing data. Ensemble-based approaches are well researched and have been found to improve classification performance in various applications [14-15]. The potential of ensemble-based approaches for solving the missing data problem, however, remains unexplored for both classification and regression. In the proposed method, batch training is performed whereas testing is done online. Training uses a number of neural networks, each trained with a different combination of features.
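Returning to the objective function (5) minimized by the NN-GA baseline: the sketch below is our own illustration, with a toy linear "recall" map standing in for the trained autoencoder and a simple random search standing in for the GA:

```python
import numpy as np

# Sketch of the NN-GA objective (5). Here f is a hypothetical linear recall
# model (it swaps the first two entries, so x0 predicts x1 and vice versa),
# and random search stands in for the GA; all values are made up.
rng = np.random.default_rng(1)
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])

def f(x):
    return W @ x                        # stand-in for the trained autoencoder

def objective(x_u, x_known, missing_idx):
    """Squared recall error of eq. (5) for candidate missing values x_u."""
    x = x_known.copy()
    x[missing_idx] = x_u
    return float(np.sum((x - f(x)) ** 2))

x_known = np.array([2.0, np.nan, 5.0])  # x1 is missing; recall implies 2.0
best = min((objective(np.array([v]), x_known, [1]), v)
           for v in rng.uniform(-5, 5, 2000))
print(best[1])                          # best candidate lands near 2.0
```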
For a condition monitoring system that contains $n$ sensors, the user has to state the value of $n_{avail}$, the number of features most likely to be available at any given time. Such information can be deduced from the reliability of the sensors as specified by manufacturers. Sensor manufacturers often state specifications such as *mean-time-between-failures* (MTBF) and *mean-time-to-failure* (MTTF), which can help in detecting which sensors are more likely to fail than others. MTTF is used in cases where a sensor is replaced after a failure, whereas MTBF denotes the time between failures where the sensor is repaired. There is, nevertheless, no guarantee that failures will follow manufacturers' specifications. When the number of sensors most likely to be available has been determined, the number of all possible networks can be calculated using:

$$N=\binom{n}{n_{avail}}=\frac{n!}{n_{avail}!\,(n-n_{avail})!}\tag{6}$$

where $N$ is the total number of possible networks, $n$ is the total number of features and $n_{avail}$ is the number of features most likely to be available at any time. Although $n_{avail}$ can be statistically estimated, it directly affects the number of networks that can be built. Consider a simple example where the input space has 5 features, labelled a, b, c, d and e, and 3 features are most likely to be available at any time. Using equation (6), $N$ is found to be 10, and the classifiers are trained with the feature sets [abc, abd, abe, acd, ace, ade, bcd, bce, bde, cde]. If one variable is missing, say a, only four networks can be used for testing, namely the classifiers that do not use a in their training input sequence. If two variables are missing, say a and b, only one classifier remains.
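The enumeration in this example follows directly from equation (6); the snippet below builds the ten feature subsets and counts how many classifiers survive given missing sensors (the `usable` helper is our own name):

```python
from itertools import combinations

# Enumerating the ensemble of equation (6) for the paper's example:
# n = 5 features (a..e), n_avail = 3 features expected to be available.
features = "abcde"
ensembles = ["".join(c) for c in combinations(features, 3)]
print(len(ensembles))                       # → 10, i.e. N = C(5, 3)

# When a sensor fails, only classifiers not trained on it remain usable.
def usable(missing):
    return [s for s in ensembles if not set(missing) & set(s)]

print(len(usable("a")))                     # → 4 classifiers survive
print(len(usable("ab")))                    # → 1 classifier survives (cde)
```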
As a result, the number of usable classifiers reduces as the number of missing inputs per instance increases.

Each neural network is trained with $n_{avail}$ features. The validation process is then conducted, and its outcome is used to decide on the combination scheme. The training process requires complete data to be available, as training is done off-line. The available data set is divided into a 'training set' and a 'validation set'. Each network created is tested on the validation set and is assigned a weight according to its performance there. A diagrammatic illustration of the proposed ensemble approach is presented in Figure 4.
Figure 4: Diagrammatic illustration of the proposed ensemble-based approach for missing data

For a classification task, the weight is assigned using the weighted majority scheme given by [16] as:

$$\alpha_{i}=\frac{1-E_{i}}{\sum_{j=1}^{N}(1-E_{j})}\tag{7}$$

where $E_i$ is the estimate of model $i$'s error on the validation set. This kind of weight assignment has its roots in boosting and is based on the fact that a set of networks producing varying results can be combined to produce better results than any individual network in the ensemble [16]. The training algorithm is presented in *Algorithm 1*, where the parameter $ntwk_i$ represents the $i$th neural network in the ensemble.

The testing procedure differs between classification and regression. In classification, testing begins by selecting an elite classifier, chosen as the classifier with the best classification rate on the validation set. To this elite classifier, two more classifiers are gradually added, ensuring that an odd number is maintained. Weighted majority voting is used at each step until the performance no longer improves or until all classifiers are utilised. In the case of regression, all networks are used at once, and their predictions, together with their weights, are combined to compute the final value. The final predicted value is computed as follows:

$$f(x)=y=\sum_{i=1}^{N}\alpha_{i}f_{i}(x)\tag{8}$$

where $\alpha_i$ is the weight assigned during the validation stage when no data were missing and $N$ is the total number of regressors. The parameter $\alpha_i$ is assigned such that $\sum_{i=1}^{N}\alpha_{i}=1$. Considering that not all networks will be available during testing, we define $N_{usable}$ as the number of regressors that are usable in obtaining the regression value of an instance. As a result, $\sum_{i=1}^{N_{usable}}\alpha_{i}\neq 1$. We solve this by recalculating the weights such that the sum of all weights corresponding to $N_{usable}$ is 1.

## 6. Experimental Results and Discussion

This section presents the results obtained in the experiments conducted using the two techniques presented above. Firstly, the results of the proposed technique on a classification problem are presented; the method is then tested on a regression problem. In both cases, the results are compared to those obtained after imputing the missing values using the neural network-genetic algorithm combination discussed above.
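The weighting of equation (7), the combination of equation (8) and the renormalization over the usable regressors can be sketched as follows; the validation errors and per-network predictions are made-up illustrative values:

```python
# Sketch of eqs. (7)-(8): validation errors give normalized weights, and
# when some regressors become unusable the surviving weights are
# renormalized to sum to 1. All numbers are illustrative.
val_errors = [0.10, 0.20, 0.40]                 # E_i per network (assumed)
raw = [1 - e for e in val_errors]
alpha = [r / sum(raw) for r in raw]             # eq. (7)

def ensemble_predict(preds, weights, usable):
    """Weighted prediction of eq. (8), renormalized over the usable subset."""
    w_sum = sum(weights[i] for i in usable)
    return sum(weights[i] / w_sum * preds[i] for i in usable)

preds = [1.0, 2.0, 3.0]                          # f_i(x) per network
print(round(ensemble_predict(preds, alpha, [0, 1, 2]), 4))  # → 1.8696
print(round(ensemble_predict(preds, alpha, [0, 2]), 4))     # → 1.8
```

Note how dropping network 1 shifts the prediction only slightly, because the surviving weights are rescaled rather than discarded.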
## 6.1 Application To Classification

Data set: The experiment was performed using Dissolved Gas Analysis (DGA) data obtained from a transformer bushing operating on-site. The data consist of 10 features, which are the gases dissolved in the oil. The hypothesis in this experiment is that the bushing condition (faulty or healthy) can be determined while some of the data are missing. The data were divided into a training set and a validation set, each containing 2000 instances.

Experimental setup: The classification test was implemented using an ensemble of Fuzzy-ARTMAP networks. Two inputs were considered likely to be missing and, as a result, 8 were considered most likely to be available. The online process was simulated, with data sampled one instance at a time for testing. The network parameters were empirically determined, and a vigilance parameter of 0.75 was used for the Fuzzy-ARTMAP. The results obtained were compared to those obtained using the NN-GA approach, where for the GA a crossover rate of 0.1 was used over 25 generations, each with a population size of 20. All these parameters were empirically determined.

Results: Using equation (6), a total of 45 networks was found to be the maximum possible. The performance was calculated only after 4000 cases had been evaluated and is shown in Figure 5. The classification accuracy increases with an increase in the number of classifiers used. Although these classifiers were not each trained with all the inputs, their combination works better than a single network. The classification accuracy obtained under missing data goes as high as 98.2%, which compares very closely to the 100% obtainable when no data are missing. Using the NN-GA approach, a classification accuracy of 96% was obtained. The results are tabulated in Table 1.
Table 1: Comparison between the proposed method and the NN-GA approach

| | Proposed Algorithm | | NN-GA | |
|---|---|---|---|---|
| Number of missing | 1 | 2 | 1 | 2 |
| Accuracy (%) | 98.2 | 97.2 | 99 | 89.1 |
| Run time (s) | 0.86 | 0.77 | 0.67 | 1.33 |

The results presented in Table 1 clearly show that the proposed algorithm can be used as a means of solving the missing data problem, and that it compares very well to the well-known NN-GA approach. The run time for testing varies considerably: for the NN-GA method, run time increases with an increasing number of missing variables per instance. Contrary to the NN-GA, the proposed method offers run times that decrease as the number of missing inputs increases, because the number of usable Fuzzy-ARTMAP networks reduces with an increasing number of missing inputs, as mentioned earlier. However, this improvement in speed comes at the cost of diversity; we tend to have less diversity as the number of training inputs increases. Furthermore, the method fails completely when more than $n - n_{avail}$ inputs are missing at the same time.

## 6.2 Application To Regression

In this section, we extend the algorithm implemented above to a regression problem. Instead of an ensemble of Fuzzy-ARTMAP networks as in classification, MLP networks are used. The reasons for this are twofold: firstly, MLPs are excellent regressors, and secondly, it shows that the proposed algorithm can be used with any neural network architecture.

Database: The data from a model of a Steam Generator at Abbott Power Plant [17] was used for this task. This data set has four inputs: the *fuel*, *air*, *reference level* and the *disturbance*, which is defined by the load level.
There are two outputs which we shall try to predict using the proposed approach in the presence of missing data: the *drum pressure* and the *steam flow*.

Experimental setup: Although the Fuzzy-ARTMAP could not be used for regression, we extend the same approach using MLP neural networks. As before, the task is to regress for the two outputs, the *drum pressure* and the *steam flow*. We assume $n_{avail} = 2$, so only two inputs can be used. We create an ensemble of MLP networks, each with five hidden nodes and trained using only two of the inputs to obtain the output. Due to the limited features in the data set, this work
shall only consider a maximum of one sensor failure per instance. Each network was trained for 1200 training cycles using the scaled conjugate gradient algorithm and a hyperbolic tangent activation function. All training parameters were again empirically determined.

Results: Since testing is done online, where one input arrives at a time, evaluating performance at each instance would not give a general view of how the algorithm works. The work therefore evaluates the overall performance, only after $N$ instances have been predicted, using:

$$Perf=\frac{n_{\tau}}{N}\times 100\%\tag{9}$$

where $n_{\tau}$ is the number of predictions within a certain tolerance. In this paper, a tolerance of 20% is used, chosen arbitrarily. Results are summarized in Table 2.

Table 2: Regression accuracy obtained without estimating the missing values

| | Proposed Algorithm | | NN-GA | |
|---|---|---|---|---|
| | Perf (%) | Time | Perf (%) | Time |
| Drum Pressure | 98.2 | 97.2 | 68 | 126 |
| Steam Flow | 86 | 0.77 | 84 | 98 |

'Perf' indicates the accuracy in percentage, whereas 'Time' indicates the running time in seconds. The results show that the proposed method is well suited to the problem under investigation and performs better than the combination of GA and autoencoder neural networks on this regression problem. The reason is that the errors made when imputing the missing data in the NN-GA approach are propagated further to the output-prediction stage. The ensemble-based approach proposed here does not suffer from this problem, as there is no attempt to approximate the missing variables. It can also be observed that the ensemble-based approach takes less time than the NN-GA method.
The reason for this is that the GA may take a long time to converge to reliable estimates of the missing values, depending on the objective function being optimised. Although the prediction times are negligibly small, the ensemble-based technique takes more time to train, since training involves many networks.

## 7. Conclusion

In this paper a new technique for dealing with missing data in online condition monitoring problems was presented and studied. Firstly, the problem of classifying in the presence of missing data was addressed, with no attempt made to recover the missing values. The problem domain was then extended to regression. The proposed technique performs better than the NN-GA approach, both in accuracy and in time efficiency during testing. The advantage of the proposed technique is that it eliminates the need for finding the best estimate of the data, and hence saves time. Future work will explore the incremental learning ability of the Fuzzy ARTMAP in the proposed algorithm.

## Acknowledgements

The financial assistance of the National Research Foundation (NRF) of South Africa and the Carl and Emily Fuchs Foundation is hereby acknowledged.

## 8. References

[1] R. J. A. Little and D. B. Rubin, *Statistical Analysis with Missing Data*. New York: John Wiley, 1987.
[2] J. Kim and J. Curry, "The treatment of missing data in multivariate analysis," *Sociological Methods and Research*, vol. 6, pp. 215–241, 1977.
[3] J. Schafer and J. Graham, "Missing data: Our view of the state of the art," *Psychological Methods*, vol. 7, pp. 147–177, 2002.
[4] M. Abdella and T. Marwala, "The use of genetic algorithms and neural networks to approximate missing data in database," *Computing and Informatics*, pp. 1001–1013, 2006.
[5] S. M. Dhlamini, F. V. Nelwamondo, and T. Marwala, "Condition monitoring of HV bushings in the presence of missing data using evolutionary computing," *WSEAS Transactions on Power Systems*, vol.
1, pp. 296–302, 2006.** [6] B. Gabrys, "Neuro-fuzzy approach to processing inputs with missing values in pattern recognition problems," **International** Journal of Approximate Reasoning**, vol. 30, pp. 149–179, 2002.** [7] N. Japkowicz, "Supervised learning with unsupervised output separation," **In International Conference on Artificial Intelligence** and Soft Computing**, pp. 321–325, 2002.** [8] B. B. Thompson, R. Marks, and M. A. El-Sharkawi, "On the contractive nature of autoencoders: Application to sensor restoration," **Proceedings of the IEEE International Joint** Conference on Neural Networks**, pp. 3011– 3016, 2003.** [9] A. Frolov, A. Kartashov, A. Goltsev, and R. Folk, "Quality and efficiency of retrieval for willshaw-like autoassociative networks," Computation in Neural Systems**, vol. 6, 1995.** [10] C. M. Bishop, *Neural Networks for Pattern Recognition***. New** York: Oxford University Press, 2003. [11] D. Goldberg, *Genetic Algorithms in Search, Optimization and* Machine Learning**. Reading, MA: Addison-Wesley, 1989.** [12] G. Carpenter, S. Grossberg, N. Markuzon, J. Reynolds, and D. Rosen, "Fuzzy ARTMAP: A neural network architecture for incremental supervised learning of multidimensional maps," **IEEE** Transactions on Neural Networks**, vol. 3, pp. 698–713, 1992.** [13] R. Javadpour and G. Knapp, "A fuzzy neural network approach to condition monitoring," *Sociological Methods and Research***, vol.** 45, pp. 323–330, 2003. [14] Y. Freund and R. Schapire, "A decision theoretic generalization of online learning and an application to boosting," in *Proceedings of* the Second European Conference on Computational Learning Theory**, pp. 23–37, 1995.** [15] L. I. Kuncheva, **Combining Pattern Classifies, Methods and** Algorithms**. New York: Willey Interscience, 2005.** [16] E. McGookin and D. Murray-Smith, "Using correspondence analysis to combine classifiers," *Machine Learning***, vol. 14, pp.** 1–26, 1997. [17] B. 
De Moor, "Database for the identification of systems, department of electrical engineering, esat/sista." Internet Listing, Last Acessed: 2 April 2006 1998. URL: http://http://www.esat.kuleuven.ac.be/sista/daisy.
# Artificial Intelligence For Conflict Management

E. Habtemariam*, M. Lagazio, T. Marwala*

M. Lagazio: Department of Politics and International Relations, University of Kent, Canterbury, Kent. E-mail: M.Lagazio@kent.ac.uk

*School of Electrical and Information Engineering, University of the Witwatersrand, Johannesburg, South Africa. E-mail: e.habtemariam@ee.wits.ac.za, tmarwala@yahoo.com

## Abstract

Militarised conflict is one of the risks that have a significant impact on society. A Militarised Interstate Dispute (MID) is defined as an outcome of interstate interactions which results in either peace or conflict. Effective prediction of the possibility of conflict between states is an important decision support tool for policy makers. In previous research, neural networks (NNs) were implemented to predict MIDs. Support Vector Machines (SVMs) have proven to be very good prediction techniques; in this study they are introduced for the prediction of MIDs and compared to neural networks. The results show that SVMs predict MIDs better than NNs, while NNs give more consistent and easier-to-interpret sensitivity analyses than SVMs.

**Keywords**: Militarised Interstate Disputes, Support Vector Machines, Artificial Neural Networks

## I. Introduction

Militarised Interstate Disputes (MIDs), as defined by (Gochman and Maoz, 1984) and by (Marwala and Lagazio, 2004), refer to the threat of using military force between sovereign states in an explicit way. In other words, a MID is a state that results from interactions between two states, which can be either peace or conflict. These interactions are expressed in the form of dyadic attributes, which are two states' parameters considered to influence the probability of
less democratic state plays a determinant role in the occurrence of conflict (Oneal and Russett, 1999). *Dependency* is measured as the sum of a country's imports and exports with its partner divided by the Gross Domestic Product of the stronger country. It is a continuous variable that measures the level of economic interdependence (dyadic trade as a portion of a state's gross domestic product) of the less economically dependent state in the dyad.

## III. Method

## A. Neural Networks

A neural network requires selecting the best architecture to give good classification results. During model selection, the combination of the number of hidden units, activation function, training algorithm and number of training cycles that lets the network generalise best to the test data is searched for. This helps to avoid the risk of over- or under-training. A multi-layer perceptron (MLP) neural network trained with the scaled conjugate gradient (SCG) method (Moller, 1993) was used to classify the MID input data. Logistic and hyperbolic tangent activation functions for the output and hidden layers respectively, M = 10 hidden units and 100 training cycles resulted in an optimal architecture.

## B. Support Vector Machines

An SVM maps the input into a feature space of higher dimensionality and then finds a linear separating hyperplane with maximum margin of separation. There are various mapping or kernel functions in use, the most common of which are the linear, polynomial, radial basis function (RBF) and sigmoid kernels. The choice of a kernel function depends on the type of problem at hand; the RBF kernel,

$$K(x_i, x_j) = \exp(-\gamma \|x_i - x_j\|^2), \quad \gamma > 0,$$

can handle non-linear data better than the
linear kernel function. Moreover, the polynomial kernel has a number of hyperparameters which influence the complexity of the model, and sometimes its values may become infinite or zero as the degree becomes large. This makes the RBF kernel a common choice. Similar to the neural network, the SVM also requires selection of a model that gives an optimal result. The conducted experiments show that the RBF kernel gives the best results for the classification of the MID data in much less time. When the RBF kernel is used, two parameters are required to be adjusted to give the best result: the penalty parameter of the error term, C, and the γ parameter of the RBF kernel function. Cross-validation and grid search are two methods used to find an optimal model. For this experiment, a 10-fold cross-validation technique and a simple grid search over the variables C and γ were used. The parameters C = 1 and γ = 16.75 gave the best results.

## C. MID Data

The data sets for the experiment, as discussed in the previous section, came from the Correlates of War (COW) project and were compiled and used by Russett and Oneal (Russett and Oneal, 2001). They include politically relevant dyads for the cold war and immediate post-cold war period (CW), from 1946 to 1992. As described in (Marwala and Lagazio, 2004; Lagazio and Russett, 2003; Oneal and Russett, 2001; Oneal and Russett, 1999), politically relevant dyads are all those which are contiguous or contain a major power. Distant and weak dyads are omitted from the data set because it is less probable for them to have conflicts. Since the aim is to predict the onset of a conflict rather than its continuation, the dyads include only those with no disputes or only the initial year of the
militarised conflict. The unit of analysis is a dyad-year. After the omission, a total of 27737 dyad-years remained, with 26845 peace and 892 conflict dyad-years. The dyadic data was split into two sets, a training set and a testing set. In their study, Lagazio and Russett (2003) give a detailed discussion of how the training set should be chosen. They found that a balanced set, with an equal number of conflict and peace dyads, gives the best training results for the neural network. The same principle was adhered to in this study. The training set contains 1000 randomly chosen dyads, 500 from each group. The test set contains 26737 dyads, of which 392 are conflict and 26345 non-conflict dyads.

## IV. Results and Discussion

A neural network and a support vector machine were employed to classify the MID data. The main focus of the results is the percentage of correct MID predictions on the test data set by each technique. Table I depicts the confusion matrix of the results. Although the NN performed as well as the SVM in predicting true conflicts (true positives), this was achieved at the expense of reducing the number of correct peace (true negative) predictions. The SVM picked up the true conflicts (true positives) better than the NN without greatly reducing the number of true peace (true negative) predictions. The SVM picked up 1450 more cases of true peace (true negatives) than the NN, which makes it a better choice. Overall, the SVM predicts peace and conflict correctly in 79% and 75% of cases, respectively. The corresponding results for the NN are 74% and 76%, respectively, for peace and conflict. The combined correct prediction rates are 79% for the SVM and 74% for the NN.
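The pieces described above (the RBF kernel with γ = 16.75, the balanced 500-per-class training sample, and the per-class prediction rates read off a confusion matrix) can be sketched in plain Python. This is a minimal illustration under stated assumptions, not the authors' code; the helper names are hypothetical:

```python
import math
import random

def rbf_kernel(x_i, x_j, gamma=16.75):
    # K(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2), gamma > 0
    sq_dist = sum((a - b) ** 2 for a, b in zip(x_i, x_j))
    return math.exp(-gamma * sq_dist)

def balanced_training_set(peace_dyads, conflict_dyads, n_per_class=500, seed=0):
    # Equal numbers of peace and conflict dyads, following the
    # balanced-set procedure of Lagazio and Russett (2003).
    rng = random.Random(seed)
    return (rng.sample(peace_dyads, n_per_class)
            + rng.sample(conflict_dyads, n_per_class))

def per_class_rates(true_peace, false_conflict, false_peace, true_conflict):
    # Fraction of peace (negatives) and conflict (positives)
    # predicted correctly, as reported from the confusion matrix.
    peace_rate = true_peace / (true_peace + false_conflict)
    conflict_rate = true_conflict / (true_conflict + false_peace)
    return peace_rate, conflict_rate
```

Note that the kernel of a point with itself is always 1, and it decays toward 0 as the points move apart; the decay rate is controlled by γ, which is exactly what the grid search tunes alongside C.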
other hand, keeping all the variables at their maximum values while assigning one variable to its minimum value resulted in a peaceful outcome. In other words, for the NN, no single variable is able to change the outcome if all the other variables are set to their possible maximum values. A similar experiment conducted for the SVM shows that it is not able to pick up the influence of a single variable using this approach, as is possible with the NN. That is, whether the variables are set to their minimum or maximum, a peace outcome is predicted.

2) *Experiment two*: This experiment was done to measure the sensitivity of the variables in the spirit of partial derivatives, as (Zeng, 1999) has put it. The idea is basically to see the change in the output for a change in one of the input variables. The experiment looks at how the MID prediction varies when one variable is assigned its possible maximum and minimum values while keeping all the other variables constant. The results found for both the NN and the SVM are shown in Table II. Our test data set has 26345 cases of peace and 392 cases of war. The first line of the table shows the correct number of peace and war predictions when all variables are used. Different testing data sets were then generated by assigning each variable its possible maximum and minimum values while keeping the other variables fixed. Each subsequent line of the table depicts the number of correct predictions of peace and war.

NN result: *democracy level* has the maximum effect in reducing conflict, while *capability ratio* is second, in conformance with the first experiment. Allowing *democracy* to take its possible maximum value for the whole data set avoided conflict totally. *Capability ratio* reduced the occurrence of conflict by 98%. Maximising *alliance* between the dyads reduced the number of conflicts by 20%. Maximising *dependency* reduced possible conflicts by 6%. Reducing *major power* cut the number of conflicts by 3%. Minimising the *contiguity* of the dyads to their possible lower values and maximising the *distance* reduced the