In fact, Figure 7.1 only depicts the prior distributions for each variable. This is useful, but not particularly new information. An important strength of Bayesian networks, however, is the ability to compute the posterior probability distribution of the variable under consideration, given that the values of some other variables are known. In this case, the known states of those variables can be entered as evidence in the network. Entering evidence is likely to change the states of other variables as well, since they are conditionally dependent. This is demonstrated by entering the evidence that the 'Mode choice' variable equals 'car'. The evidence on 'Mode choice' then takes the form P*(Mode choice = bike; Mode choice = car) = (0; 1), where P* indicates that we are calculating posterior probabilities (i.e. after entering evidence).

This means that the joint probability table for 'Mode choice', 'Number of cars', 'Driving License' and 'Gender' is updated by multiplying by the new distribution and dividing by the old one. The multiplication annihilates all entries with 'Mode choice' = 'bike'. The division by P(Mode choice) only affects entries with 'Mode choice' = 'car', so the division is by P(Mode choice = 'car'). For this simple example, the calculations can be found in Table 7.1b. The distributions P*(Number of cars), P*(Gender) and P*(Driving License) are then obtained by marginalization of P*(Mode choice, Gender, Number of cars, Driving License). This means that
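The update step described above can be sketched in a few lines of Python. The joint values below are illustrative placeholders, not the entries of Table 7.1b; the point is the mechanics of annihilating the inconsistent entries and dividing the rest by P(Mode choice = 'car').

```python
# Posterior update of a joint table after evidence on one variable:
# P*(joint) = P(joint) * P*(Mode) / P(Mode). Entries inconsistent with
# the evidence are annihilated; the rest are divided by P(Mode = 'car').

# Illustrative joint P(Mode choice, Driving License); made-up numbers.
joint = {
    ("bike", "yes"): 0.10,
    ("bike", "no"):  0.30,
    ("car",  "yes"): 0.45,
    ("car",  "no"):  0.15,
}

# Old marginal P(Mode choice = 'car'), needed for the division.
p_car = sum(p for (mode, _), p in joint.items() if mode == "car")  # 0.60

# Evidence P*(Mode choice) = (bike: 0, car: 1): zero out 'bike' entries,
# divide the 'car' entries by P(Mode choice = 'car').
posterior = {
    key: (p / p_car if key[0] == "car" else 0.0)
    for key, p in joint.items()
}
```

After the update the table sums to one again, with all probability mass on the 'car' entries.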

P*(Number of cars = 1; Number of cars > 1) = (0.255; 0.745) and P*(Driving License = yes; Driving License = no) = (0.522; 0.478), given the evidence that the 'Mode choice' variable equals 'car'. Obviously, the calculations in this example are simple; in real-life situations, however, conditional dependencies between the 'Mode choice' variable and other variables are likely to exist as well, and as a result the evidence will propagate through the whole network. More information about efficient algorithms for the propagation of evidence in Bayesian networks can be found in Pearl (1988) and in Jensen, Lauritzen, and Olesen (1990).
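The whole pipeline of entering evidence and marginalizing can also be sketched by brute force over a small joint table. The numbers below are again hypothetical (not those of Table 7.1b); the sketch only illustrates how posterior marginals such as P*(Number of cars) and P*(Driving License) fall out of the conditioned joint.

```python
from collections import defaultdict

# Illustrative joint P(Mode choice, Number of cars, Driving License);
# made-up numbers, chosen only so that the table sums to one.
joint = {
    ("bike", "1",  "yes"): 0.05, ("bike", "1",  "no"): 0.10,
    ("bike", ">1", "yes"): 0.05, ("bike", ">1", "no"): 0.05,
    ("car",  "1",  "yes"): 0.15, ("car",  "1",  "no"): 0.05,
    ("car",  ">1", "yes"): 0.35, ("car",  ">1", "no"): 0.20,
}

def condition(joint, var_index, value):
    """Enter evidence: keep the consistent entries and renormalize."""
    norm = sum(p for k, p in joint.items() if k[var_index] == value)
    return {k: p / norm for k, p in joint.items() if k[var_index] == value}

def marginal(joint, var_index):
    """Sum out all variables except the one at var_index."""
    out = defaultdict(float)
    for k, p in joint.items():
        out[k[var_index]] += p
    return dict(out)

post = condition(joint, 0, "car")   # evidence: Mode choice = 'car'
cars = marginal(post, 1)            # posterior P*(Number of cars)
lic  = marginal(post, 2)            # posterior P*(Driving License)
```

Brute-force enumeration of the joint, as here, grows exponentially in the number of variables; the propagation algorithms cited above (Pearl 1988; Jensen, Lauritzen, and Olesen 1990) exist precisely to avoid that cost in larger networks.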
