A Bayes net (also known as a belief network or probabilistic causal network) captures believed relations (which may be uncertain, stochastic, or imprecise) between a set of variables that are relevant to some problem. They might be relevant because they will be observable, because their values are needed to take some action or report some result, or because they are intermediate or internal variables that help express the relationships among the rest of the variables.

Example: A classic example of the use of Bayes nets is in the medical domain. Here each new patient typically corresponds to a new case, and the problem is to diagnose the patient (i.e., find beliefs for the unmeasurable disease variables), predict what is going to happen to the patient, or find an optimal prescription, given the values of observable variables (symptoms). A doctor may be the expert used to define the structure of the net and to provide the initial relations between variables (often in the form of conditional probabilities), based on medical training and experience with previous cases. Then the net's probabilities may be fine-tuned using statistics from previous cases, and from new cases as they arrive.

Bayes Net Construction: When the Bayes net is constructed, one node is used for each scalar variable, which may be discrete, continuous, or propositional (true/false). Because each node corresponds to a variable, the words “node” and “variable” are used interchangeably throughout this document, but “variable” usually refers to the real world or the original problem, while “node” usually refers to its representation within the Bayes net.

The nodes are then connected with directed links. If there is a link from node A to node B, then node A is sometimes called the parent, and node B the child (of course, B could be the parent of another node). Usually a link from node A to node B indicates that A causes B, that A partially causes or predisposes B, that B is an imperfect observation of A, that A and B are functionally related, or that A and B are statistically correlated. The precise definition of a link is based on conditional independence, and is explained in detail in a reference like Neapolitan90 or Pearl88. However, most people seem to intuitively grasp the notion of links, and use them effectively without concentrating on the precise definition.

Finally, a conditional probability table is provided for each node, which expresses the probabilities of that node taking on each of its values, conditioned on the values of its parent nodes. Some nodes may have a deterministic relation, which means that the value of the node is given as a direct function of the parent node values.
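The shape of such a table can be sketched as follows. This is only an illustrative data structure, not Netica's internal representation or API; the node names ("Rain", "Sprinkler") and all numbers are invented for the example.

```python
# Illustrative sketch (not Netica's API): a conditional probability table
# for a node "Sprinkler" with one boolean parent "Rain".  Each row maps an
# assignment of parent values to a distribution over the node's own values.
sprinkler_cpt = {
    (True,):  {True: 0.01, False: 0.99},  # P(Sprinkler | Rain = True)
    (False,): {True: 0.40, False: 0.60},  # P(Sprinkler | Rain = False)
}

# A deterministic relation is the special case where every row puts all of
# its probability mass on a single value -- i.e. the node's value is a
# direct function of its parents' values.
def wet_grass(sprinkler, rain):
    return sprinkler or rain
```

Note that each row of the table sums to 1, since it is a full distribution over the node's values for that parent configuration.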

Probabilistic Inference: After the Bayes net is constructed, it may be applied to a particular case. For each variable you know the value of, you enter that value into its node as a finding (also known as “evidence”). Then Netica does probabilistic inference to find beliefs for all the other variables. Suppose one of the nodes corresponds to the variable “Temperature”, and it can take on the values cold, medium and hot. Then an example belief for temperature could be: [cold - 0.1, medium - 0.6, hot - 0.3], indicating the subjective probabilities that the temperature is cold, medium or hot.
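For a tiny net, the belief updating that Netica performs can be worked by hand with Bayes' rule. The sketch below uses a hypothetical two-node net (Temperature with a child "Shivering" node); the prior and the conditional probabilities are made up purely for illustration, and Netica computes this automatically when a finding is entered.

```python
# Illustrative belief updating on a two-node net: Temperature -> Shivering.
# All numbers are invented for the example.
prior = {"cold": 1 / 3, "medium": 1 / 3, "hot": 1 / 3}  # P(Temperature)
p_shiver = {"cold": 0.9, "medium": 0.3, "hot": 0.05}    # P(Shivering=true | Temperature)

# Finding entered: Shivering = true.  By Bayes' rule, the posterior belief
# P(T | S=true) is proportional to P(S=true | T) * P(T).
unnorm = {t: p_shiver[t] * prior[t] for t in prior}
z = sum(unnorm.values())
belief = {t: p / z for t, p in unnorm.items()}
# belief == {"cold": 0.72, "medium": 0.24, "hot": 0.04}
```

The resulting belief is a distribution of exactly the form shown above for the Temperature node.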

Depending on the structure of the net, and which nodes receive findings or display beliefs, Netica may do diagnosis, prediction, classification, logic, arithmetic calculation, or any combination of these, to complete the probabilistic inference. The final beliefs are sometimes called posterior probabilities (with prior probabilities being the probabilities before any findings were entered). Probabilistic inference done using a Bayes net is called belief updating.

If you want to apply the net to a different case, then all the findings can be retracted, new findings entered, and belief updating repeated to find new beliefs for all the nodes.

Probabilistic inference only results in a set of beliefs at each node; it does not change the net (knowledge base) at all. If the entered findings are a true example that may be indicative of cases that will be seen in the future, you may want them to change the knowledge base a little as well, so that the next time it is used, its conditional probabilities more accurately reflect the real world. To achieve this, you would also do probability revision.
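One common way to revise probabilities from cases is simply to keep counts of observed values and re-derive the table entries from them. This is a generic counting scheme for illustration, not necessarily the algorithm Netica uses, and the variable values are the hypothetical Temperature states from the earlier example.

```python
# Illustrative probability revision by counting (one common scheme, not
# necessarily Netica's algorithm).  Each completed case increments a count,
# and the probability table entry is re-derived from the totals.
counts = {"cold": 1, "medium": 1, "hot": 1}  # uniform "prior experience"

def revise(counts, finding):
    """Incorporate one observed case into the accumulated counts and
    return the revised probability distribution."""
    counts[finding] += 1
    total = sum(counts.values())
    return {value: n / total for value, n in counts.items()}

probs = revise(counts, "cold")  # after one observed cold case
probs = revise(counts, "cold")  # after a second cold case
# probs["cold"] == 0.6 -- the table has shifted toward the observed data
```

Unlike belief updating, each call here permanently changes the stored counts, so the knowledge base itself drifts toward the statistics of the cases seen.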

See also Learning from Cases.