Classification Methods for Remotely Sensed Data, Second Edition
Page 277
7.3.2 Bayesian multisource classification mechanism
The model described in this section is based on Lee et al. (1987). In the general multisource case, a set of observations from n (n > 1) different sources is used. Let xi, i = 1, 2, …, n, denote the observation derived from the ith source, and let ωj denote the jth candidate class.
According to Bayesian probability theory, the law of conditional probabilities states that:
$$P(\omega_j \mid x_1, x_2, \ldots, x_n) = \frac{P(x_1, x_2, \ldots, x_n \mid \omega_j)\, P(\omega_j)}{P(x_1, x_2, \ldots, x_n)} \tag{7.1}$$
where P(ωj | x1, x2, …, xn) is known as the conditional or posterior probability that ωj is the correct class, given the observed data vector (x1, x2, …, xn). P(x1, x2, …, xn | ωj) is the probability density function (p.d.f.) associated with the measured data (x1, x2, …, xn) given that (x1, x2, …, xn) are members of class ωj. P(x1, x2, …, xn) is the p.d.f. of data (x1, x2, …, xn). Assuming class-conditional independence among the sources, one obtains P(x1, x2, …, xn | ωj) = P(x1 | ωj)·P(x2 | ωj)·…·P(xn | ωj), and Equation (7.1) becomes:
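The class-conditional independence factorisation can be illustrated numerically. The sketch below uses made-up values: two sources, three classes, with hypothetical priors P(ωj) and per-source likelihoods P(xi | ωj) for a single observed pair (x1, x2); it computes the posterior of Equation (7.2) by multiplying the likelihoods across sources and normalising.

```python
# Hypothetical example of Equation (7.2): two sources, three classes.
# All numbers below are invented for illustration.
priors = [0.5, 0.3, 0.2]           # P(w_j), j = 1, 2, 3
lik_s1 = [0.20, 0.10, 0.05]        # P(x_1 | w_j), source 1
lik_s2 = [0.08, 0.30, 0.02]        # P(x_2 | w_j), source 2

# Class-conditional independence: P(x_1, x_2 | w_j) = P(x_1 | w_j) * P(x_2 | w_j)
joint_lik = [a * b for a, b in zip(lik_s1, lik_s2)]

# Posterior is proportional to P(w_j) * prod_i P(x_i | w_j);
# the denominator P(x_1, x_2) is recovered by normalising over classes.
unnormalised = [p * l for p, l in zip(priors, joint_lik)]
total = sum(unnormalised)
posterior = [u / total for u in unnormalised]
```

Normalising over the finite set of classes sidesteps computing P(x1, x2) explicitly, since it is the same constant for every class.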
$$P(\omega_j \mid x_1, x_2, \ldots, x_n) = \frac{P(\omega_j) \prod_{i=1}^{n} P(x_i \mid \omega_j)}{P(x_1, x_2, \ldots, x_n)} \tag{7.2}$$
Again, following the law of conditional probabilities,
$$P(x_i \mid \omega_j) = \frac{P(\omega_j \mid x_i)\, P(x_i)}{P(\omega_j)} \tag{7.3}$$
If Equation (7.3) is substituted into Equation (7.2), one obtains:
$$P(\omega_j \mid x_1, x_2, \ldots, x_n) = \frac{\prod_{i=1}^{n} P(\omega_j \mid x_i)\, P(x_i)}{P(\omega_j)^{\,n-1}\, P(x_1, x_2, \ldots, x_n)} \tag{7.4}$$
If the intersource independence assumption has been made, such that P(x1)·P(x2)·…·P(xn) = P(x1, x2, …, xn), then Equation (7.4) results in:
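Under intersource independence the ∏ P(xi) terms in Equation (7.4) cancel against P(x1, x2, …, xn), leaving the fused posterior proportional to ∏ P(ωj | xi) / P(ωj)^(n−1). The sketch below (invented numbers, same hypothetical two-source, three-class setting as above) checks this: it fuses per-source posteriors using that expression and compares the result against the direct Bayes computation from the likelihoods.

```python
# Hypothetical check that fusing per-source posteriors via
#   P(w_j | x_1,...,x_n)  proportional to  prod_i P(w_j | x_i) / P(w_j)^(n-1)
# reproduces the direct multisource Bayes posterior. Numbers are invented.
priors = [0.5, 0.3, 0.2]
likelihoods = [                    # P(x_i | w_j), one row per source (n = 2)
    [0.20, 0.10, 0.05],
    [0.08, 0.30, 0.02],
]
n = len(likelihoods)

def normalise(v):
    s = sum(v)
    return [x / s for x in v]

# Per-source posteriors P(w_j | x_i) from single-source Bayes rules.
per_source = [normalise([p * l for p, l in zip(priors, row)])
              for row in likelihoods]

# Fuse: multiply per-source posteriors, divide by P(w_j)^(n-1), renormalise.
fused = [1.0] * len(priors)
for post in per_source:
    fused = [f * q for f, q in zip(fused, post)]
fused = normalise([f / p ** (n - 1) for f, p in zip(fused, priors)])

# Direct computation: P(w_j) * prod_i P(x_i | w_j), renormalised.
direct = list(priors)
for row in likelihoods:
    direct = [d * l for d, l in zip(direct, row)]
direct = normalise(direct)
```

The agreement of `fused` and `direct` reflects the algebra: the product of per-source posteriors carries a factor P(ωj)^n, and dividing by P(ωj)^(n−1) restores the single prior that the direct computation uses.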