
EE561

Sensor Fusion
Are you ready for a 2nd opinion?
Janaka Wijayakulasooriya
PhD, MIEEE
Senior Lecturer, Department of Electrical and Electronic Engineering, University of Peradeniya
jan@ee.pdn.ac.lk

Can you see anything?


With Thermal Sensor Fusion


A new dimension to vision


You are no longer safe behind bushes


Sensor Fusion: Definition
Sensor fusion is the combining of sensory data, or data derived from sensory data, such that the resulting information is in some sense (e.g. accuracy, robustness) better than would be possible if these sources were used individually.


Sensor Fusion Models: Complementary Type
- The sensors do not depend on each other directly.
- Their outputs can be combined to establish a more complete picture of the phenomenon being observed, so the combined data set is more complete than any individual one.
- Examples: multiple cameras, each observing a different part of a room; four radars placed around a geographical region, together providing a complete picture of the area surrounding the region.
- The fusion algorithm can be as simple as appending the data sets (a small sketch follows below).
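As a minimal Python sketch of complementary fusion (the sector names and detections below are hypothetical, purely for illustration), the fusion step is essentially concatenation of the non-overlapping sensor reports:

# Complementary fusion: each sensor covers a different sector, so the fused
# picture is simply the union (concatenation) of the individual reports.
def fuse_complementary(sector_reports):
    """sector_reports maps sector name -> list of (range_km, bearing_deg) detections."""
    fused = []
    for sector, detections in sector_reports.items():
        fused.extend((sector, d) for d in detections)   # tag each detection with its sector
    return fused

# Hypothetical reports from four radars, one per sector around a region.
reports = {
    "north": [(2.1, 95.0)],
    "east": [],
    "south": [(7.4, 190.0), (3.3, 201.5)],
    "west": [(5.0, 274.2)],
}
print(fuse_complementary(reports))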



Sensor Fusion Models: Competitive Type
- Each sensor delivers an independent measurement of the same attribute or feature.
- The fusion may combine the same type of data from different sensors, or measurements from a single sensor taken at different instants.
- Provides robustness and fault tolerance, because comparison with a competing sensor can be used to detect faults.
- Can provide a degraded level of service in the presence of faults.
- The competing sensors do not necessarily have to be identical.
A fault-detection sketch based on redundant sensors follows below.
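As a minimal sketch of the fault-tolerance argument, assuming three redundant sensors measure the same quantity and a reading is declared faulty when it deviates from the group median by more than a chosen tolerance (the readings and tolerance are hypothetical):

import statistics

def detect_and_fuse(readings, tolerance):
    """Flag sensors that disagree with the group median by more than `tolerance`,
    then fuse the remaining (healthy) readings with a median."""
    group_median = statistics.median(readings.values())
    faulty = {name for name, r in readings.items() if abs(r - group_median) > tolerance}
    healthy = [r for name, r in readings.items() if name not in faulty]
    fused = statistics.median(healthy) if healthy else None   # degraded service if sensors fail
    return fused, faulty

# Three redundant temperature sensors; sensor C has drifted.
readings = {"A": 24.8, "B": 25.1, "C": 31.9}
print(detect_and_fuse(readings, tolerance=2.0))   # fused approx 24.95, faulty = {'C'}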


Sensor Fusion Models: Cooperative Type
- Data provided by two independent sensors are used to derive information that would not be available from either sensor alone.
- Example: a stereoscopic vision system. By combining the two-dimensional (2D) images from two cameras located at slightly different viewpoints (two image planes), a three-dimensional (3D) picture of the observed scene can be determined.
- Cooperative sensor fusion is difficult to design, and the resulting data are sensitive to inaccuracies in all of the individual sensors.
A depth-from-disparity sketch follows below.
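As a minimal sketch of the cooperative idea behind stereo vision, assuming an idealized, rectified pinhole camera pair for which depth follows Z = f·B/d (f: focal length in pixels, B: baseline between the cameras, d: disparity between matching pixels); all numbers below are hypothetical:

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Idealized rectified stereo: depth Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A point seen at x = 640 px in the left image and x = 610 px in the right image.
f_px = 800.0        # focal length in pixels
baseline = 0.12     # 12 cm between the two cameras
disparity = 640.0 - 610.0
print(depth_from_disparity(f_px, baseline, disparity))   # 3.2 (metres)

Neither camera alone can recover the depth; only the combination of the two views does, which is what makes this fusion cooperative.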

Stereoscopic Camera


Levels of Fusion
- Raw-level fusion: the sensor outputs are combined directly. Example: averaging the temperature readings in a room.
- Feature-level fusion: information (features) is extracted from the sensor outputs before combining. Example: in the fish classifier, the features length and lightness were extracted from the vision inputs.
- Decision/action-level fusion: fusion takes place after decisions have been made based on the individual sensors.
A short sketch contrasting the three levels follows below.
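A toy Python sketch contrasting the three levels with a two-sensor temperature example (readings, sample sets and the alarm threshold are hypothetical):

# Raw level: combine the raw sensor outputs directly (here, by averaging).
def raw_level(t1, t2):
    return (t1 + t2) / 2.0

# Feature level: extract a feature from each sensor first, then combine the features.
def feature_level(samples1, samples2):
    mean1 = sum(samples1) / len(samples1)       # feature from sensor 1
    mean2 = sum(samples2) / len(samples2)       # feature from sensor 2
    return (mean1 + mean2) / 2.0

# Decision level: each sensor makes its own decision; the decisions are fused
# (here, by requiring both sensors to agree).
def decision_level(t1, t2, threshold=30.0):
    alarm1 = t1 > threshold
    alarm2 = t2 > threshold
    return alarm1 and alarm2

print(raw_level(29.0, 31.5))
print(feature_level([28.8, 29.1, 29.0], [31.2, 31.6, 31.7]))
print(decision_level(29.0, 31.5))   # False: only one sensor exceeds the threshold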


Fusion Methods
- Probabilistic and statistical models: Bayesian reasoning, evidence theory, robust statistics, recursive operators
- Least-squares (LS) and mean-square methods: Kalman filter, optimization, regularization, uncertainty ellipsoids
- Heuristic methods: ANNs, fuzzy logic, approximate reasoning, computer vision techniques


Raw data level fusion: Case Study
[Figure: two temperature sensors give readings T1 and T2; what is the fused estimate T?]


Sensor Models
[Figure: the sensor model P(x|z), the distribution of the true quantity x given a sensor reading z]


Consider another sensor
[Figure: the second sensor's distribution P(x|z)]


How can we combine?
Let x̂ = (1 − W)·z1 + W·z2
E(x̂) = ?
Var(x̂) = ?
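A short worked answer, written in LaTeX for clarity, under the standard assumptions (not stated explicitly on the slide) that both readings are unbiased, E(z1) = E(z2) = x, with independent noise of variances σ1² and σ2²:

\[
E(\hat{x}) = (1-W)\,E(z_1) + W\,E(z_2) = x \quad \text{(unbiased for any } W\text{)}
\]
\[
\operatorname{Var}(\hat{x}) = (1-W)^2\sigma_1^2 + W^2\sigma_2^2,
\qquad
\frac{d}{dW}\operatorname{Var}(\hat{x}) = 0
\;\Rightarrow\;
W = \frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2}
\]
\[
\operatorname{Var}(\hat{x})_{\min} = \frac{\sigma_1^2\,\sigma_2^2}{\sigma_1^2 + \sigma_2^2} \;\le\; \min(\sigma_1^2, \sigma_2^2)
\]

So the optimal weight trusts the sensor with the smaller variance more, and the fused estimate is never worse than the better of the two sensors.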


Combined Measurement
[Figure: the combined distribution P(x|z1, z2), which is narrower (lower variance) than either individual sensor's distribution]
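A numeric check of the weighted-average fusion above, in Python; the readings and variances are made-up values:

def fuse_two(z1, var1, z2, var2):
    """Minimum-variance linear fusion of two unbiased readings of the same quantity."""
    w = var1 / (var1 + var2)               # optimal weight applied to z2
    x_hat = (1 - w) * z1 + w * z2
    var_hat = var1 * var2 / (var1 + var2)  # always <= min(var1, var2)
    return x_hat, var_hat

# Sensor 1 reads 25.0 with variance 4.0; sensor 2 reads 26.0 with variance 1.0.
print(fuse_two(25.0, 4.0, 26.0, 1.0))      # approx (25.8, 0.8): pulled toward the more precise sensor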


Example


Which feature(s)? Using length as a feature


Which feature(s)? Using lightness as a feature


More features: linearly separable decision boundary


Overly complex decision boundaries: an over-tuned decision boundary


More complex decision boundaries: an optimized decision boundary


Bayesian Decision Theory
Let's revisit the example of sea bass and salmon. Let w represent the class:
- w = w1: sea bass
- w = w2: salmon
Prior probabilities P(w):
- P(w1): the probability that the next fish is a sea bass
- P(w2): the probability that the next fish is a salmon
The priors reflect our prior knowledge of how likely we are to get a sea bass or a salmon before the fish actually appears.
Can we make a decision based only on the priors?

Improving the simple classifier
- Can we use P(w1) > P(w2) or P(w1) < P(w2) to decide whether the next fish is a sea bass or a salmon? What if we have to classify many fish?
- Suppose that, in addition to the priors, we have:
  - a measurement (feature) vector x = {x1, ..., xN} on the subject (example: x1 = lightness, x2 = length)
  - the class-conditional probability density function p(x|w) (example: p(x2|w1) is the probability distribution of the length of the fish, given that it is a sea bass)
- Bayes' theorem can then be used to calculate the posterior probabilities P(w|x).



Bayes Formula
P(wj|x) = p(x|wj) · P(wj) / p(x)
How do we find p(x)? By summing over the classes: p(x) = Σj p(x|wj) · P(wj)
Informally, this can be expressed in English as:
posterior = (likelihood × prior) / evidence

Bayes Classifier
- The Bayes formula shows that by observing the value of x we can convert the prior probability P(wj) into the a posteriori (posterior) probability P(wj|x).
- For the purpose of classification, what matters is the product of the likelihood p(x|wj) and the prior P(wj).
- The evidence p(x) can be regarded as just a scaling factor which ensures that the individual posterior probabilities sum to 1.

Class-conditional PDFs of the two classes
[Figure: p(x|w1) and p(x|w2) plotted against the feature x; each PDF is normalized so that the area under the curve is 1]


Posteriors
Let the priors be P(w1) = 1/3 and P(w2) = 2/3.
[Figure: the resulting posteriors P(w1|x) and P(w2|x) as functions of x]
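A minimal Python sketch of computing these posteriors; the Gaussian class-conditional densities below are hypothetical stand-ins for the curves shown on the slide, while the priors 1/3 and 2/3 are the ones given above:

import math

def gaussian_pdf(x, mean, std):
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

priors = {"w1": 1.0 / 3.0, "w2": 2.0 / 3.0}
likelihoods = {
    "w1": lambda x: gaussian_pdf(x, mean=11.0, std=1.5),   # hypothetical p(x|w1), e.g. sea bass
    "w2": lambda x: gaussian_pdf(x, mean=13.0, std=1.5),   # hypothetical p(x|w2), e.g. salmon
}

def posteriors(x):
    """Bayes formula: P(w|x) = p(x|w) P(w) / p(x), with the evidence p(x) as normalizer."""
    unnormalized = {w: likelihoods[w](x) * priors[w] for w in priors}
    evidence = sum(unnormalized.values())                   # p(x)
    return {w: u / evidence for w, u in unnormalized.items()}

post = posteriors(12.0)
print(post)                        # the two posteriors sum to 1
print(max(post, key=post.get))     # Bayes decision: pick the class with the larger posterior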


Classification based on Posteriors
Now it is sensible to set the decision rule as follows:
- If P(w1|x) > P(w2|x), select class w1
- If P(w1|x) < P(w2|x), select class w2
To justify this decision, we have to calculate the probability of error whenever we make a decision based on the measurement x and the priors.


Minimizing probability of error
The average probability of error is:
P(error) = ∫ P(error|x) · p(x) dx
- So, if P(error|x) is minimized for every individual decision, P(error) is minimized.
- If we apply the Bayes decision rule, then P(error|x) = min[ P(w1|x), P(w2|x) ].


Bayes Decision Rule: elimination of p(x)
Since the evidence p(x) in the Bayes formula just ensures that P(w1|x) + P(w2|x) = 1, it can be eliminated from the decision rule. Hence the decision can be made based only on:
- p(x|w1)·P(w1) > p(x|w2)·P(w2): decide w1
- p(x|w1)·P(w1) < p(x|w2)·P(w2): decide w2
Special cases:
- When p(x|w1) = p(x|w2), the decision is based only on the priors.
- When P(w1) = P(w2), the decision is based only on the likelihoods.

Generalization of Bayes Classifier
Consider what happens if we allow:
- More than one feature: a feature vector in a feature space
- More than two states of nature: many classes
- Other actions, e.g. rejection of doubtful samples
- A loss function that is more general than the probability of error (a hedged note on this follows below)
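A hedged note on the last point (this is the standard generalization from decision theory, not spelled out on the slide): if taking action αi when the true class is wj incurs a loss λ(αi|wj), the quantity to minimize for each x becomes the conditional risk rather than the probability of error,

\[
R(\alpha_i \mid x) \;=\; \sum_{j=1}^{c} \lambda(\alpha_i \mid w_j)\, P(w_j \mid x),
\qquad
\alpha^*(x) \;=\; \arg\min_i R(\alpha_i \mid x),
\]

and with a zero-one loss this reduces to the minimum-error-rate rule derived above.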


Fusion at Decision Level: Bayes Reasoning


Example
Consider an air surveillance detector, which can have 3 states (x).
Suppose a single sensor observes x and returns one of 3 values (z).


Sensor Model: Likelihood Matrix
- The sensor can be modeled in the form of a likelihood matrix, which gives P(z|x) for every reading/state pair.
- Consider P(z|x) for two sensors (a sketch of using such a matrix follows below).
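A minimal Python sketch of going from a likelihood matrix to a posterior for one sensor; the matrix entries and the prior below are hypothetical, since the slide's actual tables are not reproduced here:

import numpy as np

# Hypothetical likelihood matrix P(z|x): rows are the 3 sensor readings z,
# columns are the 3 states x; each column sums to 1.
P_z_given_x = np.array([
    [0.8, 0.1, 0.1],
    [0.1, 0.7, 0.2],
    [0.1, 0.2, 0.7],
])
prior = np.array([1 / 3, 1 / 3, 1 / 3])    # hypothetical prior P(x)

def posterior(z_index, likelihood_matrix, prior):
    """Bayes: P(x|z) is proportional to P(z|x) * P(x), normalized over the states."""
    unnormalized = likelihood_matrix[z_index, :] * prior
    return unnormalized / unnormalized.sum()

print(posterior(0, P_z_given_x, prior))    # [0.8, 0.1, 0.1] with a uniform prior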


Posterior Probability


Combining Sensors
Assuming that the sensor observations are conditionally independent given the state x:
P(z1, z2, ..., zN | x) = P(z1|x) · P(z2|x) ··· P(zN|x)
Therefore, for two sensors,
P(x|z1, z2) = C · P(z1|x) · P(z2|x) · P(x)
where C = 1/P(z1, z2) is a normalizing constant; equivalently, P(x|z1, z2) ∝ P(x|z1) · P(x|z2) / P(x).
A sketch of this combination follows below.
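A minimal Python sketch of this combination rule for two sensors over the same 3 states; both likelihood matrices and the prior are hypothetical, as in the single-sensor sketch above:

import numpy as np

P_z1_given_x = np.array([      # hypothetical likelihood matrix of sensor 1
    [0.8, 0.1, 0.1],
    [0.1, 0.7, 0.2],
    [0.1, 0.2, 0.7],
])
P_z2_given_x = np.array([      # hypothetical likelihood matrix of sensor 2
    [0.6, 0.3, 0.1],
    [0.3, 0.6, 0.1],
    [0.1, 0.1, 0.8],
])
prior = np.array([0.7, 0.2, 0.1])          # hypothetical prior P(x)

def fuse(z1_index, z2_index):
    """P(x|z1,z2) proportional to P(z1|x) * P(z2|x) * P(x), assuming the two
    observations are conditionally independent given the state x."""
    unnormalized = P_z1_given_x[z1_index, :] * P_z2_given_x[z2_index, :] * prior
    return unnormalized / unnormalized.sum()

# Both sensors return reading 0: the fused posterior concentrates on state 0.
print(fuse(0, 0))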


jan@ee.pdn.ac.lk