Thursday, October 9, 2008

Activity 20: Neural Networks

For this last activity, another pattern recognition model is used to group objects into their proper classes. This model is the Neural Network, which is trained here using Error Backpropagation (EB). The steps for implementing EB are shown below.


Source: Dr. M. Soriano, A-20 Neural Networks

The objects used for this activity are fishballs and squidballs. The input training matrix and desired output were set up by extracting 3 features (the height/width ratio and the R and G chromaticity values) and defining 2 classes, with 5 training samples and 5 test samples each. The training matrix therefore has 5 rows x 3 columns, while the desired output has 5 rows x 2 columns. The desired output was set to 1 for the fishball class and 0 for the squidball class.
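As a rough sketch of how such a network could be trained with error backpropagation (the hidden-layer size, learning rate, iteration count, single sigmoid output, and the placeholder data below are my own assumptions for illustration, not the settings actually used in the activity):

import numpy as np

# Minimal backpropagation sketch for a 3-input network with one hidden layer.
# Biases are omitted for brevity; all values below are placeholders.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# x: (samples, 3) features -> height/width ratio, R and G chromaticity
# d: (samples, 1) desired output -> 1 for fishball, 0 for squidball
x = rng.random((5, 3))                                 # placeholder features
d = np.array([[1], [1], [1], [0], [0]], dtype=float)   # placeholder labels

w1 = rng.normal(scale=0.5, size=(3, 4))   # input -> hidden weights
w2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights
eta = 0.5                                 # learning rate (assumed)

for _ in range(10000):
    # forward pass
    h = sigmoid(x @ w1)          # hidden-layer activations
    y = sigmoid(h @ w2)          # network output

    # backward pass: propagate the output error to each layer
    delta_out = (d - y) * y * (1 - y)              # output-layer error term
    delta_hid = (delta_out @ w2.T) * h * (1 - h)   # hidden-layer error term

    # gradient-descent weight updates
    w2 += eta * h.T @ delta_out
    w1 += eta * x.T @ delta_hid

print(np.round(sigmoid(sigmoid(x @ w1) @ w2), 3))   # outputs approach d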

Wednesday, October 1, 2008

Activity 19: Probabilistic Classification

In this activity, a different method was implemented to classify images into their proper classes using the features from the previous activity. The objects used are fishballs and squidballs. There are still 4 training sets and 4 test sets for each object, and the features used were still the ratio between the height and the width, and the R, G, and B values.

Objects used:

The method used here is Linear Discriminant Analysis (LDA), wherein a linear transformation of the features (X) is determined, given the class labels (Y), such that the transformed values on the new axes maximize the separation between the features of one class and the other.

The set of features is shown below.
To start, the features were first assigned to their classes (x1 = fishballs and x2 = squidballs). Then the mean (μi) of each class i was calculated and used to compute the mean-corrected data (xi0), given by the equation:



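Written out in standard notation (reconstructed from the description above), the mean correction is

    x_i^0 = x_i - \mu_i ,

that is, the class mean is subtracted from every feature vector of class i.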
Then, the covariance matrix (C) was determined using the equations:



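A standard way to write this (the pooled within-class form commonly used with LDA; the exact expression in the original figure may differ) is

    c_i = \frac{1}{n_i} (x_i^0)^T x_i^0 , \qquad C = \sum_i \frac{n_i}{N} \, c_i ,

where n_i is the number of samples in class i and N is the total number of samples.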
The prior probability (pi) that an object belongs to class i is just the number of samples in that class divided by the total number of samples.
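In symbols,

    p_i = \frac{n_i}{N} .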

With all of these values calculated, the LDA formula given by the equation below is used, where fi is the linear discriminant of class i, μi is the class mean, C is the covariance matrix, xk is the feature vector of object k, and pi is the prior probability of class i. The object is then assigned to the class where its calculated linear discriminant is highest.



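A standard form of the linear discriminant consistent with this description (reconstructed here; the original equation image may be written slightly differently, with μi and xk taken as row vectors) is

    f_i = \mu_i C^{-1} x_k^T - \tfrac{1}{2} \, \mu_i C^{-1} \mu_i^T + \ln(p_i) .

A minimal Python sketch of this classification rule (the function name and arguments are illustrative, not taken from the original write-up):

import numpy as np

def classify_lda(xk, means, C, priors):
    # means: list of class-mean vectors; C: pooled covariance matrix;
    # priors: list of class priors p_i; xk: feature vector of the test object
    Cinv = np.linalg.inv(C)
    f = [m @ Cinv @ xk - 0.5 * (m @ Cinv @ m) + np.log(p)
         for m, p in zip(means, priors)]
    return int(np.argmax(f))   # class with the highest linear discriminant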
Results show that:


The table above shows that 100% of the objects were successfully classified into their proper classes, with each object assigned to the class where its calculated linear discriminant is highest.
rating: 10 because proper classification was done.

Activity 18: Pattern Recognition

In this activity, we were asked to perform classification of different images through pattern recognition. This was done by assembling different objects that can be grouped into 2 to 5 classes. Half of these serve as training sets, while the other half serve as test sets. The training sets are used to distinguish one class from another.

The objects used were fishballs, kwekwek, pillows, and squidballs. I assembled 8 pieces of each object and classified them using 4 features: the ratio between the height and the width, and the R, G, and B values of each object.

The objects used are shown below.


The images were first converted into grayscale, and the threshold value of each image was determined from its histogram so that it could be properly binarized.
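A minimal sketch of this preprocessing step (the filename, the threshold value, and the direction of the comparison are placeholders; the actual threshold was read off each image's histogram):

import numpy as np
from PIL import Image

# convert to grayscale, inspect the histogram, then binarize with a
# manually chosen threshold
img = Image.open("fishball_01.jpg").convert("L")   # placeholder filename
gray = np.asarray(img, dtype=float) / 255.0

hist, edges = np.histogram(gray, bins=256, range=(0.0, 1.0))
threshold = 0.5                  # placeholder value chosen from the histogram
binary = gray > threshold        # flip the comparison if the object is darker
                                 # than the background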

The feature vectors are represented by x, and N is the number of objects in each class j.
Below are the feature vectors of the objects.



To determine the class to which an unknown feature vector belongs, the mean feature vector of each class, given by the equation below, is taken.

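Reconstructed from the description, the mean feature vector of class j is

    m_j = \frac{1}{N} \sum_{i=1}^{N} x_{j,i} ,

the average of the N training feature vectors of that class.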
The calculated means are tabulated below.


The calculated class means will then be used to determine the Euclidean distance, D, of a test feature vector from each class.

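The Euclidean distance from a test feature vector x to the mean of class j can be written as

    D_j(x) = \lVert x - m_j \rVert = \sqrt{(x - m_j)^T (x - m_j)} .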
Finally, x is assigned to the class for which the calculated Euclidean distance is minimum.
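A minimal sketch of this minimum-distance rule (the names below are illustrative):

import numpy as np

def classify_min_distance(x, class_means):
    # class_means: one mean feature vector per class, from the training set
    distances = [np.linalg.norm(x - m) for m in class_means]
    return int(np.argmin(distances))   # index of the nearest class mean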
Results from the table below show that 100% of the objects were properly classified.



rating: 10 because proper classification of the objects was done!
acknowledgement: Angel and Marge for helping me with the program.