

CLASSIFICATION

LDA

Logistic Regression

Naive Bayes

Decision Tree

Support Vector Machine for Binary Classification

Two parameters govern a soft-margin SVM with a Gaussian kernel: C, which controls how far the margin constraints may be relaxed (how many violations are tolerated), and σ, which sets the spread of the Gaussian kernel. With SVM, when a new feature is added, a support vector (SV) is also required to carry the classification in that dimension.
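
As a rough illustration (not from the original notes), the sketch below assumes scikit-learn's SVC, where the RBF kernel's gamma parameter plays the role of 1/(2σ²):

```python
# Sketch, assuming scikit-learn: soft-margin SVM with a Gaussian (RBF) kernel.
# C relaxes the margin constraints; gamma = 1 / (2 * sigma**2) sets the kernel spread.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

sigma = 1.0  # assumed kernel spread, for illustration only
clf = SVC(kernel="rbf", C=1.0, gamma=1.0 / (2 * sigma**2))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
print("support vectors per class:", clf.n_support_)
```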

With a boosting algorithm such as AdaBoost, learning proceeds so that the weight vector is expressed with the smallest possible number of features. As a result, classification is performed with few features, and it becomes possible to identify the features with the highest contribution. If the weak learner of AdaBoost is just a threshold on a single feature (a decision stump), then weighting and selecting the learners amounts to selecting the features at the same time.
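
To make this concrete, here is a small sketch assuming scikit-learn, whose AdaBoostClassifier uses depth-1 decision trees (decision stumps) as its default weak learner; the nonzero feature importances mark exactly the features the selected stumps threshold on:

```python
# Sketch, assuming scikit-learn: AdaBoost with its default weak learner, a
# depth-1 decision tree (a decision stump that thresholds a single feature).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=300, n_features=20, n_informative=3,
                           random_state=0)
boost = AdaBoostClassifier(n_estimators=50, random_state=0)
boost.fit(X, y)

# Each chosen stump uses one feature, so the nonzero importances mark the
# features that were effectively selected during boosting.
used = [i for i, w in enumerate(boost.feature_importances_) if w > 0]
print("features selected by the stumps:", used)
```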

SVM, on the other hand, attempts to express the weight vector using the smallest possible number of cases (the support vectors), which makes it difficult to analyze feature importance from the learned model. If we have enough data from every class, randomly subsampling the whole data set will not greatly affect SVM performance. Adding a new irrelevant feature, however, will affect performance, because it changes the feature space in which the separating hyperplane is computed.

kernel: linear, Gaussian (RBF), sigmoid
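
A short sketch comparing these kernels on the same synthetic data, again assuming scikit-learn's SVC (where the Gaussian kernel is called "rbf"):

```python
# Sketch, assuming scikit-learn: comparing the kernels listed above.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
for kernel in ("linear", "rbf", "sigmoid"):  # "rbf" is the Gaussian kernel
    scores = cross_val_score(SVC(kernel=kernel, C=1.0), X, y, cv=5)
    print(kernel, "mean CV accuracy:", round(scores.mean(), 3))
```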

Ensemble Learning

Discrete AdaBoost, Real AdaBoost, LogitBoost and Gentle AdaBoost

AdaBoost is a very simple algorithm compared to either neural networks or SVMs, and as a result it requires significantly less time and fewer resources to train, while often outperforming them as well. Another favorable characteristic of AdaBoost is that it appears to be resistant to over-fitting.

Random Forests
There are two well-known steps in RF. The first is the bootstrap step, in which the classification trees are constructed concurrently, each from a new training set formed independently by sampling the data from the original dataset with replacement. The second is the bagging step, in which the trees are combined into a classification forest and the result is decided by aggregating their votes.
Each decision tree in the random forest is grown unpruned to its largest size. The root of every tree corresponds to a different bootstrap training subset created by the bootstrap step. Each node is split using the best split found among a randomly chosen subset of the features. Every final leaf holds examples of a single class label, and the class label of the leaf that a new example reaches is the detection result for that example.
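
A compact sketch of these two steps, assuming scikit-learn's RandomForestClassifier, which performs the bootstrap sampling and the bagging vote internally:

```python
# Sketch, assuming scikit-learn: bootstrap sampling + bagging of unpruned trees.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=15, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(
    n_estimators=100,     # number of bootstrap samples / trees
    bootstrap=True,       # sample the training set with replacement per tree
    max_depth=None,       # grow each tree unpruned, to its largest size
    max_features="sqrt",  # best split searched over a random feature subset
    random_state=0,
)
rf.fit(X_train, y_train)
print("test accuracy:", rf.score(X_test, y_test))  # majority vote of the trees
```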

METRICS

metrics of binary classification

 


Sensitivity = Recall = True positive rate

Specificity = True negative rate
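
As a quick reference, here is a sketch computing the two rates from a confusion matrix, assuming scikit-learn's confusion_matrix and its [[TN, FP], [FN, TP]] layout for binary labels:

```python
# Sketch, assuming scikit-learn: sensitivity and specificity from a confusion matrix.
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 1, 1, 0, 1, 0]   # toy labels, for illustration only
y_pred = [0, 1, 1, 1, 0, 0, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # recall / true positive rate
specificity = tn / (tn + fp)   # true negative rate
print("sensitivity:", sensitivity, "specificity:", specificity)
```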

Multiclass classification

Regression (multiple and multivariate)

Clustering and other dimension reduction

 
