under-sampling, over-sampling, increasing minority samples and decreasing majority samples simultaneously, synthesising "new" samples from the minority class (e.g., SMOTE), bootstrapping
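For concreteness, a minimal numpy sketch of random over-/under-sampling for a binary problem (the function name and balancing rule are illustrative choices, not a standard API):

```python
import numpy as np

def random_resample(X, y, minority_label, strategy="over", seed=0):
    """Balance a binary data set by random over- or under-sampling."""
    rng = np.random.default_rng(seed)
    min_idx = np.flatnonzero(y == minority_label)
    maj_idx = np.flatnonzero(y != minority_label)
    if strategy == "over":
        # draw extra minority samples with replacement until classes match
        extra = rng.choice(min_idx, size=len(maj_idx) - len(min_idx), replace=True)
        keep = np.concatenate([maj_idx, min_idx, extra])
    else:
        # keep only as many majority samples as there are minority samples
        keep = np.concatenate([min_idx, rng.choice(maj_idx, size=len(min_idx), replace=False)])
    return X[keep], y[keep]
```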
0-1 normalization: (x - min(x)) / (max(x) - min(x))
Gaussian normalization (z-score standardization): (x - mean(x)) / std(x)
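Both normalizations in a few lines of numpy (treating "Gaussian normalization" as z-score standardization, which is the usual reading):

```python
import numpy as np

def min_max_scale(x):
    # 0-1 normalization: smallest value maps to 0, largest to 1
    return (x - x.min()) / (x.max() - x.min())

def z_score(x):
    # Gaussian / z-score normalization: zero mean, unit variance
    return (x - x.mean()) / x.std()
```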
time series: entropy, approximate entropy, sample entropy
plus some domain knowledge
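A compact (O(N²)-memory) sketch of sample entropy, SampEn(m, r) = -ln(A/B), where B counts template pairs of length m within Chebyshev tolerance r and A counts the same for length m+1; the default r = 0.2·std is a common rule of thumb, not a requirement:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy of a 1-D series (self-matches excluded)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()

    def pair_count(length):
        # all overlapping templates of the given length
        t = np.array([x[i:i + length] for i in range(len(x) - length + 1)])
        # pairwise Chebyshev (max-coordinate) distances between templates
        d = np.abs(t[:, None, :] - t[None, :, :]).max(axis=2)
        n = len(t)
        return (np.count_nonzero(d <= r) - n) / 2  # drop self-matches, count pairs once

    return -np.log(pair_count(m + 1) / pair_count(m))
```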
- Density estimation: kernel density estimation (Parzen–Rosenblatt window)
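For example, with SciPy's Gaussian-kernel (Parzen window) estimator:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
sample = rng.normal(size=500)       # 1-D sample whose density we estimate

kde = gaussian_kde(sample)          # Gaussian kernel, bandwidth via Scott's rule
grid = np.linspace(-4, 4, 9)
print(kde(grid))                    # estimated density at the grid points
```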
How to choose a proper feature selection method for your data? Go from simpler methods to more complicated ones, and from linear methods to non-linear ones.
The combinations of individually good features do not necessarily lead to good classification performance. “The m best features are not the best m features”
is a multi-dimensional generalization of the idea of measuring how many standard deviations away a point P is from the mean of the distribution D. The Mahalanobis distance effectively whitens the random vector into a zero-mean vector with identity covariance; in that space, the Euclidean distance can be safely applied. It can be used to identify outliers, i.e., data points far from the bulk of the distribution. We can also consider one feature at a time (reducing the multivariate problem to univariate ones), in which case the covariance matrix reduces to a diagonal matrix. We can then rank the features by distance and delete one feature at a time, identifying the best combination of features by tracking how the metric changes.
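A minimal numpy version (the chi-square cutoff for flagging outliers is an illustrative convention, assuming approximately Gaussian data):

```python
import numpy as np
from scipy.stats import chi2

def mahalanobis_distances(X):
    """Mahalanobis distance of every row of X from the sample distribution."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - mu
    # equivalent to whitening to zero mean / identity covariance, then Euclidean distance
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

X = np.random.default_rng(0).normal(size=(200, 3))
d = mahalanobis_distances(X)
cutoff = np.sqrt(chi2.ppf(0.99, df=X.shape[1]))  # d^2 ~ chi^2(p) for Gaussian data
print(np.flatnonzero(d > cutoff))                # candidate outliers
```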
Hypothesis testing to check whether the difference in one feature is significant among classes: t-test
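For instance, a per-feature Welch t-test with SciPy (the synthetic shift on feature 0 just makes one feature genuinely discriminative):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = rng.integers(0, 2, size=200)
X[y == 1, 0] += 1.0                 # make feature 0 class-dependent

for j in range(X.shape[1]):
    # Welch's two-sample t-test: does the feature's mean differ between classes?
    t, p = ttest_ind(X[y == 0, j], X[y == 1, j], equal_var=False)
    print(f"feature {j}: t = {t:+.2f}, p = {p:.4f}")
```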
- Mutual information
MI is more general: it measures how similar the joint distribution p(X,Y) is to the product of the factored marginal distributions p(X)p(Y). I(x_i; y) measures the dependency between the density of variable x_i and the density of the target y. Intuitively, mutual information measures the information that X and Y share: how much knowing one of these variables reduces uncertainty about the other. I(X;Y) = 0 if and only if X and Y are independent random variables. Moreover, mutual information is non-negative (I(X;Y) ≥ 0) and symmetric (I(X;Y) = I(Y;X)).
Venn-diagram intuition: the area contained by both circles is the joint entropy H(X,Y). The circle on the left (red and violet) is the individual entropy H(X), with the red being the conditional entropy H(X|Y). The circle on the right (blue and violet) is H(Y), with the blue being H(Y|X). The violet is the mutual information I(X;Y), which equals the amount of uncertainty in Y that is removed by knowing X.
How to estimate MI for continuous variables: data discretization; density estimation methods (e.g., Parzen windows with a Gaussian window)
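A histogram (discretization) estimator in a few lines; scikit-learn's mutual_info_classif / mutual_info_regression, which use a k-nearest-neighbor estimator, are a more robust alternative:

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Histogram estimate of I(X;Y) in nats for two 1-D samples."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()             # joint distribution p(X,Y)
    px = pxy.sum(axis=1, keepdims=True)   # marginal p(X)
    py = pxy.sum(axis=0, keepdims=True)   # marginal p(Y)
    nz = pxy > 0                          # skip empty cells to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px * py)[nz])))
```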
- mRMR (Minimum Redundancy Maximum Relevance)
Maximal relevance: select the features with the highest relevance to the target class c; relevance is usually characterized in terms of correlation or mutual information. Minimum redundancy: simultaneously penalize features that are redundant with one another, i.e., that share high mutual information with the already-selected features.
We can obtain a ranked list of all the features and then apply a wrapper method over the ranking, as sketched below.
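A greedy mRMR sketch using the "MID" (difference) criterion; the precomputed MI arrays are assumed inputs, e.g. from the estimator above:

```python
import numpy as np

def mrmr_rank(relevance, redundancy, k):
    """Greedy mRMR: relevance[i] = I(x_i; c), redundancy[i, j] = I(x_i; x_j)."""
    selected = [int(np.argmax(relevance))]   # start with the most relevant feature
    while len(selected) < k:
        rest = [i for i in range(len(relevance)) if i not in selected]
        # score = relevance minus mean redundancy with the already-selected set
        scores = [relevance[i] - redundancy[i, selected].mean() for i in rest]
        selected.append(rest[int(np.argmax(scores))])
    return selected
```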
- KL distance
FDA (Fisher’s discriminant analysis)
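In scikit-learn, FDA is available as LinearDiscriminantAnalysis; on Iris, for example:

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)
lda = LinearDiscriminantAnalysis(n_components=2)  # at most (n_classes - 1) components
Z = lda.fit_transform(X, y)                       # directions maximizing class separation
print(Z.shape)                                    # (150, 2)
```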
Tree-Based Methods (embedded)
AdaBoost with a tree stump (variable importance is measured by how much error the variable reduces each time it is used in a tree's split/branch), CART, BART (Bayesian additive regression trees), random forest
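e.g., impurity-based importances from a random forest:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
# total impurity decrease attributed to each feature, averaged over trees
ranking = forest.feature_importances_.argsort()[::-1]
print(ranking[:10])                 # indices of the top-10 features
```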
Greedily select features using the performance of a reliable classifier, combined with data partitioning (subsampling); see the sketch below.
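e.g., greedy forward selection judged by cross-validated performance, via scikit-learn's SequentialFeatureSelector:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
selector = SequentialFeatureSelector(
    LogisticRegression(max_iter=5000),
    n_features_to_select=5,
    direction="forward",
    cv=5,                                  # judge each candidate by CV score
)
selector.fit(X, y)
print(selector.get_support(indices=True))  # indices of the selected features
```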
Dimension Reduction Methods
Dimension reduction methods change the feature space: the new features are combinations of the original ones rather than a subset of them.
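For example, PCA projects onto linear combinations of the original features:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA

X, _ = load_breast_cancer(return_X_y=True)
pca = PCA(n_components=5).fit(X)
Z = pca.transform(X)                  # new features: linear combinations of old ones
print(pca.explained_variance_ratio_)  # variance captured by each component
```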
Ensemble Feature Selection
- Harmony search
- Data Reliability Based Feature Selection: a feature is considered reliable (or relevant) if its values are tightly grouped together.
- Stability selection: apply a feature selection algorithm to different subsets of the data and different subsets of the features (bootstrap), keeping the features that are selected consistently; a sketch follows this list.
- When dealing with an imbalanced data set, create multiple balanced data sets from the original one via sampling, and then evaluate feature subsets using an ensemble of base classifiers, each trained on one balanced data set; sketched below as well.
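A stability-selection sketch over random half-samples, with an L1-penalized logistic regression as the base selector (the 0.8 frequency threshold is an illustrative choice):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
counts = np.zeros(X.shape[1])

n_rounds = 100
for _ in range(n_rounds):
    idx = rng.choice(len(X), size=len(X) // 2, replace=False)  # random half of the data
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    coef = model.fit(X[idx], y[idx]).coef_.ravel()
    counts += coef != 0                 # record which features survived the L1 penalty

frequency = counts / n_rounds
print(np.flatnonzero(frequency > 0.8))  # features selected in > 80% of rounds
```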
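And a sketch of the balanced-ensemble idea, averaging tree importances over several randomly balanced under-samples (the function name and round count are illustrative):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def balanced_importances(X, y, n_rounds=20, seed=0):
    """Average tree importances over randomly balanced subsamples (binary y)."""
    rng = np.random.default_rng(seed)
    minority = np.argmin(np.bincount(y))  # label of the rarer class
    min_idx = np.flatnonzero(y == minority)
    maj_idx = np.flatnonzero(y != minority)
    total = np.zeros(X.shape[1])
    for _ in range(n_rounds):
        # under-sample the majority class down to the minority size
        keep = np.concatenate([min_idx,
                               rng.choice(maj_idx, size=len(min_idx), replace=False)])
        tree = DecisionTreeClassifier(random_state=0).fit(X[keep], y[keep])
        total += tree.feature_importances_
    return total / n_rounds
```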