
Feature Engineering

Data Pre-processing (Transformation)

Sampling

under-sampling, over-sampling, increasing minority samples and decreasing majority samples simultaneously, synthesising “new” samples from the minority class (e.g., SMOTE), bootstrap
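
A minimal sketch of random over- and under-sampling with NumPy (assuming a feature matrix X and a binary label array y); libraries such as imbalanced-learn additionally provide SMOTE-style synthetic over-sampling.

```python
import numpy as np

def random_resample(X, y, minority_label, strategy="over", seed=0):
    """Randomly over-sample the minority class or under-sample the majority class."""
    rng = np.random.default_rng(seed)
    minority = np.where(y == minority_label)[0]
    majority = np.where(y != minority_label)[0]
    if strategy == "over":
        # draw extra minority indices with replacement until the classes are balanced
        extra = rng.choice(minority, size=len(majority) - len(minority), replace=True)
        idx = np.concatenate([majority, minority, extra])
    else:
        # keep only as many majority samples as there are minority samples
        kept = rng.choice(majority, size=len(minority), replace=False)
        idx = np.concatenate([kept, minority])
    rng.shuffle(idx)
    return X[idx], y[idx]
```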

Normalization

sigmoid normalization

0-1 normalization: (x – min(x)) / (max(x) – min(x))

z-score

Gaussian normalization (Gaussian kernel)

Box-Cox transformation
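
A quick sketch of several of these transforms with NumPy/SciPy (the sample vector is made up; Box-Cox requires strictly positive inputs):

```python
import numpy as np
from scipy import stats

x = np.array([2.0, 5.0, 9.0, 4.0, 7.0])

x_minmax = (x - x.min()) / (x.max() - x.min())   # 0-1 normalization
x_zscore = (x - x.mean()) / x.std()              # z-score
x_sigmoid = 1.0 / (1.0 + np.exp(-x_zscore))      # sigmoid normalization of the standardized values
x_boxcox, lam = stats.boxcox(x)                  # Box-Cox transform (lambda fitted by maximum likelihood)
```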

 

Feature Engineering

image

speech

text

time series: entropy, approximate entropy, sample entropy (see the sketch below)

plus some domain knowledge
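
For the time-series entropy features, here is a rough sample-entropy sketch; the embedding dimension m and the tolerance r are assumptions to tune, and self-matches are excluded as in the usual definition.

```python
import numpy as np

def sample_entropy(u, m=2, r=None):
    """SampEn(m, r) = -log(A / B), where B counts pairs of length-m templates within
    tolerance r (Chebyshev distance) and A counts pairs of length-(m+1) templates."""
    u = np.asarray(u, dtype=float)
    if r is None:
        r = 0.2 * u.std()                      # a common heuristic for the tolerance

    def count_matches(length):
        templates = np.array([u[i:i + length] for i in range(len(u) - length)])
        count = 0
        for i in range(len(templates)):
            # distance from template i to all later templates (avoids double counting and self-matches)
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count

    A, B = count_matches(m + 1), count_matches(m)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

print(sample_entropy(np.random.default_rng(0).normal(size=500)))
```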

Data Visualization

  1. Statistics
  2. Histogram
  3. Density estimation: kernel density estimation (Parzen–Rosenblatt window); see the sketch below
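
A minimal kernel density estimation sketch with SciPy's Gaussian kernel (the data is synthetic; the bandwidth defaults to Scott's rule):

```python
import numpy as np
from scipy.stats import gaussian_kde

x = np.random.default_rng(0).normal(loc=0.0, scale=1.0, size=300)
kde = gaussian_kde(x)                        # Gaussian (Parzen-Rosenblatt) window
grid = np.linspace(x.min(), x.max(), 100)
density = kde(grid)                          # estimated density evaluated on the grid
```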

Feature Selection

How to choose a proper feature selection method for your data? Start with simpler methods and move to more complicated ones, and go from linear methods to non-linear ones.
Combinations of individually good features do not necessarily lead to good classification performance: “the m best features are not the best m features.”

Similarity Measure

Euclidean distance

Cosine distance

Gaussian distance

Locality-sensitive hashing, Hamming distance

Mahalanobis distance

A multi-dimensional generalization of measuring how many standard deviations a point P is from the mean of a distribution D. The Mahalanobis distance effectively transforms the random vector into a zero-mean vector with an identity covariance matrix; in that space, the Euclidean distance can be safely applied. It can be used to identify outliers, i.e., data points far away from the bulk of the data. If we consider one feature at a time (multivariate reduced to univariate), the covariance matrix reduces to a diagonal matrix. We can then rank the features by the distance, and delete one feature at a time to identify the best combination of features by observing how the metric changes.
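
A small sketch of Mahalanobis-distance outlier scoring (rows of X are observations; a pseudo-inverse is used in case the covariance matrix is singular):

```python
import numpy as np

def mahalanobis_distances(X):
    """Distance of each row of X from the sample mean, scaled by the covariance."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))   # pseudo-inverse guards against singularity
    diff = X - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

X = np.random.default_rng(1).normal(size=(200, 3))
X[0] = [8.0, -8.0, 8.0]                                  # an obvious outlier
d = mahalanobis_distances(X)
print(d.argmax())                                        # the outlier has the largest distance
```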

Statistical Tests

Hypothesis testing of whether the difference in a feature is significant among classes, e.g. the t-test.
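
For example, with SciPy (synthetic data; Welch's variant of the t-test, which does not assume equal variances): each feature column gets its own test, and small p-values flag features that differ significantly between the two classes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = rng.integers(0, 2, size=100)
X[y == 1, 0] += 1.0                                              # make feature 0 informative

t, p = stats.ttest_ind(X[y == 0], X[y == 1], equal_var=False)   # one test per column
print(p)                                                         # feature 0 should have the smallest p-value
```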

Filter Methods

  • Correlation
  • F-statistics
  • Mutual information

MI is more general and measures how similar the joint distribution p(X,Y) is to the product of the marginal distributions p(X)p(Y). I(x_i; y) is a measure of dependency between the density of variable x_i and the density of the target y. Intuitively, mutual information measures the information that X and Y share: it measures how much knowing one of these variables reduces uncertainty about the other. I(X; Y) = 0 if and only if X and Y are independent random variables. Moreover, mutual information is non-negative (I(X;Y) ≥ 0) and symmetric (I(X;Y) = I(Y;X)).

In terms of entropies: the joint entropy H(X,Y) is the total uncertainty in the pair; H(X) splits into the conditional entropy H(X|Y) plus the mutual information I(X;Y), and likewise H(Y) splits into H(Y|X) plus I(X;Y). The mutual information I(X;Y) is the amount of uncertainty in Y that is removed by knowing X.

How to estimate MI for continuous variables: data discretization; density estimation (e.g., Parzen windows with a Gaussian window).
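
A sketch of both estimation routes for a continuous feature: a simple 2-D histogram discretization, and the nearest-neighbour based estimator in scikit-learn (mutual_info_regression). The bin count and the toy data are arbitrary choices.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = x ** 2 + 0.1 * rng.normal(size=2000)       # nonlinear dependence: correlation ~ 0, MI is not

# 1) discretization: estimate I(X;Y) from a joint histogram
pxy, _, _ = np.histogram2d(x, y, bins=20)
pxy = pxy / pxy.sum()
px, py = pxy.sum(axis=1), pxy.sum(axis=0)
nz = pxy > 0
mi_hist = np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

# 2) density / neighbour based estimator
mi_knn = mutual_info_regression(x.reshape(-1, 1), y)[0]
print(mi_hist, mi_knn)
```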

  • mRMR (Minimum Redundancy Maximum Relevance)

Maximal relevance: selecting the features with the highest relevance to the target class c. Relevance is usually characterized in terms of correlation or mutual information, e.g. maximizing D(S, c), the mean mutual information I(x_i; c) between each selected feature x_i in S and the class c.

Minimal redundancy: selecting features that are mutually maximally dissimilar, i.e. minimizing R(S), the mean pairwise mutual information I(x_i; x_j) among the selected features.

Combined mRMR criterion: maximize Φ(D, R) = D − R (the difference form; a quotient form D / R is also used).

We can obtain a ranking of all the features, and then apply a wrapper method on top of that ranking.
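
A rough greedy mRMR sketch using scikit-learn's mutual information estimators and the difference form Φ = D − R; this is a simplified illustration, not the reference implementation.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr_rank(X, y, k):
    """Greedy mRMR: each step adds the feature maximizing relevance minus mean redundancy."""
    relevance = mutual_info_classif(X, y, random_state=0)           # I(x_i; c)
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best_score, best_j = -np.inf, None
        for j in range(X.shape[1]):
            if j in selected:
                continue
            # redundancy: mean MI between candidate j and the already selected features
            redundancy = np.mean([
                mutual_info_regression(X[:, [s]], X[:, j], random_state=0)[0]
                for s in selected
            ])
            score = relevance[j] - redundancy
            if score > best_score:
                best_score, best_j = score, j
        selected.append(best_j)
    return selected
```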

Linear Methods

FDA (Fisher’s discriminant analysis)
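
The univariate criterion behind FDA is the Fisher score (between-class scatter over within-class scatter, per feature); a minimal sketch, assuming y holds class labels:

```python
import numpy as np

def fisher_score(X, y):
    """Per-feature Fisher score: sum_k n_k*(mu_k - mu)^2 / sum_k n_k*var_k."""
    mu = X.mean(axis=0)
    numer = np.zeros(X.shape[1])
    denom = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        numer += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
        denom += len(Xc) * Xc.var(axis=0)
    return numer / denom    # higher score = better class separation for that feature
```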

Tree-Based Methods (embedded)

AdaBoost with tree stumps (variable importance is measured by how much error the variable reduces each time it is used in a tree split/branch), CART, BART (Bayesian additive regression trees), random forest

Related approaches: univariate methods, linear models with regularization, random forests for feature selection, stability selection, recursive feature elimination (RFE)
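
A typical embedded-selection sketch: fit a random forest on synthetic data and rank features by their impurity-based importances (scikit-learn):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, n_informative=3, random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = np.argsort(forest.feature_importances_)[::-1]     # features ranked by importance
print(ranking, forest.feature_importances_[ranking])
```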

Greedy Selection (wrapper)

Greedily select features based on the performance of a reliable classifier, combined with data partitioning (subsampling).
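
A minimal greedy forward-selection (wrapper) sketch, scoring each candidate subset by the cross-validated accuracy of a logistic regression; the choice of classifier and scoring is an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def forward_select(X, y, k):
    """Greedily add the feature that most improves cross-validated accuracy."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        scores = [
            (cross_val_score(LogisticRegression(max_iter=1000),
                             X[:, selected + [j]], y, cv=5).mean(), j)
            for j in remaining
        ]
        best_score, best_j = max(scores)          # best (score, feature index) pair
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```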

Dimension Reduction Methods

Dimension reduction methods (e.g., PCA) change the feature space itself, rather than selecting a subset of the original features.
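
For instance, PCA re-expresses the data in a lower-dimensional space whose axes are linear combinations of the original features (a brief scikit-learn sketch on synthetic data):

```python
import numpy as np
from sklearn.decomposition import PCA

X = np.random.default_rng(0).normal(size=(300, 10))
pca = PCA(n_components=3).fit(X)
Z = pca.transform(X)                         # data in the new 3-dimensional feature space
print(pca.explained_variance_ratio_)
```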

Regularization/Sparsity (embedded)
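
A brief sketch of embedded selection via L1 regularization: features with non-zero Lasso coefficients are kept (the regularization strength alpha is an arbitrary choice).

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=300, n_features=20, n_informative=5, noise=1.0, random_state=0)
X = StandardScaler().fit_transform(X)
lasso = Lasso(alpha=1.0).fit(X, y)
selected = np.flatnonzero(lasso.coef_)       # sparsity: only informative features keep non-zero weights
print(selected)
```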

Ensemble Feature Selection

  • Harmony search
  • Data Reliability Based Feature Selection: a feature is considered reliable (or relevant) if its values are tightly grouped together.
  • Stability selection: apply a feature selection algorithm on different subsets of the data and with different subsets of features (bootstrap); see the sketch after this list.
  • When dealing with an imbalanced data set, create multiple balanced data sets from the original one via sampling, and then evaluate feature subsets using an ensemble of base classifiers, each trained on one balanced data set.
  • Boosting
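
A rough stability-selection sketch: repeatedly fit an L1-penalized model on random subsamples and keep features whose selection frequency exceeds a threshold. The Lasso model, the subsample size, and the threshold are all assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def stability_selection(X, y, n_rounds=50, alpha=0.5, threshold=0.6, seed=0):
    """Fraction of subsampling rounds in which each feature gets a non-zero Lasso coefficient."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    freq = np.zeros(d)
    for _ in range(n_rounds):
        idx = rng.choice(n, size=n // 2, replace=False)     # subsample half of the data
        freq += Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_ != 0
    freq /= n_rounds
    return np.flatnonzero(freq >= threshold), freq
```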
