Practical:-3
AIM:-Data reduction using variance threshold, univariate feature selection, recursive feature elimination, PCA
This post shows how to perform data reduction using a variance threshold, univariate feature selection, recursive feature elimination (RFE), and principal component analysis (PCA).
About the dataset:-
For this practical, I have used the “Iris” dataset, which is loaded from sklearn. Let's first look at the dataset.
The dataset has four features. To test the effectiveness of the different feature selection methods, we add some random noise features to it.
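The loading step can be sketched as follows; the number of noise features (20) and the random seed are illustrative choices, not taken from the original post:

```python
import numpy as np
from sklearn.datasets import load_iris

# Load the Iris dataset: 150 samples, 4 features, 3 classes.
X, y = load_iris(return_X_y=True)

# Append 20 random noise features so the selection methods
# have uninformative columns to discard (seed chosen for reproducibility).
rng = np.random.RandomState(42)
X_noisy = np.hstack([X, rng.normal(size=(X.shape[0], 20))])

print(X_noisy.shape)  # (150, 24)
```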
Before applying any feature selection method, we need to split the data. The reason is that we should select features based only on information from the training set, not from the whole dataset; otherwise information from the test set leaks into the selection.
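A minimal split might look like this; the 70/30 ratio and seed are assumptions for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Split before selecting features, so the selector never sees the test set.
# stratify=y keeps the class proportions equal in both splits.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

print(X_train.shape, X_test.shape)  # (105, 4) (45, 4)
```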
Variance Threshold:-
VarianceThreshold is a feature selector that removes all low-variance features. This selection algorithm looks only at the features (X), not the desired outputs (y), and can thus also be used for unsupervised learning. Any feature whose training-set variance is lower than the chosen threshold is removed.
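A small sketch on a toy matrix (the matrix values are made up for illustration); with the default threshold of 0.0, only constant columns are dropped:

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold

# Toy matrix: the first column is constant, so its variance is 0.
X = np.array([[0, 2, 0],
              [0, 1, 4],
              [0, 1, 1]])

# threshold=0.0 (the default) removes features that have the
# same value in every sample.
selector = VarianceThreshold(threshold=0.0)
X_reduced = selector.fit_transform(X)

print(X_reduced.shape)  # (3, 2): the constant column was removed
```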
Univariate Feature Selection:-
Univariate feature selection works by selecting the best features based on univariate statistical tests. Each feature is compared to the target variable individually, to see whether there is a statistically significant relationship between them. Because every feature is scored on its own, ignoring interactions with other features, the method is called 'univariate'. One common scoring function is the ANOVA F-test; scikit-learn provides several:
1. f_classif:- the ANOVA F-value between each feature and the target; suitable for numeric features with a categorical target.
2. chi2:- chi-squared statistics between each feature and the target; requires non-negative feature values.
3. mutual_info_classif:- mutual information between each feature and the target; can capture non-linear dependence.
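These scoring functions are plugged into a selector such as SelectKBest. A minimal sketch with f_classif, keeping k=2 features (k is an illustrative choice):

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)

# Keep the k=2 features with the highest ANOVA F-scores.
selector = SelectKBest(score_func=f_classif, k=2)
X_new = selector.fit_transform(X, y)

print(X_new.shape)             # (150, 2)
print(selector.get_support())  # boolean mask of the selected features
```

Swapping `f_classif` for `chi2` or `mutual_info_classif` changes only the `score_func` argument.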
Recursive Feature Elimination:-
Recursive feature elimination (RFE) is a feature selection method that fits a model and removes the weakest feature (or features), refitting repeatedly until the specified number of features is reached. RFE requires the number of features to keep to be specified in advance; however, it is often not known beforehand how many features are actually informative.
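A minimal sketch of RFE wrapping a logistic regression (the estimator and n_features_to_select=2 are illustrative assumptions; when the right number of features is unknown, scikit-learn's RFECV can pick it via cross-validation instead):

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# RFE fits the estimator, drops the weakest feature (by coefficient
# magnitude), and repeats until n_features_to_select remain.
estimator = LogisticRegression(max_iter=1000)
rfe = RFE(estimator=estimator, n_features_to_select=2)
X_rfe = rfe.fit_transform(X, y)

print(rfe.support_)  # mask of kept features
print(rfe.ranking_)  # 1 = selected; larger = eliminated earlier
```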
Differences Between Before and After Using Feature Selection:-
a. Before using Feature Selection
b. After using Feature Selection
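One way to make this comparison concrete is to train the same classifier on the noisy data with and without feature selection and compare test accuracy. This is a sketch under assumed settings (20 noise features, k=4, logistic regression), not the original post's exact experiment, so the exact scores will vary:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Add noise features, then split once for a fair comparison.
rng = np.random.RandomState(42)
X_noisy = np.hstack([X, rng.normal(size=(X.shape[0], 20))])
X_train, X_test, y_train, y_test = train_test_split(
    X_noisy, y, test_size=0.3, stratify=y, random_state=0
)

clf = LogisticRegression(max_iter=1000)

# (a) Before feature selection: train on all 24 columns.
clf.fit(X_train, y_train)
acc_before = clf.score(X_test, y_test)

# (b) After feature selection: fit the selector on the training set only.
sel = SelectKBest(f_classif, k=4).fit(X_train, y_train)
clf.fit(sel.transform(X_train), y_train)
acc_after = clf.score(sel.transform(X_test), y_test)

print(f"accuracy before: {acc_before:.3f}, after: {acc_after:.3f}")
```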
Principal Component Analysis (PCA):-
The principal components of a collection of points in real coordinate space are a sequence of p unit vectors, where the i-th vector is the direction of a line that best fits the data while being orthogonal to the first i-1 vectors.
PCA Projection to 2D
The original data has 4 columns (sepal length, sepal width, petal length, and petal width). In this section, the code projects the original data which is 4 dimensional into 2 dimensions. The new components are just the two main dimensions of variation.
We then concatenate the DataFrames along axis=1; finalDf is the final DataFrame before plotting the data.
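The projection and concatenation steps can be sketched as follows; standardizing before PCA and the column names "PC1"/"PC2"/"species" are my choices for illustration:

```python
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

iris = load_iris(as_frame=True)
X = iris.data  # sepal length/width, petal length/width

# Standardize, then project the 4-D data onto 2 principal components.
X_scaled = StandardScaler().fit_transform(X)
principalComponents = PCA(n_components=2).fit_transform(X_scaled)
principalDf = pd.DataFrame(principalComponents, columns=["PC1", "PC2"])

# Concatenate along axis=1 to attach the species label for plotting.
target = pd.Series(iris.target_names[iris.target], name="species")
finalDf = pd.concat([principalDf, target], axis=1)

print(finalDf.shape)  # (150, 3)
```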
Now, let's visualize the DataFrame; execute the following code:
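A self-contained 2D scatter sketch (it rebuilds finalDf so it runs on its own; the colors and figure size are arbitrary choices):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; drop this line to show the plot
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

iris = load_iris(as_frame=True)
X_scaled = StandardScaler().fit_transform(iris.data)
pcs = PCA(n_components=2).fit_transform(X_scaled)
finalDf = pd.concat(
    [pd.DataFrame(pcs, columns=["PC1", "PC2"]),
     pd.Series(iris.target_names[iris.target], name="species")],
    axis=1,
)

# One scatter call per species, each with its own color and legend entry.
fig, ax = plt.subplots(figsize=(8, 6))
for species, color in zip(iris.target_names, ["r", "g", "b"]):
    subset = finalDf[finalDf["species"] == species]
    ax.scatter(subset["PC1"], subset["PC2"], c=color, label=species, s=40)
ax.set_xlabel("Principal Component 1")
ax.set_ylabel("Principal Component 2")
ax.set_title("2-component PCA of Iris")
ax.legend()
fig.savefig("pca_2d.png")
```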
Now let's visualize it as a 3D graph:
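For the 3D view we project onto three components instead of two; this sketch colors points by class with an arbitrarily chosen colormap:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; drop this line to show the plot
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

iris = load_iris()
X_scaled = StandardScaler().fit_transform(iris.data)

# Project the 4-D data onto 3 principal components.
pcs = PCA(n_components=3).fit_transform(X_scaled)

fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(projection="3d")
ax.scatter(pcs[:, 0], pcs[:, 1], pcs[:, 2],
           c=iris.target, cmap="viridis", s=40)
ax.set_xlabel("PC1")
ax.set_ylabel("PC2")
ax.set_zlabel("PC3")
ax.set_title("3-component PCA of Iris")
fig.savefig("pca_3d.png")
```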
Conclusion:-
In this practical, we reduced the Iris data with a variance threshold, univariate feature selection, recursive feature elimination, and PCA, and compared the results before and after selection. I hope this helped you understand these techniques.
Github Link:-