Feature Selection Algorithm for Selecting Relevant Features

Abstract— The process of selecting relevant features from an available dataset is known as feature selection. Feature selection is used to remove or reduce redundant and irrelevant features. Various feature selection algorithms, such as CFS (Correlation-based Feature Selection), FCBF (Fast Correlation-Based Filter), and CMIM (Conditional Mutual Information Maximization), are used to remove redundant and irrelevant features. The aim of a feature selection algorithm is to achieve both efficiency and effectiveness: efficiency concerns the time required to find a subset of features, and effectiveness concerns the quality of that subset. Existing feature selection algorithms suffer from several problems: accuracy is not guaranteed, computational complexity is large, and they are ineffective at removing redundant features. To overcome these problems, the Fast Clustering-based feature selection algorithm (FAST) is used. FAST works in three steps: removing irrelevant features, constructing an MST (Minimum Spanning Tree) over the remaining relevant features using Kruskal's method, and partitioning the MST and selecting a representative feature from each cluster.
Index Terms— Feature subset selection, graph theoretic clustering, FAST


Feature subset selection can be viewed as the process of identifying and removing as many irrelevant and redundant features as possible, because (i) irrelevant features do not contribute to predictive accuracy, and (ii) redundant features do not help in obtaining a better predictor, since they mostly provide information that is already present in other features.
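The three FAST steps described above can be sketched in code. The sketch below is an illustrative reconstruction from this abstract, not the authors' implementation: it uses symmetric uncertainty (SU), a common normalized mutual-information measure for such filters, as the relevance/redundancy score; the `relevance_threshold` value and the function names (`fast_select`, `kruskal_mst`) are assumptions chosen for the example.

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of a discrete sequence."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def symmetric_uncertainty(xs, ys):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), normalized to [0, 1]."""
    hx, hy = entropy(xs), entropy(ys)
    if hx + hy == 0:
        return 0.0
    mutual_info = hx + hy - entropy(list(zip(xs, ys)))
    return 2.0 * mutual_info / (hx + hy)

def _find(parent, a):
    # Union-find root lookup with path halving.
    while parent[a] != a:
        parent[a] = parent[parent[a]]
        a = parent[a]
    return a

def kruskal_mst(n, edges):
    """Minimum spanning tree via Kruskal's method with union-find."""
    parent = list(range(n))
    mst = []
    for w, u, v in sorted(edges):
        ru, rv = _find(parent, u), _find(parent, v)
        if ru != rv:
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

def fast_select(features, target, relevance_threshold=0.1):
    """FAST-style selection: filter, MST, partition, pick representatives.
    `features` is a list of discrete feature columns; `target` is the class
    column. The threshold is an illustrative choice, not from the paper."""
    # Step 1: remove irrelevant features (low SU with the target).
    su_target = {i: symmetric_uncertainty(f, target) for i, f in enumerate(features)}
    relevant = [i for i in su_target if su_target[i] > relevance_threshold]
    if not relevant:
        return []
    # Step 2: complete graph over relevant features, weighted by
    # 1 - SU(Fi, Fj), so its MST keeps strongly redundant features adjacent.
    pos = {f: k for k, f in enumerate(relevant)}
    edges = [(1.0 - symmetric_uncertainty(features[a], features[b]), pos[a], pos[b])
             for i, a in enumerate(relevant) for b in relevant[i + 1:]]
    mst = kruskal_mst(len(relevant), edges)
    # Step 3: partition the MST -- cut an edge when the feature-feature SU is
    # below both endpoints' relevance to the target -- then keep the single
    # most relevant feature of each resulting cluster.
    parent = list(range(len(relevant)))
    for w, u, v in mst:
        if 1.0 - w >= min(su_target[relevant[u]], su_target[relevant[v]]):
            parent[_find(parent, u)] = _find(parent, v)
    clusters = {}
    for k, f in enumerate(relevant):
        clusters.setdefault(_find(parent, k), []).append(f)
    return sorted(max(c, key=lambda f: su_target[f]) for c in clusters.values())
```

For example, given two identical informative columns and one uninformative column, the sketch drops the uninformative one in step 1 and collapses the redundant pair to a single representative in step 3.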