Search results for: feature subset selection algorithm
Number of results: 1,279,965
In the context of feature selection, there is a trade-off between the number of selected features and the generalisation error. Two plots may help to summarise feature selection: the feature selection path and the sparsity-error trade-off curve. The feature selection path shows the best feature subset for each subset size, whereas the sparsity-error trade-off curve shows the corresponding gener...
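The two summaries described above can be sketched on a toy dataset. The snippet below is illustrative only: it assumes a 1-nearest-neighbour classifier with leave-one-out error as the generalisation estimate, and an exhaustive search over subsets, neither of which is prescribed by the abstract.

```python
from itertools import combinations

# Tiny synthetic dataset: features 0 and 1 are informative, 2 and 3 are noise.
X = [[0.10, 1.00, 0.50, 0.9], [0.20, 0.90, 0.40, 0.1],
     [0.90, 0.10, 0.60, 0.8], [0.80, 0.20, 0.50, 0.2],
     [0.15, 0.95, 0.45, 0.5], [0.85, 0.15, 0.55, 0.5]]
y = [0, 0, 1, 1, 0, 1]

def loo_error(feats):
    """Leave-one-out error of a 1-NN classifier restricted to `feats`."""
    errors = 0
    for i in range(len(X)):
        best, pred = float("inf"), None
        for j in range(len(X)):
            if i == j:
                continue
            d = sum((X[i][f] - X[j][f]) ** 2 for f in feats)
            if d < best:
                best, pred = d, y[j]
        errors += pred != y[i]
    return errors / len(X)

# Feature selection path: the best subset at each subset size, paired with
# its error -- together these give the sparsity-error trade-off curve.
path = {}
for k in range(1, len(X[0]) + 1):
    best = min(combinations(range(len(X[0])), k), key=loo_error)
    path[k] = (best, loo_error(best))

for k, (feats, err) in path.items():
    print(k, feats, err)
```

Plotting subset size against the stored error values would reproduce the trade-off curve the abstract describes.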
In this paper, kernel feature selection is proposed to improve the generalization performance of boosting classifiers. Kernel feature selection performs feature selection and model selection at the same time using a simple selection algorithm. The algorithm automatically selects a subset of kernel features for each classifier and combines them according to the LogitBoost algorithm. The system em...
Feature selection is an important technique for application problems with a large number of variables and limited training samples, such as image processing, combinatorial chemistry, and microarray analysis. Commonly employed feature selection strategies can be divided into filter and wrapper methods. In this study, we propose an embedded two-layer feature selection approach to combining the ...
One of the essential motivations for feature selection is to overcome the curse of dimensionality. Feature selection optimization amounts to generating the best feature subset with maximum relevance, which improves classification accuracy in pattern recognition. In this research work, Differential Evolution and Genetic Algorithm, two population-based feature selection me...
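The abstract above is cut off before the algorithms are described; as a hedged illustration of the general idea, a minimal genetic algorithm over bitmask-encoded feature subsets might look like the sketch below. The fitness function is a stand-in (in practice it would be a classifier's validation accuracy), and all names are assumptions, not the paper's.

```python
import random

random.seed(42)

# Toy setting: 8 candidate features; only features 0, 3, and 6 carry signal.
SIGNAL = {0, 3, 6}

def fitness(mask):
    # Stand-in objective: reward selected signal features, penalise subset size.
    hits = sum(1 for f, bit in enumerate(mask) if bit and f in SIGNAL)
    return hits - 0.1 * sum(mask)

def ga(n_features=8, pop_size=20, generations=40, p_mut=0.1):
    # Population of bitmasks; 1 = feature selected.
    pop = [[random.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_features)  # one-point crossover
            child = a[:cut] + b[cut:]
            # Bit-flip mutation with probability p_mut per gene.
            child = [bit ^ (random.random() < p_mut) for bit in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = ga()
print([f for f, bit in enumerate(best) if bit])
```

Differential Evolution follows the same population-based loop but generates candidates from scaled differences of existing solutions rather than crossover and mutation.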
At present, most research on feature selection does not consider the relevance between a term and its own category, or the redundancy among terms. To address this problem efficiently, we propose a new feature selection method based on analyzing how to measure relevance and redundancy, which uses Euclidean distance as the similarity measure. R2, the new feature selection al...
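The R2 criterion itself is truncated above, so the following is only a generic sketch of the relevance-minus-redundancy idea: greedy selection where relevance is class-mean separation and redundancy is a Euclidean-distance-based similarity to already-selected terms. The data, the scoring details, and every function name are illustrative assumptions.

```python
import math

# Toy document-term matrix (rows: documents, columns: terms), binary labels.
# Term 3 is deliberately an exact copy of term 0, i.e. fully redundant.
X = [[3, 0, 2, 3], [4, 1, 2, 4], [0, 5, 1, 0], [1, 4, 2, 1]]
y = [0, 0, 1, 1]

def column(j):
    return [row[j] for row in X]

def relevance(j):
    # Separation of the class-conditional means of term j (larger = more relevant).
    m0 = sum(X[i][j] for i in range(len(X)) if y[i] == 0) / y.count(0)
    m1 = sum(X[i][j] for i in range(len(X)) if y[i] == 1) / y.count(1)
    return abs(m0 - m1)

def similarity(j, k):
    # Euclidean distance turned into a similarity: closer columns = more redundant.
    return 1.0 / (1.0 + math.dist(column(j), column(k)))

def select(n):
    chosen = []
    while len(chosen) < n:
        def score(j):
            # Relevance minus worst-case redundancy with the terms chosen so far.
            red = max((similarity(j, k) for k in chosen), default=0.0)
            return relevance(j) - red
        best = max((j for j in range(len(X[0])) if j not in chosen), key=score)
        chosen.append(best)
    return chosen

print(select(2))
```

The greedy loop picks the most class-separating term first, then discounts candidates that sit close (in Euclidean distance) to anything already chosen.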
On high-dimensional data sets, choosing subspaces randomly, as in the RASCO (Random Subspace Method for Co-training, Wang et al. 2008) algorithm, may produce diverse but inaccurate classifiers for Co-training. To remedy this problem, we introduce two algorithms for selecting relevant and non-redundant feature subspaces for Co-training. The first algorithm, relevant random subspaces (Rel-RASCO), p...
Nowadays, the increasing volume of data and number of attributes in datasets has reduced the accuracy of learning algorithms and increased computational complexity. Feature selection is a dimensionality reduction method, carried out through filtering or wrapping. Wrapper methods are more accurate than filter ones, but filter methods run faster and carry a lower computational burden. With ...
In this paper, we identify two issues involved in developing an automated feature subset selection algorithm for unlabeled data: the need for finding the number of clusters in conjunction with feature selection, and the need for normalizing the bias of feature selection criteria with respect to dimension. We explore the feature selection problem and these issues through FSSEM (Feature Subset Se...
In recent decades, hyperspectral remotely sensed data have shown excellent potential for information extraction of the Earth's surface. These observations are normally sampled in several hundred narrow and contiguous bands. Such high dimensionality provides more discrimination ability in classification tasks, but also imposes high computational cost and complexity in data modeling. In...