Oversampling Method for Imbalanced Classification

Authors

  • Zhuoyuan Zheng
  • Yunpeng Cai
  • Ye Li
Abstract

Classification of imbalanced datasets is pervasive in many data mining domains, and imbalanced classification has been a hot topic in the academic community. From the data level to the algorithm level, many solutions have been proposed to tackle the problems resulting from imbalanced datasets. SMOTE is the most popular data-level method, and many derivatives of it have been developed to alleviate the problem of class imbalance. Our investigation indicates that there are severe flaws in SMOTE. We propose a new oversampling method, SNOCC, that compensates for the defects of SMOTE. In SNOCC, we increase the number of seed samples, so that the new samples are no longer confined to the line segment between two seed samples as in SMOTE. We also employ a novel algorithm, different from previous ones, to find the nearest neighbors of samples. These two improvements enable the new samples created by SNOCC to naturally reproduce the distribution of the original seed samples. Our experimental results show that SNOCC outperforms SMOTE and CBSO (a SMOTE-based method).
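To make the contrast concrete, here is a minimal sketch of the two sampling ideas the abstract describes: classic SMOTE interpolates on the line segment between a minority sample and one of its nearest minority neighbors, while a SNOCC-flavored variant draws each synthetic point as a convex combination of several seed samples, so it is not confined to a single segment. The function names and the choice of Dirichlet weights for the convex combination are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def smote_like(minority, n_new, k=5):
    """SMOTE-style oversampling: each synthetic point lies on the line
    segment between a minority sample and one of its k nearest
    minority-class neighbors."""
    n = len(minority)
    # pairwise Euclidean distances within the minority class
    d = np.linalg.norm(minority[:, None] - minority[None, :], axis=2)
    # k nearest neighbors per sample (column 0 is the sample itself)
    nn = np.argsort(d, axis=1)[:, 1:k + 1]
    new = []
    for _ in range(n_new):
        i = rng.integers(n)                       # random seed sample
        j = nn[i, rng.integers(nn.shape[1])]      # one of its neighbors
        t = rng.random()                          # position on the segment
        new.append(minority[i] + t * (minority[j] - minority[i]))
    return np.array(new)

def snocc_like(minority, n_new, m=3):
    """SNOCC-flavored sketch (an assumption, not the published method):
    each synthetic point is a convex combination of m seed samples, so
    new points fill the convex hull of the seeds rather than a segment."""
    n = len(minority)
    new = []
    for _ in range(n_new):
        seeds = rng.choice(n, size=m, replace=False)
        w = rng.dirichlet(np.ones(m))             # convex weights, sum to 1
        new.append(w @ minority[seeds])
    return np.array(new)

minority = rng.normal(size=(20, 2))
print(smote_like(minority, 5).shape)   # (5, 2)
print(snocc_like(minority, 5).shape)   # (5, 2)
```

With two seed samples and weights (t, 1-t), the convex-combination variant reduces to SMOTE's segment interpolation; increasing the number of seeds is what lets synthetic points leave the segment.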


Related Papers

WEMOTE - Word Embedding based Minority Oversampling Technique for Imbalanced Emotion and Sentiment Classification

Imbalanced training data consistently hampers supervised emotion and sentiment classification. Several existing studies have shown that data sparseness and small disjuncts are the two major factors affecting classification. Targeting these two problems, this paper presents a word-embedding-based oversampling method. Firstly, a large-scale text corpus is used to train a continuous ...

Full Text

ClusterOSS: a new undersampling method for imbalanced learning

A dataset is said to be imbalanced when its classes are disproportionately represented in terms of the number of instances they contain. This problem is common in applications such as medical diagnosis of rare diseases, detection of fraudulent calls, and signature recognition. In this paper we propose an alternative method for imbalanced learning, which balances the dataset using an undersampling s...

Full Text

Oversampling for Imbalanced Learning Based on K-Means and SMOTE

Learning from class-imbalanced data continues to be a common and challenging problem in supervised learning as standard classification algorithms are designed to handle balanced class distributions. While different strategies exist to tackle this problem, methods which generate artificial data to achieve a balanced class distribution are more versatile than modifications to the classification a...

Full Text

LIUBoost : Locality Informed Underboosting for Imbalanced Data Classification

The problem of class imbalance, along with class overlapping, has become a major issue in the domain of supervised learning. Most supervised learning algorithms assume equal cardinality of the classes under consideration while optimizing the cost function, and this assumption does not hold true for imbalanced datasets, which results in sub-optimal classification. Therefore, various approaches, such ...

Full Text

An Application of Oversampling, Undersampling, Bagging and Boosting in Handling Imbalanced Datasets

Most classifiers work well when the class distribution of the response variable in the dataset is well balanced. Problems arise when the dataset is imbalanced. This paper applies four methods to handling imbalanced datasets: Oversampling, Undersampling, Bagging and Boosting. The cardiac surgery dataset has a binary response variable (1=Died, 0=Alive). The sample size is 4976 cases with 4.2% (Di...

Full Text


Journal:
  • Computing and Informatics

Volume 34, Issue 

Pages  -

Publication date: 2015