Nonlinear Principal Component Analysis Using Autoassociative Neural Networks
Author
Abstract
Nonlinear principal component analysis (NLPCA) is a novel technique for multivariate data analysis, similar to the well-known method of principal component analysis. NLPCA, like PCA, is used to identify and remove correlations among problem variables as an aid to dimensionality reduction, visualization, and exploratory data analysis. While PCA identifies only linear correlations between variables, NLPCA uncovers both linear and nonlinear correlations, without restriction on the character of the nonlinearities present in the data. NLPCA operates by training a feedforward neural network to perform the identity mapping, where the network inputs are reproduced at the output layer. The network contains an internal “bottleneck” layer (containing fewer nodes than the input or output layers), which forces the network to develop a compact representation of the input data, and two additional hidden layers. The NLPCA method is demonstrated using time-dependent, simulated batch reaction data. Results show that NLPCA successfully reduces dimensionality and produces a feature space map resembling the actual distribution of the underlying system parameters.
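The bottleneck architecture described in the abstract can be sketched in a few lines. The following is a minimal illustration only, not the paper's original implementation: it uses scikit-learn's MLPRegressor as the autoassociative network (an assumption; the original work used a custom feedforward network), with the hidden-layer widths chosen arbitrarily. Data lying on a parabola stand in for a system driven by a single nonlinear factor.

```python
# Minimal NLPCA sketch: an autoassociative network with a one-node bottleneck
# trained to reproduce its own inputs. This is an illustrative assumption-laden
# example, not the paper's implementation; layer sizes are arbitrary choices.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic 2-D observations lying on a nonlinear (parabolic) curve:
# one underlying factor t generates both observed variables.
t = np.linspace(-1.0, 1.0, 200)
X = np.column_stack([t, t ** 2])

# Hidden layers (10, 1, 10) mirror the mapping / bottleneck / demapping
# structure: the middle layer with a single node is the bottleneck, so the
# network must compress each 2-D input down to one value and back.
net = MLPRegressor(hidden_layer_sizes=(10, 1, 10), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(X, X)  # identity mapping: the inputs are also the targets

recon = net.predict(X)
mse = float(np.mean((recon - X) ** 2))
print(f"reconstruction MSE: {mse:.5f}")
```

A low reconstruction error despite the one-node bottleneck indicates that the network has captured the single nonlinear factor underlying the data, which is exactly what a one-component linear PCA cannot do for a curved manifold.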
Similar resources
Neural Network Models for Recognition of Consonant-Vowel (CV) Utterances
In this paper, we present an approach based on neural network models for recognition of utterances of syllable-like units in Indian languages. The distribution-capturing ability of an autoassociative neural network model is exploited to perform nonlinear principal component analysis for compressing the size of the feature vector. A constraint satisfaction model is proposed to incorporate t...
Developments and Applications of Nonlinear Principal Component Analysis – a Review
Although linear principal component analysis (PCA) originates from the work of Sylvester [67] and Pearson [51], the development of nonlinear counterparts has only received attention from the 1980s. Work on nonlinear PCA, or NLPCA, can be divided into the utilization of autoassociative neural networks, principal curves and manifolds, kernel approaches or the combination of these approaches. This...
Limitations of nonlinear PCA as performed with generic neural networks
Kramer's nonlinear principal components analysis (NLPCA) neural networks are feedforward autoassociative networks with five layers. The third layer has fewer nodes than the input or output layers. This paper proposes a geometric interpretation for Kramer's method by showing that NLPCA fits a lower-dimensional curve or surface through the training data. The first three layers project observation...
Nonlinear Principal Component Analysis, Manifolds and Projection Pursuit
Auto-associative models have been introduced as a new tool for building nonlinear principal component analysis (PCA) methods. Such models rely on successive approximations of a dataset by manifolds of increasing dimension. In this chapter, we propose a precise theoretical comparison between PCA and auto-associative models. We also highlight the links between auto-associative models, projection ...
Text Document Retrieval by Feed-forward Neural Networks
The paper deals with text document retrieval from a given document collection using neural networks, namely cascade neural networks, linear and nonlinear Hebbian neural networks, and linear autoassociative neural networks. Using neural networks, it is possible to reduce the dimension of the document search space while preserving high retrieval accuracy.
Journal title:
Volume, Issue:
Pages: -
Publication date: 2004