Learning in Memristive Neural Network Architectures Using Analog Backpropagation Circuits
Authors
Abstract
Similar resources
Learning Neural Network Architectures using Backpropagation
Deep neural networks with millions of parameters are at the heart of many state-of-the-art machine learning models today. However, recent works have shown that models with a much smaller number of parameters can also perform just as well. In this work, we introduce the problem of architecture-learning, i.e., learning the architecture of a neural network along with its weights. We start with a large ne...
Implementing Neural Architectures Using Analog VLSI Circuits
Biological systems routinely perform computations, such as speech recognition and the calculation of visual motion, that baffle our most powerful computers. Analog very large-scale integrated (VLSI) technology allows us not only to study and simulate biological systems, but also to emulate them in designing artificial sensory systems. A methodology for building these systems in CMOS VLSI techno...
Designing Neural Network Architectures using Reinforcement Learning
At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The lear...
A Faster Learning Neural Network Classifier Using Selective Backpropagation
The problem of saturation in neural network classification problems is discussed. The listprop algorithm is presented, which reduces saturation and dramatically increases the rate of convergence. The technique uses selective application of the backpropagation algorithm, such that training is only carried out for patterns which have not yet been learnt to a desired output activation tolerance. Fu...
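The selective-update idea described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' listprop implementation: the network shape, learning rate, tolerance value, and toy OR dataset are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset (logical OR) and a tiny one-hidden-layer sigmoid network.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [1.]])

W1 = rng.normal(scale=0.5, size=(2, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

tol = 0.2   # assumed output-activation tolerance
lr = 0.5    # assumed learning rate

for epoch in range(5000):
    updated = 0
    for i in range(len(X)):
        x = X[i:i + 1]
        h = sigmoid(x @ W1)
        out = sigmoid(h @ W2)
        err = y[i:i + 1] - out
        # Selective backpropagation: skip patterns already learnt
        # to within the desired output-activation tolerance.
        if np.max(np.abs(err)) < tol:
            continue
        # Standard backprop update, applied to this pattern only.
        d_out = err * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 += lr * h.T @ d_out
        W1 += lr * x.T @ d_h
        updated += 1
    if updated == 0:  # every pattern within tolerance -> stop early
        break
```

Because already-learnt patterns are skipped, later epochs touch fewer and fewer examples, which is the source of the claimed speed-up.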
Reinforced backpropagation for deep neural network learning
Standard error backpropagation is used in almost all modern deep network training. However, it typically suffers from a proliferation of saddle points in high-dimensional parameter space. It is therefore highly desirable to design an efficient algorithm that escapes these saddle points and reaches a parameter region with better generalization capabilities, especially based on rough insights a...
Journal
Journal title: IEEE Transactions on Circuits and Systems I: Regular Papers
Year: 2019
ISSN: 1549-8328,1558-0806
DOI: 10.1109/tcsi.2018.2866510