Constructing infinite deep neural networks with flexible expressiveness while training

Authors

Abstract

The depth of a deep neural network (DNN) refers to the number of hidden layers between the input and the output of an artificial neural network. Once the structure is settled, it usually indicates a certain degree of complexity, computational cost (parameters and floating-point operations per second), and expressiveness. In this study, we experimentally investigate the effectiveness of using neural ordinary differential equations (NODEs) as a component that provides further depth in a continuous way to relatively shallower networks, rather than stacking more layers (discrete depth), which achieves improvement with fewer parameters. Experiments are conducted on classic DNNs and residual networks. Moreover, we construct infinite deep neural networks with flexible expressiveness based on NODEs, enabling the system to adjust its depth during training. With the better hidden-space mapping provided by the adaptive-step ResNet with NODE (ResODE), the proposed networks achieve better performance in terms of convergence and accuracy than standard networks, and the improvements are widely observed on popular benchmarks.
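The core idea in the abstract, replacing a stack of discrete layers with a continuous-depth block whose hidden state evolves under an ODE, can be illustrated with a minimal NumPy sketch. This is not the authors' ResODE implementation; the fixed-step Runge-Kutta solver, the toy dynamics `f`, and all names here are illustrative assumptions. Note how increasing `n_steps` refines the integration (an effectively deeper network) without adding any parameters:

```python
import numpy as np

def node_block(x, f, t0=0.0, t1=1.0, n_steps=4):
    """Continuous-depth block: integrate dh/dt = f(h, t) from t0 to t1
    with a fixed-step fourth-order Runge-Kutta solver. Increasing
    n_steps deepens the effective computation without new parameters."""
    h = x
    dt = (t1 - t0) / n_steps
    t = t0
    for _ in range(n_steps):
        k1 = f(h, t)
        k2 = f(h + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = f(h + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = f(h + dt * k3, t + dt)
        h = h + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return h

# Toy dynamics: one weight matrix shared across the whole "depth".
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((4, 4))
f = lambda h, t: np.tanh(h @ W)

x = rng.standard_normal(4)
out_coarse = node_block(x, f, n_steps=2)   # coarse discretisation
out_fine = node_block(x, f, n_steps=16)    # finer, effectively deeper
```

An adaptive-step solver (as in the paper's ResODE) would choose the step size, and hence the effective depth, on the fly from a local error estimate instead of fixing `n_steps` in advance.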

Similar resources

prDeep: Robust Phase Retrieval with Flexible Deep Neural Networks

Phase retrieval (PR) algorithms have become an important component in many modern computational imaging systems. For instance, in the context of ptychography and speckle correlation imaging, PR algorithms enable imaging past the diffraction limit and through scattering media, respectively. Unfortunately, traditional PR algorithms struggle in the presence of noise. Recently, PR algorithms have bee...

Training and Analyzing Deep Recurrent Neural Networks

Time series often have a temporal hierarchy, with information that is spread out over multiple time scales. Common recurrent neural networks, however, do not explicitly accommodate such a hierarchy, and most research on them has been focusing on training algorithms rather than on their basic architecture. In this paper we study the effect of a hierarchy of recurrent neural networks on processin...

Incremental Training of Deep Convolutional Neural Networks

We propose an incremental training method that partitions the original network into sub-networks, which are then gradually incorporated in the running network during the training process. To allow for a smooth dynamic growth of the network, we introduce a look-ahead initialization that outperforms the random initialization. We demonstrate that our incremental approach reaches the reference netw...

Adaptive dropout for training deep neural networks

Recently, it was shown that deep neural networks can perform very well if the activities of hidden units are regularized during learning, e.g., by randomly dropping out 50% of their activities. We describe a method called 'standout' in which a binary belief network is overlaid on a neural network and is used to regularize its hidden units by selectively setting activities to zero. This 'adapt...
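The baseline mechanism this snippet builds on, randomly zeroing a fraction of hidden activities during training, can be sketched as follows. This is plain (inverted) dropout, not the 'standout' overlay network itself; the function name and shapes are illustrative assumptions:

```python
import numpy as np

def dropout(h, p=0.5, rng=None, train=True):
    """Zero each hidden activity with probability p during training,
    scaling survivors by 1/(1-p) (inverted dropout) so the expected
    activation matches the unscaled test-time forward pass."""
    if not train:
        return h
    rng = rng or np.random.default_rng()
    mask = rng.random(h.shape) >= p       # True for units that survive
    return h * mask / (1.0 - p)

rng = np.random.default_rng(0)
h = np.ones(1000)                         # toy hidden-layer activities
h_drop = dropout(h, p=0.5, rng=rng)
kept = np.count_nonzero(h_drop)           # roughly half the units survive
```

Standout replaces the fixed probability `p` with a per-unit probability computed by the overlaid belief network from the current activations.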

Exploring Strategies for Training Deep Neural Networks

Deep multi-layer neural networks have many levels of non-linearities allowing them to compactly represent highly non-linear and highly-varying functions. However, until recently it was not clear how to train such deep networks, since gradient-based optimization starting from random initialization often appears to get stuck in poor solutions. Hinton et al. recently proposed a greedy layer-wise u...


Journal

Journal title: Neurocomputing

Year: 2022

ISSN: 0925-2312, 1872-8286

DOI: https://doi.org/10.1016/j.neucom.2021.11.010