Search results for: Layer-wise

Number of results: 307,058

This paper presents a static analysis of laminated composite doubly-curved shells using refined kinematic models with polynomial and non-polynomial functions recently introduced in the literature. To be specific, Maclaurin, trigonometric, exponential and zig-zag functions are employed. The employed refined models are based on the equivalent single layer theories. A simply supported shell is sub...

Journal: CoRR 2012
Ludovic Arnold Yann Ollivier

When using deep, multi-layered architectures to build generative models of data, it is difficult to train all layers at once. We propose a layer-wise training procedure admitting a performance guarantee compared to the global optimum. It is based on an optimistic proxy of future performance, the best latent marginal. We interpret autoencoders in this setting as generative models, by showing tha...

Journal: Electronics 2021

Due to the large number of parameters and heavy computation, real-time operation of deep learning on a low-performance embedded board is still difficult. Network pruning is one of the effective methods to reduce them without additional modification of the network structure. However, the conventional method prunes redundant parameters at the same rate for all layers. It may cause a bottleneck problem, which leads to performance degradation, becau...
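The abstract above argues that a single pruning rate shared by all layers creates bottlenecks. As an illustration only (not the method of the cited paper), the following NumPy sketch applies magnitude-based pruning with a different rate per layer; the `rates` values are made up for the example:

```python
import numpy as np

def magnitude_prune(weights, rate):
    """Zero out the fraction `rate` of weights with the smallest magnitudes."""
    flat = np.abs(weights).ravel()
    k = int(rate * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return weights * (np.abs(weights) > threshold)

rng = np.random.default_rng(0)
layers = [rng.standard_normal((64, 64)) for _ in range(3)]
rates = [0.3, 0.5, 0.7]  # hypothetical per-layer rates, higher where more redundancy is assumed
pruned = [magnitude_prune(w, r) for w, r in zip(layers, rates)]
```

Choosing `rates` per layer (rather than one global value) is exactly the degree of freedom the abstract says the conventional method lacks.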

Journal: CoRR 2017
Lixue Zhuang Yi Xu Bingbing Ni Hongteng Xu

How to effectively approximate real-valued parameters with binary codes plays a central role in neural network binarization. In this work, we reveal an important fact that binarizing different layers has a widely-varied effect on the compression ratio of network and the loss of performance. Based on this fact, we propose a novel and flexible neural network binarization method by introducing the...
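To make the binarization setting concrete: a standard closed-form binarizer (in the XNOR-Net style, not necessarily the scheme proposed in this paper) approximates each layer's weights by a scaled sign, `alpha * sign(W)`, with `alpha = mean(|W|)` minimizing the L2 reconstruction error. A minimal sketch:

```python
import numpy as np

def binarize(w):
    """Approximate real-valued weights by alpha * sign(w);
    alpha = mean(|w|) minimizes the L2 reconstruction error."""
    alpha = float(np.mean(np.abs(w)))
    b = np.where(w >= 0, 1.0, -1.0)  # binary codes in {-1, +1}
    return alpha, b

rng = np.random.default_rng(0)
layers = [rng.standard_normal((64, 64)) for _ in range(3)]
binarized = [binarize(w) for w in layers]
# Per-layer reconstruction error: even with one scheme, the loss incurred
# by binarization can differ across layers, which motivates treating
# layers differently, as the abstract describes.
errors = [np.linalg.norm(a * b - w) / np.linalg.norm(w)
          for (a, b), w in zip(binarized, layers)]
```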

2013
Ludovic Arnold Yann Ollivier

When using deep, multi-layered architectures to build generative models of data, it is difficult to train all layers at once. We propose a layer-wise training procedure admitting a performance guarantee compared to the global optimum. It is based on an optimistic proxy of future performance, the best latent marginal. We interpret autoencoders in this setting as generative models, by showing tha...

2006
Yoshua Bengio Pascal Lamblin Dan Popovici Hugo Larochelle

Complexity theory of circuits strongly suggests that deep architectures can be much more efficient (sometimes exponentially) than shallow architectures, in terms of computational elements required to represent some functions. Deep multi-layer neural networks have many levels of non-linearities allowing them to compactly represent highly non-linear and highly-varying functions. However, until re...
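The greedy layer-wise idea this abstract introduces can be caricatured with a toy sketch: train one layer at a time, each as an autoencoder on the codes produced by the layers below it. The sketch below uses tied-weight *linear* autoencoders fit by plain gradient descent purely for brevity; the actual paper uses RBMs and nonlinear autoencoders:

```python
import numpy as np

def train_linear_autoencoder(X, hidden, steps=500, lr=0.005, seed=0):
    """Fit a tied-weight linear autoencoder: minimize 0.5*||X W W^T - X||_F^2 over W."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], hidden)) * 0.1
    for _ in range(steps):
        E = X @ W @ W.T - X                   # reconstruction error
        grad = X.T @ (E @ W) + (E.T @ X) @ W  # gradient of 0.5*||E||_F^2
        W -= lr * grad / len(X)
    return W

def greedy_pretrain(X, layer_sizes):
    """Train layers one at a time, each on the previous layer's codes."""
    weights, H = [], X
    for h in layer_sizes:
        W = train_linear_autoencoder(H, h)
        weights.append(W)
        H = H @ W  # this layer's codes become the next layer's input
    return weights

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 20))
weights = greedy_pretrain(X, [12, 8])
```

After greedy pretraining, the stacked weights would normally initialize a deep network that is then fine-tuned end to end; that step is omitted here.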

[Chart: number of search results per year]