Search results for: discrete time neural networks dnns

Number of results: 2,505,214

Journal: Facta Universitatis, Series: Automatic Control and Robotics, 2021

Journal: The Journal of the Acoustical Society of America, 2016
Jitong Chen, DeLiang Wang

Speech separation can be formulated as learning to estimate a time-frequency mask from acoustic features extracted from noisy speech. For supervised speech separation, generalization to unseen noises and unseen speakers is a critical issue. Although deep neural networks (DNNs) have been successful in noise-independent speech separation, DNNs are limited in modeling a large number of speakers. T...
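A minimal sketch of the time-frequency masking formulation described above, assuming an ideal ratio mask (IRM) as the learning target; the paper's exact features and mask definition may differ, and all shapes and names here are illustrative:

```python
# Sketch of time-frequency masking for speech separation.
# The IRM target and toy spectrogram shapes are assumptions.
import numpy as np

def ideal_ratio_mask(speech_mag, noise_mag, eps=1e-8):
    """IRM per time-frequency bin: speech energy over total energy."""
    s2, n2 = speech_mag ** 2, noise_mag ** 2
    return s2 / (s2 + n2 + eps)

def apply_mask(noisy_mag, mask):
    """Estimate the clean magnitude by point-wise masking."""
    return noisy_mag * mask

# Toy magnitude spectrograms: 257 frequency bins x 100 frames.
rng = np.random.default_rng(0)
speech = rng.random((257, 100))
noise = rng.random((257, 100))
noisy = speech + noise  # magnitudes only add approximately; fine for a sketch

target = ideal_ratio_mask(speech, noise)  # regression target for a DNN
estimate = apply_mask(noisy, target)      # oracle separation with the true mask
print(target.shape, estimate.shape)
```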

Journal: CoRR, 2017
Achintya Kr. Sarkar, Zheng-Hua Tan

In this paper, we present a time-contrastive learning (TCL) based unsupervised bottleneck (BN) feature extraction method for speech signals with an application to speaker verification. The method exploits the temporal structure of a speech signal and more specifically, it trains deep neural networks (DNNs) to discriminate temporal events obtained by uniformly segmenting the signal without using...
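A minimal sketch of the unsupervised label-generation step that time-contrastive learning rests on, under the assumption that each frame is labeled by the uniform temporal segment it falls in; the segment count, feature dimension, and function names are illustrative:

```python
# Sketch of TCL-style labels: frames are classed by which uniform
# segment of the utterance they belong to, with no supervision needed.
import numpy as np

def tcl_labels(num_frames, num_segments):
    """Assign each frame the index of its uniform temporal segment."""
    seg_len = num_frames // num_segments
    return np.minimum(np.arange(num_frames) // seg_len, num_segments - 1)

frames = np.random.randn(500, 40)     # 500 frames of 40-dim features (toy)
labels = tcl_labels(len(frames), 10)  # 10 temporal classes
# A DNN classifier trained on (frames, labels) would then provide
# bottleneck features from one of its hidden layers.
print(labels[:60])
```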

2008
Eva Kaslik, Ştefan Balint

This chapter is devoted to the analysis of the complex dynamics exhibited by two-dimensional discrete-time delayed Hopfield-type neural networks. Since the pioneering work of Hopfield (1982) and Tank & Hopfield (1986), the dynamics of continuous-time Hopfield neural networks have been thoroughly analyzed. In implementing the continuous-time neural networks for practical problems such as image proce...
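For concreteness, a minimal simulation of a two-dimensional discrete-time Hopfield-type network with a single discrete delay; the decay constant, weight matrix, delay, and tanh activation are illustrative choices, not the chapter's exact parameterization:

```python
# Sketch of a 2-D discrete-time delayed Hopfield-type network:
#   x(n+1) = a * x(n) + T @ f(x(n - k))
# All parameter values below are assumptions for illustration.
import numpy as np

a = 0.5                                  # self-decay
T = np.array([[0.0, 1.2], [-1.2, 0.0]])  # interconnection weights
k = 3                                    # discrete delay
f = np.tanh                              # activation

steps = 200
x = np.zeros((steps + k + 1, 2))
x[: k + 1] = 0.1                         # constant initial history

for n in range(k, steps + k):
    x[n + 1] = a * x[n] + T @ f(x[n - k])

print(x[-5:])  # late iterates hint at convergence, a cycle, or chaos
```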

Journal: CoRR, 2018
Kazuma Arino, Yohei Kikuta

Deep neural networks (DNNs) have achieved exceptional performance in many tasks, particularly in supervised classification. However, these achievements rest on large datasets with well-separated classes. Typically, real-world applications involve wild datasets that include similar classes; thus, evaluating similarities between classes and understandin...
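One common way to quantify inter-class similarity from a trained classifier, not necessarily the paper's measure, is to average the softmax mass that examples of one class place on every other class; the data below are synthetic and all names are illustrative:

```python
# Sketch of a class-similarity matrix built from softmax outputs.
# Synthetic probabilities stand in for a trained DNN's predictions.
import numpy as np

def class_similarity(probs, labels, num_classes):
    """sim[i, j]: mean predicted probability of class j on class-i examples."""
    sim = np.zeros((num_classes, num_classes))
    for c in range(num_classes):
        sim[c] = probs[labels == c].mean(axis=0)
    return sim

rng = np.random.default_rng(1)
probs = rng.dirichlet(np.ones(5), size=200)  # fake softmax outputs, 5 classes
labels = rng.integers(0, 5, size=200)
print(np.round(class_similarity(probs, labels, 5), 2))
```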

Journal: CoRR, 2018
Tianyun Zhang, Shaokai Ye, Yipeng Zhang, Yanzhi Wang, Makan Fardad

We present a systematic weight pruning framework of deep neural networks (DNNs) using the alternating direction method of multipliers (ADMM). We first formulate the weight pruning problem of DNNs as a constrained nonconvex optimization problem, and then adopt the ADMM framework for systematic weight pruning. We show that ADMM is highly suitable for weight pruning due to the computational effici...
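A minimal sketch of the ADMM splitting the abstract describes, on a toy quadratic loss: the nonconvex cardinality constraint is handled by Euclidean projection (keep the k largest-magnitude weights), alternated with a proximal gradient step and a dual update; the loss, step sizes, and sparsity level are assumptions, not the paper's setup:

```python
# Sketch of ADMM-based weight pruning with a toy quadratic loss.
import numpy as np

def project_topk(w, k):
    """Euclidean projection onto {w : ||w||_0 <= k}: zero all but top-k."""
    z = np.zeros_like(w)
    idx = np.argsort(np.abs(w))[-k:]
    z[idx] = w[idx]
    return z

rng = np.random.default_rng(0)
w_star = rng.normal(size=50)   # minimizer of the toy loss ||w - w_star||^2 / 2
w = rng.normal(size=50)        # weights being pruned
z, u = w.copy(), np.zeros_like(w)
rho, lr, k = 1.0, 0.1, 10

for _ in range(200):
    # W-step: gradient descent on loss(w) + (rho/2)||w - z + u||^2
    grad = (w - w_star) + rho * (w - z + u)
    w -= lr * grad
    z = project_topk(w + u, k)  # Z-step: projection onto the sparsity set
    u += w - z                  # dual update

print("nonzeros after projection:", np.count_nonzero(z))
```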

2016
Zhiguang Wang, Tim Oates, James Lo

This paper proposes a set of new error criteria and learning approaches, Adaptive Normalized Risk-Averting Training (ANRAT), to attack the non-convex optimization problem in training deep neural networks (DNNs). Theoretically, we demonstrate its effectiveness on global and local convexity lower-bounded by the standard Lp-norm error. By analyzing the gradient on the convexity index λ, we explain...
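A minimal sketch of the log-sum-exp family of risk-averting error criteria that ANRAT builds on: for small λ the criterion approaches the mean Lp error, and for large λ the worst-case error. The exact normalization and adaptive λ schedule of ANRAT are not reproduced here; p, λ, and the toy errors are assumptions:

```python
# Sketch of a risk-averting error criterion of the log-sum-exp family.
import numpy as np

def risk_averting_error(errors, lam, p=2):
    """(1/lam) * log(mean(exp(lam * |e|^p))), computed stably."""
    x = lam * np.abs(errors) ** p
    m = x.max()  # shift for a numerically stable log-mean-exp
    return (m + np.log(np.mean(np.exp(x - m)))) / lam

e = np.array([0.1, 0.2, 0.05, 0.9])  # toy per-example errors
for lam in (1e-3, 1.0, 50.0):
    print(lam, risk_averting_error(e, lam))
# small lam ~ mean squared error; large lam ~ max squared error
```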

[Chart: number of search results per year]