Anisotropy‐based image smoothing via deep neural network training
Authors
Abstract
Similar resources
Deep neural network training emphasizing central frames
It is common practice to concatenate several consecutive frames of acoustic features as the input of a Deep Neural Network (DNN) for speech recognition. The DNN is trained to map the concatenated frames as a whole to the HMM state corresponding to the center frame, while the side frames close to both ends of the concatenated frames and the remaining central frames are treated as equally important. Tho...
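The context-stacking scheme this abstract describes can be sketched as follows; a minimal illustration (the `context` width and edge-padding choice are assumptions, not the paper's settings), where each stacked vector would be mapped to the HMM state of its center frame:

```python
import numpy as np

def stack_frames(features, context=5):
    """Concatenate each frame with `context` frames on either side,
    padding the edges by repeating the first/last frame."""
    T, D = features.shape
    padded = np.pad(features, ((context, context), (0, 0)), mode="edge")
    return np.stack([padded[t:t + 2 * context + 1].reshape(-1)
                     for t in range(T)])

# 100 frames of 40-dim acoustic features -> 100 stacked inputs of 11*40 dims
X = stack_frames(np.random.randn(100, 40), context=5)
print(X.shape)  # (100, 440)
```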
Deep Convolutional Neural Network for Image Deconvolution
Many fundamental image-related problems involve deconvolution operators. Real blur degradation seldom complies with an ideal linear convolution model due to camera noise, saturation, and image compression, to name a few. Instead of perfectly modeling outliers, which is rather challenging from a generative-model perspective, we develop a deep convolutional neural network to capture the characteristi...
Image Retrieval Method for Deep Neural Network
Because of the large amount of data in an image database, the key problem for a retrieval algorithm is to retrieve the required image in a short time. Aiming at this problem, this article presents a self-learning deep belief neural network method, building the layers, input, output, and self-learning algorithm of the network architecture to obtain a global algorithm for image retrieval. The accuracy and ...
Relation Classification via Convolutional Deep Neural Network
The state-of-the-art methods for relation classification are primarily based on statistical machine learning, and their performance strongly depends on the quality of the extracted features. These features are often derived from the output of pre-existing natural language processing (NLP) systems, which propagates the errors of the existing tools and hinders the pe...
Compacting Neural Network Classifiers via Dropout Training
We introduce dropout compaction, a novel method for training feed-forward neural networks which realizes the performance gains of training a large model with dropout regularization, yet extracts a compact neural network for run-time efficiency. In the proposed method, we introduce a sparsity-inducing prior on the per unit dropout retention probability so that the optimizer can effectively prune...
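The core idea in this abstract, per-unit dropout retention probabilities that a sparsity prior drives toward zero so the corresponding units can be pruned, can be sketched roughly as below. The retention values, the pruning threshold, and all names are illustrative assumptions, not the paper's actual parameterization:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-unit retention probabilities, as if learned under a
# sparsity-inducing prior (illustrative values only).
retention = np.array([0.95, 0.02, 0.80, 0.01, 0.60])
hidden = rng.standard_normal((4, 5))  # batch of 4, 5 hidden units

# Training: sample a dropout mask per unit from its retention probability,
# with inverted-dropout scaling to keep activations unbiased.
mask = rng.random(hidden.shape) < retention
dropped = hidden * mask / retention

# Compaction: prune units whose retention probability collapsed near zero.
keep = retention > 0.1
compact = hidden[:, keep]
print(compact.shape)  # (4, 3)
```

Pruning the near-zero-retention columns is what yields the smaller run-time network while the dropout-regularized training still used the full width.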
Journal
Journal title: Electronics Letters
Year: 2019
ISSN: 0013-5194,1350-911X
DOI: 10.1049/el.2019.2263