Search results for: weight update

Number of results: 417,945

Journal: Signal Processing 2013
Rafiahamed Shaik, Mrityunjoy Chakraborty

A scheme for efficient realization of the adaptive decision feedback equalizer (ADFE) is presented using the block floating point (BFP) data format, which enables the ADFE to process rapidly varying data over a wide dynamic range at a fixed-point-like complexity. The proposed scheme adopts an appropriate BFP format for the data as well as the filter weights and works out separate update relations ...
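
As a hedged illustration of the data format this abstract builds on (not the authors' actual update relations, which are truncated above), a minimal Python sketch of block floating point encoding, where a whole block of fixed-point mantissas shares a single exponent; `to_bfp` and `from_bfp` are hypothetical helper names:

    import math

    def to_bfp(block, mantissa_bits=15):
        # Encode a block of samples as fixed-point mantissas sharing one
        # exponent. Illustration only; the paper derives its own BFP
        # update relations for the ADFE data and filter weights.
        peak = max(abs(x) for x in block) or 1.0
        exponent = math.floor(math.log2(peak)) + 1     # shared block exponent
        scale = 2.0 ** (mantissa_bits - exponent)
        mantissas = [round(x * scale) for x in block]  # fixed-point parts
        return mantissas, exponent

    def from_bfp(mantissas, exponent, mantissa_bits=15):
        scale = 2.0 ** (mantissa_bits - exponent)
        return [m / scale for m in mantissas]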

2012
Indah Agustien Siradjuddin, M. Rahmat Widyanto, T. Basaruddin

The particle filter for object tracking can achieve high tracking accuracy. To track the object, this method generates a number of particles, each representing a candidate of the target object. The location of the target object is determined by the particles and their weights. The disadvantage of the conventional particle filter is its computational time, especially the computation of each particle's weigh...
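
For context, a minimal sketch of the per-particle weight computation this abstract identifies as the bottleneck, assuming a generic scalar state and a Gaussian likelihood (the paper's own weighting scheme is cut off above):

    import math
    import random

    def update_weights(particles, weights, observation, sigma=1.0):
        # Re-weight each particle by the likelihood of the new observation
        # under that particle's state (a Gaussian likelihood is assumed here).
        new_w = [w * math.exp(-0.5 * ((observation - p) / sigma) ** 2)
                 for p, w in zip(particles, weights)]
        total = sum(new_w) or 1.0
        return [w / total for w in new_w]    # normalized weights

    def resample(particles, weights):
        # Draw a fresh particle set in proportion to the weights.
        return random.choices(particles, weights=weights, k=len(particles))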

Journal: Neural Computation 2017
Jonathan Y. Suen, Saket Navlakha

Controlling the flow and routing of data is a fundamental problem in many distributed networks, including transportation systems, integrated circuits, and the Internet. In the brain, synaptic plasticity rules have been discovered that regulate network activity in response to environmental inputs, which enable circuits to be stable yet flexible. Here, we develop a new neuro-inspired model for ne...

2008
Hung-Lung Wang

In this paper, we consider the problem of maintaining the centdians in a fully dynamic forest. A centdian is a location in the given network which minimizes a convex combination of the longest distance and the sum of distances from all clients to it. A fully dynamic forest is a forest, i.e., a set of trees, in which the following operations are allowed: to insert an edge between different trees...
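
To make the objective concrete, here is a naive computation of the centdian on a static tree, assuming unit edge lengths and a trade-off parameter `lam` in [0, 1]; this is an illustration only and does not attempt the paper's fully dynamic maintenance under edge insertions and deletions:

    from collections import deque

    def centdian(adj, lam=0.5):
        # Naive centdian of a tree given as an adjacency dict {v: [neighbors]}.
        # Minimizes lam * (longest distance) + (1 - lam) * (sum of distances).
        # O(n^2): one BFS per candidate node.
        best_node, best_cost = None, float("inf")
        for root in adj:
            dist = {root: 0}
            queue = deque([root])
            while queue:                         # BFS over unit-length edges
                u = queue.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            cost = lam * max(dist.values()) + (1 - lam) * sum(dist.values())
            if cost < best_cost:
                best_node, best_cost = root, cost
        return best_node, best_cost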

Journal: CoRR 2017
Kyle Helfrich, Devin Willmott, Qiang Ye

Recurrent Neural Networks (RNNs) are designed to handle sequential data but suffer from vanishing or exploding gradients. Recent work on Unitary Recurrent Neural Networks (uRNNs) has addressed this issue and, in some cases, exceeded the capabilities of Long Short-Term Memory networks (LSTMs). We propose a simpler and novel update scheme to maintain orthogonal recurrent weight matrices ...
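
One classical way to keep a recurrent weight matrix exactly orthogonal during training, shown here as a hedged sketch (the authors' scheme is truncated above and may differ), is to parametrize it by a skew-symmetric matrix through the Cayley transform, so that gradient steps on the free parameters never leave the orthogonal manifold:

    import numpy as np

    def cayley_orthogonal(A):
        # W = (I + A)^{-1} (I - A) is orthogonal whenever A = -A^T.
        n = A.shape[0]
        I = np.eye(n)
        return np.linalg.solve(I + A, I - A)

    # Parametrize A by its strictly lower triangle, then skew-symmetrize.
    rng = np.random.default_rng(0)
    L = np.tril(rng.standard_normal((4, 4)), k=-1)
    A = L - L.T
    W = cayley_orthogonal(A)
    assert np.allclose(W.T @ W, np.eye(4))    # orthogonality check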

2010
Avishek Saha, Piyush Rai, Hal Daumé, Suresh Venkatasubramanian

In this paper, we propose an online multitask learning framework where the weight vectors are updated in an adaptive fashion based on inter-task relatedness. Our work contrasts with earlier work on online multitask learning (Cavallanti et al., 2008), where the authors use a fixed interaction matrix of tasks to derive (fixed) update rules for all the tasks. In this work, we propose to up...
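
A hedged sketch of the fixed-interaction-matrix baseline this abstract contrasts with (in the spirit of Cavallanti et al., 2008): when task k makes a mistake, all tasks are co-updated in proportion to a fixed relatedness matrix A; the adaptive variant proposed here, truncated above, would update A as well:

    import numpy as np

    def multitask_perceptron_step(W, A, x, y, k, lr=1.0):
        # One online round for K linear tasks sharing a fixed interaction
        # matrix A (K x K): on a mistake for task k, every task j moves in
        # proportion to A[j, k]. W is (K, d); x is (d,); y is -1 or +1.
        if y * (W[k] @ x) <= 0:                    # prediction mistake on task k
            W = W + lr * y * np.outer(A[:, k], x)  # co-update all task vectors
        return W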

2011
Constantinos Daskalakis

Let us recall the mechanics of Multiplicative-Weights-Updates: at every time $t$, the learner maintains a weight vector $w_t \ge 0$ over the experts. Given the weight vector, the probability distribution over the experts is computed as $p_t = \frac{w_t}{w_t \cdot \mathbf{1}}$. The weights are initialized at $w_1 = \frac{1}{n} \cdot \mathbf{1}$. (Multiplicative-weights-update step.) Given the loss vector at time $t$, the weights are updated as follows: w...
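
A minimal sketch of one round of the rule, using the common exponential variant of the multiplicative update (the exact update factor and the learning rate `eta` are assumptions, since the step itself is truncated above):

    import math

    def mwu_step(weights, losses, eta=0.1):
        # Multiplicative-weights update: shrink each expert's weight
        # exponentially in its loss (one common variant of the rule).
        new_w = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
        total = sum(new_w)
        probs = [w / total for w in new_w]    # p_t = w_t / (w_t . 1)
        return new_w, probs

    n = 4
    weights = [1.0 / n] * n                   # w_1 = (1/n) * 1
    weights, probs = mwu_step(weights, [0.2, 0.9, 0.0, 0.5])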

Journal: EAI Endorsed Trans. Indust. Netw. & Intellig. Syst. 2015
Mohammad Fal Sadikin, Marcel Kyas

Due to its low cost and practicality, the integration of RFID tags with sensor nodes, called smart RFID, has become a prominent solution in various fields, including industrial applications. Nevertheless, the constrained nature of smart RFID systems introduces tremendous security and privacy problems. One of them is the key management problem. Indeed, it is not feasible to recall all R...

Journal: CoRR 2017
Santosh Nannuru, Kay L. Gemba, Peter Gerstoft, William S. Hodgkiss, Christoph F. Mecklenbräuker

Sparse Bayesian learning (SBL) has emerged as a fast and competitive method to perform sparse processing. The SBL algorithm, which is developed using a Bayesian framework, approximately solves a non-convex optimization problem using fixed point updates. It provides comparable performance and is significantly faster than convex optimization techniques used in sparse processing. We propose a sign...
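
For reference, a sketch of the classic EM-style fixed point update of the prior variances in SBL, for the single-snapshot Gaussian model; this is the textbook iteration, not necessarily the variant this paper proposes:

    import numpy as np

    def sbl_em_step(Phi, y, gamma, sigma2):
        # One EM fixed-point update of the prior variances gamma for the
        # model y = Phi x + n, x ~ N(0, diag(gamma)), n ~ N(0, sigma2 I).
        Gamma = np.diag(gamma)
        Sigma_y = sigma2 * np.eye(len(y)) + Phi @ Gamma @ Phi.T
        K = Gamma @ Phi.T @ np.linalg.inv(Sigma_y)
        mu = K @ y                            # posterior mean of x
        Sigma = Gamma - K @ Phi @ Gamma       # posterior covariance of x
        return mu ** 2 + np.diag(Sigma)       # updated gamma = E[x_i^2]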

2015
Nikko Strom

We introduce a new method for scaling up distributed Stochastic Gradient Descent (SGD) training of Deep Neural Networks (DNN). The method solves the well-known communication bottleneck problem that arises for data-parallel SGD because compute nodes frequently need to synchronize a replica of the model. We solve it by purposefully controlling the rate of weight-update per individual weight, whic...
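
A hedged sketch of per-weight update-rate control in that spirit: each node accumulates its updates in a local residual and synchronizes only the entries whose magnitude crosses a threshold, quantized to that threshold (the paper's exact quantization and synchronization protocol are truncated above, and the name `compress_update` is hypothetical):

    import numpy as np

    def compress_update(grad, residual, tau=0.01):
        # Accumulate this node's gradient into a local residual and emit
        # only the entries whose magnitude crosses the threshold tau,
        # quantized to +-tau; everything else waits in the residual.
        acc = residual + grad
        send = np.where(acc >= tau, tau, np.where(acc <= -tau, -tau, 0.0))
        return send, acc - send               # (synced update, new residual)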

Chart of the number of search results per year
