Search results for: mlff neural network
Number of results: 1,611,241
Local Explanation Methods for Deep Neural Networks Lack Sensitivity to Parameter Values
Explaining the output of a complicated machine learning model like a deep neural network (DNN) is a central challenge in machine learning. Several proposed local explanation methods address this issue by identifying what dimensions of a single input are most responsible for a DNN’s output. The goal of this work is to assess the sensitivity of local explanations to DNN parameter values. Somewhat...
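The test this abstract describes is straightforward to reproduce: compute a gradient-based saliency map, randomize the network's parameters, recompute, and compare. A minimal PyTorch sketch follows; the model, layer sizes, and cosine-similarity metric are illustrative assumptions, not the paper's exact setup:

```python
import torch
import torch.nn as nn

def saliency(model, x):
    # Gradient of the top logit w.r.t. the input: a common
    # "vanilla gradient" local explanation method.
    x = x.clone().requires_grad_(True)
    logits = model(x)
    logits[0, logits.argmax()].backward()
    return x.grad.detach().abs()

# Hypothetical small DNN; any differentiable classifier works here.
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
x = torch.randn(1, 64)

before = saliency(model, x)

# Randomize parameters: if explanations were parameter-sensitive,
# the saliency map should change drastically after this step.
for p in model.parameters():
    nn.init.normal_(p)

after = saliency(model, x)

# Similarity near 1 would illustrate the claimed insensitivity.
sim = torch.cosine_similarity(before.flatten(), after.flatten(), dim=0)
print(f"saliency similarity before/after randomization: {sim:.3f}")
```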
Understanding procedural language requires anticipating the causal effects of actions, even when they are not explicitly stated. In this work, we introduce Neural Process Networks to understand procedural text through (neural) simulation of action dynamics. Our model complements existing memory architectures with dynamic entity tracking by explicitly modeling actions as state transformers. The ...
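The key idea here, actions as state transformers over tracked entities, can be sketched as a learned update applied to an entity's state vector whenever an action mentions it. A toy PyTorch sketch; the dimensions, GRU-cell update, and vocabulary are illustrative assumptions, not the published architecture:

```python
import torch
import torch.nn as nn

class ActionTransformer(nn.Module):
    """Toy 'actions as state transformers': an action embedding
    drives a GRU-cell update of each affected entity's state."""
    def __init__(self, num_actions, dim=32):
        super().__init__()
        self.action_emb = nn.Embedding(num_actions, dim)
        self.update = nn.GRUCell(dim, dim)

    def forward(self, states, action_id, entity_ids):
        # states: (num_entities, dim) hidden vectors, one per entity.
        act = self.action_emb(torch.tensor([action_id]))
        act = act.expand(len(entity_ids), -1)
        states = states.clone()
        states[entity_ids] = self.update(act, states[entity_ids])
        return states

# e.g. "cook(tomato)" then "mix(tomato, egg)" over three entities.
model = ActionTransformer(num_actions=5)
states = torch.zeros(3, 32)
states = model(states, action_id=0, entity_ids=[0])
states = model(states, action_id=1, entity_ids=[0, 1])
print(states.norm(dim=1))  # entity 2 untouched, its state stays zero
```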
Suppression of bone morphogenetic protein (BMP) signaling induces neural induction in the ectoderm of developing embryos. BMP signaling inhibits neural induction via the expression of various neural suppressors. Previous research has demonstrated that the ectopic expression of dominant negative BMP receptors (DNBR) reduces the expression of target genes downstream of BMP and leads to neural ind...
The seven papers appearing in the current Special Issue of Developmental Cognitive Neuroscience embrace a particularly exciting theme in both basic and clinical neuroscience. This excitement reflects the unique potential to unite diverse groups all sharing a major interest in development, creating a transformative translational developmental cognitive neuroscience. These groups include scientists inter...
Binarized Neural Networks (BNNs) have been shown to be effective in improving network efficiency during the inference phase, after the network has been trained. However, BNNs only binarize the model parameters and activations during propagations. We show there is no inherent difficulty in training BNNs using "Quantized BackPropagation" (QBP), in which we also quantize the error gradients and i...
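The training trick this snippet alludes to, propagating through a sign() nonlinearity via a straight-through estimator, is easy to illustrate. A minimal PyTorch sketch of binarizing weights in the forward pass while routing gradients to the real-valued weights; the gradient-quantization step itself is omitted, and the layer shown is a generic example, not the paper's QBP implementation:

```python
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    """sign() forward; straight-through estimator backward
    (pass the gradient where |x| <= 1, zero elsewhere)."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).float()

class BinaryLinear(nn.Linear):
    """Linear layer whose weights are binarized to {-1, +1} in the
    forward pass; the float weights still receive the gradients."""
    def forward(self, x):
        return nn.functional.linear(x, BinarizeSTE.apply(self.weight), self.bias)

layer = BinaryLinear(16, 4)
out = layer(torch.randn(8, 16))
out.sum().backward()
print(layer.weight.grad.shape)  # gradients flow to real-valued weights
```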
It is well known in operations research that degeneracy can cause cycling in the network simplex algorithm, which can be prevented by maintaining a strongly feasible basis at each pivot. Also, in a network consisting of n arcs and m nodes, without imposing any new conditions on the entering variable, the upper bound on the number of consecutive degenerate pivots is equal to $\left(\begin{array}{c} n \\ \cdots \end{array}\right)$...
[Chart: number of search results per year]