Search results for: better training

Number of results: 834,431

1993
Jeffrey Dean, Craig Chambers

Optimizing implementations of object-oriented languages rely on aggressive inlining to achieve good performance. Sometimes, however, the compiler is over-eager in its quest for good performance, and inlines too many methods that merely increase compile time and consume extra compiled code space with little benefit to run-time performance. We have designed and implemented a new approach to inlin...
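
As a concrete (hypothetical) illustration of the trade-off this abstract describes, the Python sketch below contrasts a call site with its inlined equivalent: inlining removes call overhead, but duplicating the callee's body at many sites is exactly what inflates compile time and code space. The function names are illustrative, not from the paper.

```python
# Toy illustration of method inlining (not the paper's system).

def area(w, h):          # candidate callee
    return w * h

def total_area_calls(rects):
    # Un-inlined: each iteration pays the overhead of calling area().
    return sum(area(w, h) for w, h in rects)

def total_area_inlined(rects):
    # Inlined: the body of area() is copied into the call site.
    # Cheaper per call, but the logic is now duplicated; repeating this
    # at many call sites is what bloats compile time and code space.
    return sum(w * h for w, h in rects)

assert total_area_calls([(2, 3), (4, 5)]) == total_area_inlined([(2, 3), (4, 5)]) == 26
```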

2006
Markus Dreyer, Jason Eisner

We study unsupervised methods for learning refinements of the nonterminals in a treebank. Following Matsuzaki et al. (2005) and Prescher (2005), we may for example split NP without supervision into NP[0] and NP[1], which behave differently. We first propose to learn a PCFG that adds such features to nonterminals in such a way that they respect patterns of linguistic feature passing: each node’s...
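
A minimal sketch of the kind of nonterminal splitting described here, not Dreyer and Eisner's actual method: the toy Python below splits NP into NP[0] and NP[1] in a small PCFG, sharing each rule's probability across the split copies with a little jitter so that EM-style training could later differentiate them. The grammar, names, and jitter scheme are all illustrative.

```python
import itertools
import random

# A toy PCFG: rules mapping (lhs, rhs_tuple) -> probability.
grammar = {
    ("S",  ("NP", "VP")): 1.0,
    ("NP", ("DT", "NN")): 0.7,
    ("NP", ("NN",)):      0.3,
    ("VP", ("VB", "NP")): 1.0,
}

def split_nonterminal(grammar, sym, k=2, seed=0):
    """Split `sym` into sym[0]..sym[k-1]: enumerate every combination of
    subscripted occurrences and share each rule's probability across the
    split right-hand sides, with tiny jitter to break the symmetry."""
    rng = random.Random(seed)
    variants = [f"{sym}[{i}]" for i in range(k)]
    out = {}
    for (lhs, rhs), p in grammar.items():
        lhs_opts = variants if lhs == sym else [lhs]
        rhs_opts = [variants if s == sym else [s] for s in rhs]
        n_rhs = 1
        for opts in rhs_opts:
            n_rhs *= len(opts)  # rules per lhs copy must still sum to ~1
        for new_lhs in lhs_opts:
            for new_rhs in itertools.product(*rhs_opts):
                jitter = 1.0 + 0.01 * rng.uniform(-1, 1)
                out[(new_lhs, new_rhs)] = p / n_rhs * jitter
    return out

for rule, p in sorted(split_nonterminal(grammar, "NP").items()):
    print(rule, round(p, 3))
```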

Journal: The Bulletin of the Royal College of Surgeons of England, 2012

2014
Jenni Deveau, Susanne M. Jaeggi, Victor Zordan, Calvin Phung, Aaron R. Seitz

Can we create engaging training programs that improve working memory (WM) skills? While there are numerous procedures that attempt to do so, there is a great deal of controversy regarding their efficacy. Nonetheless, recent meta-analytic evidence shows consistent improvements across studies on lab-based tasks generalizing beyond the specific training effects (Au et al., 2014; Karbach and Verhae...

Journal: CoRR, 2017
Gangming Zhao, Zhaoxiang Zhang, Jingdong Wang, He Guan

With the rapid development of Deep Convolutional Neural Networks (DCNNs), numerous works have focused on designing better network architectures (e.g., AlexNet, VGG, Inception, ResNet, and DenseNet). Nevertheless, all these networks share the same characteristic: each convolutional layer is followed by an activation layer, with a Rectified Linear Unit (ReLU) layer being the most common choice. In this wor...
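
A minimal NumPy sketch (1-D and hypothetical, for brevity) of the conv-then-activation interleaving the abstract refers to, with ReLU as the activation after every convolutional layer:

```python
import numpy as np

def relu(x):
    # Rectified Linear Unit: the activation most commonly placed
    # after each convolutional layer.
    return np.maximum(x, 0.0)

def conv1d(x, w):
    # Minimal "valid" 1-D convolution, standing in for a DCNN conv layer.
    n = len(x) - len(w) + 1
    return np.array([np.dot(x[i:i + len(w)], w) for i in range(n)])

# The conv -> ReLU -> conv -> ReLU stacking pattern described above.
rng = np.random.default_rng(0)
x = rng.normal(size=32)
w1, w2 = rng.normal(size=3), rng.normal(size=3)

h = relu(conv1d(x, w1))   # layer 1: convolution followed by activation
y = relu(conv1d(h, w2))   # layer 2: the same interleaving repeats
print(y.shape)            # (28,)
```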

[Chart: number of search results per year]