Convolutional Polar Kernels
Authors
Abstract
Similar Papers
Deep Clustered Convolutional Kernels
Deep neural networks have recently achieved state of the art performance thanks to new training algorithms for rapid parameter estimation and new regularizations to reduce overfitting. However, in practice the network architecture has to be manually set by domain experts, generally by a costly trial and error procedure, which often accounts for a large portion of the final system performance. W...
Convolutional Polar Codes
Arikan’s Polar codes [1] attracted much attention as the first efficiently decodable and capacity achieving codes. Furthermore, Polar codes exhibit an exponentially decreasing block error probability with an asymptotic error exponent upper bounded by β < 1/2. Since their discovery, many attempts have been made to improve the error exponent and the finite block-length performance, while keeping...
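The snippet below is not from the paper; it is a minimal NumPy sketch of Arikan's original 2×2 kernel F = [[1, 0], [1, 1]], whose n-fold Kronecker power over GF(2) yields the length-N = 2^n polar transform that convolutional polar codes generalize. Function names are illustrative.

```python
import numpy as np

# Arikan's 2x2 polar kernel
F = np.array([[1, 0], [1, 1]], dtype=int)

def polar_generator(n):
    """n-fold Kronecker power of F over GF(2): the N = 2^n polar transform matrix."""
    G = np.array([[1]], dtype=int)
    for _ in range(n):
        G = np.kron(G, F) % 2
    return G

def polar_transform(u):
    """Encode a length-2^n binary vector u as u @ G mod 2."""
    n = int(round(np.log2(len(u))))
    return (u @ polar_generator(n)) % 2
```

For this kernel the block error probability decays roughly as 2^(-N^β) with β < 1/2, which is the error exponent the abstract refers to; larger or convolutional kernels aim to push β higher.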
Polar Lipids from Oat Kernels
Cereal Chem. 87(5):467–474 Oat (Avena sativa L.) kernels appear to contain much higher polar lipid concentrations than other plant tissues. We have extracted, identified, and quantified polar lipids from 18 oat genotypes grown in replicated plots in three environments to determine genotypic or environmental variation in these lipids. Validation experiments indicated a solid phase silica gel ext...
Design of Kernels in Convolutional Neural Networks for Image Classification
Despite the effectiveness of Convolutional Neural Networks (CNNs) for image classification, our understanding of the relationship between shape of convolution kernels and learned representations is limited. In this work, we explore and employ the relationship between shape of kernels which define Receptive Fields (RFs) in CNNs for learning of feature representations and image classification. Fo...
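As a side illustration (not from the paper), the standard recurrence for the receptive-field size of stacked convolutional layers, where each layer grows the RF by (kernel − 1) times the product of earlier strides, can be sketched as:

```python
def receptive_field(layers):
    """Receptive-field size on the input after a stack of conv layers.

    layers: list of (kernel_size, stride) pairs, first layer first.
    """
    rf, jump = 1, 1  # jump = product of strides seen so far
    for k, s in layers:
        rf += (k - 1) * jump  # each layer widens the RF by (k-1) input-space steps
        jump *= s
    return rf
```

For example, two stacked 3×1 convolutions with stride 1 give the familiar RF of 5, matching the usual "two 3s see like a 5" rule of thumb.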
Learning Non-overlapping Convolutional Neural Networks with Multiple Kernels
In this paper, we consider parameter recovery for non-overlapping convolutional neural networks (CNNs) with multiple kernels. We show that when the inputs follow Gaussian distribution and the sample size is sufficiently large, the squared loss of such CNNs is locally strongly convex in a basin of attraction near the global optima for most popular activation functions, like ReLU, Leaky ReLU, Squ...
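A hedged illustration of the "non-overlapping" setting the abstract studies, where the stride equals the kernel width so input patches are disjoint. This is not the authors' code; the 1-D shape, names, and the ReLU choice are assumptions for the sketch.

```python
import numpy as np

def nonoverlap_conv1d(x, kernels, relu=True):
    """1-D convolution with stride == kernel width, so patches never overlap.

    x: (L,) input with L divisible by the kernel width k
    kernels: (num_kernels, k) filter bank
    Returns one response per disjoint patch per kernel.
    """
    k = kernels.shape[1]
    patches = x.reshape(-1, k)   # (L // k, k) disjoint patches
    out = patches @ kernels.T    # linear responses
    return np.maximum(out, 0.0) if relu else out
```

Because the patches are disjoint, each kernel sees independent slices of the input, which is what makes the loss landscape tractable to analyze in this line of work.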
Journal
Journal title: IEEE Transactions on Communications
Year: 2020
ISSN: 0090-6778,1558-0857
DOI: 10.1109/tcomm.2020.3026103