Material‐informed training of viscoelastic deep material networks

Authors

Abstract

Deep material networks (DMN) are a data-driven homogenization approach that shows great promise for accelerating concurrent two-scale simulations. As a salient feature, DMNs are identified solely by linear elastic precomputations on representative volume elements. After parameter identification, they act as surrogates for full-field simulations of such elements with inelastic constituents. In this work, we investigate how the training data, i.e., the choice of the loss function and the sampling strategy, affects the accuracy. For viscoelasticity, we derive a material-informed procedure for generating training data tailored to the problem at hand. These ideas improve the identified DMN and allow for a significant reduction in the number of samples to be generated and labeled.
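
The abstract mentions a loss function evaluated on sampled linear elastic data. The following Python sketch is a rough illustration only of one plausible setup: isotropic phase stiffnesses are sampled, and a relative-error loss compares a surrogate's effective stiffness prediction against labeled references. All names, sampling ranges, and the placeholder surrogate are assumptions made for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical sketch (not from the paper): identify a surrogate from linear
# elastic samples by minimizing a relative-error loss between predicted and
# reference effective stiffnesses.

def isotropic_stiffness(E, nu):
    """Isotropic stiffness tensor in Mandel (6x6) notation."""
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    mu = E / (2.0 * (1.0 + nu))
    C = np.zeros((6, 6))
    C[:3, :3] = lam
    C[:3, :3] += 2.0 * mu * np.eye(3)
    C[3:, 3:] = 2.0 * mu * np.eye(3)
    return C

def relative_error_loss(surrogate, samples):
    """Mean relative Frobenius-norm error over labeled samples
    of the form ((C_phase1, C_phase2), C_effective_reference)."""
    errs = [np.linalg.norm(surrogate(C1, C2) - C_ref) / np.linalg.norm(C_ref)
            for (C1, C2), C_ref in samples]
    return float(np.mean(errs))

def sample_phase_pairs(n, rng):
    """Naive log-uniform sampling of phase stiffness contrasts (an assumption);
    a material-informed strategy would instead concentrate samples on the
    contrasts actually visited by the viscoelastic constituents over time."""
    pairs = []
    for _ in range(n):
        E1 = 10.0 ** rng.uniform(-1.0, 3.0)  # hypothetical range in MPa
        E2 = 10.0 ** rng.uniform(-1.0, 3.0)
        pairs.append((isotropic_stiffness(E1, 0.3), isotropic_stiffness(E2, 0.35)))
    return pairs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pairs = sample_phase_pairs(8, rng)
    # Placeholder "surrogate" (Voigt average) and synthetic labels (Reuss average)
    # so the sketch runs end to end; in practice the labels would come from
    # full-field simulations of the representative volume element.
    voigt = lambda C1, C2: 0.5 * (C1 + C2)
    reuss = lambda C1, C2: np.linalg.inv(0.5 * (np.linalg.inv(C1) + np.linalg.inv(C2)))
    samples = [((C1, C2), reuss(C1, C2)) for C1, C2 in pairs]
    print("relative-error loss of the Voigt guess:", relative_error_loss(voigt, samples))
```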

Similar resources

Training Very Deep Networks

Theoretical and empirical evidence indicates that the depth of neural networks is crucial for their success. However, training becomes more difficult as depth increases, and training of very deep networks remains an open problem. Here we introduce a new architecture designed to overcome this. Our so-called highway networks allow unimpeded information flow across many layers on information highw...

Deep Rewiring: Training very sparse deep networks

Neuromorphic hardware tends to pose limits on the connectivity of deep networks that one can run on them. But also generic hardware and software implementations of deep learning run more efficiently on sparse networks. Several methods exist for pruning connections of a neural network after it was trained without connectivity constraints. We present an algorithm, DEEP R, that enables us to train...

Parallel Training of Deep Stacking Networks

The Deep Stacking Network (DSN) is a special type of deep architecture developed to enable and benefit from parallel learning of its model parameters on large CPU clusters. As a prospective key component of future speech recognizers, the architectural design of the DSN and its parallel training endow the DSN with scalability over a vast amount of training data. In this paper, we present our fir...

Sequence-discriminative training of deep neural networks

Sequence-discriminative training of deep neural networks (DNNs) is investigated on a 300 hour American English conversational telephone speech task. Different sequence-discriminative criteria — maximum mutual information (MMI), minimum phone error (MPE), state-level minimum Bayes risk (sMBR), and boosted MMI — are compared. Two different heuristics are investigated to improve the performance of ...

Journal

Journal title: Proceedings in Applied Mathematics & Mechanics

Year: 2023

ISSN: 1617-7061

DOI: https://doi.org/10.1002/pamm.202200143