Search results for: interpretability
Number of results: 4397
Recent NLP literature has seen growing interest in improving model interpretability. Along this direction, we propose a trainable neural network layer that learns a global interaction graph between words and then selects more informative words using the learned word interactions. Our layer, called WIGRAPH, can plug into any neural network-based text classifier right after its embedding layer. Across multiple S...
Deep operator networks (DeepONets) are powerful architectures for fast and accurate emulation of complex dynamics. As their remarkable generalization capabilities are primarily enabled by their projection-based attribute, we investigate connections with low-rank techniques derived from the singular value decomposition (SVD). We demonstrate that some concepts behind proper orthogonal decomposition (POD)-neural networks can impr...
Deep neural networks have been well-known for their superb handling of various machine learning and artificial intelligence tasks. However, due to their over-parameterized black-box nature, it is often difficult to understand the prediction results of deep models. In recent years, many interpretation tools have been proposed to explain or reveal how deep models make decisions. In this paper, we review this line of research and try a compr...
Neural networks for NLP are becoming increasingly complex and widespread, and there is growing concern about whether these models are responsible to use. Explaining them helps address the safety and ethical concerns and is essential for accountability. Interpretability serves to provide explanations in terms that are understandable to humans. Additionally, post-hoc methods, applied after a model is learned, are generally model-agnostic. This survey provides c...
Following the successful applications of fuzzy models in various application domains, the issue of automatic generation of Fuzzy Rule Based Systems (FRBSs) from observational data has been widely studied in the literature, and several approaches have been proposed. Most approaches were designed to search for the best accuracy of the generated model, neglecting the interpretability of FRBSs, which...
This paper presents a novel tensor-based feature learning approach for whole-brain fMRI classification. Whole-brain fMRI data have high exploratory power, but they are challenging to deal with due to large numbers of voxels. A critical step for fMRI classification is dimensionality reduction, via feature selection or feature extraction. Most current approaches perform voxel selection based on f...
In system modeling with Fuzzy Rule-Based Systems (FRBSs), we usually find two contradictory requirements: the interpretability and the accuracy of the model obtained. As is known, Linguistic Modeling (LM)—where the main requirement is interpretability—is developed with linguistic FRBSs, while Fuzzy Modeling (FM)—where the main requirement is accuracy—is developed, among others, with approx...
A new scheme based on a multi-objective hierarchical genetic algorithm (MOHGA) is proposed to extract interpretable rule-based knowledge from data. The approach is derived from the use of a multiple objective genetic algorithm (MOGA), where the genes of the chromosome are arranged into control genes and parameter genes. These genes are in a hierarchical form so that the control genes can mani...