RobIn: A robust interpretable deep network for schizophrenia diagnosis
Authors
Abstract
Schizophrenia is a severe mental health condition that requires a long and complicated diagnostic process. However, early diagnosis is vital to control symptoms. Deep learning has recently become a popular way to analyse and interpret medical data. Past attempts to use deep learning for schizophrenia diagnosis from brain-imaging data have shown promise, but suffer from a large training-application gap: it is difficult to apply lab research in the real world. We propose to reduce this gap by focusing on readily accessible data, collecting a set of psychiatric observations of patients based on DSM-5 criteria. Because similar data are already recorded in all clinics that diagnose schizophrenia using DSM-5, our method could be easily integrated into current processes as a tool to assist clinicians, whilst abiding by formal clinical guidelines. To facilitate real-world usage of the system, we show that it is both interpretable and robust. Understanding how a machine learning model reaches its decision is essential to allow clinicians to trust its diagnosis. In our framework, we fuse two complementary attention mechanisms, ‘squeeze and excitation’ and ‘self-attention’, to determine global attribute importance and attribute interactivity, respectively. The model uses these importance scores to make decisions. This allows clinicians to understand how a diagnosis was reached, improving trust in the model. Because deep learning models often struggle to generalise to data from different sources, we perform experiments with augmented test data to evaluate the model’s applicability, and find that our model is more robust to perturbations and should therefore perform better in a clinical setting. It achieves 98% accuracy with 10-fold cross-validation.
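The fusion of the two attention mechanisms described in the abstract can be sketched roughly as follows. This is a minimal, hypothetical NumPy illustration, not the authors' implementation: a squeeze-and-excitation gate produces one global importance weight per attribute, scaled dot-product self-attention produces an attribute-by-attribute interaction matrix, and the gated attended representations are fused by rescaling. All dimensions, weights, and the toy input are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def squeeze_excitation(pooled, w1, w2):
    """Map pooled per-attribute summaries to gating weights in (0, 1):
    one global importance score per attribute."""
    s = np.tanh(pooled @ w1)   # 'squeeze' through a small bottleneck
    return sigmoid(s @ w2)     # 'excitation' back to one weight per attribute

def self_attention(x, wq, wk, wv):
    """Scaled dot-product attention over attribute embeddings:
    the score matrix captures attribute interactivity."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))  # (d, d) interactions
    return scores @ v, scores

d, e, dk, r = 8, 3, 4, 2          # attributes, embed dim, attn dim, SE bottleneck (arbitrary)
x = rng.standard_normal((d, e))   # one patient's d attribute embeddings (toy data)

pooled = x.mean(axis=1)           # (d,) one summary value per attribute
gate = squeeze_excitation(pooled,
                          rng.standard_normal((d, r)),
                          rng.standard_normal((r, d)))        # (d,) importance
attended, interaction = self_attention(x,
                                       rng.standard_normal((e, dk)),
                                       rng.standard_normal((e, dk)),
                                       rng.standard_normal((e, dk)))
fused = gate[:, None] * attended  # rescale attended features by global importance

print(gate.shape, interaction.shape, fused.shape)
```

Both the gate and the interaction matrix are directly inspectable, which is the interpretability argument: a clinician can read off which attributes the model weighted globally and which pairs interacted for a given patient.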
Similar resources
A ROBUST OPTIMIZATION MODEL FOR BLOOD SUPPLY CHAIN NETWORK DESIGN
The eternal need for human blood as a critical commodity makes healthcare systems attempt to provide efficient blood supply chains (BSCs) by which the requirements are satisfied at the maximum level. To have an efficient supply of blood, appropriate planning for the blood supply chain is a challenge that requires more attention. In this paper, we address a mixed integer linear programming...
Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning
Deep neural networks (DNNs) enable innovative applications of machine learning like image recognition, machine translation, or malware detection. However, deep learning is often criticized for its lack of robustness in adversarial settings (e.g., vulnerability to adversarial inputs) and general inability to rationalize its predictions. In this work, we exploit the structure of deep learning to ...
Interpretable Deep Models for ICU Outcome Prediction
Exponential surge in health care data, such as longitudinal data from electronic health records (EHR), sensor data from intensive care unit (ICU), etc., is providing new opportunities to discover meaningful data-driven characteristics and patterns of diseases. Recently, deep learning models have been employed for many computational phenotyping and healthcare prediction tasks to achieve state-of-t...
InterpNET: Neural Introspection for Interpretable Deep Learning
Humans are able to explain their reasoning. On the contrary, deep neural networks are not. This paper attempts to bridge this gap by introducing a new way to design interpretable neural networks for classification, inspired by physiological evidence of the human visual system’s inner-workings. This paper proposes a neural network design paradigm, termed InterpNET, which can be combined with any...
A Deep Learning Interpretable Classifier for Diabetic Retinopathy Disease Grading
Deep neural network models have been proven to be very successful in image classification tasks, also for medical diagnosis, but their main concern is their lack of interpretability. They tend to work as intuition machines with high statistical confidence, but are unable to give interpretable explanations about the reported results. The vast number of parameters of these models makes it difficult to infer 
Journal
Journal title: Expert Systems With Applications
Year: 2022
ISSN: 1873-6793, 0957-4174
DOI: https://doi.org/10.1016/j.eswa.2022.117158