Search results for: unsupervised domain adaptation

Number of results: 565345

Journal: :CoRR 2017
Kuniaki Saito, Kohei Watanabe, Yoshitaka Ushiku, Tatsuya Harada

In this work, we present a method for unsupervised domain adaptation (UDA), where we aim to transfer knowledge from a label-rich domain (i.e., a source domain) to an unlabeled domain (i.e., a target domain). Many adversarial learning methods have been proposed for this task. These methods train domain classifier networks (i.e., a discriminator) to discriminate the features as either...
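The domain discriminator this abstract refers to can be illustrated with a small, self-contained sketch. Everything below is synthetic and illustrative: 2-D Gaussian features and a plain logistic-regression classifier stand in for real network features and a discriminator network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D features: source and target domains differ by a shift.
source = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
target = rng.normal(loc=1.5, scale=1.0, size=(200, 2))

X = np.vstack([source, target])
d = np.concatenate([np.zeros(200), np.ones(200)])  # domain labels: 0=source, 1=target

# Logistic-regression domain discriminator trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - d) / len(d))
    b -= 0.5 * np.mean(p - d)

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
acc = np.mean((p > 0.5) == d)
print(f"domain discriminator accuracy: {acc:.2f}")
```

When the features are not aligned, the discriminator separates the domains easily; adversarial UDA methods then update the feature extractor to push this accuracy back toward chance (0.5), making the features domain-invariant.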

Journal: :CoRR 2017
Twan van Laarhoven, Elena Marchiori

Unsupervised Domain Adaptation (DA) is used to automate the task of labeling data: an unlabeled dataset (target) is annotated using a labeled dataset (source) from a related domain. We cast domain adaptation as the problem of finding stable labels for target examples. A new definition of label stability is proposed, motivated by a generalization error bound for large margin linear classifiers...

2017
Timothy A. Miller, Steven Bethard, Hadi Amiri, Guergana K. Savova

Detecting negated concepts in clinical texts is an important part of NLP information extraction systems. However, generalizability of negation systems is lacking, as cross-domain experiments suffer dramatic performance losses. We examine the performance of multiple unsupervised domain adaptation algorithms on clinical negation detection, finding only modest gains that fall well short of in-doma...

Journal: :CoRR 2017
Pedro H. O. Pinheiro

The objective of unsupervised domain adaptation is to leverage features from a labeled source domain and learn a classifier for an unlabeled target domain, with a similar but different data distribution. Most deep learning approaches to domain adaptation consist of two steps: (i) learn features that preserve a low risk on labeled samples (source domain) and (ii) make the features from both doma...

Journal: :CoRR 2015
Xu Zhang, Felix X. Yu, Shih-Fu Chang, Shengjin Wang

Domain adaptation aims at training a classifier in one dataset and applying it to a related but not identical dataset. One successfully used framework of domain adaptation is to learn a transformation to match both the distribution of the features (marginal distribution), and the distribution of the labels given features (conditional distribution). In this paper, we propose a new domain adaptat...
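A common way to quantify how well the marginal distributions are matched is the maximum mean discrepancy (MMD). The sketch below is a minimal numpy illustration with synthetic data and an RBF kernel; it is not the method of the paper above, just the distribution-matching criterion it builds on.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    # Pairwise RBF kernel values between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=0.5):
    # Biased estimate of squared maximum mean discrepancy.
    return rbf(X, X, gamma).mean() + rbf(Y, Y, gamma).mean() - 2.0 * rbf(X, Y, gamma).mean()

rng = np.random.default_rng(1)
src = rng.normal(0.0, 1.0, size=(100, 3))
tgt_far = rng.normal(2.0, 1.0, size=(100, 3))   # large domain shift
tgt_near = rng.normal(0.1, 1.0, size=(100, 3))  # small domain shift

# MMD shrinks as the target distribution approaches the source.
print(mmd2(src, tgt_far) > mmd2(src, tgt_near))
```

Methods that "learn a transformation to match the distribution of the features" typically minimize a statistic like this (or an adversarial surrogate for it) with respect to the transformation's parameters.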

2016
Róger Bermúdez-Chacón, Carlos J. Becker, Mathieu Salzmann, Pascal Fua

While Machine Learning algorithms are key to automating organelle segmentation in large EM stacks, they require annotated data, which is hard to come by in sufficient quantities. Furthermore, images acquired from one part of the brain are not always representative of another due to the variability in the acquisition and staining processes. Therefore, a classifier trained on the first may perfor...

2016
Baochen Sun, Jiashi Feng, Kate Saenko

Unlike human learning, machine learning often fails to handle changes between training (source) and test (target) input distributions. Such domain shifts, common in practical scenarios, severely damage the performance of conventional machine learning methods. Supervised domain adaptation methods have been proposed for the case when the target data have labels, including some that perform very w...
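One simple, unsupervised way to correct such a domain shift is to align second-order feature statistics: whiten the source features with their own covariance, then re-color them with the target covariance. The sketch below is illustrative (synthetic data, correlation-alignment style), not a reproduction of the paper's specific method.

```python
import numpy as np

def align_second_order(Xs, Xt, eps=1e-5):
    # Align source feature covariance to the target's:
    # whiten with the source covariance, re-color with the target covariance.
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])

    def mat_pow(C, p):
        # Matrix power of a symmetric PSD matrix via eigendecomposition.
        vals, vecs = np.linalg.eigh(C)
        return vecs @ np.diag(vals ** p) @ vecs.T

    return Xs @ mat_pow(Cs, -0.5) @ mat_pow(Ct, 0.5)

rng = np.random.default_rng(2)
Xs = rng.normal(size=(300, 4)) @ np.diag([1.0, 2.0, 0.5, 1.5])  # shifted source scales
Xt = rng.normal(size=(300, 4))

Xs_aligned = align_second_order(Xs, Xt)
# After alignment, the source covariance closely matches the target covariance.
print(np.allclose(np.cov(Xs_aligned, rowvar=False), np.cov(Xt, rowvar=False), atol=0.2))
```

A classifier is then trained on the aligned source features and applied to the target, with no target labels needed at any point.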

2014
Saab Mansour, Hermann Ney

In this work, we tackle the problem of language and translation model domain adaptation without explicit bilingual in-domain training data. In such a scenario, the only information about the domain can be induced from the source-language test corpus. We explore unsupervised adaptation, where the source-language test corpus is combined with the corresponding hypotheses generated by the translatio...

2015
Ryo Masumura, Taichi Asami, Takanobu Oba, Hirokazu Masataki, Sumitaka Sakauchi, Akinori Ito

This paper demonstrates simultaneous combinations of various language model (LM) technologies: not only modeling techniques, but also techniques for training-data expansion based on external language resources and for unsupervised adaptation in spontaneous speech recognition. Although forming combinations of various LM technologies has been examined, previous works focused only on modeling techniques....

Journal: :Signal Processing: Image Communication 2023

Unsupervised domain adaptation (UDA) is a transfer learning task in which annotations for the source domain are available, but only unlabeled target data can be accessed during training. Previous methods minimise the domain gap by performing distribution alignment between the source and target domains, which has a notable limitation: aligning at the domain level neglects sample-level differences, thus preventing the model from achieving superior perfo...
