Dissertation Defense - University of Houston

Dissertation Defense

In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

Dainis Boumber

will defend his dissertation

Domain Adaptation using Deep Adversarial Models


Abstract

In Machine Learning, a good model generalizes from training data to accurately classify instances from new, unseen data pertaining to the same domain. Traditionally, data sets lie within the same domain, and the same distribution is assumed for both the training and testing sets. In many real-world scenarios such an assumption leads to very poor results, because data may come from similar but not identical distributions. One way to address this problem is through the transfer learning techniques known as domain adaptation and domain generalization, which aim to mitigate the difference between data set distributions. This research focuses on investigating novel methods, and improving existing ones, that utilize the aforementioned techniques. We present several novel methods that can learn from multiple source domains and extract a domain-agnostic model to be applied to one or more target domains. A variety of scenarios and tasks are explored. First, we explore a supervised learning scenario where a small number of labeled target samples is available for training. Second, we investigate an unsupervised multiple-domain adaptation scenario with multiple labeled source data sets and unlabeled target data, a scenario commonly encountered in real-world applications but not yet well understood. In this case, our algorithm acts in a semi-supervised fashion insofar as the target is concerned. Third, a domain generalization problem is studied, where the algorithm has no access to target data, labeled or not. Our work is initially done within the realm of Natural Language Processing. We address the tasks of Authorship Attribution and Verification and experiment with two standard semantic data sets, as well as a custom data set we created. These goals are achieved by mapping the source or the target (and at times both) domains into a domain-invariant feature space.
To this end, we contribute three algorithms and design models that learn an embedding subspace that is discriminative, and in which the mapped domains are semantically aligned yet maximally separated. To achieve the desired results, as well as stable training and greater accuracy, we introduce a number of modifications to the existing loss functions, along with an intelligent regularization and early stopping approach; to the best of our knowledge, these had not been previously seen in the literature. We validate our hypotheses on a multitude of standard linguistic tasks. In addition, as part of this effort we mined and contributed an authorship data set that has been accepted for use as a standard language resource; an extensive set of experiments is conducted using it as well. Finally, to show the general applicability of the proposed ideas to all of Machine Learning rather than strictly NLP tasks, we apply our methods to the task of transferring knowledge between hand-written image data drawn from different sources. The experiments validate our approach.


Date: Monday, November 12, 2018
Time: 3:00 PM
Place: MREB 222
Advisors: Dr. Ricardo Vilalta & Dr. Arjun Mukherjee

Faculty, students, and the general public are invited.