Ph.D. Dissertation Proposal Defense
Towards Interpretable and Reliable Deep Neural Networks
By Ning Xie

Wednesday, November 20, 2 pm to 4 pm
Campus: Dayton, 399 Joshi
Audience: Current Students, Faculty, Staff

Ph.D. Committee:  Drs. Derek Doran (advisor), Pascal Hitzler (Kansas State University), Michael Raymer, and Tanvi Banerjee

ABSTRACT:

Deep Neural Networks (DNNs) are powerful tools that have flourished in a variety of successful real-life applications. While the performance of DNNs is outstanding, their opaque nature raises growing concern in the community, casting doubt on the reliability and trustworthiness of decisions made by DNNs. To address these concerns and move toward building reliable deep learning systems, active research efforts are being made in diverse areas such as model interpretation, model fairness and bias, and adversarial attacks and defenses.

In this dissertation, we focus on the topic of DNN interpretation, aiming to open the black box and provide explanations in a human-understandable way. We first conduct a categorized literature review introducing the field of explainable deep learning. Following the review, two specific problems are tackled: explanation of Convolutional Neural Networks (CNNs), which relates CNN decisions to input concepts, and interpretability of multi-modal interactions, where an explainable model is built to solve a task similar to visual question answering. Visualization techniques are leveraged to depict the intermediate hidden states of CNNs, and attention mechanisms are utilized to build an intrinsically explainable model. Toward increasing the trustworthiness of DNNs, a certainty measure for decisions is also proposed as a potential future extension of this study.
