Publications
Natural Language Generation (NLG) evaluation is a multifaceted task requiring assessment along multiple desirable criteria, e.g., fluency, coherence, coverage, relevance, adequacy, and overall quality. Across existing …
Tags:
BERT, NLP, NLG
Blogs
BERT has been a top contender in the space of NLP models. With its success, a parallel stream of research, named BERTology, has emerged that tries to understand how BERT works so well. With a similar objective, …
Tags:
Explainable AI, BERT, NLP
Publications
Multi-headed attention heads are a mainstay in transformer-based models. Different methods have been proposed to classify the role of each attention head based on the relations between tokens that have high pair-wise …
Tags:
BERT, Attention head
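As a rough illustration of the idea behind this line of work (the code is a toy sketch, not the paper's method; `attention_weights` and `dominant_relation` are illustrative names): each attention head produces a matrix of pair-wise token weights, and one crude way to characterize a head is to look at which query/key token pair it attends to most strongly.

```python
import numpy as np

def attention_weights(Q, K):
    """Scaled dot-product attention weights for one head.
    Q, K: (seq_len, d_k) query/key matrices."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # softmax over the key dimension
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def dominant_relation(weights):
    """Return the (query, key) token pair with the highest attention
    weight -- a crude proxy for the relation this head focuses on."""
    q, k = np.unravel_index(np.argmax(weights), weights.shape)
    return int(q), int(k)

rng = np.random.default_rng(0)
seq_len, d_k, n_heads = 5, 8, 2
for h in range(n_heads):
    Q = rng.normal(size=(seq_len, d_k))
    K = rng.normal(size=(seq_len, d_k))
    W = attention_weights(Q, K)
    print(f"head {h}: strongest token pair {dominant_relation(W)}")
```

In a real model one would read the attention matrices out of a trained transformer rather than random projections; the point here is only the shape of the analysis.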
Publications
There is an increasing focus on model-based dialog evaluation metrics such as ADEM, RUBER, and the more recent BERT-based metrics. These models aim to assign a high score to all relevant responses and a low score to all …
Tags:
NLP, dialog, pretraining, BERT
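To make the "high score for relevant responses, low score for irrelevant ones" objective concrete, here is a minimal unreferenced-scorer sketch in the spirit of such metrics (this is a toy with mean-pooled embeddings and cosine similarity; it is not ADEM, RUBER, or any BERT-based metric, and `response_score` is an illustrative name):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def response_score(context_emb, response_emb):
    """Toy unreferenced score: cosine similarity between mean-pooled
    context and response token embeddings, rescaled to [0, 1].
    context_emb, response_emb: (n_tokens, dim) arrays."""
    c = context_emb.mean(axis=0)
    r = response_emb.mean(axis=0)
    return (cosine(c, r) + 1.0) / 2.0

rng = np.random.default_rng(0)
context = rng.normal(size=(6, 16))                     # stand-in embeddings
relevant = context + 0.1 * rng.normal(size=(6, 16))    # near-copy of context
random_resp = rng.normal(size=(4, 16))                 # unrelated response
print(response_score(context, relevant))     # high
print(response_score(context, random_resp))  # lower
```

A learned metric replaces the fixed cosine with a trained scoring network over contextual embeddings, but the interface (context, response) → score is the same.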
Publications
We consider the task of generating dialogue responses from background knowledge comprising domain-specific resources. Specifically, given a conversation around a movie, the task is to generate the next response based …
Tags:
NLP, Dialogue response, BERT
Publications
BERT and its variants have achieved state-of-the-art performance in various NLP tasks. Since then, several works have analyzed the linguistic information captured in BERT. However, the current works …
Tags:
NLP, BERT, Reading comprehension
Publications
Given the success of Transformer-based models, two directions of study have emerged: interpreting the role of individual attention heads and down-sizing the models for efficiency. Our work straddles these two streams: We …
Tags:
NLP, Attention, BERT, Pruning
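A common way to connect head interpretation with pruning is to score each head by how much the layer output changes when that head is masked out, then prune the least important heads. The sketch below is a toy version of that idea under simplifying assumptions (per-head outputs given as plain arrays; `head_importance` is an illustrative name, not this paper's procedure):

```python
import numpy as np

def head_importance(head_outputs, output_weights):
    """Toy per-head importance: change in the layer output when one
    head is zero-masked.
    head_outputs: list of (seq_len, d_head) arrays, one per head.
    output_weights: (n_heads * d_head, d_model) projection matrix."""
    full = np.concatenate(head_outputs, axis=-1) @ output_weights
    scores = []
    for h in range(len(head_outputs)):
        masked = [o if i != h else np.zeros_like(o)
                  for i, o in enumerate(head_outputs)]
        out = np.concatenate(masked, axis=-1) @ output_weights
        scores.append(float(np.linalg.norm(full - out)))
    return scores  # higher = more important; prune the lowest first

rng = np.random.default_rng(0)
heads = [rng.normal(size=(4, 8)) for _ in range(3)]
W_o = rng.normal(size=(3 * 8, 16))
print(head_importance(heads, W_o))
```

Gradient-based importance scores (rather than this mask-and-measure norm) are also widely used; either way, the output is a per-head ranking that a pruning schedule can consume.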