COMET: Commonsense Transformers for Automatic Knowledge Graph Construction

  • Antoine Bosselut
  • Hannah Rashkin
  • Maarten Sap
  • Chaitanya Malaviya
  • Asli Celikyilmaz
  • Yejin Choi

ACL 2019

We present the first comprehensive study on automatic knowledge base construction for two prevalent commonsense knowledge graphs: ATOMIC (Sap et al., 2019) and ConceptNet (Speer et al., 2017). Contrary to many conventional KBs that store knowledge with canonical templates, commonsense KBs only store loosely structured open-text descriptions of knowledge. We posit that an important step toward automatic commonsense completion is the development of generative models of commonsense knowledge, and propose COMmonsense Transformers (COMET) that learn to generate rich and diverse commonsense descriptions in natural language. Despite the challenges of commonsense modeling, our investigation reveals promising results when implicit knowledge from deep pre-trained language models is transferred to generate explicit knowledge in commonsense knowledge graphs. Empirical results demonstrate that COMET is able to generate novel knowledge that humans rate as high quality, with up to 77.5% (ATOMIC) and 91.7% (ConceptNet) precision at top 1, which approaches human performance for these resources. Our findings suggest that using generative commonsense models for automatic commonsense KB completion could soon be a plausible alternative to extractive methods.
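
To make the generative framing concrete, the sketch below shows one way a COMET-style query can be issued: an autoregressive language model, fine-tuned on (head, relation, tail) tuples, is prompted with a head event plus a relation token and decodes the tail as free text. This is a minimal illustration, not the authors' released code; the base `gpt2` checkpoint, the plain-text serialization of the tuple, and the `xNeed` relation token are assumptions here (a COMET-tuned checkpoint would be needed for meaningful outputs).

```python
# Minimal sketch of COMET-style tail generation with Hugging Face transformers.
# Assumptions: "gpt2" is a stand-in for a checkpoint fine-tuned on
# (head, relation, tail) tuples, and tuples are serialized as "<head> <relation>".
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

def generate_tail(head: str, relation: str, max_new_tokens: int = 16) -> str:
    # Serialize the query as "head + relation"; the model completes the tail.
    prompt = f"{head} {relation}"
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=False,  # greedy decoding, in the spirit of top-1 precision evaluation
        pad_token_id=tokenizer.eos_token_id,
    )
    # Strip the prompt tokens; the remainder is the generated tail.
    tail_ids = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(tail_ids, skip_special_tokens=True)

# Example ATOMIC-style query: what does PersonX need to do beforehand?
print(generate_tail("PersonX goes to the store", "xNeed"))
```

Because the tail is decoded as open text rather than selected from a fixed inventory, this setup can produce novel nodes and edges, which is what distinguishes the generative approach from extractive KB completion.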