Can Language Models be Biomedical Knowledge Bases?
Benchmark (surveying)
DOI:
10.18653/v1/2021.emnlp-main.388
Publication Date:
2021-12-17T03:56:42Z
AUTHORS (6)
ABSTRACT
Pre-trained language models (LMs) have become ubiquitous in solving various natural language processing (NLP) tasks. There has been increasing interest in what knowledge these LMs contain and how we can extract that knowledge, treating LMs as knowledge bases (KBs). While there has been much work on probing LMs in the general domain, little attention has been paid to whether these powerful LMs can be used as domain-specific KBs. To this end, we create the BioLAMA benchmark, which is comprised of 49K biomedical factual knowledge triples for probing biomedical LMs. We find that biomedical LMs with recently proposed probing methods can achieve up to 18.51% Acc@5 on retrieving biomedical knowledge. Although this seems promising given the task difficulty, our detailed analyses reveal that most predictions are highly correlated with prompt templates without any subjects, hence producing similar results for each relation and hindering their use as domain-specific KBs. We hope that BioLAMA can serve as a challenging benchmark for biomedical factual probing.
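For concreteness, the probing protocol the abstract describes can be viewed as a fill-in-the-blank query against a masked LM, scored with Acc@5 (the fraction of triples whose gold object appears in the model's top-5 predictions). The snippet below is a minimal sketch only: the model name, the two example triples, and the single-token [MASK] simplification are illustrative assumptions, not the exact BioLAMA setup, which probes 49K triples and handles multi-token objects.

```python
# Minimal sketch of prompt-based factual probing with Acc@5 scoring.
# Model name and example triples are illustrative assumptions; BioLAMA's
# actual pipeline uses manual/learned templates and multi-token decoding.
from transformers import pipeline

# Any BERT-style biomedical masked LM can be plugged in here.
fill_mask = pipeline("fill-mask", model="dmis-lab/biobert-base-cased-v1.1")

# Hypothetical (subject, template, object) probes in the BioLAMA style.
probes = [
    {"subject": "Aspirin", "template": "[X] is used to treat [MASK].", "object": "pain"},
    {"subject": "Insulin", "template": "[X] is used to treat [MASK].", "object": "diabetes"},
]

hits = 0
for p in probes:
    prompt = p["template"].replace("[X]", p["subject"])
    top5 = fill_mask(prompt, top_k=5)                     # 5 highest-probability fillers
    answers = {c["token_str"].strip().lower() for c in top5}
    hits += int(p["object"].lower() in answers)           # hit if gold object is in top-5

print(f"Acc@5 = {hits / len(probes):.2%}")
```

The abstract's caveat can be checked with the same loop by dropping the subject from the prompt: if the top-5 lists barely change across subjects, the model is answering from the template alone rather than from subject-specific knowledge.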