Using Natural Sentence Prompts for Understanding Biases in Language Models
Natural Language Generation
Natural Language Understanding
DOI:
10.18653/v1/2022.naacl-main.203
Publication Date:
2022-07-26T02:59:46Z
AUTHORS (3)
ABSTRACT
Evaluation of biases in language models is often limited to synthetically generated datasets. This dependence traces back to the need for a prompt-style dataset to trigger specific behaviors in the models. In this paper, we address this gap by creating a prompt dataset with respect to occupations collected from real-world natural sentences present in Wikipedia. We aim to understand the differences between using template-based prompts and natural sentence prompts when studying gender-occupation biases in language models. We find that bias evaluations are very sensitive to the design choices of template prompts, and we propose natural sentence prompts as a way to more systematically move away from design decisions that may bias the results.
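As a rough illustration of the comparison the abstract describes, the sketch below probes a masked language model with (a) a hand-written template prompt and (b) a natural-sentence-style prompt for the same occupation, comparing the probabilities assigned to "he" and "she" at a masked pronoun slot. It assumes the Hugging Face transformers fill-mask pipeline with bert-base-uncased; the template wording, the occupation, and the example sentence are illustrative placeholders, not the paper's released dataset or code.

```python
# Minimal sketch (not the authors' released code): compare a template prompt
# with a natural-sentence prompt when probing gender-occupation associations.
# Assumes the Hugging Face `transformers` fill-mask pipeline; the prompts and
# occupation below are illustrative, not taken from the paper's dataset.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

def pronoun_scores(prompt: str) -> dict:
    """Return the model's scores for 'he' and 'she' at the [MASK] slot."""
    results = unmasker(prompt, targets=["he", "she"])
    return {r["token_str"]: r["score"] for r in results}

occupation = "nurse"

# (a) Template-based prompt: small wording changes here can noticeably shift
# the scores, which is the sensitivity the paper highlights.
template_prompt = f"The {occupation} said that [MASK] would arrive soon."

# (b) Natural-sentence prompt: a real-world-style sentence mentioning the
# occupation, with the pronoun masked out (illustrative example only).
natural_prompt = (
    f"After finishing the night shift, the {occupation} noted that "
    "[MASK] had treated dozens of patients."
)

for name, prompt in [("template", template_prompt), ("natural", natural_prompt)]:
    scores = pronoun_scores(prompt)
    print(f"{name:>8}: he={scores['he']:.4f}  she={scores['she']:.4f}")
```

Re-running the same comparison with slightly reworded templates (e.g. changing the verb or clause order) is one way to observe the template-design sensitivity the abstract refers to.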