Parsing as Pretraining
DOI: 10.1609/aaai.v34i05.6446
Publication Date: 2020-06-29T19:05:50Z
ABSTRACT
Recent analyses suggest that encoders pretrained for language modeling capture certain morpho-syntactic structure. However, probing frameworks for word vectors still do not report results on standard setups such as constituent and dependency parsing. This paper addresses this problem and does full parsing (on English) relying only on pretraining architectures and no decoding. We first cast constituent and dependency parsing as sequence tagging. We then use a single feed-forward layer to directly map word vectors to labels that encode a linearized tree. This setup is used to: (i) see how far we can reach on syntax modelling with just pretrained encoders, and (ii) shed some light on the syntax-sensitivity of different word vectors (by freezing the weights of the pretraining network during training). For evaluation, we use bracketing F1-score and LAS, and analyze in-depth differences across representations for span lengths and dependency displacements. The overall results surpass existing sequence tagging parsers on the PTB (93.5%) and on end-to-end EN-EWT UD (78.8%).
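
To make the setup concrete, below is a minimal sketch in PyTorch (not the authors' released code) of the core idea: the pretrained encoder is kept frozen, and a single linear layer maps each word vector to one of the labels that encode the linearized tree. The dimension ENCODER_DIM, the label-vocabulary size NUM_TREE_LABELS, and the class name TaggingParserHead are illustrative assumptions.

# Sketch of parsing as sequence tagging over frozen word vectors.
# Only the linear head is trained; encoder outputs are treated as fixed input.
import torch
import torch.nn as nn

ENCODER_DIM = 768      # assumed size of the frozen encoder's word vectors
NUM_TREE_LABELS = 500  # assumed size of the linearized-tree label vocabulary

class TaggingParserHead(nn.Module):
    """Single feed-forward layer from word vectors to tree-encoding labels."""
    def __init__(self, encoder_dim: int, num_labels: int):
        super().__init__()
        self.classifier = nn.Linear(encoder_dim, num_labels)

    def forward(self, word_vectors: torch.Tensor) -> torch.Tensor:
        # word_vectors: (batch, sentence_length, encoder_dim), produced by a
        # pretrained encoder whose weights stay frozen during training.
        return self.classifier(word_vectors)  # (batch, sentence_length, num_labels)

# Toy usage with stand-in tensors in place of real encoder outputs and gold labels.
head = TaggingParserHead(ENCODER_DIM, NUM_TREE_LABELS)
frozen_vectors = torch.randn(2, 10, ENCODER_DIM)          # stand-in encoder output
gold_labels = torch.randint(0, NUM_TREE_LABELS, (2, 10))  # stand-in tree labels
logits = head(frozen_vectors)
loss = nn.CrossEntropyLoss()(logits.view(-1, NUM_TREE_LABELS), gold_labels.view(-1))
loss.backward()

Because only the linear head receives gradients, parsing accuracy under this setup reflects how much syntactic information the frozen word vectors already carry, which is the probing question the paper poses.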