LeKUBE: A Knowledge Update BEnchmark for Legal Domain
DOI: 10.1145/3673791.3698407
Publication Date: 2024-12-08
AUTHORS (9)
ABSTRACT
Recent advances in Large Language Models (LLMs) have significantly shaped the applications of AI in multiple fields, including studies of legal intelligence. Trained on extensive legal texts, statutes, and documents, LLMs can capture important legal knowledge and concepts effectively and provide support for downstream applications such as legal consultancy. Yet, the dynamic nature of legal statutes and interpretations also poses new challenges to the use of LLMs in legal applications. In particular, how to update the legal knowledge of LLMs effectively and efficiently has become an important research problem in practice. Existing benchmarks for evaluating knowledge update methods are mostly designed for the open domain and cannot address the specific challenges of the legal domain, such as the nuanced application of new legal knowledge, the complexity and length of legal regulations, and the intricacy of legal reasoning.