MEGAVERSE: Benchmarking Large Language Models Across Languages, Modalities, Models and Tasks

DOI: 10.48550/arxiv.2311.07463 Publication Date: 2023-11
ABSTRACT
There has been a surge in LLM evaluation research to understand LLM capabilities and limitations. However, much of this research has been confined to English, leaving LLM building and evaluation for non-English languages relatively unexplored. Several new LLMs have been introduced recently, necessitating their evaluation on non-English languages. This study aims to perform a thorough evaluation of the non-English capabilities of SoTA LLMs (GPT-3.5-Turbo, GPT-4, PaLM2, Gemini-Pro, Mistral, Llama2, Gemma) by comparing them on the same set of multilingual datasets. Our benchmark comprises 22 datasets covering 83 languages, including low-resource African languages. We also include two multimodal datasets in the benchmark and compare the performance of LLaVA models, GPT-4-Vision and Gemini-Pro-Vision. Our experiments show that larger models such as GPT-4, Gemini-Pro and PaLM2 outperform smaller models on various tasks, notably on low-resource languages, with GPT-4 outperforming PaLM2 and Gemini-Pro on more datasets. We also perform a study on data contamination and find that several models are likely to be contaminated with multilingual evaluation benchmarks, necessitating approaches to detect and handle contamination while assessing the multilingual performance of LLMs.
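The core of the study is an apples-to-apples comparison: every model is run on the same multilingual test sets and scored per language. The Python sketch below illustrates that loop in miniature; it is not the authors' harness, and query_model, MODELS, and DATASETS are hypothetical placeholders (XNLI and XQuAD are two of the datasets in the benchmark).

    from collections import defaultdict

    # Hypothetical subset of the paper's models and (dataset, language) pairs.
    MODELS = ["GPT-4", "PaLM2", "Gemini-Pro"]
    DATASETS = {
        ("XNLI", "sw"): [("Premise: ... Hypothesis: ...", "entailment")],
        ("XQuAD", "hi"): [("Context: ... Question: ...", "answer")],
    }

    def query_model(model: str, prompt: str) -> str:
        # Placeholder: in practice this would call the model's API.
        return "entailment"

    def evaluate():
        # Score every model on every dataset/language pair with identical prompts,
        # so differences in scores reflect the models, not the setup.
        scores = defaultdict(list)
        for model in MODELS:
            for (dataset, lang), examples in DATASETS.items():
                correct = sum(query_model(model, p) == gold for p, gold in examples)
                scores[model].append((dataset, lang, correct / len(examples)))
        return scores

    for model, results in evaluate().items():
        for dataset, lang, acc in results:
            print(f"{model:10s} {dataset:6s} {lang}: {acc:.2f}")

In practice each dataset needs its own prompt template and metric (accuracy, F1, chrF, etc.), and, given the paper's contamination findings, per-language scores should be interpreted alongside a contamination check.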