A Survey of Resource-efficient LLM and Multimodal Foundation Models
DOI:
10.48550/arXiv.2401.08092
Publication Date:
2024-01-01
AUTHORS (18)
ABSTRACT
Large foundation models, including large language models (LLMs), vision transformers (ViTs), diffusion models, and LLM-based multimodal models, are revolutionizing the entire machine learning lifecycle, from training to deployment. However, the substantial advancements in versatility and performance that these models offer come at a significant cost in terms of hardware resources. To support the growth of these models in a scalable and environmentally sustainable way, there has been considerable focus on developing resource-efficient strategies. This survey delves into the critical importance of such research, examining both algorithmic and systemic aspects. It offers a comprehensive analysis of, and valuable insights gleaned from, the existing literature, encompassing a broad array of topics from cutting-edge model architectures and training/serving algorithms to practical system designs and implementations. The goal of this survey is to provide an overarching understanding of how current approaches are tackling the resource challenges posed by large foundation models, and to potentially inspire future breakthroughs in the field.