Simple and Controllable Music Generation

Subjects: Sound (cs.SD); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Audio and Speech Processing (eess.AS)
DOI: 10.48550/arxiv.2306.05284
Publication Date: 2023-01-01
ABSTRACT
Published at NeurIPS 2023.
We tackle the task of conditional music generation. We introduce MusicGen, a single Language Model (LM) that operates over several streams of compressed discrete music representation, i.e., tokens. Unlike prior work, MusicGen comprises a single-stage transformer LM together with efficient token interleaving patterns, which eliminates the need for cascading several models, e.g., hierarchically or through upsampling. Following this approach, we demonstrate how MusicGen can generate high-quality samples, both mono and stereo, while being conditioned on textual descriptions or melodic features, allowing better control over the generated output. We conduct an extensive empirical evaluation, considering both automatic metrics and human studies, showing that the proposed approach is superior to the evaluated baselines on a standard text-to-music benchmark. Through ablation studies, we shed light on the importance of each of the components comprising MusicGen. Music samples, code, and models are available at https://github.com/facebookresearch/audiocraft
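The token interleaving the abstract refers to can be illustrated with a small sketch: with K parallel codebook streams, a "delay"-style pattern offsets stream k by k steps so a single-stage autoregressive LM can predict all codebooks jointly. The function names, the pad value, and the exact layout below are illustrative assumptions, not MusicGen's actual API.

```python
# Hypothetical sketch of a delay-style interleaving pattern: codebook stream k
# is shifted right by k positions, letting one single-stage LM model K parallel
# token streams. PAD marks positions that carry no real token (assumption).

PAD = -1

def delay_interleave(codes):
    """codes: list of K lists, each of length T (one per codebook).
    Returns a K x (T + K - 1) grid where row k is shifted right by k."""
    K, T = len(codes), len(codes[0])
    grid = [[PAD] * (T + K - 1) for _ in range(K)]
    for k in range(K):
        for t in range(T):
            grid[k][t + k] = codes[k][t]
    return grid

def delay_deinterleave(grid):
    """Invert the pattern, recovering the original K x T code streams."""
    K = len(grid)
    T = len(grid[0]) - (K - 1)
    return [[grid[k][t + k] for t in range(T)] for k in range(K)]

# Example: two codebooks of three tokens each.
codes = [[1, 2, 3], [4, 5, 6]]
grid = delay_interleave(codes)   # [[1, 2, 3, -1], [-1, 4, 5, 6]]
```

The pattern trades a small amount of extra sequence length (K - 1 padded steps) for a flat, single-stream decoding order, which is the key idea that removes the need for cascaded or hierarchical models.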