MetaMGC: a music generation framework for concerts in metaverse
【Author】 Jin, Cong; Wu, Fengjuan; Wang, Jing; Liu, Yang; Guan, Zixuan; Han, Zhe
【Source】EURASIP JOURNAL ON AUDIO SPEECH AND MUSIC PROCESSING
【Impact Factor】2.114
【Abstract】In recent years, metaverse concerts have attracted widespread public enthusiasm. However, existing metaverse concert efforts often focus on immersive visual experiences and pay little attention to the musical and aural experience. Yet for concerts, it is the music itself and the immersive listening experience that deserve the most attention. Enhancing intelligent and immersive musical experiences is therefore essential for the further development of the metaverse. With this in mind, we propose a metaverse concert generation framework spanning intelligent music generation, stereo conversion, and sound field design for virtual concert stages. First, combining ideas from reinforcement learning and value functions, we improve the Transformer-XL music generation network and train it on all the music in the POP909 dataset. Experiments show that both improved algorithms outperform the original method on objective and subjective evaluation metrics. In addition, this paper validates a neural rendering method that generates spatial audio using a binaural-integrated neural network with a fully convolutional technique. This purely data-driven end-to-end model proves more reliable than traditional spatial audio generation methods such as HRTF-based rendering. Finally, we propose a metadata-based audio rendering algorithm to simulate real-world acoustic environments.
【Keywords】Metaverse concert; Transformer-XL; Audio digital twin; Neural network; Audio rendering
【Publication Date】2022-12-13
【Indexed】2023-01-06
【Document Type】Empirical data
【Subject Category】
Blockchain applications - Real economy - Cultural sector
【Comments】