Controllable Music Transformer
Official code for our paper Video Background Music Generation with Controllable Music Transformer (ACM MM 2021 Best Paper Award)
[Paper] [Demos] [Bibtex]
Introduction
We address the previously unexplored task of video background music generation. We first establish three rhythmic relations between video and background music. We then propose a Controllable Music Transformer (CMT) to achieve local and global control of the music generation process. Our method does not require paired video and music data for training, yet generates melodious music that is compatible with the given video.
Directory Structure
- src/ : code of the whole pipeline
  - train.py : training script; takes an npz file of music data as input to train the model
  - model.py : code of the model
  - gen_midi_conditional.py : inference script; takes an npz file (representing a video) as input and generates several songs
  - src/video2npz/ : converts a video into an npz file by extracting motion saliency and motion speed
- dataset/ : processed dataset for training, in npz format (a quick way to inspect npz files is sketched after this list)
- logs/ : logs automatically generated during training; can be used to track the training process
- exp/ : checkpoints, named after the validation loss (e.g. loss_8_params.pt)
- inference/ : processed videos for inference (.npz) and the generated music (.mid)
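To take a quick look at what one of these npz files contains (the processed dataset or a converted video), you can list its stored arrays with numpy. This is only an inspection sketch; the exact keys depend on the file and are not documented here.

```python
import numpy as np

# List the array names stored in an npz file; the keys depend on the file.
data = np.load('dataset/lpd_5_prcem_mix_v8_10000.npz', allow_pickle=True)
print(data.files)
```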
Preparation
- clone this repo
- download the processed data lpd_5_prcem_mix_v8_10000.npz from HERE and put it under dataset/
- download the pretrained model loss_8_params.pt from HERE and put it under exp/
- install ffmpeg=3.2.4
- prepare a Python 3 conda environment:

      conda create -n mm21_py3 python=3.7
      conda activate mm21_py3
      pip install -r py3_requirements.txt

  - choose the correct versions of torch and pytorch-fast-transformers for your CUDA version (see the fast-transformers repo and this issue); a quick import check is sketched after this list
- prepare a Python 2 conda environment (for extracting visbeat):

      conda create -n mm21_py2 python=2.7
      conda activate mm21_py2
      pip install -r py2_requirements.txt

  - open the visbeat package directory (e.g. anaconda3/envs/XXXX/lib/python2.7/site-packages/visbeat) and replace the original Video_CV.py with src/video2npz/Video_CV.py
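After setting up the Python 3 environment, the following optional sketch (not part of the repo) checks that torch sees your CUDA device and that pytorch-fast-transformers imports cleanly:

```python
import torch
import fast_transformers  # import name of the pytorch-fast-transformers package

print('torch', torch.__version__, '| CUDA available:', torch.cuda.is_available())
print('fast_transformers imported from', fast_transformers.__file__)
```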
Training
Note: use the mm21_py3 environment: conda activate mm21_py3

Note: if you want to train with another MIDI dataset, make sure that each track belongs to one of the five instruments (Drums, Piano, Guitar, Bass, or Strings) and is named exactly after its instrument. You can check this with muspy:

    import muspy

    midi = muspy.read_midi('xxx.mid')
    print([track.name for track in midi.tracks])
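If some tracks are not named after their instrument, a small script along these lines can rename them before training. The program-number mapping below is an illustrative assumption based on General MIDI conventions, not part of the official preprocessing; adjust it to your dataset.

```python
import muspy

# Illustrative General MIDI program ranges for four of the five instruments
# (drum tracks are detected via is_drum). This mapping is an assumption,
# not part of the released pipeline.
NAME_BY_PROGRAM = [
    (range(0, 8), 'Piano'),
    (range(24, 32), 'Guitar'),
    (range(32, 40), 'Bass'),
    (range(40, 56), 'Strings'),
]

def normalize_track_names(in_path, out_path):
    music = muspy.read_midi(in_path)
    for track in music.tracks:
        if track.is_drum:
            track.name = 'Drums'
            continue
        for programs, name in NAME_BY_PROGRAM:
            if track.program in programs:
                track.name = name
                break
    muspy.write_midi(out_path, music)

normalize_track_names('xxx.mid', 'xxx_renamed.mid')
```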
Inference
- convert the input video (MP4 format) into npz (use the mm21_py2 environment):

      conda activate mm21_py2
      cd src/video2npz
      sh video2npz.sh ../../videos/xxx.mp4

- run the model to generate a .mid file (use the mm21_py3 environment):

      conda activate mm21_py3
      python gen_midi_conditional.py -f "../inference/xxx.npz" -c "../exp/loss_8_params.pt"

- convert the MIDI into audio: use GarageBand (recommended) or midi2audio
  - set the tempo to the value of tempo in video2npz/metadata.json (generated when running video2npz.sh); a midi2audio sketch is given after this list
- combine the original video and the audio into a video with BGM:

      ffmpeg -i 'xxx.mp4' -i 'yyy.mp3' -c:v copy -c:a aac -strict experimental -map 0:v:0 -map 1:a:0 'zzz.mp4'
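If you use midi2audio instead of GarageBand, a minimal sketch is given below. It assumes metadata.json stores the tempo as a top-level "tempo" field and uses a placeholder SoundFont path ('soundfont.sf2'); the printed tempo is only for reference, since midi2audio renders whatever tempo the MIDI file itself contains.

```python
import json
from midi2audio import FluidSynth

# Read the tempo written by video2npz.sh (assuming a top-level "tempo" field
# and that metadata.json sits under src/video2npz/).
with open('src/video2npz/metadata.json') as f:
    tempo = json.load(f)['tempo']
print('Target tempo:', tempo)

# Render the generated MIDI to audio; 'soundfont.sf2' is a placeholder SoundFont path.
FluidSynth('soundfont.sf2').midi_to_audio('inference/xxx.mid', 'inference/xxx.wav')
```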
Matching Method
- The matching method finds the five best-matching music pieces from the music library for a given video (use the mm21_py3 environment):

      conda activate mm21_py3
      python src/match.py inference/xxx.npz dataset/lpd_5_prcem_mix_v8_10000.npz
Citation
@inproceedings{di2021video,
title={Video Background Music Generation with Controllable Music Transformer},
author={Di, Shangzhe and Jiang, Zeren and Liu, Si and Wang, Zhaokai and Zhu, Leyan and He, Zexin and Liu, Hongming and Yan, Shuicheng},
booktitle={Proceedings of the 29th ACM International Conference on Multimedia},
pages={2037--2045},
year={2021}
}
Acknowledgements
Our code is based on Compound Word Transformer.