How to make Mona Lisa talk on Google Colaboratory with Wav2Lip
I tried Wav2Lip in ml4a on Google Colaboratory.
Wav2Lip is a simple library that generates talking-head videos by combining an image of a person's face with an audio file, such as speech synthesized from text.
Here is a sample video I generated.
The script for this sample, quoted from the novel "Metamorphosis" by Franz Kafka, is as follows:
As Gregor Samsa awoke one morning from uneasy dreams he found himself transformed in his bed into a gigantic insect. He was lying on his hard, as it were armor-plated, back and when he lifted his head a little he could see his domelike brown belly divided into stiff arched segments on top of which the bed quilt could hardly keep in position and was about to slide off completely. His numerous legs, which were pitifully thin compared to the rest of his bulk, waved helplessly before his eyes.
The voice data was generated with "Ondoku", a web-based text-to-speech service.
I also published Jupyter notebooks for use on Colaboratory on GitHub.
If you want to try this yourself, open sample01 or sample02 and click "Open in Colab" to launch your own Colab notebook. You can generate videos with just `pip install ml4a` and a call to the `wav2lip.run` method, so it is very easy to try, as sketched below.
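Here is a minimal sketch of that two-step workflow. The import path and the keyword arguments of `wav2lip.run` are assumptions on my part; the exact signature is in the sample notebooks linked above.

```python
# In a Colab cell, install the library first:
#   !pip install ml4a

from ml4a.models import wav2lip  # assumed import path, following ml4a's model layout

# Combine a still face image with a speech audio file into a lip-synced video.
# The argument and file names below are hypothetical; check the sample
# notebooks for the signature actually used.
wav2lip.run(
    face='monalisa.jpg',        # face image to animate
    audio='metamorphosis.wav',  # speech audio, e.g. exported from Ondoku
    output_path='result.mp4',   # where the generated video is written
)
```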