BasisVR/OpenLipSync

OpenLipSync

This repo is archived and not maintained anymore. I don’t have the time or energy to get OpenLipSync to where I’d want it to be. Feel free to fork it and use it as a base for your own stuff.

Experimental work-in-progress project

An open-source, cross-platform project that converts audio input into realistic facial expressions in real time, following the MPEG-4 Facial and Body Animation (FBA) standard.
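MPEG-4 FBA drives lip animation through a small set of viseme classes (visually distinct mouth shapes), so a lip-sync pipeline ultimately maps phonemes onto viseme indices. As a rough illustration of that idea — the grouping below is a partial, illustrative subset keyed to MPEG-4's viseme table, not the mapping this repo uses — ARPAbet phonemes can be bucketed like this:

```python
# Illustrative sketch: grouping ARPAbet phonemes into MPEG-4 viseme classes.
# Viseme 0 is the neutral/silence face in MPEG-4 FBA; the phoneme grouping
# here is a partial, illustrative subset, NOT taken from this repository.
ARPA_TO_VISEME = {
    "sil": 0,                    # silence / neutral face
    "P": 1, "B": 1, "M": 1,      # bilabials (lips pressed together)
    "F": 2, "V": 2,              # labiodentals (lip on teeth)
    "TH": 3, "DH": 3,            # dental fricatives
    "T": 4, "D": 4,              # alveolar stops
    "K": 5, "G": 5,              # velar stops
    "CH": 6, "JH": 6, "SH": 6,   # postalveolars
    "S": 7, "Z": 7,              # alveolar fricatives
    "N": 8, "L": 8,              # nasal / lateral
    "R": 9,                      # rhotic
    "AA": 10,                    # open vowel
    "EH": 11,                    # mid front vowel
    "IY": 12,                    # close front vowel (spread lips)
    "AO": 13,                    # rounded back vowel
    "UW": 14,                    # close rounded vowel
}

def phonemes_to_visemes(phonemes):
    """Map a phoneme sequence to viseme indices, defaulting to neutral (0)."""
    return [ARPA_TO_VISEME.get(p, 0) for p in phonemes]
```

For example, `phonemes_to_visemes(["P", "AA"])` yields `[1, 10]`; anything outside the illustrative table falls back to the neutral viseme.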

Setup for model training

Core (uv)

uv sync

MFA (micromamba)

micromamba create -n mfa -c conda-forge python=3.12 montreal-forced-aligner
micromamba activate mfa

mfa model download acoustic english_us_arpa
mfa model download dictionary english_us_arpa
mfa model download g2p english_us_arpa

Dataset download is now integrated into the training script.

python training/train.py --config training/recipes/tcn_config.toml

This project uses the LibriSpeech ASR corpus (CC BY 4.0 license).
