Dynalips technologies build on the latest research results in audiovisual speech synthesis and articulatory speech production. This research stems from our work at Loria, Inria, and the University of Lorraine. The future startup's team brings together young and experienced researchers and engineers.
Work in progress
We are working on porting our DeepLipsync technology to Unreal Engine (UE) to animate MetaHuman models. This video shows our first results.
Highly Realistic Lip Synchronization for 3D Animated Characters
Dynalips provides a solution that precisely and automatically synchronizes the lip movements of a 3D character with speech (we target 3D animated movies and video games).
Help professionals (3D animators and game developers) focus on their core activity.
Enable a richer emotional and entertainment experience for players and viewers.
What we offer
Automatic lipsync solution
Assist 3D animators with fast, automatic, high-quality lipsync.
Focus more on the artistic aspects (facial expressions, dramatic effects, etc.).
Fast & highly accurate lip-sync
Technology based on artificial intelligence
Methods based on analyzing human speaker data
Multilingual lipsync solution
Reduce language and cultural barriers.
Better entertainment experience. Less frustration due to speech intelligibility problems.
Dynalips is made for
Movies, Cinema, TV
Language learners, Hearing-impaired people
DeepLipsync technology visionary and one of the pioneers of scientific research on lipsync.
Machine learning specialist, one of the architects of the DeepLipsync technology, UE specialist.
Animation specialist, one of the architects of the DeepLipsync technology, expert in 3D technologies.
PhD – Computer Science, specialist in machine learning and one of the main architects of DeepLipsync technology.
Our supporting organizations
Our DeepLipsync technology has been developed mainly within the framework of our long-term research work at LORIA and thanks to the support of the following organizations.