
Dynalips technology builds on the latest research in audiovisual speech synthesis and articulatory speech production. This research stems from our work at Loria, Inria and the University of Lorraine. The team behind the future startup brings together young and experienced researchers and engineers.

Join our beta testing
We are excited to announce that we are now in beta testing! If you are interested in participating, please contact us at contact@dynalips.com.
Highlights
Highly Realistic Lip Synchronisation for 3D Animated Characters
Dynalips provides a solution that precisely and automatically synchronizes the lip movements of a 3D character with speech (we target animated films and video games).
It helps professionals (animators and game developers) focus on their core activity.
It enables a richer emotional and entertainment experience for players and viewers.
Services
What we offer

Automatic lipsync solution
Assists animators by providing fast, automatic, high-quality lipsync.
Lets them focus more on the artistic aspects (facial expressions, dramatic effects, etc.).

Fast & highly accurate lipsync
Technology based on artificial intelligence
Methods based on the analysis of human speaker data

Multilingual lipsync solution
Reduces language and cultural barriers.
A better entertainment experience, with less frustration due to speech intelligibility problems.
Applications
Dynalips is made for

ANIMATION
2D and 3D Animated Movies for Cinema and TV

VIDEO GAMES
Virtual Reality

COMMUNICATION ASSISTANCE
Language Learners, the Hearing Impaired
Team
Our Team
SLIM OUNI
Co-founder
DeepLipsync technology visionary and one of the pioneers of scientific research on lipsync.
Louis Abel
Co-founder
Machine learning specialist, one of the architects of the DeepLipsync technology, and a UE specialist.
Théo Biasutto-Lervat
Co-founder
PhD in Computer Science, specialist in machine learning and one of the main architects of the DeepLipsync technology.
Thanks
Our supporting organizations
Our DeepLipsync technology has been developed mainly through our long-term research at LORIA, and thanks to the support of the following organizations.




