TDM has a simple "faked lipsync" capability. When AIs vocalise a sound, their mouth will open and close according to the amplitude of the sound. This is about the simplest kind of automatic lipsync you can do, but it works pretty well for our purposes. It also has the distinct advantage of being really, really easy to set up.
There are two things that must happen for this to work.
- Whenever the AI script plays a speech sound, it should call the PlayAndLipSync scriptevent. Currently this is wrapped by the bark function, which coders should use whenever they want an AI to play speech.
- For a head to be able to lipsync, it must have an animation called "talk1". The first frame should be a completely closed mouth. The last frame should be a completely open mouth, as if yelling. Frames in between those two should be linearly interpolated.
That's all. Apart from those two requirements, it's all automatic.
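To make the flow concrete, here is a simplified sketch of what the bark wrapper conceptually does. This is not the actual TDM script source; the class name and exact signature are illustrative, and only the playAndLipSync scriptevent itself is taken from the text above.

```
// Illustrative sketch only, not the real TDM implementation.
// bark() routes all AI speech through playAndLipSync, which plays
// the sound and opens/closes the mouth based on its amplitude.
void ai_darkmod_base::bark(string soundName)
{
	// "talk1" is the head animation driven by the sound's amplitude:
	// frame 1 = mouth closed, last frame = mouth fully open
	playAndLipSync(soundName, "talk1");
}
```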
Turning off lipsync for particular sounds
There are cases, however, where this naive lipsync method doesn't look very good. For example, if an AI is humming or whistling, their mouth should not gape open.
Thankfully, lipsync is easy to disable for certain sounds. Just put the keyword nolipsync in the description field of the soundshader (see the example below), and the AI's mouth will stay firmly closed.
    description "nolipsync This is an awesome speech sound, made by yours truly."
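For context, a complete soundshader using this might look like the following. The shader name and sound file path are made up for illustration; only the nolipsync keyword in the description is what matters.

```
// Hypothetical soundshader: name and .ogg path are examples only.
// The nolipsync keyword in the description disables lipsync for
// every sound played through this shader.
guard_hum_01
{
	description	"nolipsync Guard humming quietly to himself."
	sound/voices/guard/hum_01.ogg
}
```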
If you ever find yourself wanting to disable lipsync completely for a certain AI (maybe they communicate telepathically or something), rename or remove its head's talk1 animation.
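For reference, the talk1 animation is declared alongside the head's other animations in its model declaration. The snippet below is a hypothetical example (mesh and anim paths invented); removing or renaming the talk1 line is what disables lipsync for that head.

```
// Hypothetical head model declaration: paths are examples only.
// Deleting or renaming the "talk1" entry disables lipsync for
// any AI using this head.
model head_guard
{
	mesh		models/md5/chars/guards/head.md5mesh
	anim idle	models/md5/chars/guards/head_idle.md5anim
	anim talk1	models/md5/chars/guards/head_talk1.md5anim
}
```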