Faked lipsync
TDM has a simple "faked lipsync" capability. When an AI vocalises a sound, its mouth will open and close according to the amplitude of the sound. This is about the simplest kind of automatic lipsync you can do, but it works pretty well for our purposes. It also has the distinct advantage of being really, really easy to set up.
There are two things that must happen for this to work.
- Whenever the AI script plays a sound, it should call the PlayAndLipSync scriptevent. Currently this is wrapped by the bark function, which coders should use whenever telling the AI script to play speech (see the sketch after this list).
- For a head to be able to lipsync, it must have an animation called "talk1" which is 21 frames long. The first frame should be a completely closed mouth. The last frame should be a completely open mouth, as if yelling. Frames in between those two should be linearly interpolated.
That's all. Apart from those two requirements, it's all automatic.
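For coders, a minimal sketch of what the script side might look like, assuming bark is available to the AI's script object and takes a sound shader name; the function and sound names here are illustrative, not actual TDM code:

 // Hypothetical AI script snippet; names are illustrative.
 void exampleGreeting()
 {
     // Preferred: bark() wraps the PlayAndLipSync scriptevent,
     // so the head's talk1 animation is driven automatically.
     bark("snd_greeting");

     // Calling the scriptevent directly would bypass the wrapper:
     // PlayAndLipSync("snd_greeting");
 }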
Turning off lipsync for particular sounds
There are cases, however, where this naive lipsync method doesn't look very good. For example, if an AI is humming or whistling, its mouth should not gape open.
Thankfully, lipsync is easy to disable for certain sounds. Just put the keyword nolipsync in the description field of the soundshader, and the AI's mouth will stay firmly closed.
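For example, a soundshader for a humming sound might look like this (the shader name and sound file path are made up for illustration; everything else is normal Doom 3 soundshader syntax):

 tdm_ai_hum01
 {
     description "nolipsync"
     sound/voices/hum01.ogg
 }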
If you ever find yourself wanting to disable lipsync completely for a certain AI (maybe they communicate telepathically or something), rename or remove its head's talk1 animation.
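The talk1 animation is bound in the head's model decl, so that is the line to rename or remove. A hypothetical entry (all names and paths are illustrative):

 model head_example
 {
     mesh models/md5/heads/example_head.md5mesh
     anim idle  models/md5/heads/example_idle.md5anim
     anim talk1 models/md5/heads/example_talk.md5anim  // remove or rename this line to disable lipsync
 }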