Conversation Tutorial
Revision as of 02:00, 22 December 2020
Presumably your AI are pathing along, minding their own business, when you want them to interact, say something specific, or glance at one another. Here's how to set that up, step by step... (Conversations are more than just talk, they may control movement, animations, or other things.)
Triggering
First, the AI must run into a trigger that initiates the "conversation". The trigger can be the target of a path node (supported as of TDM 2.02), a regular trigger entity, a stim/response trigger, or equivalent.
The trigger targets an "atdm:target_startconversation" entity (found in the Targets folder). That entity gets the spawnarg "conversation" "whatever" (substitute "whatever" with a label identifying this conversation, to distinguish it from any future ones in the list).
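The resulting entity setup can be sketched as follows in .map entity syntax. The entity and conversation names here are made-up examples; only the classnames and spawnarg keys come from the tutorial:

```
// Hypothetical trigger chain; "start_my_convo" and "my_convo" are example names
{
"classname" "trigger_once"
"name" "trigger_convo_1"
"target" "start_my_convo"
}
{
"classname" "atdm:target_startconversation"
"name" "start_my_convo"
"conversation" "my_convo"
}
```

In practice you would create these entities in DarkRadiant rather than typing them by hand, but this is roughly what ends up in the .map file.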
Sounds/Lines
Before you jump in, you will need the names of the soundshaders you want the AI to speak, since you must type them in. So if you haven't already picked your lines, place a temporary Speaker entity and note the names of the soundshaders you'll want to use.
For example, I have an AI greet a horse with "tdm_ai_wench_greet_civilian_to_civilian", to which the horse whinnies via "animal_horse_idle", then she commiserates with it via "tdm_ai_wench_idle". If you want a specific or custom line, you'll have to create your own soundshader (a text file that points to an audiofile).
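A custom soundshader is a short declaration in id Tech 4 syntax. A minimal sketch, assuming a made-up shader name and file path (both are examples, not real TDM assets):

```
// Hypothetical shader, saved in a .sndshd file under your FM's sound/ folder
wench_custom_line
{
    sound/voices/custom/wench_line.ogg
}
```

The shader name ("wench_custom_line" here) is what you paste into the Conversation Editor's Talk command.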
An audiofile (e.g., an .ogg file) intended for use in a conversation should be mono, not stereo. Stereo files will play, but the AI's lip-sync will lag.
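For .ogg files the simplest way to downmix is an external tool, e.g. ffmpeg's `-ac 1` option (`ffmpeg -i stereo.ogg -ac 1 mono.ogg`). As an illustration of what downmixing does, here is a small sketch using only Python's standard library; it handles 16-bit WAV (the stdlib `wave` module cannot read .ogg) and averages the left and right channels:

```python
import struct
import wave

def stereo_wav_to_mono(src_path, dst_path):
    """Downmix a 16-bit stereo WAV to mono by averaging the two channels."""
    with wave.open(src_path, "rb") as src:
        assert src.getnchannels() == 2 and src.getsampwidth() == 2
        framerate = src.getframerate()
        frames = src.readframes(src.getnframes())
    # Interpret the raw bytes as little-endian signed 16-bit samples (L, R, L, R, ...)
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    # Average each left/right pair into a single mono sample
    mono = [(samples[i] + samples[i + 1]) // 2 for i in range(0, len(samples), 2)]
    with wave.open(dst_path, "wb") as dst:
        dst.setnchannels(1)        # mono output
        dst.setsampwidth(2)        # keep 16-bit samples
        dst.setframerate(framerate)
        dst.writeframes(struct.pack("<%dh" % len(mono), *mono))
```

For real FM audio you would still convert the result back to .ogg with an encoder; this sketch just shows the channel math.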
Conversation Editor window
Now open the Conversation Editor (in the Map menu). It will create a conversation entity if you don't already have one; be sure it isn't placed in the void!
Click +Add at the bottom to create a new conversation, then edit it. Toward the top you'll notice "Actors must be within talk distance" and "always facing"; these make the actors move toward each other and/or turn when the conversation is triggered (the conversation does not wait until they are in range). +Add an actor (you may have just one AI, or multiple interacting); their names go here to identify who does what, and when, in the next step.
Now in the bottom section you can add commands, which (generally) run in sequence. You can mark a command so the conversation does not wait for it, in which case the next command may start before the current one completes. This is useful for making AI overlap their lines naturally, rather than leaving a lengthy pause between each spoken line.
Typical commands are "Talk" to say something (paste in a soundshader name you noted earlier), look at something or someone (including player1), wait a few seconds, etc. For example, my wench looks at the horse, says her line, the horse whinnies in reply, then she delivers her final line. That's it; now check how it works out in game.
(Note: if you want to mute an AI in general, so it doesn't use its usual vocalizations the rest of the time (which might not make sense leading into or out of your conversation), change its "def_vocal_set" spawnarg to "atdm:ai_vocal_set_mute".)
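As a sketch, the muted AI's spawnargs would look something like this in the .map file. The classname and entity name are examples; only the "def_vocal_set" key and "atdm:ai_vocal_set_mute" value come from the note above:

```
// Hypothetical AI entity with its vocal set overridden to the mute set
{
"classname" "atdm:ai_townsfolk_female"
"name" "wench_1"
"def_vocal_set" "atdm:ai_vocal_set_mute"
}
```

In DarkRadiant you would simply add the "def_vocal_set" key/value pair in the entity inspector.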
See also
Cutscenes Part 3: Lighting, Placing the Player, and Conversations#Conversations