Subtitles
Support for subtitles (or closed captions) was added in TDM 2.10.
=== What the player sees ===
In 2.10+, the player can control subtitle visualization under Settings/Audio/Subtitles, which takes one of these values:
- Story - display only story-related subtitles
- On - display subtitles for all speech
- Off - disable subtitles
"Story" is the default. If "On" is chosen, story and non-story speech subtitles (like for AI "barks") appear.
Each subtitle has an implicit or explicit start and end time relative to its audio clip. To handle multiple overlapping sounds, the text is not interwoven; instead, there are 3 separate, non-overlapping fields, stacked in the lower part of the screen. When a subtitle is shown, it appears as white text on a translucent field-rectangle background. A text line that is too long is wrapped to two lines.
One immediate purpose for TDM's subtitles is to provide English captions for English audio. This will assist understanding of accented dialog for those who are hearing-impaired or whose English is less fluent. (In theory, subtitles could be relied upon to play an FM silently, but the lack of audio positional and distance cues, e.g., for footsteps, would be detrimental.)
As for translation, i.e., subtitles shown in other TDM-supported languages, the current system provides an important and usable building block. An FM author could employ it to provide "story" subtitles in a particular language other than English. Furthermore, a non-TDM site for a specific non-English audience - one that hosts full ports of TDM and its FMs - could create its own subtitle files in the target language. Other ideas for future capabilities, such as easy switching among multiple subtitle languages, are [https://forums.thedarkmod.com/index.php?/topic/21741-subtitles-possibilities-beyond-211/ under discussion].
=== Rollout so far ===
As of 2.11
The "Subtitles" setting affects the CVar "cv_tdm_subtitles", as follows:
- Story --> 1 --- show only story-relevant
- On --> 2 --- show all speech
- Off --> 0 --- hide all
There is an additional cv_tdm_subtitles value, 3. It means "show everything", including additional subtitles for sound effects. No core sound effect subtitles are provided yet.
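For quick testing, the same values can be set directly on the CVar from the in-game console (a minimal sketch; assumes the console is enabled in your installation):

 cv_tdm_subtitles 2

This shows subtitles for all speech; use 0, 1, or 3 for the other behaviors listed above.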
Subtitles are not generated on the fly. They are pre-encoded into game files (as detailed below), and this will take human effort over time. "Story" subtitles are chiefly associated with custom sound files and typically the responsibility of the FM author. Conversely, subtitles for AI standard speech phrases (as well as standard sound effects) are seen as mainly core resources.
For the FMs included in the standard distribution, "Tears of Saint Lucia" (in 2.10) and "A New Job" (planned in 2.11) have "Story" speech English subtitles.
Non-story speech (as of 2.10) is limited to a dozen phrases of the "Cynic" voice, used by some attacking guards. Since this is part of the core distribution, these English subtitles become available to existing games automatically. [https://forums.thedarkmod.com/index.php?/topic/21740-english-subtitles-for-ai-barks/ A project to expand this coverage for 2.12] has been started.
See also the section "displaying text with extended duration (new for 2.12)" below.
=== How it works ===
Subtitles work at a very low level, where the engine manages sound samples and sound channels. This approach makes it possible to reliably add subtitles for sounds of any kind, but it has some downsides too: the system does not know anything at all about entities, spawnargs, scripts, and other gameplay concepts. The system lets you assign chunks of text directly to sound samples (usually .ogg and .wav files), so that when a sound sample is played, the assigned text is shown on the screen.
=== Subtitles decls ===
The assignment of text to sound samples is specified in a new type of decl file. These files must:
- be in the subtitles directory,
- have the .subs extension,
- contain declarations of type subtitles.
For comparison, materials are another type of decl: they must be in the materials directory, in files with the .mtr extension, and be of type material.
To avoid any conflicts between the core game and missions, FM authors should follow these conventions:
- Names of subtitles decl files must start with fm_, e.g.: fm_conversations.subs
- Names of declarations must start with fm_ too, e.g.: fm_paul_intro_convo
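For orientation, here is a sketch of where these files could sit inside an FM's folder; the mission name is hypothetical, and only the subtitles/ folder and the fm_ prefixes matter to the subtitle system:

 fms/
   mymission/
     maps/ ...
     sound/voices/ ...
     subtitles/
       fm_root.subs
       fm_conversations.subs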
Here is an example of fm_conversations.subs contents:
 //note: comments work as they normally do
 subtitles fm_intro_convo {
     verbosity story
     inline "sound/voices/paul/sound1.ogg"  "My name is Paul, and I'm a city guard."
     inline "sound/voices/bjorn/sound2.ogg" "Welcome to Anonymous City Guards! Call me Bjorn."
     inline "sound/voices/paul/sound3.ogg"  "I started cutting thieves' throats when I was 17..."
     inline "sound/voices/paul/sound4.ogg"  "... and now I cannot stop doing that!"
 
     verbosity effect
     inline "sound/voices/paul/sound5.ogg"  "(starts weeping)"
 
     verbosity story
     inline "sound/voices/bjorn/sound6.ogg" "Calm down, Paul! We all were in your shoes!"
     inline "sound/voices/bjorn/sound7.ogg" "Let us first watch an educational video about the harm from cutting throats."
     srt "sound/voices/bjorn/sound8_long.ogg" "sound/voices/bjorn/sound8_long.srt"
 }
 
 subtitles fm_briefing {
     verbosity story
     srt "fromVideo video/sts_briefing_intro" "video/briefing/briefing.srt"
 }
==== displayed text ====
There are two ways to specify subtitle text: inline and srt-based.
The inline command is followed by the name of a sound sample and the text to be displayed while that sample plays. The text is written straight into the decl file, and it is displayed for the whole duration of the sound sample. This is convenient for short sounds, which make up the majority of sound samples, including the phrases of a conversation. A "\n" can be embedded in the text to force a line break.
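For instance, a minimal sketch of a forced line break:

 // the sound path here is hypothetical
 inline "sound/voices/paul/sound9.ogg" "I saw him go over the wall...\n...and I wasn't about to follow."

Without the "\n", the line would only be wrapped automatically if it were too long to fit the subtitle field.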
The srt command is followed by the paths to a sound sample and its .srt file, typically with matching filenames. This method, while more complicated than inline, has the flexibility to support a long sound file that needs multiple sequential subtitles, such as the sound sample of a briefing video. An .srt file, in "SubRip" format, is usually placed either next to its sound file or in a "subtitles" folder. It contains a sequence of text messages to show during the sound sample, each with start and end HH:MM:SS,sss timestamps within the sample's timeline. For example:
 1
 00:00:03,612 --> 00:00:10,376
 Bugger me!
 Something's wrong with this crystal ball.
 
 2
 00:00:25,336 --> 00:00:28,167
 Ah! Here we go.
Creating .srt files is best done with third-party AV software rather than writing them by hand. For TDM, the files must be in the engine-native encoding (internationalization is not supported yet anyway) and must not have a BOM mark. While implementations of the [https://en.wikipedia.org/wiki/Subrip#SubRip_file_format "SubRip" file format] often include HTML-style markup for font attributes like bold or italics, these are not possible in TDM. What TDM does have is limited primary-color font markup, applicable to both inline and srt subtitles; see [[Text_Decals_for_Signs_etc.#Signs_with_Illuminated_Colored_Letters|Signs_with_Illuminated_Colored_Letters]] for details.
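As a rough illustration only: assuming the caret-style color codes described on that page (the specific codes and the sound path below are assumptions, so check the linked article for what is actually supported), a colored inline subtitle might look like this:

 // assumed caret color codes and hypothetical path; verify against the linked wiki page
 inline "sound/voices/paul/sound10.ogg" "Watch for the ^1red^7 seal on the letter."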
==== displaying text with extended duration (new for 2.12) ====
A TDM inline subtitle appears on screen at the start of its audio clip. By convention, this is true for srt as well; the first message starts at time 00:00:00,000.
Under 2.10/2.11, a subtitle goes away when its audio clip ends. For srt, this is true even if a timestamp specifies a value beyond the audio clip duration.
Starting with 2.12, both automatic and manual extensions are provided.
For inline subtitles. Automatically, durations are extended by 0.2 seconds past the clip end. This matches the 0.2-second pause automatically inserted between voice clips in the Conversation object, so that such subtitles are shown without a visual gap. This degree of extension is considered imperceptible. In addition, 1.0 second is now the minimum duration of an inline subtitle, as widely recommended for captioning. (These extensions apply to all inline subtitles, not just those used in a Conversation.)
For any particular "inline" command, the 0.2 second value can be overridden (e.g., with a larger value) by an additional trailing parameter, using either of these syntaxes:
 -durationExtend <positive value>
 -dx <positive value>
Example:
inline "sound/voices/bjorn/sound7.ogg" "Let us first watch an educational video about the harm from cutting throats." -dx 0.45
Manual extension may be needed for particularly fast talking (i.e., a high presentation rate, expressed in characters per second or words per minute). Conversely, to avoid irritating the reader with noticeably lagging subtitles, use manual extension sparingly; in particular, avoid extending by more than half a second.
There is a sanity upper limit of 5.0 seconds on the extension. So overall, the calculated duration is:
 duration = max(c + 0.2, 1.0)                if x not defined
 duration = min(max(c + x, 1.0), c + 5.0)    if x defined
where c is the clip length, and x is the -dx value
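To make the formula concrete, two worked examples for a hypothetical 0.6-second clip:

 duration = max(0.6 + 0.2, 1.0)                   = 1.0    (no -dx given)
 duration = min(max(0.6 + 0.45, 1.0), 0.6 + 5.0)  = 1.05   (with -dx 0.45)

In other words, very short clips are padded up to the 1-second minimum, and -dx only matters once it pushes the duration past that minimum.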
For srt subtitles. The automatic extensions provided for inline do not apply to srt. Instead, the srt timestamp values can now extend beyond the clip end.
It is the srt author's responsibility to apply best practices such as:
- Start the first message at time 0
- Avoid messages shorter than 1 second
- Try to cut messages between sentences, or at other vocal pauses
- Don't overlap messages in time
- Usually, messages are abutted; sometimes, significant time gaps are appropriate
- Avoid ending the last message much beyond the clip length (analogous to -dx considerations)
- While such a duration extension directly benefits the final message, it is sometimes possible to "distribute the benefit" to an earlier message by small shifts in message breaks.
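As an illustration of these guidelines, here is a sketch .srt for a hypothetical clip roughly 9 seconds long, with invented dialogue; the final message runs about half a second past the clip end, which is permitted as of 2.12:

 1
 00:00:00,000 --> 00:00:04,200
 We move before the bell tower strikes twelve.
 
 2
 00:00:04,200 --> 00:00:09,500
 Take the side alley, and keep to the shadows.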
==== sound sample ====
To specify a sound sample, write the path to the file the same way you do, e.g., in sound shaders.
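For instance, a minimal sketch; the sound shader name is hypothetical, and the sample path matches the decl examples above:

 // in a .sndshd file; the shader name is hypothetical
 paul_intro_01
 {
     sound/voices/paul/sound1.ogg
 }
 
 // in a .subs file: the subtitle refers to the sample path, not to the shader name
 inline "sound/voices/paul/sound1.ogg" "My name is Paul, and I'm a city guard."

Because the subtitle is tied to the sample file itself, the same text appears no matter which sound shader or entity plays that sample.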
One complicated case is when you have an [[Cutscene_video_with_FFmpeg#Play_video_with_sound|FFmpeg-based video with embedded sound]]. In this case, write the material name, prefixed with the fromVideo keyword and a space, just like you do in the sound shader. See fm_briefing in the example above.
==== verbosity ====
Currently there are three verbosity levels for subtitles:
- story: This level should be applied to all story-related subtitles, everything that the player should absolutely notice and understand.
- speech: This is for subtitles of speech that is usually of little interest to the player, or that is repeated too often. For instance, regular AI barks (e.g. "I'll get ya!") belong here.
- effect: This level is reserved for non-speech subtitles which have little interest to the player, like "(bang)" or "(ding)", etc.
It is not yet clear how exactly these levels will be used in the long term. For now:
- By default, players only see story-level subtitles, although they can enable the other two categories in settings, or disable subtitles entirely.
- It is expected that almost all FM-specific subtitles get into story category. The other two exist mostly for core TDM sounds, which include a lot of AI barks (speech level) and trivial sound effects like arrow hit (effect level).
Every subtitles declaration must start with a verbosity command. This command applies to all subsequent commands, until another verbosity command is encountered or the declaration ends (whichever happens first). For instance, all subtitles in the example above have the story verbosity level, except for the "(starts weeping)" text, which has effect verbosity.
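For example, a sketch of a speech-level decl for AI barks:

 subtitles fm_guard_barks {
     verbosity speech
     // hypothetical bark sample path
     inline "sound/voices/guard/spotted_player1.ogg" "I'll get ya!"
 }

Players who keep the default "Story" setting will never see this subtitle; it only appears when subtitles are set to "On".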
==== include ====
While the engine parses all declarations that are present in the files, the subtitles system only uses one subtitles decl, named tdm_root. Include commands are used to put other subtitle decls into it. (The tdm_root decl is in the file tdm_root.subs. As of TDM 2.11, this file, and all core (non-mission-specific) tdm_*.subs files, can be found in tdm_sound_vocals_decls01.pk4/subtitles/. An fm_root.subs file also exists there; its stub fm_root decl is overridden by your FM's fm_root, as described next.)
The include command has one argument: the name of the subtitles decl to be included. Note that this is the name of the decl, not the name of the file which contains it! An included decl can in turn include other decls.
FM-specific subtitles are always included from the decl named fm_root, which must be in the fm_root.subs file. For instance, the contents of the fm_conversations.subs file from the example above will have no effect until we add the file fm_root.subs with contents:
 /**
  * This file should be overridden in order to provide mission-specific subtitles.
  * When doing so, please follow the conventions:
  *   1) The root decl is called "fm_root" and is located in file "fm_root.subs".
  *   2) All other subtitle decls start with "fm_" prefix and are located in files starting with "fm_".
  */
 subtitles fm_root {
     include "fm_intro_convo"
     include "fm_briefing"
     include "fm_much_later"    // points to the decl below
 }
 
 // Decls and their "inline"/"srt" commands can be in the fm_root.subs file, not just in other .subs files:
 subtitles fm_much_later {
     verbosity story
     inline "sound/voices/paul/sound14.ogg" "Where did I put that knife sharpener?"
 }
Notice that one decl can hold any number of subtitle assignments, so it is not necessary to have many decls and decl files. The easiest way to do FM-specific subtitles is to have one fm_root.subs file with one fm_root decl, and specify all your subtitles right there. Splitting subtitles across decls and files is entirely optional and can be used for grouping subtitles. It was added mainly for core TDM sounds: there are several thousand of them, so keeping all subtitles in a single file would quickly become a headache.
=== Output ===
The subtitles that should be displayed right now are provided to GUI code as gui::subtitleN variables, where N ranges from 0 to 3 (although only 0 to 2 are made visible by the standard GUI scripting described next). GUI scripts display these variables as non-overlapping text messages. Given a logical screen height of 480, each of the 3 fields is 45 units high: N = 0 is the lowest, at Y-origin 400; 1 is at 350; and 2 is at 300. The responsible code is located in the tdm_subtitles_common.gui file and is already used in several places:
- in the following states of the main menu: briefing, briefing video, and debriefing video
- in the always-on GUI overlay during gameplay
For example, here is how it is used in mainmenu_briefing.gui:
 //note: each include of tdm_subtitles_common must have different name prefix!
 #define SUBTITLES_NAMEPREFIX Briefing
 #include "guis/tdm_subtitles_common.gui"
Usually, you don't need to do anything about this. However, this information can be relevant if you write a custom GUI and want to see subtitles there.
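As a minimal sketch, a custom GUI could pull in the shared subtitle code the same way the menu GUIs do:

 // somewhere in your custom .gui file; the prefix name is hypothetical
 #define SUBTITLES_NAMEPREFIX MyOverlay
 #include "guis/tdm_subtitles_common.gui"

Alternatively, a fully hand-rolled GUI could read the gui::subtitle0 .. gui::subtitle2 variables itself and lay the text out differently.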
Note: it is not yet clear what kind of customization is needed for the subtitles GUI. If you have any ideas, please share them on the forums.
=== See also ===
TODO: "Tears of Saint Lucia" mission as an example?...