Coding in the SDK

...also known as greebo's random notes about coding for The Dark Mod.

This article is a constant WIP. When coding in the SDK, there are thousands of little lessons to be learned - some of them are things I wish I had known earlier, some might be common knowledge. Anyway, I plan to include all these miscellaneous things in this article.

General

When the id people designed Doom 3, they roughly divided their code into two parts: the engine and the gameplay code. The engine code was originally closed source, i.e. we didn't have the code, only the compiled binary in the form of DOOM3.exe.

The engine contains all the "important" stuff which id software makes money with by licensing it to third-party companies. This is where the real brainpower is, so we didn't get that part at first (John Carmack had repeatedly stated that he wanted to release all the source under the GPL, which eventually happened in 2011).

The gameplay code is fully available. This includes the entities, game logic, AI, physics, weapon and animation code. This leaves a whole lot of headroom for modders like you and me, and we probably should thank id software for that. There are all kinds of mods out there which change the gameplay code to create their own game, so to speak. Some of them are "mini-mods" (introducing a new weapon or double-jump mods), some are larger, aiming to change the nature of the game, and finally there's The Dark Mod, which changed almost everything that could be changed in the first place. The term Total Conversion comes close - when playing TDM, you won't believe it's the same game as Doom 3.

What parts of the code used to be inaccessible?

All of the code id was earning money with by licensing their engine to others. Since the release of the source in 2011, these parts are also accessible:

  • the rendering engine
  • the sound engine
  • the collision model manager
  • the Map and AAS compilers
  • the declaration manager (although we can implement our own parsers)
  • the network system
  • the keyboard and mouse handlers
  • anything platform-specific
  • the command system
  • the virtual file system
  • the CVAR system plus some special CVARs
  • the editor (D3Ed), but we have DarkRadiant

What parts of the code were available for modification before the source release

These are the parts of the code that could be modified prior to the release of the source. At the time, The Dark Mod was just that: a mod dependent on Doom 3.

  • gameplay code (entities, logic)
  • physics
  • AI
  • custom declarations
  • custom CVARs
  • the scripting system
  • the animation code
  • plus everything outside the SDK like scripts, defs, materials and models.

Coding Style

This deserves its own article: TDM Coding Style

It's easy to see that the gameplay code has been written by a number of different programmers at id software. It's also easy to see that several of them have strong roots in C, whereas others have leaned more heavily on C++ coding paradigms. At any rate, almost all of those programmers could use a heads-up that the possibility of placing comments in the code has not been removed from C++ - only a few of them seem to know about comments at all, which is a shame, especially when it comes to the animation code.

Another thing a newcomer might notice is the near-total absence of the STL or other standard library facilities in the id code. I'm not in a position to judge whether this was actually necessary (which I doubt), but it seems that id rewrote every little low-level library on their own, including strings, lists, hashtables, vectors, etc. Some of the classes are admittedly quite handy, but sometimes it's clear that they did it just because they could. Overall, there is a strong NIH smell all over the place, but the code is there and stable already, so we might as well use it for TDM coding.

Something to keep in mind is also the custom memory manager, which (according to the comments in the code) is several times faster than the one provided by the CRT. As a consequence, everything that is allocated in the game DLL must be freed again before the DLL is unloaded, otherwise you'll run into weird crashes at shutdown. Most classes have Clear() methods, and whenever you write custom (global or standalone) classes you need to make sure that these are destroyed in idGameLocal::Shutdown() at the latest.
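
As a hedged example, here is a minimal sketch of such a cleanup, assuming a hypothetical global helper class (MyManager and its contents are made up for illustration):

// A global helper that allocates memory on the game DLL's heap
class MyManager
{
public:
   idList<idStr> cachedNames;

   void Clear()
   {
      cachedNames.Clear(); // release everything this class allocated
   }
};

MyManager g_myManager; // lives in the game DLL

// In idGameLocal::Shutdown() (or earlier), before the DLL is unloaded:
// g_myManager.Clear();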

id software has been using the following coding style:

  • class member variables are lowercase, plain notation (without the m_bVar Hungarian stuff)
  • class names start with the "id" prefix
  • class methods are uppercase with mixed case, like this: idDict::MatchPrefix()
  • One True Brace Style, i.e. the opening { brace is on the same line as the if ().

TDM code is slightly different (see the sketch after this list):

  • class names usually start with C, like CFrobDoor.
  • class members usually use Hungarian notation
  • a new line for each opening or closing brace
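
To illustrate the difference, here is a small hypothetical sketch (neither class exists in the codebase) contrasting the two styles:

// id style: "id" prefix, plain lowercase members, opening brace on the same line
class idExampleMover {
public:
   void SetSpeed( float newSpeed ) {
      speed = newSpeed;
   }
private:
   float speed;
};

// TDM style: "C" prefix, Hungarian members, each brace on its own line
class CExampleDoor
{
public:
   void SetSpeed(float fNewSpeed)
   {
      m_fSpeed = fNewSpeed;
   }
private:
   float m_fSpeed;
};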

Note that I've been "violating" these rules myself when writing the AI framework. I probably don't have a particularly good reason, but it seemed the right thing to do at the time, as the framework is a completely new subsystem. It's pretty much the same coding style I was used to in DarkRadiant, with a few specifics:

  • all classes are in their correct namespace (the namespace ai in this case)
  • class names start with an uppercase letter
  • class methods start with an uppercase letter
  • class members are lowercase and start with an underscore, like _finishTime

Generally, I've been following the rule not to mix styles when I change existing code. If a class is using Hungarian notation, I stick to that, same goes for the id classes and the AI classes.

STL vs. idLib

The most common idLib member is the idList<> template, which is the counterpart of std::vector<>. Whenever you need a variable-sized array, use idList. This class has some nice convenience functions and supports the definition of a granularity, which lets you specify the minimum size of the memory blocks allocated by idList. This is useful to prevent idList from calling the new operator each time you push an element into the list.
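
A small usage sketch (the list and the values are made up for illustration):

idList<int> damageValues;

// Allocate memory in blocks of 16 elements to avoid frequent reallocations
damageValues.SetGranularity(16);

damageValues.Append(10);
damageValues.Append(25);

for (int i = 0; i < damageValues.Num(); i++)
{
   gameLocal.Printf("Damage %d: %d\n", i, damageValues[i]);
}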

Another popular member is idStr. This is mostly equivalent to std::string. Both classes provide the well-known c_str() method, which can be used to retrieve the const char* pointer from the object. Two gotchas about idStr:

  • void idStr::Empty() is an imperative method, i.e. it clears the string. Do not confuse this with bool std::string::empty(), which returns true when the string is empty. If you want to check whether an idStr is empty, use idStr::IsEmpty()...
  • idStr has an operator cast to const char*, so you can pass an idStr to everything that expects a const char*. Where's the gotcha? Don't expect this to work when passing an idStr to functions with variable argument lists, like this:
idStr mystr = "test";
idGameLocal::Printf("String is %s", mystr); // this will crash!

User-defined conversions are not applied to the variable arguments of such functions, so the raw idStr object ends up being passed instead of a const char*. Use the c_str() method instead:

idStr mystr = "test";
idGameLocal::Printf("String is %s", mystr.c_str());

There are more container classes in idLib:

  • idLinkList, a circular linked list, but pretty different from std::list - I don't use it much, because it's inconvenient and unintuitive. The most prominent example is the "spawnedEntities" linked list in idGameLocal. The "head" of this list is a member of the idGameLocal class, and each entity is added as a node to that list (or rather: as "owner" of an idLinkList node). Don't use the LAS as an example for idLinkList - it's used the wrong way there (without a list head). Also, in my opinion it's just plain weird to start iterating over an idLinkList by calling Next() instead of begin(), oh well. My recommendation is to use std::list instead.
  • idHashTable, ditto
  • idDict - this one is useful. The most common use for this is to contain all the spawnargs of an entity.
  • idKeyValue - used to define a spawnarg pair
  • idStrList - just a shortcut for idList<idStr>

Another thing I had to learn the hard way was using idStr in combination with std::map. Due to the built-in operator cast to const char*, an idStr cannot be used as a key in a std::map. The std::map will try to sort the given idStr into the correct place, but the comparison actually invokes idStr::operator const char*() const, which traps the std::map into comparing pointers instead of strings. If you want to use std::map, use std::map<std::string, someothertype>.
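
A short sketch of the problem and the workaround (myIdStr is a hypothetical idStr variable; requires <map> and <string>):

std::map<idStr, int> badMap;        // compiles, but the keys decay to const char*
                                    // and end up being compared as pointers

std::map<std::string, int> goodMap; // compares the actual string contents
goodMap[std::string(myIdStr.c_str())] = 5;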

Performance Considerations

The code might not win a beauty contest, but most of id's programmers know the ins and outs of writing performant game code, no doubt about that.

When writing new code and worrying about performance, think about how your code is going to be used and, more importantly, how often. If your code is only called when a projectile hits a surface, there's no problem with that. If your code is more or less directly hooked into the idAI::Think() routine, think twice before writing slow code.

How do you recognise slow code? That's a difficult question; you need to use your brain and read lots of code to get a feeling for it.

Bad Things include:

  • Looping over a large amount of entities.
  • Querying spawnargs multiple times each frame and converting strings to float (e.g. calling idDict::GetFloat() for every little thing). Read the spawnarg once in your Spawn() method and store it in a member variable instead (see the sketch after this list).
  • Calling the clip/trace/collision code exorbitantly.
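
Here is a minimal sketch of the spawnarg-caching advice from the list above (the class, member and spawnarg names are made up):

void CMyEntity::Spawn()
{
   // Read the spawnarg once and keep it in a member variable
   m_fSearchRadius = spawnArgs.GetFloat("search_radius", "120");
}

void CMyEntity::Think()
{
   // Use the cached member here - no idDict lookup and no string-to-float
   // conversion in the per-frame code path
   if (m_fSearchRadius > 0)
   {
      // do the actual per-frame work
   }

   idEntity::Think();
}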

Another thing to keep in mind: even if the game performs well in your neat little developer test map with one light and three entities, things might change when you run a full-grown mission with tons of complicated lighting, geometry, moveables and AI. The 16 ms frame time that seemed enormous on your 2.4 GHz developer beast will melt like ice when rendering the scene alone takes 13 ms on a 1.4 GHz low-end system, leaving a whopping 3 ms to bring your entire level to life, including physics, animation, player movement and, last but not least, the lightgem. There go the FPS below 60 and counting downwards. Don't even hope that future processors will execute your code faster - they won't, given that single-core clock rates have almost stagnated over the last few years. The SDK code uses only one core.

How to Debug

There are several techniques for finding problems in your code, to see why the heck your door is not playing the "close" sound, for instance.

Use your head

That goes without saying.

Use your debugger

Run the game in your IDE and set breakpoints at the right places. Note that you can run both debug and release builds in your debugger - the difference with release builds is that some code lines will not be hit and not all variables are available to the inspector. For physics or animation code, the debugger is of limited use, because either the variables don't make sense to our little human brains or the code executes just too fast for you to tell when the bug is happening.

You can use templates from Displaying idLib types nicely in MSVC debugger to improve the way idLib containers are displayed in the MSVC watch.

Use your console

If you don't want to interrupt your game by a breakpoint hit, use the console output commands to write stuff to the console, like this:

gameLocal.Printf("the time is %d\n", gameLocal.time); // will print the game time to the console

Use debug drawings

For things that happen really fast, or for mathematical things like vectors and bounding boxes, use the excellent debug drawing features that are available to you:

// this will draw a red line from the AI's head to the player's head, lifetime is 16 msecs.
gameRenderWorld->DebugArrow(colorRed, ai->GetEyePosition(), player->GetEyePosition(), 1, 16);

You can also draw text in the game, but keep in mind that you need to pass the text orientation matrix to the function as well, otherwise the text is not oriented to the player:

// this will draw a white "test" at the "eyePosition", readable to the player, lifetime = 16 msecs
gameRenderWorld->DrawText("test", eyePosition, 0.1f, colorWhite, gameLocal.GetLocalPlayer()->viewAngles.ToMat3(), 16);

When drawing a large amount of these, the game performance will go down the drain, but you don't need to worry about that - it's debug output for your personal use only.

There is also a method that allows drawing text on the GUI (but note that this only works when invoked after the scene is rendered in playerview.cpp):

renderSystem->DrawSmallStringExt(x, y, "text", idVec4( 1, 1, 1, 1 ), false, declManager->FindMaterial( "textures/bigchars" ));

If you happen to write a useful piece of debug drawing output, please don't delete it right away before committing. It may pay off to take the time and make it optional via a CVAR, like "tdm_ai_showdebugtext", which defaults to "0". There's a good chance that other developers will want to marry you for this.
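
A hedged sketch of what such a CVAR guard might look like (the CVAR object name and the drawn positions are made up; the CVAR string follows the example above):

// Somewhere at file scope:
idCVar cv_ai_show_debug_text("tdm_ai_showdebugtext", "0", CVAR_GAME | CVAR_BOOL, "Show AI debug drawings");

// In the debug drawing code:
if (cv_ai_show_debug_text.GetBool())
{
   gameRenderWorld->DebugArrow(colorRed, GetEyePosition(), enemyOrigin, 1, 16); // enemyOrigin is a placeholder
}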

Use the logfile

There are some debugging macros available in the TDM codebase, which you can use extensively.

// Let the logfile know that we've spawned
DM_LOG(LC_AI, LT_INFO)LOGSTRING("AI %s has spawned!\r", name.c_str());

There are several log classes and log levels available, just look at the darkmod.ini, which is also where you can switch them on or off. Examples are:

  • LC_AI for AI stuff
  • LC_ENTITY
  • LC_LOCKPICK
  • LC_OBJECTIVES
  • LC_STIMRESPONSE
  • ...

The log levels can be used to filter a certain log class for specific events:

  • LT_ERROR is the most severe message
  • LT_WARNING
  • LT_DEBUG
  • LT_INFO contains verbose stuff

You can also define new log classes in DarkMod/DarkModGlobals.cpp, if it's actually necessary. Note that performance is not a real issue here, as the code can be stripped from the entire codebase with a single #define, leaving our release code nice and clean. When switched on, each call does take a small amount of time. If you really need to know (like me): one log line usually takes about 12 µs, which is quite a lot, as the log buffer is immediately flushed to disk (otherwise the log line would vanish in case of a crash).

Why boost? Why not boost?

What is boost anyway? The boost libraries form a freely available, open-source collection of (templated) utility classes. Some of the boost libraries have been incorporated into TR1, which is not surprising considering that some boost founders are members of the C++ standards committee.

The boost code is peer-reviewed and of high quality, mostly due to the brutal acceptance process new libraries have to go through before being integrated into boost.

There are several reasons for and against using boost:

Pros

  • The boost libraries are rock solid; you can rely on them.
  • Increased productivity. They provide a number of general (templated) solutions for the most common low-level tasks. Don't waste time reinventing the wheel.
  • The shared_ptr template - which alone is reason enough to use boost.
  • Most libraries consist of headers only. This means you don't need to link against static libraries for most things (with some exceptions).

Cons

  • Some libraries require static linkage (including boost::filesystem and boost::regex). More of a nuisance than a showstopper, but it requires some work to compile the static libraries for VC++.
  • The boost tree is quite large, consisting of several thousand files.
  • Some routines, like the string algorithms, might not be as fast as homegrown, specialised ones.

boost::shared_ptr

By far the most useful class is the boost::shared_ptr<> template, which belongs in the category of so-called "smart pointers". It follows the RAII design pattern and served as the basis for std::tr1::shared_ptr. This template makes your memory management tasks a lot easier. Thanks to its automatic internal reference counting, it entirely eliminates memory leaks when used correctly. As soon as the last shared_ptr<> reference to an object is destroyed, the contained object is deleted as well and the memory is released again.

I made heavy use of boost::shared_ptrs when designing the new AI framework, and this saved me from placing a single delete call anywhere in the code - no memory leaks whatsoever.

How to use shared_ptrs correctly

Use this to allocate a new class:

// Allocate a new class instance and initialise the shared_ptr
boost::shared_ptr<MyClass> myClassPtr(new MyClass);

// use the class
myClassPtr->DoSomething();

// Don't worry about deletion, as soon as the shared_ptr is destroyed, your instance comes along

This snippet allocates a new MyClass instance and hands the pointer over to the shared_ptr<> instance. (This follows the RAII pattern: the allocated resource is immediately used to initialise the memory-managing class, the shared_ptr.) You don't need to worry about deleting the MyClass instance - the shared_ptr handles that automatically.

How to use shared_ptrs incorrectly

Use this to crash your application:

// Allocate a new object
MyClass* myClass = new MyClass;

{
  // Construct a shared_ptr using the raw pointer from above
  boost::shared_ptr<MyClass> myClassPtr(myClass);
  
  // Do something with the shared_ptr
  myClassPtr->DoSomething();

  // The scope ends here, the shared_ptr is destroyed. As there is only one 
  // shared_ptr instance available, the internal reference count reaches 0 and 
  // the contained MyClass instance is deleted.
}

myClass->DoSomething(); // insta-crash, the object has already been deleted

delete myClass; // another sure-fire crash due to double-deletion, but you'll never reach this code.

So, the lesson is: follow the RAII design pattern and don't save the raw pointer outside of your shared_ptr. Don't manually delete resources that are managed by a shared_ptr<>.

This is another way to crash your app:

{
  // Allocate a new class
  boost::shared_ptr<MyClass> myClassPtr(new MyClass);

  // get the contained raw pointer from the shared_ptr (this is possible)
  MyClass* p = myClassPtr.get();

  delete p; // this will free your instance

  // the application will crash here, as the scope ends and the shared_ptr 
  // will attempt to destroy the contained object => double-free operation.
  // the shared_ptr cannot know about your manual deletion using the raw pointer.
}

You can use a shared_ptr like an ordinary pointer, as the class provides a dereference operator->. You can copy-construct shared_ptrs or assign them to other shared_ptrs. You can use shared_ptrs in any STL container (vector, map, set, whatever) or, more generally, in every container which can handle copy-constructible objects (see the container sketch below). shared_ptrs are type-safe, which means you can implicitly cast your shared_ptr just like you're used to with raw pointers:

// Create a shared_ptr holding an instance of MySubClass, which derives publicly from MyBaseClass
boost::shared_ptr<MySubClass> subClassPtr(new MySubClass);

// Create a shared_ptr of the base class and assign the subclassPtr to it.
// The internal pointers are compatible, as are their shared_ptr<> counter-parts
boost::shared_ptr<MyBaseClass> baseClassPtr = subClassPtr; 
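
The container usage mentioned above is just as simple. A small sketch (requires <vector> and the MyClass example from before):

std::vector<boost::shared_ptr<MyClass> > instances;

instances.push_back(boost::shared_ptr<MyClass>(new MyClass));
instances.push_back(boost::shared_ptr<MyClass>(new MyClass));

// When the vector is cleared (or goes out of scope), the reference counts
// drop to zero and both MyClass instances are deleted automatically.
instances.clear();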

The shared_ptr headers include the counterparts of the standard C++ static_cast<>, dynamic_cast<>, const_cast<> and even reinterpret_cast<> operators:

  • boost::static_pointer_cast<TYPE> complements static_cast<TYPE*>
  • boost::dynamic_pointer_cast<TYPE> complements dynamic_cast<TYPE*>
  • boost::const_pointer_cast<TYPE> complements const_cast<TYPE*>
  • boost::reinterpret_pointer_cast<TYPE> complements reinterpret_cast<TYPE*>

An example of downcasting shared_ptrs:

// Create a new baseclass pointer, but use a Subclass instance to initialise it (which is valid, the pointers are compatible)
boost::shared_ptr<MyBaseClass> baseClassPtr(new MySubClass);

// Perform a dynamic_cast<> using the baseClassPtr, the result is a new shared_ptr.
boost::shared_ptr<MySubClass> subClassPtr = boost::dynamic_pointer_cast<MySubClass>(baseClassPtr); 

// If the dynamic_cast succeeded, the subClassPtr variable is non-NULL

typedefs

As typing boost::shared_ptr<MyClass> is tiresome, use this convention to create short-hand types:

typedef boost::shared_ptr<MyClass> MyClassPtr;

So the convention is to append "Ptr" to your class name to denote a shared_ptr type. The allocation code then becomes nicer:

// Allocate a new MyClass instance
MyClassPtr mc(new MyClass);

mc->DoSomething();

The codebase is so friggin' huge!

Agreed, the codebase is pretty huge. I can perfectly imagine that a newcomer is overwhelmed when looking at the code. Here are some general notes and directions:

Use a good IDE for coding. I recommend Visual C++ 2005 or 2008 Express Edition (VC++) for Windows users (the free one); Linux people will surely find something suitable in their package manager.

Using VC++

  • Move your mouse over variables to see their type.
  • In Debugging mode: move your mouse over a variable to inspect it (including base classes)
  • Right-Click a class or function name and choose Go to Declaration to find the header file where this class or function is declared.
  • Right-Click a function and choose Go to Definition to jump to the function body, which is useful when reading the code. Note that this doesn't work very well with virtual functions. Implementations of pure virtual functions cannot be found this way at all; use your text search to look for ::DoSomething instead to find the places where such a function is defined.
  • Switch on the "Code Definition" window and dock it somewhere. When you place the cursor over an identifier, you can see the definition of that type in that small window (useful for looking at enums and structs without leaving your current view).
  • Right-Click a class, function or variable name and choose Find all References to find all the places where this object is used. This is useful to find the callers of a function or see whether a function is used at all.
  • Use the search function and choose the search scope Current Document or Current Project. There is also a full-text search across the entire codebase available in that search widget: choose Find in Files in the top-left dropdown field.

Where does Code Execution "start"?

You might wonder where the "starting point" of the code is. The key to understanding this is the idGameLocal class, which implements a set of interface functions. These methods are called by the "core" engine (i.e. the EXE) on several occasions:

During Gameplay

Each frame, the core engine calls idGameLocal::RunFrame(), without exception. If this function is not being called, there is no active map running.

This function does a variety of things, like calculating the lightgem, processing the Stim/Response system and, most importantly, letting all active entities "think", i.e. calling the function idEntity::Think(). Hence everything that needs to be done each frame is located in each entity's thinking routine. This usually includes RunPhysics() and Present().
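
As a rough sketch, a custom entity's thinking routine typically looks like this (MyEntity is hypothetical, modelled on the base idEntity::Think()):

void MyEntity::Think(void)
{
   // Evaluate the physics for this frame (movement, gravity, collisions)
   RunPhysics();

   // Per-frame gameplay work goes here - keep it as cheap as possible

   // Present the entity to the renderer so it shows up in the scene
   Present();
}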

After all entities have finished thinking, the event queue is processed. See the article Event System for more details.

When a new Map starts

Whenever you type map test in the console or start a map via the menus, the following happens:

  • The engine checks whether the map exists.
  • The engine calls idGameLocal::InitFromNewMap() which loads the map file, initialises the AAS, spawns the entities, etc.

After InitFromNewMap() has finished, the game runs a few frames to let the physics settle and then starts to fade in. From there on, see the section During Gameplay above.

When a Game is loaded

Things are a bit different when a map is loaded or restored from a savegame file. In this case the engine calls idGameLocal::InitFromSaveGame(), which instantiates all registered objects and de-serialises all member variables from the savegame file stream. An important thing to note is that entities are not spawned during game load. The entity classes are automatically instantiated by InitFromSaveGame(), and the idEntity::Restore() methods are invoked instead, which puts the entities into the exact state they were in at save time. The map geometry and AAS file are still loaded afresh from disk; only the "volatile" data like class members and other game state objects and variables are saved into the savefile.
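
A minimal sketch of the Save/Restore pair every savegame-aware class needs to provide (MyEntity and its health member are made up for illustration):

void MyEntity::Save(idSaveGame* savefile) const
{
   savefile->WriteInt(health); // serialise the member into the savegame stream
}

void MyEntity::Restore(idRestoreGame* savefile)
{
   savefile->ReadInt(health);  // read it back in exactly the same order
}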

After the map is loaded/restored, the engine proceeds to call idGameLocal::RunFrame(), as one might expect.

During Main Menu Display

Note that idGameLocal::RunFrame() is not called while the main menu is shown, which is logical, as no map is active at that time. The only SDK function that is called during this phase is idGameLocal::HandleMainMenuCommands(const char*). Whenever the GUI code sets the cmd state variable, the command is passed to HandleMainMenuCommands() to take the appropriate action.

Let's say the mainmenu.gui executes this code:

set "cmd" "close;";

This code line is roughly equivalent to this call:

gameLocal.HandleMainMenuCommands("close");

Take a look at that method (it's in game/game_local.cpp btw.) to see how these commands are handled.
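
As a hedged sketch, the handling inside that method boils down to string comparisons along these lines (the real method handles many more commands and parameters):

void idGameLocal::HandleMainMenuCommands(const char* menuCommand)
{
   if (idStr::Icmp(menuCommand, "close") == 0)
   {
      // react to the "close" command issued by mainmenu.gui
      Printf("Main menu command received: %s\n", menuCommand);
   }
}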

This and That

How to find all spawnargs matching a given prefix

The idDict class provides a handy method, MatchPrefix, which takes the prefix string plus an optional pointer to the previously found key/value pair. Use this loop to traverse all matching spawnargs (in this example the prefix is "used_by"):

for (const idKeyValue* kv = spawnArgs.MatchPrefix("used_by"); kv != NULL; kv = spawnArgs.MatchPrefix("used_by", kv))
{
   // Do something with the key and the value
   DoSomething(kv->GetKey(), kv->GetValue());
}