Unity Developer's Notes

Note: Today is 12/15/2014. Most of the information on this page is quite old, so some of it is dated and some issues have since been resolved. Most of the core operational issues and ideas remain the same. I will date each section as I update its information or add new material.

Purpose

This page is for personal reference and posterity on issues surrounding developing in and using the Unity game engine and its APIs. It's a place where I can refresh my memory on things that are not intuitive to me usage-wise or are obscure details. Unfortunately, the need for such a page is increased by the state of the Unity documentation. Unity seems to be a tool targeted more toward transitioning people into game creation than toward companies that write games and simulations as a core competency. So, much of the documentation surrounding the visual aspects of the tool is fairly complete and user friendly; however, the documentation covering some of the more advanced features of the API is lacking. I suppose one could successfully argue that a developer using those advanced features should be familiar with them anyway. Indeed, most of the advanced features I am referring to are mechanisms, issues and concerns that are very similar between game engines and rendering engines: colliders, three-dimensional transformations (and other mathematics), physics concepts, etc. I am not knocking Unity, I just wish they would polish the API documentation a bit. With that said, this document is a catalog of issues that are useful to me and hopefully to someone searching the web who is stuck on a problem that is not obvious from reading the docs or the forums.

Unity GUI

Pitfalls and idiosyncrasies

I was in the process of going into depth about all the land mines that lie in the GUI API, but it appears that Unity is overhauling it in a big way. So instead of wasting time on that here, we will just hope for some good change in this area.

MonoBehaviours

Using the life cycle methods

There are several life cycle methods that you can implement on a MonoBehaviour in order to get work done or get access to special engine states and functionality. The order of these is significant and is described here. I think (but am not sure) that Unity uses the SendMessage mechanism, or at least uses reflection, to invoke the life cycle methods on the MonoBehaviour. I also suspect that it uses the Coroutine construct under the hood for invoking them, since several of the methods can optionally return an IEnumerator and yield over several frames.

  • MonoBehaviour methods are invoked by name, using a reflective mechanism. Neither their return type nor their protection level matters.
  • As noted, they have a special order in every frame. The compiler will also not complain if you misspell one of them; it just will not work.
  • MonoBehaviours that do not contain known life cycle methods appear to be enabled by default, and in the Inspector the enable/disable check box goes away, making them always on. If it's not doing anything then why turn it off?
  • Classes that act as MonoBehaviours need not extend MonoBehaviour directly; they can extend another class that extends MonoBehaviour. In this case the parent class may actually live outside the default package (no namespace). Being in the default package is a restriction Unity places on things that are hangable on the scene graph; as long as the last class in the inheritance chain is in the default package all will be well.
  • Classes that are hung on the scene graph must not themselves be generic. In other words, their type must not have a generic type parameter, but they can realize generic interfaces and extend generic base classes.
  • The class must be a public class in a file with the same name (not sure if this is true with 3.0).
  • As of 3.x you can (supposedly) load MonoBehaviours from DLLs but I have not tried this.
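
The inheritance and generics rules above can be sketched like this (class names are hypothetical; a sketch, not tested against every Unity version):

using UnityEngine;

namespace com.mypackage
{
    // A generic base class is fine as long as it is never hung
    // on the scene graph directly.
    public abstract class BaseController<T> : MonoBehaviour
    {
        protected T model;
    }
}

// The leaf class that actually gets attached must be non-generic
// and live in the default package.
public class EnemyController : com.mypackage.BaseController<int>
{
    void Update ()
    {
        // model is inherited from the generic base
    }
}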

Proper property based data objects and controllers

There are some places where Unity really encourages developers to do the wrong thing; almost all of them are around using the Inspector in the development process. One of those is the exposure of data elements as public fields on MonoBehaviours. The usage of public fields on classes is a code smell, and as of 3.0 I don't think the Inspector honors C# properties. So to honor the encapsulation cornerstone of OO programming, we must rely on a trick.

Basically the idea is to expose a public, Inspector-friendly field on the object while maintaining the runtime value in a real property, either explicitly backed or in an auto property. The design-time Inspector value is copied down during the Awake() phase. Here is an example:

using System;
using UnityEngine;
 
namespace com.mypackage
{
    public class MyControllerWithProperties : MonoBehaviour
    {
        public String _aString;
 
        protected String aString;
 
        virtual public void Awake()
        {
            // copy down the inspector value
            if (String.IsNullOrEmpty(_aString))
            {
                aString = "default";
            }
            else
            {
                aString = _aString;
            }
        }

        // an auto property would work too, but only in Unity 3
        public String AString
        {
            get { return aString; }
            set { aString = value; }
        }
    }
}

This is an example of how to shadow Unity Inspector variables with real properties. Notice the trick with the public instance variable: the underscore is not visible because the field pretty-printing in the Inspector ignores non-alphabetic characters, so _aString and aString look the same to the user. I'm not really worried about Unity doing anything with the variable, although that is certainly in the realm of the possible. I am more worried about users using the public field at runtime, which would allow them to circumvent any useful stuff in a real property, like property change notification or content checking.

Coroutines

Conceptual example

using UnityEngine;
using System.Collections;
using System;
public class CoroutineTest : MonoBehaviour
{
    public void Awake ()
    {
        Log ("Awake");
    }
 
    public void Start ()
    {
        Log ("Start");
        StartCoroutine (CoroutineOne (Time.frameCount));
        StartCoroutine (RunOnceCoroutine ());
    }
 
    public void Update ()
    {
        Log ("Update");
    }
 
    public void LateUpdate ()
    {
        Log ("LateUpdate");
 
        if (Time.frameCount == 2)
        {
            StartCoroutine (CoroutineTwo (Time.frameCount));
        }
    }
 
    public void FixedUpdate ()
    {
        Log ("FixedUpdate");
    }
 
    public IEnumerator CoroutineOne (int startFrame)
    {
        int iteration = 1;
        Debug.Log(String.Format("Coroutine 1 starting in frame {0}.", startFrame));
 
        while (true)
        {
            Debug.Log (String.Format ("Coroutine 1 has iterated {0} times.", iteration++));
            yield return "Anything you want to return because the Unity coroutine hack does not use it.";
        }
    }
 
    public IEnumerator CoroutineTwo (int startFrame)
    {
 
        int iteration = 1;
        Debug.Log(String.Format("Coroutine 2 starting in frame {0}.", startFrame));
 
        while (true)
        {
            Debug.Log (String.Format ("Coroutine 2 has iterated {0} times.", iteration++));
            yield return Mathf.PI;
        }
    }
 
    public IEnumerator RunOnceCoroutine ()
    {
        Debug.Log ("I am a terminating coroutine, I will print this and then become one with the ether.");
        yield return "Like tears in rain";
    }
 
    public static void Log (String method)
    {
        Debug.Log (String.Format ("{0}() - Frame {1}", method, Time.frameCount));
    }
}

This is used to demonstrate certain subtle principles and concepts around the usage of Unity coroutines and their execution order in the Unity engine runtime. It also includes the generated iteration blocks for the interesting parts of the IEnumerator backing the coroutines and the output of the first few frames when executed.

This example was created to illustrate some features and subtleties that seem (based on some observation) to be commonly misunderstood in practice. Using the yield statement (which coroutines are built on) actually generates a fair amount of grunt code for you at compile time. Understanding these mechanisms, what they do and do not do, is the real key to using coroutines elegantly and applying them correctly.

  • State Mechanics - Notice in the generated IEnumerator that every yield signifies a state transition in the coroutine method; unless there is a while loop or some other looping conditional, that state in the method is over once it has run. Looping constructs are flattened to essentially if statements; there is no looping in the MoveNext function. This is an important concept to remember. It is also helpful to know that everything up until the first yield is executed from the code (and in the frame) that started the coroutine, so it's called immediately. Therefore, any initialization code you put into the coroutine before the first yield will be executed synchronously with the StartCoroutine call no matter what yield instruction comes in the first yield… make sure any scene references or variable references are ready for that.
  • Context - The context of the method (and the owning class's members) that contains the yield statement is carried through with the coroutine if it is required by code executing in the coroutine. Observe figures two and three above. All the functionality and context variables are carried through during the compile process to the new IEnumerator. This is commonly referred to as variable capture. The entire generated class is not listed, but class members for each piece of data required in the coroutine are generated on the new class. Then the compile process generates code in what was the coroutine method to set those member values on the generated class (see Figure 2). For example, the external startFrame value is set onto the generated object inside of the new coroutine method. All of this means that there is no need to create class instance variables or run-once initialization blocks for the data used in the coroutine if it's executed over more than one frame. The yield mechanism takes care of that for you by building a small state machine from the code that was in the original method (see Figure 3).
  • Coroutines are not threads - If you are coming from C++ or Java, yield is something that threads do. This is not so in C#. Unity has furthered the confusion by making the semantics of coroutines very thread-like: you start them, they run independently, you can stop them, etc. But make no mistake, they are not threads. Coroutines are a trick built on top of yield that allows the apparent forking of processes by executing the started coroutines (think iteration over the IEnumerators) at a fixed time after the Update call. The IEnumerator is 'iterated over' until it expires based on the normal rules of the yield statement, and the coroutine exits. Each frame produces one iteration in the coroutine block. I emphasized iterated over because Unity does not really do anything with the thing that is returned from the IEnumerator unless the thing returned is a YieldInstruction. If a YieldInstruction is returned, then the engine executes (or yields over) the instruction and then continues to iterate over the original coroutine. This is not documented but I believe it to be true; this is how you can yield over a frame with yield return 0; and then yield return new WaitForSeconds(5.0f); and then continue on. This is one of the main reasons I think of coroutines as a beautiful hack. They are very useful.
  • Execution time and order - Coroutines are started with a call to StartCoroutine. During the frame that kicks off the coroutine, the first iteration of that coroutine is actually executed in the method that starts it. Each subsequent execution of the coroutine happens after Update. For example, say I start a coroutine in LateUpdate that prints three lines of text over three yield statements. The first statement will be printed during the LateUpdate; the last two will actually be printed before the LateUpdate (just after Update) in subsequent frames. Based on documentation and observation, I would say that putting physics calculations, or calculations that must rely on constant time rather than frame time, in a coroutine will not work. This is true even if the coroutine is started in FixedUpdate, because only the first iteration of that coroutine will be executed in fixed time. Coroutine frames are executed with the same rules as Update, not FixedUpdate. This can be readily observed in Frame 4 of Figure 5 above.
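
To make the YieldInstruction behavior above concrete, here is a small sketch (hypothetical component; timings are approximate, not measured):

using System.Collections;
using UnityEngine;

public class YieldInstructionDemo : MonoBehaviour
{
    public void Start ()
    {
        StartCoroutine (DelayedWork ());
    }

    public IEnumerator DelayedWork ()
    {
        // Runs synchronously inside Start(), in the same frame.
        Debug.Log ("Setup");

        // A plain value just waits for the next frame; the value
        // itself is ignored by the engine.
        yield return 0;
        Debug.Log ("One frame later");

        // A YieldInstruction is executed by the engine before
        // iteration of this coroutine resumes.
        yield return new WaitForSeconds (5.0f);
        Debug.Log ("About five seconds later");
    }
}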

Chaining and Stacking

Because a coroutine method understands its surrounding context and can have context of its own, it becomes useful for stateful, ordered tasks or parallel tasks. If utility methods that do common things are written coroutine friendly (they return an IEnumerator), then those utilities can be chained and augmented in powerful ways. Take the following example:

    protected IEnumerator OverallProcess ()
    {
        yield return StartCoroutine (SubProcess1 ());
        Debug.Log("In between");
        yield return StartCoroutine (SubProcess2 ());
    }

    protected IEnumerator SubProcess1 ()
    {
        Debug.Log ("Process one A");
        yield return 0;
        Debug.Log ("Process one B");
        yield break;
    }

    protected IEnumerator SubProcess2 ()
    {
        Debug.Log ("Process two");
        yield return 0;
    }

yields (no pun intended) the output…

Process one A
Process one B
In between
Process two

This is a simple pattern for taking smaller utility methods written as coroutine-friendly methods and ordering them over several frames. If we wanted processes one and two to execute in a pseudo-parallel fashion, we would drop the yield like so:

    protected IEnumerator OverallProcess ()
    {
        StartCoroutine (SubProcess1 ());
        Debug.Log("In between");
        StartCoroutine (SubProcess2 ());
        yield break;
    }

…which changes the output (see below). There are several subtle variations that can be accomplished with the built-in Unity YieldInstructions like WaitForEndOfFrame(), etc.

Process one A
In between
Process two
Process one B

Pivot points on imported models

It is not uncommon for art to export a model that has a non-standard pivot point. This has an effect on how Unity treats the object, and it will have a huge effect on you if you are trying to actually calculate anything to do with the thing's position and size. The model may be exported with a pivot point such that the actual local center of the object is not (0, 0, 0), which it typically should be for most items. Let's say we have a rectangular object like a shoe box: X (width) is 1, Y (height) is 2, and Z (depth) is 2. If the pivot point is set such that the local center is (0, -1, 0), then the center reported by gameObject.transform.position will not be the real, physical center of the object. If you rotated the shoe box around its up axis, it would spin around the end of the box. Any calculations that you do from the transform's position that involve rotation, scaling, extents, etc. will be skewed. In order to get what you would think of as the center of the object, you will have to get the renderer's center. Also, there is no visual indication in the editor that this is the case; it lies about the center of the object if the pivot is not set correctly.
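A quick way to see the difference is to compare the pivot position against the rendered center (hypothetical component; a sketch):

using UnityEngine;

public class CenterProbe : MonoBehaviour
{
    public void Start ()
    {
        // The transform position is the pivot, which may not be
        // the physical center if the model was exported with an
        // offset pivot point.
        Debug.Log ("Pivot: " + transform.position);

        // The renderer's bounds center is the visual center of
        // the object in world space.
        Renderer r = GetComponent<Renderer> ();
        if (r != null)
        {
            Debug.Log ("Visual center: " + r.bounds.center);
        }
    }
}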

Bounds

Issues using bounds
Be careful when using bounds. Bounds can be very tricky for determining the size and volume of an entity, and bounds from different places mean different things.

Bounds on colliders and renderers

Bounds on colliders and renderers are in world space. That means that for a given object, its extents and size change with its orientation. Here is an example. A cube with a width of one is sitting with its center on the origin, and its Z face is aligned with world space Z. The collider bounds will report that the size is (1, 1, 1) and the extents are (.5, .5, .5), which is what we would expect. If we rotate the cube 45 degrees around its Y axis, then the bounds change to reflect the footprint that the cube creates when mapped onto the world XZ plane. In other words, the size becomes (1.414, 1, 1.414) and the extents report a similar (.707, 1, .707); this is because the diagonal of a unit square is 1.414. This works great if you need a 2D projection for the purposes of raycasting, but will give unexpected results if you intend to find the characteristics of your object in 3D space. If the object has an orientation other than that of world space you will get misleading results.

A subtle clue in the documentation also tips you off to the fact that the bounds do not follow the orientation of the thing they are binding. Bounds have no orientation, and if you impose an orientation on them from the transform of the thing they came from, you calculate bounds that are oriented correctly but that grow and shrink with orientation, which is obviously incorrect. A cube does not "grow" depending on its orientation. Also, the faces of a cube calculated from the bounds do not remain aligned with the space of the thing it binds; they remain aligned with the world axes always. For example, if you have a cube that is scaled (1, 1, 5) and rotate it 90 degrees around the Y axis, then the renderer and collider will report a size of (5, 1, 1) when the real size has remained (1, 1, 5). If the cube were rotated only 45 degrees, then both the X and Z sizes would be about 4.24 ((1 + 5) × .707), and so forth. But during all this, a cube calculated from the bounds of the renderer and collider is always oriented in world space.
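The "growth" of the world-space bounds under rotation can be watched with a quick sketch (hypothetical component, attached to a cube scaled (1, 1, 5); values are approximate):

using UnityEngine;

// Watch the world-space bounds swell as the object rotates,
// even though the cube itself never changes size.
public class BoundsGrowth : MonoBehaviour
{
    public void Update ()
    {
        transform.Rotate (0f, 45f * Time.deltaTime, 0f);

        // For a (1, 1, 5) cube at 45 degrees this reports roughly
        // (4.24, 1, 4.24), not (1, 1, 5).
        Debug.Log ("AABB size: " + GetComponent<Renderer> ().bounds.size);
    }
}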

More accurate bounds

For calculation or approximation of volumes during the game, use the bounds on the mesh (see the docs for MeshFilter). These bounds report sizes that are aligned to the local space of the thing they bind. For this reason they never change in size or position unless the entity really is resized. The center of these bounds is typically (0, 0, 0) but can be some small offset depending on the model. Art calls this setting the "pivot point", and they have to change it if it needs changing. You should write your code to take into account (subtract out) centers that are not (0, 0, 0) when appropriate, which is a trivial calculation. This mostly affects transitioning points back and forth to world space. The virtual bounding cube that is calculated using these bounds most accurately represents the volume of the entity in 3-space, and the faces of that cube are oriented with the mesh it was taken from. Oddly enough, the bounds property on the renderer is globally oriented (in world space), but internally the engine seems to regard a volume calculated from the mesh bounds (a local bounds) for actual collider sizing and collision detection. Colliders fire when colliding with a volume represented by the mesh bounds rather than the collider or renderer bounds.

It is important to remember (and tempting to forget) that projections or translations based on local bounds attributes (like calculating a rotation of bounds.max) must be done with the orientation of the entity in mind. So its transform will be your friend in these calculations, not Vector3. Otherwise you will get really strange results; Vector3.up ain't "up" to local bounds most of the time, so don't use absolute directions in calculations.

Volumetric containment

One of the common things one would want to do is find out if a transform is located inside the volume of something. A really good estimation of this (not perfect for non-rectangular shapes) is to use the mesh's bounds (see the above discussion). Renderer and collider bounds are useless for this because of the issues already stated. However, there are still some caveats to the mesh.bounds approach that a practitioner should be aware of. Mesh bounds are in the local space and orientation of the thing they are binding. So to test if the middle of something is inside the mesh volume, you have to move the thing you are testing into the local space of the thing that owns the volume. There are two ways to do this:

To see if A is in B, do either:

  1. See if the mesh.bounds contains A - B (this disregards scale, so it requires your own scaling. Not recommended)
  2. See if the mesh.bounds contains B.InverseTransformPoint(A)
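
Option 2 can be sketched like so (hypothetical helper; a sketch, not production code):

using UnityEngine;

public static class VolumeTest
{
    // Returns true if world-space point 'a' lies inside the mesh
    // volume (approximated by its local bounds) of 'b'.
    public static bool Contains (Transform b, Vector3 a)
    {
        MeshFilter filter = b.GetComponent<MeshFilter> ();
        if (filter == null)
        {
            return false;
        }

        // Move A into B's local space, then test against the local
        // mesh bounds. InverseTransformPoint accounts for position,
        // rotation and scale.
        Vector3 localPoint = b.InverseTransformPoint (a);
        return filter.sharedMesh.bounds.Contains (localPoint);
    }
}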

One would think that we could take the mesh.bounds (local, so the center is (0, 0, 0)), mathematically move them to the position of B, and then perform a contains. This does not work, though, because of features/bugs in Unity. Altering the center or the MinMax of the bounds corrupts its state in strange ways; notably, just re-centering it resets the extents. So basically you have to move A into B's space to check it, and this seems to be the only accurate way I have found to do so.

Issues around scale

Transforms

Scale creep

It should be a best practice to set scale only on leaf nodes in the scene hierarchy, if scaling is necessary. This is because scaling is cumulative, and you will almost never want to blindly rescale every transform attached to something as a child. Transform.lossyScale attempts to figure out, based on the scene hierarchy, what the "real scale" should be. For instance, if I have a gun model that I place under a game object whose transform's scale is (.5, .5, .5), and that has a parent transform whose scale is also set to (.5, .5, .5), then the real scale of the model will be (.25, .25, .25) regardless of what its scale is set to via the model's import, and it will be visually distorted if these are not correct for the model. Simply put: don't assign scale to things that are going to house other things, except when this batch scaling is the desired result.
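
The cumulative effect can be observed with a quick sketch (hypothetical hierarchy built in code):

using UnityEngine;

public class ScaleCreepDemo : MonoBehaviour
{
    public void Start ()
    {
        GameObject parent = new GameObject ("parent");
        parent.transform.localScale = new Vector3 (0.5f, 0.5f, 0.5f);

        // Parent first, then set the local scale, so the local
        // value is interpreted relative to the scaled parent.
        GameObject child = new GameObject ("child");
        child.transform.parent = parent.transform;
        child.transform.localScale = new Vector3 (0.5f, 0.5f, 0.5f);

        // localScale reads (.5, .5, .5) but the world scale has
        // compounded to (.25, .25, .25).
        Debug.Log ("lossyScale: " + child.transform.lossyScale);
    }
}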

There are situations where this can get hopelessly confused. If the scene hierarchy contains content that is meant to logically represent physical hierarchy and structure (like the interior of a building), and the model meshes or virtual bounding containers are all offset from each other in their orientation, scaling will be nearly impossible. For example, suppose I have a tree that represents buildings in a village, floors in a building, rooms on a floor, things in a room, etc. At each of these levels the room or furniture has an arbitrary orientation (rotation) and a scale. Things that are placed under these items for the purposes of logical grouping might never be able to be scaled correctly. The Unity documentation says this:

Please note that if you have a parent transform with scale and a child that is arbitrarily rotated, the scale will be skewed. Thus scale can not be represented correctly in a 3 component vector but only a 3x3 matrix. Such a representation is quite inconvenient to work with however.

This is why it's important that art assets that are used for purposes such as this be oriented with world space and have a scale of (1, 1, 1).

Transformations and scale

When transforming points or using the API of the Transform class, the engine is doing a lot of work for you. If you are using a plain Vector3 in conjunction with an entity that has a transform, you have to orient those values to the context of the transform for the math to work. For example, let's say I want to get the world coordinates of the maximum point of my mesh:

boundsMax = transform.TransformPoint(bounds.max);

this is essentially the same as (ignoring the translation by transform.position)

localBoundsMax = transform.rotation * Vector3.Scale (bounds.max, scale);

where scale is just the scale from the transform. The point is that TransformPoint hides the scale calculations for you, so make sure you use the transform, especially where rotating and moving points based on scaled transforms is concerned. But the inverse is not true, so be careful with that.

Entities and scale

Most things in Unity have only one implied size once they are in Unity, including models. That size is (1, 1, 1), and any variation of that is accomplished through scaling. This sort of ties into the "Scale Creep" discussion above. A primitive or some other imported model that has been deformed in Unity, either by setting the scale on the transform or via the editor, will have to be treated differently, especially for local bounds. For instance, say I want to find the angle (from the X axis, in the XZ plane) between bounds.center and bounds.max. This is accomplished like so:

float xZTheta = Mathf.Atan2 (bounds.extents.z * scale.z, bounds.extents.x * scale.x) * Mathf.Rad2Deg;

Note that the extents have to be scaled (scale is transform.localScale). This is because the mesh bounds report that the object is only (1, 1, 1) with extents of (.5, .5, .5)! There is no spoon… Unity assumes you will always use the Transform class API to do everything and take care of scale for you. So the item is a unit, and Unity lets scale do the rest. The problem is that the Transform class doesn't do everything you will ever need, so just be careful.

Larger Scale Development

One thing that remains a challenge with the Unity tool is doing game development with separated art, design and software concerns. Many of these issues are fixed by putting policies and procedures in place and are not specific to the Unity tool, but there are some things that you might have to do to use the tool effectively in this environment.

Continuous integration (simulated)

Unity provides you with the hooks necessary to script most any build process you need. Mac is a preferable environment since it is the only supported platform with an actual shell. Here are some steps that are either not mentioned in the official documentation or where I have found that deviating from the documentation provides useful results. A complete list of command line interaction for Unity can be found here.

Kicking a build

If you are using the Asset Server then there are a lot of fortunate side effects of this (and some unfortunate ones as well). That product is basically a PostgreSQL installation that has been modified (exactly how, I am not sure) but is still a regular PostgreSQL database at the core. That means you can treat it like one (see the Asset Server and psql section). If you set up a cron job to run every hour, you can run a psql query to get the latest revision from the asset server. You can then do some clever shell scripting to either see if that revision has already been built or check a text file for the revision of the last build. If every hour is too coarse, then make it every 30 minutes, or 10. Adjust the settings according to your project; I have seen builds take 20 minutes.

Building

The build process is documented here. However, I have seen issues both with assets instrumented on the scene graph and when building asset bundles independently via the API (note: the latter is still an issue and an open bug as of 3/14/2011). The former issue, which was also a bug, was worked around by a coworker of mine and is described here. My suspicion is that invoking Unity in this fashion gives the process the maximum available heap size, as it fixed our out-of-memory issues with all other things being equal. This is just a guess, of course.

External bundles

TODO

Unity proprietary binary formats

Unity uses proprietary binary formats for most non-asset project artifacts. Prefabs, scene files and bundles are all this way. This causes some rather unfortunate side effects for large games and teams. For this reason you should really consider your use case and project development process before relying on one of these formats.

It's because of this that enforcing a more data-driven game configuration and design will go a long way with Unity. By default, Unity very much encourages you to use the Inspector tool to do almost everything. I would contend you should use it to do almost nothing (for larger teams and non-trivial projects) except real 3D scene orchestration and art-related issues. In fact, there is usually a mandate on most of my projects that artists, or more art-oriented software people, are the only ones who change the scene and prefab files.

Another reason, and probably the biggest, to use an external, data-driven approach is that there is no way to properly manage these project artifacts from a configuration management standpoint; they are essentially binary blobs. What does this mean for your team? Well, if you have a team of 25 artists and developers, all changing the scene graph, and two people have just spent the last 45 minutes orchestrating and tweaking stuff on the scene graph… someone gets to redo their work! In fact, we usually set up a wiki page as a makeshift lock. People "check out" a scene by putting their initials next to it on a wiki page used for this purpose and remove them after they have committed the change.

You are thinking, "…there must be a better way." Please, if anyone has a better way to deal with these artifacts, contact me and let me know. I will definitely appreciate it and share it here. Unity has a diff tool that allows you to see the differences between two versions of a scene, but there is no way to merge the work that two people have been doing independently. The first check-in wins; the other guy(s) is screwed. Use additive scenes liberally! They help ease this pain.

Asset Server and psql

The Asset Server is just a PostgreSQL installation. This comes in handy when you want to do automated reporting or during automated build processes. The command-line client is available for Mac and Linux… for other OSes you might want to check out SQuirreL, which works on all three and is a very nice interface. If you use SQuirreL you will have to grab the JDBC drivers from the database web site and install them. I'll leave this as an exercise, or you can contact me.

Get the most active files
Gets the most active files, check-in wise; this points to junk drawers, process bottlenecks and god classes.

select count(asset) as occurs, name from assetversion WHERE assettype = 7001 GROUP BY asset, name ORDER BY occurs DESC

Comment policy check
Gets a list of people that are not obeying the check in comment policies today

select username, changeset.serial as revision, description, commit_time from person JOIN changeset on (changeset.creator = person.serial) where now()::date = changeset.commit_time::date AND description NOT LIKE '<insert pattern here>'

Commit count by committer
For all you pointy-haired bosses out there, here is a query that gets commit count by committer. I would issue some apologetics here, but if you actually think you can judge work quantity by commit count then you're an idiot and a lost cause anyway.

select username, count(changeset.serial) as commits from person JOIN changeset on (changeset.creator = person.serial) GROUP BY person.username

Unless otherwise stated, the content of this page is licensed under GNU Free Documentation License.