Svelto TaskRunner – Run Serial and Parallel Asynchronous Tasks in Unity3D

In this article I will introduce a better way to run coroutines using the Svelto.Tasks TaskRunner. I will also show how to run coroutines between threads, easily and safely. You can finally exploit the power of your processors, even if you don't know much about multithreading. If you use Unity, you will be surprised by how simple it is to pass results computed from the multithreaded coroutines to the main thread coroutines.

What we already have: Unity and StartCoroutine

If you are a Unity developer, chances are you already know how StartCoroutine works and how it exploits the powerful C# yield keyword to time-slice complex routines. A coroutine is quite a handy and clean way to execute procedures over time or, better, asynchronous tasks.

Lately Unity has improved its support for coroutines and new fancy things are now possible. For example, it was already possible to run tasks in serial doing something like:
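    // a reconstruction of the idea: each yield completes before the next starts
    // (the URL is just a placeholder)
    IEnumerator RunTasksInSerial()
    {
        yield return new WaitForSeconds(1f);

        yield return new WWW("http://www.myserver.com/data");
    }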

And apparently* it was also possible to exploit a basic form of continuation starting a coroutine from another coroutine:
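    // TaskA resumes only once TaskB's coroutine has completed
    // (task names are placeholders)
    IEnumerator TaskA()
    {
        yield return StartCoroutine(TaskB());

        Debug.Log("TaskB completed, TaskA continues");
    }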

*apparently, because I never tried this on Unity 4 and I wasn't aware at the time that it was possible.

However lately it’s also possible to return an IEnumerator directly from another IEnumerator without running a new coroutine, which is actually almost 3 times faster, in terms of overhead, than the previous method:

Running parallel routines is also possible, however there is no elegant way to exploit continuation when multiple StartCoroutine calls happen at once. Basically there is no simple way to know when all the coroutines are completed.

I should add that Unity tried to extend the functionality of coroutines by introducing new concepts like the CustomYieldInstruction, however it failed to create a tool that can be used to solve more complex problems in a simple and elegant way: problems like, but not limited to, running several sets of parallel and serial asynchronous tasks.

Introducing Svelto.Tasks

Being limited by what Unity can achieve, a couple of years ago I started to work on my TaskRunner library, and I have spent the last few months evolving it into something more powerful and interesting. The set of use cases that the TaskRunner can now solve elegantly is quite broad, but before showing a subset of them as examples, I will list the main reasons why the TaskRunner should be used in place of StartCoroutine:

  • you can use the TaskRunner everywhere, you don’t need to be in a Monobehaviour. The whole Svelto framework focuses on shifting the Unity programming paradigm from the use of the Monobehaviour class to more modern and flexible patterns.
  • you can use the TaskRunner to run Serial and Parallel tasks, exploiting continuation without needing to use callbacks.
  • you can pause, resume and stop whatever set of tasks is running.
  • you can catch exceptions from whatever set of tasks is running.
  • you can pass parameters to whatever set of tasks is running.
  • you can exploit continuation between threads (!).
  • whatever the number of tasks you are running, the TaskRunner will always run just one Unity coroutine (with some exceptions).
  • you can run tasks on different schedulers (including schedulers on different threads!).
  • you can transform whatever asynchronous operation into a task, thanks to the ITask interface.

A subset of the use cases that the TaskRunner is capable of handling is what I am going to show you now, and I am sure you will be surprised by some of them :). The TaskRunner can be used in multiple ways and, while performance doesn't change much between methods, you will fully exploit the power of the library only by knowing when to use what. Let's start:

The simplest way to use the TaskRunner is to call the Run function, passing whatever IEnumerator into it.
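For example (a minimal sketch; CountToTen is just a placeholder):

    void StartMyTask()
    {
        // no Monobehaviour needed, this can run from any class
        TaskRunner.Instance.Run(CountToTen());
    }

    IEnumerator CountToTen()
    {
        for (int i = 0; i < 10; i++)
            yield return i; // yielding a value type: this is where boxing happens
    }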

This simply does what it says on the tin. It's very similar to the StartCoroutine function, but it can be called from everywhere. Just pay attention to the fact that the TaskRunner uses the non-generic IEnumerator underneath, so using a generic IEnumerator with a value type as parameter will always result in boxing (as shown in the example above).

The TaskRunner can also be used to run every combination of serial and parallel tasks in this way:
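    // a sketch using the SerialTaskCollection and ParallelTaskCollection
    // classes; the task names are placeholders
    void RunSerialAndParallel()
    {
        var parallelTasks = new ParallelTaskCollection();
        var serialTasks = new SerialTaskCollection();

        parallelTasks.Add(TaskA()); // A and B run in parallel
        parallelTasks.Add(TaskB());

        serialTasks.Add(parallelTasks); // first the parallel block...
        serialTasks.Add(TaskC());       // ...then C, once A and B are both done

        TaskRunner.Instance.Run(serialTasks);
    }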

But why not exploit the IEnumerator continuation? It’s more elegant than using callbacks and we don’t need to use a SerialTaskCollection explicitly (with no loss of performance). We won’t even need to use two ParallelTasks:
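    // the same flow through continuation; again a sketch with placeholder names
    IEnumerator Flow()
    {
        var parallelTasks = new ParallelTaskCollection();

        parallelTasks.Add(TaskA());
        parallelTasks.Add(TaskB());

        yield return parallelTasks; // resumes only when A and B are both completed

        yield return TaskC();
    }

    // then, somewhere: TaskRunner.Instance.Run(Flow());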

If you feel fancy, you can also use the extension methods provided:
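    // assuming the Run extension method provided by the library
    Flow().Run();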

Svelto.Tasks and Unity compatibility

You are used to yielding special objects like WWW, WaitForSeconds or WaitForEndOfFrame. These objects are not enumerators, and they work because Unity is able to recognize them and run special functions accordingly. For example, when you return a WWW, Unity will run a background thread to execute the HTTP request. If the WWW object cannot reach the Unity framework, it will never be able to run properly. For this reason, the MainThreadRunner is compatible with all the Unity functions: you can yield them. However there are limitations: you cannot yield them, as they are, from a ParallelTaskCollection. If you do, the ParallelTaskCollection will stop executing and will wait for Unity to return the result, effectively losing the benefits of the parallel process. Whenever you return a Unity special async function from inside a ParallelTaskCollection, you'll need to wrap it inside an IEnumerator if you want to take advantage of the parallel execution. This is the reason why WWWEnumerator, WaitForSecondsEnumerator and AsyncOperationEnumerator exist.

TaskRoutines and Promises

When C# coders think about asynchronous tasks, they think about the .NET Task Library. The Task Library is an example of an easy to use tool that can be used to solve very complex problems. The main reason why the Task Library is so flexible is that it's Promises compliant. While the Promises design has proven its worth through many libraries, it can also be implemented in several ways; in every case, what makes promises powerful is the idea of implementing continuation without using messy events all over the code.

In Svelto.Tasks, the promises pattern is implemented through the ITaskRoutine interface. Let's see how it works: an ITaskRoutine is a coroutine already prepared and ready to start at your command. To create a new ITaskRoutine, simply run this function:
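    // a sketch following the library naming
    ITaskRoutine taskRoutine = TaskRunner.Instance.AllocateNewTaskRoutine();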

Since an allocation actually happens, it's best to preallocate and prepare a routine during the initialization phase and run it during the execution phase. A task routine can also be reused, changing all its parameters, before running it again. Running an empty ITaskRoutine will result in an exception being thrown, so we need to prepare it first. You can do something like:
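    // ComplexEnumerator is a placeholder for your own IEnumerator function
    taskRoutine.SetEnumeratorProvider(ComplexEnumerator);
    taskRoutine.Start();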

In this case I used the function SetEnumeratorProvider instead of SetEnumerator. In this way the TaskRunner is able to recreate the enumerator in case you want to start the same function multiple times. Let's see what we can do:

We can Start, Pause, Resume and Stop the routine using the corresponding methods, and Restart it simply by calling Start again (it's getting tedious, I know):
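    taskRoutine.Start();  // start it
    taskRoutine.Pause();  // pause it
    taskRoutine.Resume(); // resume it
    taskRoutine.Stop();   // stop it
    taskRoutine.Start();  // starting it again restarts it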

You can check out the ExampleTaskRoutine example to see how it works.

Let's see how ITaskRoutines are compliant with Promises. As we have seen, we can pipe serial and/or parallel tasks and exploit continuation. We can get the result from the previous enumerator as well, using the Current property. We can pass parameters through the enumerator function itself. The only feature we haven't seen yet is how to handle failures, which is obviously possible too.

For the failure case I used an approach similar to the .NET Task Library's. You can either stop a routine from within a routine, yielding Break.It, or throw an exception. All the exceptions, including the ones thrown on purpose, will interrupt the execution of the current coroutine chain. Let's see how to handle both cases with some, not so practical, examples:
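    // everything here, from the enumerator to the handlers, is a placeholder;
    // the exact Start signature may differ between library versions
    IEnumerator LongRunningTask()
    {
        while (_dataReady == false)
            yield return null;

        if (_dataCorrupted == true)
            throw new Exception("something went wrong"); // interrupts the whole chain, triggering onFail

        yield return Break.It; // stops the whole chain, triggering onStop

        // syntactic sugar: yielding an array runs the enumerators in parallel
        // yield return new[] { TaskA(), TaskB() };
    }

    taskRoutine.SetEnumeratorProvider(LongRunningTask);
    taskRoutine.Start(onStop: () => Debug.Log("stopped"),
                      onFail: e => Debug.LogException(e));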

In the example above we can see several new concepts. First of all, it shows how to use the Start() method, providing what to execute when the ITaskRoutine is stopped or when an exception is thrown from inside the routine. It shows how to yield Break.It to emulate the Race function of the promises pattern. Break.It is not like returning yield break: it will actually break the whole coroutine chain from where the current enumerator has been generated. Lastly, it shows how to yield an array of IEnumerators as syntactic sugar in place of the normal parallel task generation. Just to be precise, OnStop will NOT be called when the task routine completes; it will be called only when ITaskRoutine Stop() or yield Break.It are used.

Now let's talk about something quite interesting: the schedulers. So far we have seen our tasks always running on the standard scheduler, which is the Unity main thread scheduler. However you are able to define your own scheduler and run the tasks whenever you want! For example, you may want to run tasks during the LateUpdate or the physics update. In this case you may implement your own IRunner scheduler, or even inherit from MonoRunner and run StartCoroutineInternal as a callback inside the Monobehaviour that drives the LateUpdate or the physics update. Using a different scheduler than the default one is pretty straightforward:
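    // the runner here is hypothetical: the exact way to create a custom
    // runner depends on the version of the library
    var lateUpdateRunner = new MonoRunner("LateUpdateRunner");

    TaskRunner.Instance.RunOnSchedule(lateUpdateRunner, MyTask());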


Multithread and Continuation between threads

But what if I told you that you can run tasks on other threads too? Yep, that's right, your scheduler can run on another thread as well and, in fact, a multithreaded scheduler is already available. However you may wonder: what would be a practical way to use a multithreaded scheduler? Well, let's spend some time on it, since what I came up with is actually quite intriguing. Caution, we are now stepping into the twilight zone.

First of all, all the features mentioned so far work on whatever scheduler you put them on. This is fundamental to the design, however some limitations are present due to Unity's not-thread-safe nature. For example, the MultiThreadRunner won't be able to detect the special Unity coroutines, like WWW, AsyncOperation or YieldInstruction, which is obvious, since they cannot run on anything other than the main thread. You may wonder what the point of using a MultiThreadRunner is, if eventually it cannot be used with Unity functions. The answer is continuation! With continuation you can achieve pretty sweet effects.

Let's see an example, enabling the PerformanceMT GameObject from the scene included in the library code. It compares the same code running on a normal StartCoroutine (SpawnObjects) and on another thread (SpawnObjectsMT). Enable only the MonoBehaviour you want to test to compare the performance. What's happening? Both MBs spawn 150 spheres that move along random directions. In both cases, a slow coroutine runs: its goal is to compute the largest prime number smaller than a given random value between 0 and 1000. The result is used to compute the current sphere color, which is updated as soon as it's ready. The following is the multithreaded version, condensed here as a simplified sketch (the real code ships with the library):
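    // one MultiThreadRunner instance shared by all the spheres
    static readonly MultiThreadRunner _multiThreadRunner = new MultiThreadRunner();

    void Start()
    {
        TaskRunner.Instance.RunOnSchedule(_multiThreadRunner, CalculateAndShowNumber());
    }

    IEnumerator CalculateAndShowNumber()
    {
        var random = new System.Random(); // UnityEngine.Random can't be used off the main thread

        while (true)
        {
            // heavy computation, executed on the MultiThreadRunner thread
            int prime = FindPrimeNumber(random.Next(0, 1000));

            // SetColor runs on the main thread scheduler; yielding it here
            // resumes this enumerator, back on the worker thread, once done
            // (assuming Run returns a yieldable continuation, as described below)
            yield return TaskRunner.Instance.Run(SetColor(prime));
        }
    }

    IEnumerator SetColor(int prime)
    {
        float intensity = prime / 1000f;
        _renderer.material.color = new Color(intensity, intensity, intensity);

        yield break;
    }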

Well, I hope it's clear to you at a glance. First we run CalculateAndShowNumber on the multithreaded scheduler. We use the same MultiThreadRunner instance for all the game objects, because otherwise we would spawn a new thread for each sphere, and we don't want that. One extra thread is more than enough (I will spend a few words on it in a bit).

FindPrimeNumber is supposed to be a slow function, which it is. As a matter of fact, if you run the single threaded version (enabling the SpawnObject Monobehaviour instead of SpawnObjectMT) you will notice that the frame rate is fairly slow. In fact, the GPU must wait for the CPU to compute the prime number.

The multithreaded version runs the main enumerator on another thread, but how can the color be set, since it's impossible to use the Renderer component from anything other than the main thread? This is where a bit of magic happens. Returning an enumerator from a task running on another scheduler will actually continue its execution on that scheduler. You may think that at this point the thread will wait for the enumerator running on the main thread to continue. This is only partially true: unlike a Thread.Join(), the thread is actually not stalled, it will continue yielding, so if other tasks are running on the same thread, they will still be processed. At the end of the main thread enumerator, the execution path will return to the other thread and continue from there. Quite fascinating, I'd say, also because you can expect a great difference in performance.

So, we have seen some advanced applications of the TaskRunner using different threads, but since Unity will soon support C# 6, you could wonder why one should use the Svelto TaskRunner instead of the .NET Task Library. Well, they serve two different purposes. The Task Library has not been designed for applications that run heavy routines on threads. The Task Library and the await/async keywords heavily exploit the thread pool to use as many threads as possible, with the assumption that most of the time these threads are very short-lived. Basically it's designed to serve applications that run hundreds of short-lived and light asynchronous tasks. This is usually true when we talk about server applications.

For games, instead, what we generally need is a few threads where we can run heavy operations in parallel with the main thread, and this is what the TaskRunner has been designed for. You can also be sure that all the routines running on the same MultiThreadRunner scheduler instance won't incur any concurrency issues. In fact, you can see every MultiThreadRunner as a fiber (if you feel brave, you can also check this very interesting video). It's also worth noticing that the MultiThreadRunner will keep the thread alive only as long as it is actually used, effectively letting the thread pool (used underneath) manage the threads properly.

Other stuff…

To complete this article, I will spend a few words on two other minor features. As previously mentioned, the TaskRunner will identify IAbstractTask implementations as well. Usually you will want to implement the ITask interface. ITask is meant to transform whatever class into a task that can be executed through the TaskRunner. For example, it could be used to run web services, whose results will be yielded on the main thread.

The ITaskChain interface is still at an early stage and it could be changed or deprecated in the future. It is useful to pass parameters through a chain of tasks, like a sort of Chain of Responsibility pattern.

The very last thing to take into consideration is the compatibility with the Unity WWW, AsyncOperation and YieldInstruction objects. As long as you are not using parallel tasks, you can yield them from your enumerators and they will work as you expect! However you cannot use them from a parallel collection unless you wrap them in another IEnumerator; that's why the WWWEnumerator and AsyncOperationEnumerator classes exist.

The source code and examples are available from here. Any feedback is more than welcome. Please let me know if you find any bugs!


Svelto ECS is now production ready

Six months have passed since my last article about Inversion of Control and Entity Component System, and meanwhile I have had enough time to test and refine my theories, which have now proven to work quite well in a production environment. During these months of putting my ideas into practice, I had the chance to consolidate my analysis and improve the code. As a result, the Svelto ECS framework 1.0 is now ready.

Let's start from the basics, explaining why I use the word framework, which is very important. My previous work, Svelto IoC, cannot be defined as a framework; it's instead a tool. As a tool, it helps to perform Dependency Injection and to design your code following the Inversion of Control principle. Svelto IoC is nothing else; therefore whatever you do with it, you could actually do without it in a very similar way, without changing the design of your code much. As such, Svelto IoC doesn't dictate a way of coding and doesn't give a paradigm to follow, leaving to the coder the burden of deciding how to design the game infrastructure.

Svelto ECS won't take the place of Svelto IoC, however it can surely be used without it. My intention is to add some more features to my IoC container in the future and also to write another article about how to use it properly, but as of now I would not suggest using it, unless you are aware of what IoC actually means and you are actually looking for an IoC container that integrates well into your existing design. Without understanding the principles behind IoC, using an IoC container can result in very dangerous practices. In fact, I have seen smelly code and bad design implementations that have actually been aided by the use of an IoC container. These ugly results would have been much harder to achieve without it.

Svelto ECS is instead a framework, and with it you are forced to design your code following its specifications. It's the framework that dictates the way you have to write your game infrastructure. The framework is also relatively rigid, forcing the coder to follow a well defined pattern. Using this kind of framework is important for a medium-large team, because coders have different levels of experience and it is impossible to expect everyone to be fully able to design well structured code. Using a framework which implicitly solves the code design problem will let all the coders focus only on the implementation of algorithms, knowing with a given degree of certainty that the amount of refactoring needed in the future will be more limited than it would have been had they been left on their own.

The first error I made with the previous implementation of this framework was not defining the concept of Entity well. I noticed that it is actually very important to have clear in mind what the elements of the project are before writing the relative code. Entities, Components, Engines and Nodes must be properly identified in order to write code faster. An Entity is a fundamental and well defined element in the game. The Engines must be designed with managing Entities in mind, not Components or Nodes. Characters, enemies, bullets, weapons, obstacles, bonuses, GUI elements: they are all Entities. It's easier, and surely common, to identify an Entity with a view, but a view is not necessary for an Entity. As long as the entity is well defined and makes sense for the logic of the game, it can be managed by engines regardless of its representation.

In order to give the right importance to the concept of Entity, Nodes cannot be injected into Engines anymore until an Entity is explicitly built. An Entity is now built through the new BuildEntity function, and the same function is used to give an ID to the Entity. The ID doesn't need to be unique; this is a decision left to the coder. For example, as long as the IDs of all the bullets are unique, it doesn't matter if the same IDs are used for other entities managed by other engines.

The Setup

Following the example, we start by analysing the MainContext class. The concept of Composition Root, inherited from the Svelto IoC container, is still of fundamental importance for this framework to work. It's needed to give back to the coder the responsibility of creating objects (which is currently the greatest problem of the original Unity framework, as explained in my other articles), and that's what the Composition Root is there for. By the way, before proceeding, just a note about the terminology: while I call it the Svelto Entity Component System framework, I don't use the term System; I prefer the term Engine, but they are the same thing.

In the composition root we can create our ECS framework main Engines, Entities, Factories and Observers instances needed by the entire context and inject them around (a project can have multiple contexts and this is an important concept).

In our example, it’s clear that the amount of objects needed to run the demo is actually quite limited:
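    // a sketch of the composition root; engine names come from the example,
    // the exact calls are approximated
    void SetupEngines()
    {
        var enginesRoot = new EnginesRoot();

        enginesRoot.AddEngine(new PlayerGunShootingEngine());
        enginesRoot.AddEngine(new HealthEngine());
        enginesRoot.AddEngine(new DamageSoundEngine());
        enginesRoot.AddEngine(new HudEngine());
        enginesRoot.AddEngine(new ScoreEngine());

        _entityFactory = enginesRoot; // seen as an IEntityFactory, it builds Entities
    }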

The EnginesRoot is the main Svelto ECS class and it's used to manage the Engines. When seen as an IEntityFactory, it is used to build Entities. The IEntityFactory can be injected inside Engines or other Factories in order to create Entities dynamically at run-time.

It's easy to identify the Engines that manage the Player Entity, the Player Gun Entity and the Enemy Entities. The HealthEngine is an example of generic Engine logic shared between specialized Entities: both Enemies and the Player have health, so their health can be managed by the same engine. DamageSoundEngine is also a generic Engine and deals with the sounds due to damage inflicted or death. HudEngine and ScoreEngine manage instead the on-screen FX and GUI elements.

I guess so far the code is quite clear, and I would invite you to get to know more about it. Indeed, while I explained what an Entity is, I haven't given a clear definition of Engine, Node and Component yet.

An Entity is defined through its Implementers and Components. A Svelto ECS Component defines a shareable, contextualized collection of data and events. A Component can exclusively hold data (through properties) and events (through the Svelto ECS Dispatcher classes) and nothing else. How the data is contextualized (and then grouped into components) depends on how the entity can be seen externally. While it's important to define an Entity conceptually, an Entity does not exist as an object in the framework, and it's seen externally only through its components.

The more generic the components are, the more reusable they will be. If two Entities need health, you can create one IHealthComponent, which will be implemented by two different Implementers. Other examples of generic Components are IPositionComponent, ISpeedComponent, IPhysicAttributeComponent and so on. Component interfaces can also help to reinterpret the same data in different ways, or to access the same data in different ways. For example, one Component could define only a getter, while another Component defines a setter for the same data on the same Implementer. All this helps to keep your code more focused and clean.

Once the shareable data options are exhausted, you can start to declare more entity-specific components. The bigger the project becomes, the more shareable components will be found, and after a while this process will start to become quite intuitive. Don't be afraid to create components even for just one property, as long as they make sense and help to share components between entities. The more shareable components there are, the fewer of them you will need to create in the future. However it's wise to make this process iterative, so if initially you don't know how to create components, start with specialised ones and refactor them later.
As I said, Components do not hold just data, they also hold events. This is something I have never seen before in other ECS frameworks and it's a very powerful feature. I will explain why later, but for the time being, just know that Components can also dispatch events through the Dispatcher class.
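For example, a component could look like this (a sketch; the exact Dispatcher signature depends on the version of the library):

    public interface IHealthComponent
    {
        int currentHealth { get; set; }    // data, through properties

        Dispatcher<int> isDamaged { get; } // events, through the Dispatcher classes
        Dispatcher<int> isDead { get; }
    }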

The Implementer concept existed already in the original framework, but I didn't formalize it at that time. One main difference between Svelto ECS and other ECS frameworks is that Components are not mapped 1:1 to C# objects. Components are always known through interfaces. This is quite a powerful concept, because it allows Components to be defined by shared objects that I call Implementers. This doesn't just allow creating fewer objects, but most importantly it helps to share data between Components. Several Components can actually be defined by the same Implementer. Implementers can be Monobehaviours, as Entities can be related to GameObjects, but this link is not necessary. However, in our example, you will always find that Implementers are Monobehaviours. This fact actually solves elegantly another important problem: the communication between the Unity framework and Svelto ECS.
Most of the Unity framework feedback comes from predefined Monobehaviour functions like OnTriggerEnter and OnTriggerExit. One of the benefits of letting Components dispatch events is that in this way the communication between the Implementer and the Engines becomes totally natural. The Implementer, as a Monobehaviour, dispatches an IComponent event when OnTriggerEnter is called. The Engines will then be notified and act on it. When you need to communicate events between Implementers and Engines, as opposed to between Engines and Engines, you can also use normal events.

It’s fundamental to keep in mind that Implementers MUST NOT DEFINE ANY LOGIC. They must ONLY implement component interfaces and act as a bridge between Unity and the ECS framework in case they happen to be also Monobehaviours.

So Components are a collection of data and events, and Implementers define the Components without holding any logic. Hence, where will all the game logic be? All the game logic will be written in Engines and Engines only. That's what they are there for. As a coder, you won't need to create any other class. You may need to create new Abstract Data Types to be used through components, but you won't need to use any other pattern to handle logic. In fact, while I still prefer MVP patterns to handle GUIs, without a proper MVP framework (which doesn't exist for Unity yet) it won't make much difference to use Engines instead: Engines are basically presenters, and Nodes hold views and models.

OK, so, before proceeding further with the theory, let's go back to the code. We have easily defined the Engines, it's now time to see how to build Entities. Entities are injected into the framework through Entity Descriptors. An EntityDescriptor describes an entity through its Implementers and presents it to the framework through Nodes. For example, every single Enemy Entity is defined by the following EnemyEntityDescriptor:
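    // a sketch: the exact base class API may differ slightly between versions
    class EnemyEntityDescriptor : EntityDescriptor
    {
        static readonly INodeBuilder[] _nodesToBuild =
        {
            new NodeBuilder<EnemyNode>(),        // for the enemy engines
            new NodeBuilder<PlayerTargetNode>(), // for the PlayerShootingEngine
            new NodeBuilder<HealthNode>()        // for the HealthEngine
        };

        public EnemyEntityDescriptor(params object[] implementers)
            : base(_nodesToBuild, implementers) {}
    }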

This means that an Enemy Entity and its components will be introduced to the framework through the nodes EnemyNode, PlayerTargetNode and HealthNode, using the method BuildEntity in this way:
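    // a sketch: the implementers are fetched from the enemy GameObject here
    void BuildEnemy(GameObject enemyGameObject)
    {
        var implementers = enemyGameObject.GetComponents<MonoBehaviour>();

        _entityFactory.BuildEntity(enemyGameObject.GetInstanceID(),
                                   new EnemyEntityDescriptor(implementers));
    }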

While the EnemyImplementor implements the Enemy Components, the EnemyEntityDescriptor introduces them to the Engines through the relative Enemy Nodes. Wow, it seems that it's getting complicated, but it's really not. Once you wrap your head around these concepts, you will see how easy using them is. You understood that Entities exist conceptually, but they are defined only through Components, which are implemented through Implementers. However you may wonder why Nodes are needed as well.

While Components are directly linked to Entities, Nodes are directly linked to Engines. Basically, Nodes are a way to map Components inside Engines. In this way the design of the Components is decoupled from the Engines. If we had to use Components only, without Nodes, it would often become very awkward to design them: should you design Components to be shared between Entities, or design Components to be used in specific Engines? Nodes rearrange groups of Components so they can be easily used inside Engines. Nodes are mapped directly to Engines: specialized Nodes are used inside specialized Engines, generic Nodes are used inside generic Engines.

For example: an Enemy Entity is an EnemyNode for the Enemy Engines, but it's also a PlayerTargetNode for the PlayerShootingEngine. The Entity can also be damaged, and the HealthNode will be used to let the HealthEngine handle this logic. The HealthEngine, being a generic engine, doesn't need to know that the Entity is actually an Enemy. Different and totally unrelated engines can see the same entity in totally different ways thanks to the Nodes. The Enemy Entity will be injected into all the Engines that handle the EnemyNode, PlayerTargetNode and HealthNode Nodes. It's actually quite important to find Node names that significantly match the names of the relative Engines.

Entities can be easily created directly in code, but if the Implementers are Monobehaviours, it could be more convenient to use EntityDescriptorHolders to fetch the Implementers on GameObjects. For example, the GameObject that holds the Implementer Monobehaviours that define a Player Entity can also hold the following Monobehaviour:
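    // a thin Monobehaviour whose only job is to expose the descriptor
    // on the GameObject (sketched after the library's generic holder)
    public class PlayerEntityDescriptorHolder
        : EntityDescriptorHolder<PlayerEntityDescriptor>
    {}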

This allows more generic and automatic code to create Entities implicitly when they are defined on GameObjects, like this:
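    // a sketch: BuildDescriptor is a hypothetical name for the holder's accessor
    void BuildEntitiesFromScene(GameObject contextHolder)
    {
        var holders = contextHolder.GetComponentsInChildren<IEntityDescriptorHolder>();

        foreach (var holder in holders)
        {
            var go = (holder as MonoBehaviour).gameObject;

            _entityFactory.BuildEntity(go.GetInstanceID(), holder.BuildDescriptor());
        }
    }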

The code above simply looks for all the gameobjects that hold EntityDescriptorHolders and creates Entities out of them. Once Engines and Entities are built, the setup of the game is complete and the game is ready to run.

The coder can now focus simply on coding the Engines, knowing that the Entities will be injected automatically through the defined Nodes. If new Engines are created, it's possible that new Nodes must be added, which in turn will be added to the necessary Entity Descriptors.


The Logic in the Engines

The necessity to write most of the original boilerplate code has been successfully removed from this version of the framework. An important feature that has been added is the concept of IQueryableNodeEngine. The engines that implement this interface will find the following property injected:
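    // injected by the framework into every IQueryableNodeEngine
    // (treat the exact signature as approximated)
    public IEngineNodeDB nodesDB { private get; set; }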

This new object removes the need to create custom data structures to hold the nodes. Nodes can now be efficiently queried, like in this example:
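    // a sketch, assuming the QueryNodes method of the injected nodesDB
    void Tick(float deltaSec)
    {
        var enemies = nodesDB.QueryNodes<EnemyNode>();

        for (int i = 0; i < enemies.Count; i++)
            MoveEnemy(enemies[i], deltaSec);
    }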

This example also shows how Engines can periodically poll or iterate data from Entities. However it is often very useful to manage logic through events. Other ECS frameworks use Observers or very awkward "Event Bus" objects to solve communication via events. The event bus is an anti-pattern, and the extensive use of observers can lead to messy and redundant code. This is why I find the idea of dispatching events through Components quite ingenious. Most of the time Engines need to communicate Entity-based events. If the Player hits an Enemy Entity, the PlayerGunShootingEngine needs to communicate to the HealthEngine that a specific Entity has been damaged. Doing this communication through the Entity Components is the most natural, and therefore least painful, way to do it:
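    // a sketch: node and component names follow the ones used in the example
    void OnTargetHit(int targetID, int damage)
    {
        var healthNode = nodesDB.QueryNode<HealthNode>(targetID);
        var healthComponent = healthNode.healthComponent;

        healthComponent.currentHealth -= damage;

        // engines listening to these component events will be notified
        if (healthComponent.currentHealth <= 0)
            healthComponent.isDead.Dispatch(targetID);
        else
            healthComponent.isDamaged.Dispatch(targetID);
    }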

The example above dispatches an isDamaged event or an isDead event through the Component. Other Engines will listen to the same event, through the same Component, and will therefore get the ID of the Entity in their callbacks, which can be used to query other nodes, like in this isDead listener:
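    // a sketch of a listener inside the DamageSoundEngine;
    // the audio node and its members are hypothetical names
    void OnDeadEvent(int targetID)
    {
        var audioNode = nodesDB.QueryNode<AudioNode>(targetID);

        audioNode.audioComponent.audioSource.PlayOneShot(audioNode.audioComponent.deathClip);
    }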

The framework also introduces a special Component called IRemoveEntityComponent. When this interface is implemented, it will be automatically filled with a delegate that allows removing the Entity from all the engines. It's enough to add it to one node to gain the ability to remove an Entity. In this example:
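    // the component is trivial (a sketch); the delegate is filled by the framework
    public interface IRemoveEntityComponent
    {
        Action removeEntity { get; set; }
    }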

node.removeEntityComponent.removeEntity() will remove the entity from all the engines. The removeEntity delegate is injected automatically by the framework, so you will find it ready to use.

The unique feature of being able to receive the added and removed nodes in Engines, through the interface functions Add and Remove, has been maintained in this version of the framework. This functionality is still of fundamental importance, both to use custom data structures to hold nodes and to add/remove listeners from component events.


I understand that some time is needed to absorb all of this, and I understand that it's a radically different way to think of entities than what Unity has taught us so far. However there are numerous relevant benefits in using this framework besides the ones listed so far. Engines achieve a very important goal: perfect encapsulation. The encapsulation is not even broken by events, because the event communication happens through injected components, not through engine events. Engines must never have public members and must never be injected anywhere; they don't need to! This allows a very modular and elegant design. Engines can be plugged in and out, and refactoring can happen without affecting other code, since the coupling between objects is minimal.

Engines follow the Single Responsibility Principle. Whether they are generic or specialized, they must focus on one responsibility only. The smaller they are, the better. Don't be afraid of creating Engines, even if they handle one Node only. Engines are the only way to write logic, so they must be used for everything. Engines can contain states, but they can never, ever be entity-related states, only engine-related states.

Once the concepts explained above are well understood, the problem of designing code will become secondary. The coder can focus on implementing the code needed to run the game, more than caring about creating a maintainable infrastructure. The framework will force you to create maintainable and modular code. Concepts like "Managers", "Containers" and Singletons will just be a memory of the past.

Please feel free to experiment with it and give me some feedback.

Note, if you are new to the ECS design and you wonder why it could be useful, you should read my previous articles:

C# Murmur 3 hash anyone?

If you need a fast and reliable (read: very good at avoiding collisions) non-cryptographic hashing algorithm, look no further, Murmur 3 is here to the rescue.

Take it from here, it’s compatible with Unity3D.

Directly translated from the C++ version.

More info on Wikipedia and in this definitive answer on Stack Overflow.

The truth behind Inversion of Control – Part V – Entity Component System design to achieve true Inversion of Flow Control

I can imagine that someone, at this point, could wonder if I still recommend using an IoC container. IoC containers are handy and quite powerful with Unity 3D, but they are dangerous. If used without Inversion of Control in mind, they can lead to messy code. Let me explain why:

When an IoC container is used with a standard procedural approach, thus without inversion, dependencies are injected in order to be used directly by the specialised classes. Without inversion, there is no limit to the number of dependencies that can be injected. Therefore it doesn't come naturally to apply the Single Responsibility Principle to the code written. Why should I split the class I am currently working on? It's just one extra dependency injected and a few more lines of code, what harm can it do? Or also: should I inject this dependency just to check its state inside my class? It seems the simplest thing to do, otherwise I would need to refactor my classes…

Similar reasoning, plus the proverbial coder laziness, usually result into very awkward code. I have seen classes with more than ten dependencies injected, without raising any sort of doubt about the legitimacy of such a shame.

Coders need constraints, and IoC containers unluckily don't provide any. In this sense, using an IoC container is actually worse than manual injection, because manual injection would limit the problem, due to the effort of passing so many parameters by constructors or setters.
Dependencies end up being injected in a sort of singleton fashion. Binding a type to a single instance becomes much more common than binding an interface to its implementation.

One way to limit this problem is to use multiple Composition Roots according to the application contexts. It's also possible to have one Composition Root and multiple containers. In this way it would be possible to segregate classes and specify their scope, reflecting the internal layering of the application. An observer wouldn't be injectable everywhere, but only into the classes inside the object graph of the specific container. In this sense, hierarchical containers could be quite handy.

I should definitely write an article on the best practices of using an IoC container, with more examples, but right now let's see how dangerous an IoC container can become without Inversion of Flow Control in mind.

The following example is a classic class that doesn't have a precise context or responsibility. It very likely started with a simple functionality in mind but, being used as a sort of "Utility" class, there is no limit to the number of public functions it can have. Consequently, there is no limit to the number of dependencies it can have injected. "Look, this class already has all the information we need to add this functionality" … "ok, let's add it" … "oh wait, this function actually needs these other dependencies to work, all right then, let's add the new dependencies". This example is the worst of the lot, and unluckily pretty common when the concept of Inversion of Control is unknown or not clear.

My definition of “Utility” class is simple: whatever class that exposes many public functions ends up being a Utility class, used in several places. Utility classes should be static and stateless. Stateful Utility classes are always a design error.

The following class uses injection without Inversion of Flow Control in mind. It's a utility class and exposes way too many public functions. Public functions reflect the class "responsibilities", thus this class has way too many responsibilities:
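    // a condensed caricature: all the names are made up,
    // the real ones are usually scarier
    public class GameUtility
    {
        readonly IInventory _inventory;
        readonly IPlayerProfile _profile;
        readonly ISoundManager _sound;
        readonly IGuiManager _gui;
        readonly INetworkQueue _network;
        // ...the list of injected dependencies grows with the functions

        public GameUtility(IInventory inventory, IPlayerProfile profile,
                           ISoundManager sound, IGuiManager gui,
                           INetworkQueue network)
        {
            _inventory = inventory;
            _profile = profile;
            _sound = sound;
            _gui = gui;
            _network = network;
        }

        public void BuyItem(int id)   { /* uses _inventory, _profile, _network */ }
        public void PlayJingle()      { /* uses _sound */ }
        public void RefreshShopPage() { /* uses _gui and _inventory */ }
        public bool IsPlayerVip()     { /* checks _profile state */ return false; }
        // ...and so on, one public function (and one responsibility) at a time
    }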

Using events is a good way to implement inversion of flow control. The class reacts to events from the injected classes. Injecting dependencies to register to their events is a reasonable use of injection. What we need to achieve is encapsulation, and of course public functions break encapsulation. What's the problem with breaking encapsulation? The biggest problem is that your class is not in control of the flow anymore. The class won't have any clue about when and where its public functions will be called, and therefore cannot predict what will happen. Without being able to control the flow, it becomes very simple to break the logic of the class. A classic example is a public function that assumes that some class states, set asynchronously, are actually set before the public function is used. Adding checks in this kind of scenario will quickly turn into some horrible and unmanageable code.

Thus, events help in this scenario; however we must be very careful when we use events. It's very important not to assume what will listen to those events when the events are created. Events must be generic and have generic names. If events become specific, assuming what will happen when they are dispatched, then there will be no difference compared to using public functions. Using events and callbacks, the class will probably look like this:
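    // the same responsibilities reworked around events; all names are made up
    public class ShopPresenter
    {
        readonly IGuiManager _gui;

        public ShopPresenter(IInventory inventory, IGuiManager gui)
        {
            _gui = gui;

            // dependencies are injected mainly to register to their events
            inventory.OnItemAdded   += ShowItem;
            inventory.OnItemRemoved += HideItem;
        }

        // the class stays in control of what happens and when
        void ShowItem(int id) { /* update the view through _gui */ }
        void HideItem(int id) { /* update the view through _gui */ }
    }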

Finally, Inversion of Control can be optimally achieved through properly designed code. I found the Template Pattern to be a good companion. The following class is not injected into any class, but is instead registered inside a "manager" compatible with the template interface implemented. The manager class will hold a reference to this class and will use the object through the interface IFoo. The key is that the manager won't have any clue about the IFoo implementation, following the Liskov Substitution Principle (the L in the SOLID principles):
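    // a sketch: IFoo and the manager are placeholders
    public interface IFoo
    {
        void Execute();
    }

    // the manager knows the registered objects only through IFoo
    public class FooManager
    {
        readonly List<IFoo> _foos = new List<IFoo>();

        public void Register(IFoo foo) { _foos.Add(foo); }

        public void Update()
        {
            // the manager controls the flow, not the implementations
            for (int i = 0; i < _foos.Count; i++)
                _foos[i].Execute();
        }
    }

    // the specialised class implements the interface declared
    // by the more abstracted layer
    public class ConcreteFoo : IFoo
    {
        public void Execute() { /* specialised logic */ }
    }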

Programmers tend to code according to their own code design knowledge, based on their own experience. If a tool is too flexible, it can be used in ways that cause more harm than benefit. That's why I love rigid frameworks: they must dictate the best way to be used by the coder, without being open to interpretation. With this in mind, I started to research possible alternatives to IoC containers and, as discussed in my Entity Component System architecture article, I realised that a proper ECS implementation for Unity could be what I was looking for.

In order to prove that an ECS solution could be viable in Unity, I had to spend several hours during my weekends creating a new framework and an example with it. The framework and the example can be found separately at these addresses:

Download both repositories and sort out the right folders Svelto-ECS and Svelto-ECS-Example. Open the scene Svelto-ECS-Example\Scenes\Level 01.unity to run the Demo.

The example is the original Unity 3D survival shooter, rewritten and adapted to my ECS framework. However this framework is still not production ready: it misses some fundamental functions and it's still rough (note: this is not true anymore, the framework is now production ready). Nevertheless, it has been really useful to prove and demonstrate my theories.

The intention was to create a framework able to push coders to write high-cohesion, low-coupling code[1] with the Open/Closed and Single Responsibility Principles in mind. Following these principles naturally leads to fewer injected dependencies and, as will be shown with the simple example I wrote, the use of dependency injection will be limited to the scope of showing its compatibility with the framework.

Of course it must be clear that the use of such a framework makes sense only when it's applied to a big project maintained by several coders. The survival demo itself is too simple to really appreciate the potential of the idea. In fact, for this specific example, I could say that using my ECS framework is basically over-engineering. However I do suggest experimenting with the library anyway, so that you can understand its potential.

A real Entity Component System in Unity

Note: the following code examples are not updated to the new version of the library.

My implementation of ECS is very similar to many out there although, after studying the most famous frameworks, I noticed that there isn't a well defined way to let systems communicate with each other, a problem that I solved in a novel way. But let's proceed step by step.

I have already talked about what an ECS is, so let’s see how we can make it fit in Unity. Let’s start from a list of rules created to use my framework properly:

  1. All the logic must be always written inside Systems.
  2. Components don’t hold logic, they are just data wrappers (Value Objects).
  3. Components can’t have methods, only getters and setters. (this is not enforced yet, but in the final version the framework will throw an exception otherwise)
  4. Systems do not hold components data or state. They can hold system states (not really enforced, but it’s a rule).
  5. Each System has one responsibility only. (not really enforced, but it’s a rule).
  6. Systems cannot be injected (not really enforced, but it’s a rule).
  7. Systems communicate between each other mainly through components, but also through observers and similar patterns.
  8. Systems can have injected dependencies.
  9. Systems do not know each other (not really enforced, but it’s a rule).
  10. Systems should be defined inside sensible namespaces according to their context.
  11. Entities are not defined by objects, they are just IDs.

First we need a composition root. We already know how to effectively introduce a composition root in Unity from my IoC container example and, the composition root being independent of the container itself, we can surely re-use the same mechanism without using any IoC container.

The Composition Root becomes the place where the systems can be defined. In my framework, Systems are actually called Engines and they all implement the IEngine interface. In the example, most of the time, the INodeEngine specialization is used instead.

Following the code, it makes sense to start from our root: inside MainContext.cs the setup of the engines is found:
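    // approximated: engine names are the ones of the example
    var enginesRoot = new EnginesRoot();
    var tickEngine = new UnityTicker(); // hypothetical name for the Tick driver

    enginesRoot.AddEngine(new EnemyAttackEngine());
    enginesRoot.AddEngine(new HealthEngine());
    enginesRoot.AddEngine(new HudEngine());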

enginesRoot is the new IEngine container, while tickEngine is not strictly part of the ECS framework; it's used to add the Tick functionality to whatever class implements ITickable.

Entity Creation in Svelto ECS

Engines are sorts of "managers" meant to solve one single problem on a set of components grouped by entity nodes. An Entity can hold several components, and an engine is constantly aware of specific components from all the entities in the game. Thus an engine must be able to select the ones it's interested in.
Other frameworks implement a query mechanism that lets the systems pick up the components to use but, as far as I understood, this can limit the design of the systems and have possible performance penalties (note: a fast query system has been introduced in the final version). My method is more flexible, but has the disadvantage of generating some boilerplate code. With this method, each INodeEngine can accept nodes (not components; I'll talk about them later) through the following interface:
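    // give or take the exact names
    public interface INodeEngine : IEngine
    {
        void Add(INode obj);    // called when a compatible node enters the game
        void Remove(INode obj); // called when the node's entity is removed
    }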

Since the library is not complete yet, at the moment entities can be created only through GameObjects, and components only through MonoBehaviours, but this is a limitation I will eventually remove. In fact, it would be a mistake to associate GameObjects with entities and MonoBehaviours with components. What I wrote is just a bridge to make the transition painless and natural.

Components are related to the design of the Entity itself and must not be created with the Engine logic in mind. They essentially are units of data that should be shareable between entities. The more general the design of the component, the more reusable it is. For example, the following components are entity-agnostic and reused among different entity types in our example:
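    // a sketch following the naming used in these articles
    public interface IHealthComponent
    {
        float currentHealth { get; set; }
    }

    public interface ITransformComponent
    {
        Transform transform { get; }
    }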

The coder must face an important decision when it's time to write components. Should several reusable small components (often holding just one piece of data) be used, or fewer, not reusable, "fatter" components? I faced this decision myself several times during the creation of the example and, in its limited scope, I eventually found out that shareable components are better in the long term, since they have more chance of being reused (therefore avoiding writing new components for specific entities).

As you can see, components are defined through interfaces. This could seem like a weird decision, but it is instead very practical when it's time to define components through MonoBehaviours. I often notice that ECS frameworks have the drawback of forcing the coder to dynamically allocate several objects when a new entity is created. While object pooling could help, the run-time creation of new entities always has a negative impact. I then realised that, as long as the components are defined through interfaces, they can all be implemented inside bigger objects that are never directly used. With Unity this also comes in handy, since a few MonoBehaviours that implement multiple interfaces will do the job:
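    // a sketch: one MonoBehaviour implements several component interfaces
    public class EnemyComponentsImplementor : MonoBehaviour,
                                              IHealthComponent,
                                              ITransformComponent
    {
        [SerializeField] float _health = 100f;

        float IHealthComponent.currentHealth
        {
            get { return _health; }
            set { _health = value; }
        }

        Transform ITransformComponent.transform
        {
            get { return base.transform; }
        }
    }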

The MonoBehaviour above has no meaning for my framework but, instead of being destroyed, it can be reused to hold the data exposed by all the interfaces implemented. I found the use of explicit interface implementation very convenient to remember which method implements which interface (note: since I wrote this article, I have found out that explicitly implemented methods are actually always defined as virtual functions, so they introduce a performance penalty. If performance is critical, ignore my suggestion).

The framework will actually pass the reference of the MonoBehaviour to the Engines, but the Engines will know the components only through their interfaces. The Engines are not aware of the fact that the object implementing the component interfaces is actually a MonoBehaviour, and it would be wrong otherwise.

Nodes, what are they for?

While components are designed to work with entities, nodes are defined by the engines. In this way the coder won't be confused about how to design a component: should the component follow the nature of the Entity or fit the necessity of the Engine? This problem initially led me to very awkward code when I wrote the first draft of the demo. Separating the two concepts helped to define a simple, but solid, structure for the code.

With each Engine comes the relative set of nodes. It's the Engine that defines its own Nodes, and Nodes shouldn't be shared between Engines. An Engine must be designed as an independent and separated piece of code. However, I eventually decided to relax this rule when I found out that the ECS framework is also useful for separating the class logic into several levels of abstraction. Engines can be grouped into namespaces, according to what they do. For example, all the engines that manage enemy components are under the EnemyEngines namespace, and all the engines that manage player components are under the PlayerEngines namespace. All the nodes usable by enemy engines are defined under the EnemyEngines namespace, and all the nodes usable by player engines are defined under the PlayerEngines namespace.

The rule is that nodes defined in a namespace can be used only by the engines in that namespace; the namespace itself will help the coder not to mix up different classes from different environments.

Namespaces are logically layered: while enemy engines and player engines are relatively specific, since they can operate only on enemies and the player respectively, the HealthEngine is more abstracted and can operate on both enemies and players. However, since the HealthEngine is neither in the Player nor in the Enemy namespace, it can know the entity components only through its own node, the DamageNode (I know, I should have called it HealthNode).

Nodes are simply new objects that group entity components in a way that is suitable for the engine logic. In order to create Nodes through GameObjects, a NodeHolder MonoBehaviour must be defined. While the amount of code needed to enable this functionality is very limited, the boilerplate code is again a problem, so it's something I will surely address in the next versions of the framework.
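A sketch of what such a holder boils down to (the real base class name may differ):

    // exposes a HealthNode on the GameObject so that the framework
    // can build and inject it
    public class HealthNodeHolder : NodeHolder<HealthNode>
    {}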

Once a GameObject defines its set of components and node holders, the framework will automatically inject the nodes into the compatible engines when the gameobject is created. The GameObject can be created either statically, in the scene, as a child of the GameContext gameobject, or dynamically, using the GameObjectFactory.


Components and MonoBehaviours

Components cannot hold logic and, as I explained, our components are NOT MonoBehaviours, but they can be implemented through MonoBehaviours. This is important not just for saving dynamic allocations, but also to not lose the features of the native Unity framework, which all work through MonoBehaviours.

Since at the end of the day it's impossible to write a game without GameObjects and Monobehaviours, I need the framework to coexist with and improve the Unity functionalities, not fight them, which would just produce inefficient code.
This is the reason why I decided not to change the original implementation of the enemies' player detection. In the original survival demo, the enemies detect if a player is in range through the OnTriggerEnter and OnTriggerExit MonoBehaviour functionalities.

The EnemyTrigger MonoBehaviour still uses these functions, but what it does is simply fire an entityInRange event. Now pay extreme attention: the object will actually NOT set the playerInRange value, it will just fire an event. The MonoBehaviour cannot decide if the player is actually in range or not; this decision must be delegated to the Engine. Sounds strange? Maybe, but if you think about it, you will realise that it actually makes sense. All the logic must be executed by the engines only:
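    // a sketch: the event and component names follow the description above
    public class EnemyTrigger : MonoBehaviour, IEnemyTriggerComponent
    {
        public event Action<int, bool> entityInRange;

        void OnTriggerEnter(Collider other)
        {
            // no logic here: the Monobehaviour only reports the fact,
            // the engine decides whether the player is really in range
            if (entityInRange != null)
                entityInRange(other.gameObject.GetInstanceID(), true);
        }

        void OnTriggerExit(Collider other)
        {
            if (entityInRange != null)
                entityInRange(other.gameObject.GetInstanceID(), false);
        }
    }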

These values will then be used by the engine responsible for the enemy attack, and the job of the MonoBehaviour simply ends here.


Communication in Svelto ECS

I understand that all these concepts are not easy to absorb; that's why I wrote four articles before introducing this one. I also understand that if you haven't worked on large projects, it's hard to see the benefits of this approach. That's why I invite you to remember the intrinsic limits of the Unity framework and the pitfalls of the IoC containers that try to overcome them.
Of course, using such a different approach makes sense only if eventually everything becomes easier to manage and, besides the current problem of the boilerplate code, I believe this is the case.

One of the most problematic issues that the framework solves brilliantly is the communication between entities. Engines are natural entity mediators: they know all the relevant components of all the entities currently alive, therefore they can take decisions being aware of the global situation instead of being limited to a single entity scope.

Let's take into consideration the EnemyAttackEngine class, which is the engine that uses the IEnemyTriggerComponent objects. It relies on the IEnemyTriggerComponent event to know if a player is potentially in range or not, but I decided to do this just for the sake of showing how the ECS framework can interact with the Unity framework. I can also guess that OnTriggerEnter performs better than equivalent C# code, but I am not totally sure this is the case.

What I could have simply done instead is to store the enemy transform components in a List and iterate over them every frame, through the Tick function, to know if the player is in range or not. The engine could have set the component playerInRange value without waiting for the MonoBehaviour event.

Why do components have events?

Interesting question. Many ECS frameworks do not actually have this concept; events are usually sent through external observers or using an EventBus. The decision I took about using events inside components is possibly one of the most important. In this simple demo I actually use an observer once, just to show that it's possible to use it, but otherwise I wouldn't have needed it. All the communication between entities and engines, and among engines, can usually be performed through component events alone.

It’s very important to design these events properly though. Engines know nothing about other engines. They cannot assume/operate/be aware of anything outside their scope. This is how we can have low-coupled, high-cohesive code.

Practically speaking, this means that an Engine cannot fire a component event that semantically doesn't make sense inside the engine itself. Let's take into consideration the HudEngine class. The HudEngine has the responsibility to update the HUD (a little bit broad as a responsibility, but it's OK for such a simple project). One of the HUD functions is to flash when the player is damaged.

The DamageNode uses the IDamageEventComponent to know when the player is damaged. Obviously the IDamageEventComponent has a generic damageReceived event; it wouldn't make sense otherwise. However, let's say that, in a moment of confusion, thinking about the HudEngine functionalities, I decided to have a flashHudEvent instead. This would have been wrong on many levels:

  • first, every time an event is used as a substitute for a public function, it means there is a code design flaw. This is a rule regardless of this article. Events always follow the inversion of flow control and are never designed to call a specific function.
  • I would have needed a new component that, from the Entity point of view, really wouldn't have made any sense.
  • The HealthEngine, which decides when an entity is damaged, would have known about the concept of a flashing hud (waaaaat?)

Sometimes this reasoning is not so obvious; that's why a self-contained engine, not aware of the outside world, will force the coder to think twice about the code design (which is my intent all along).

Putting it all together

The framework is designed to push the coder to write high-cohesive, low-coupled code based on engines that follow the Single Responsibility and the Open/Closed principles. In the demo, the number of files is multiplied compared to the original version for two fundamental reasons: the old classes were not following the SRP, so the number of files naturally increased; moreover, due to the current state of the framework, annoying boilerplate code is needed. Any suggestions and feedback are welcome!

note: Svelto ECS is now production ready, read more about it here.


Other famous ECS frameworks:


Ash:

ECS frameworks that work with Unity3D already:


I strongly suggest reading all my articles on the topic:

How to perfectly synchronize workspaces with older versions of Perforce

The latest versions of Perforce finally support the useful, and for too long underestimated, option p4 clean. Using this command it is possible to delete all the local files that are not present in the depot and sync all the files that have been modified locally or, to be more precise:

  1. Files present in the workspace, but missing from the depot, are deleted from the workspace.
  2. Files present in the depot, but missing from your workspace, are added to your workspace.
  3. Files modified in your workspace that have not been checked in are restored to the last version synced from the depot.

This is very useful when it is time to use Perforce on a build machine, since it’s of fundamental importance to be sure that the workspace totally matches the depot.

Those who, like me, are forced to use an older version, though, are destined to swear ad infinitum. However, sick of the problem, I did some research and found some articles with very good solutions; after putting them together, I just wanted to share them with you.

The first part checks for all the files that are different in the workspace and, thanks to the piping facility, force-syncs them. I found out that it is safer to call the commands separately instead of using the options -sd and -se on the same diff.
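
A hedged sketch of what those two commands can look like (the depot path is a placeholder):

    rem -se lists unopened files that differ from the depot, -sd lists unopened
    rem files that are missing from the workspace; piping each list into
    rem "p4 -x - sync -f" force-syncs every file it contains
    p4 diff -se //depot/project/... | p4 -x - sync -f
    p4 diff -sd //depot/project/... | p4 -x - sync -f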

The second part uses a batch script to delete all the files that are present in the workspace but not in the depot, exploiting the reconcile command.
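
A hedged sketch of such a batch script; it relies on p4 status (an alias of p4 reconcile -n) and on naive string parsing, so treat it as a starting point rather than a production tool:

    @echo off
    setlocal enabledelayedexpansion
    rem "p4 status" prints "<local path> - reconcile to add <depot path>" for the
    rem files that exist locally but not in the depot; we extract the local path
    rem and delete it. Note: this parsing breaks on paths containing "#" or the
    rem " - reconcile to add" marker itself.
    for /f "delims=" %%l in ('p4 status ^| findstr /c:"reconcile to add"') do (
        set "line=%%l"
        set "cut=!line: - reconcile to add=#!"
        for /f "tokens=1 delims=#" %%f in ("!cut!") do del /f "%%f"
    )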

As an extra bonus, I add the following one:
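
Again, a hedged sketch with a placeholder depot path:

    rem "p4 diff -sr" lists the opened files whose content is identical to the
    rem depot revision; piping them into revert reverts exactly those files
    p4 diff -sr //depot/project/... | p4 -x - revert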

I don’t understand why the “revert unchanged files” command often doesn’t actually revert identical files; this command will succeed where the standard one fails.

The truth behind Inversion of Control – Part IV – Dependency Inversion Principle

The Dependency Inversion Principle is part of the SOLID principles. If you want a formal definition of DIP, please read the articles written by Martin[1] and Schuchert[2].

We explained that, in order to invert the control of the flow, our specialized code must never directly call methods of more abstracted classes. The methods of our framework classes are not explicitly used; it is the framework that controls the flow of our less abstracted code. This is also called (sarcastically) the Hollywood Principle: “don’t call us, we’ll call you”.

Although the framework code must take control of less abstracted code, it would not make any sense to couple our framework with less abstracted implementations. A generic framework doesn’t have the faintest idea of what our game needs to do and, therefore, wouldn’t understand anything declared outside its scope.

Hence, the only way our framework can use objects defined in the game layer is through the use of interfaces, but this is not enough. The framework cannot know interfaces defined in a less abstracted layer of our application; that would not make any sense either.

The Dependency Inversion Principle introduces a new rule: our less abstracted objects must implement interfaces declared in the more abstracted layers. In other words, the framework layer defines the interfaces that the game entities must implement.

In a practical example, RendererSystem handles a list of IRendering “nodes”. IRendering is an interface that declares, as properties, all the components needed to render the Entities, such as GetWorldMatrix, GetMaterial and so on. Both the RendererSystem class and the IRendering interface are declared inside the framework layer; our specialised code needs to implement IRendering in order to be usable by the framework.
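
A minimal sketch of this layering, assuming Unity types for the properties; SpaceshipRenderingNode is a hypothetical game-layer class, everything else is named in the text:

    using System.Collections.Generic;
    using UnityEngine;

    namespace Framework
    {
        // the interface is declared in the most abstracted layer...
        public interface IRendering
        {
            Matrix4x4 GetWorldMatrix { get; }
            Material GetMaterial { get; }
        }

        public class RendererSystem
        {
            readonly List<IRendering> _nodes = new List<IRendering>();

            public void Add(IRendering node) { _nodes.Add(node); }

            public void Render()
            {
                // the framework calls into the game code, not the other way around
                foreach (var node in _nodes)
                    Draw(node.GetWorldMatrix, node.GetMaterial);
            }

            void Draw(Matrix4x4 world, Material material) { /* issue the draw call */ }
        }
    }

    namespace Game
    {
        // ...and the less abstracted layer implements it
        public class SpaceshipRenderingNode : Framework.IRendering
        {
            Matrix4x4 _transformMatrix = Matrix4x4.identity;

            public Matrix4x4 GetWorldMatrix { get { return _transformMatrix; } }
            public Material GetMaterial { get; set; }
        }
    }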

Designing layered code

So far I have used the word “framework” to identify the most abstracted code and “game” to identify the least abstracted code. However, “framework” and “game” don’t mean much, and limiting our layers to just “the game layer” and “the framework layer” would be a mistake. Let’s say we have systems that handle very generic problems that can be found in every game, like the rendering of the entities, and we want to enclose this layer into a namespace: we have defined a layer that can even be compiled into a DLL and shipped with any game.

Now, let’s say we have to implement logic that is closer to the game domain; for example, a HealthSystem that handles the health of the game entities with health. Is HealthSystem part of a very generic framework? Surely not. However, while HealthSystem will handle the common logic of the IHaveHealth entities, not all the game entities will have the same behaviours; hence HealthSystem is more abstracted than the more specialized behaviour implementations. While this abstraction alone wouldn’t probably justify the creation of another framework, I believe that thinking in terms of layered code helps design better systems and nodes.
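
A hedged sketch of such an intermediate layer; HealthSystem and IHaveHealth are named in the text, the members are my assumption:

    using System.Collections.Generic;

    public interface IHaveHealth
    {
        int health { get; set; }
    }

    // more abstracted than game-specific behaviours, less abstracted than a
    // generic rendering framework: a middle layer that can be reused across
    // games of the same kind
    public class HealthSystem
    {
        readonly List<IHaveHealth> _entities = new List<IHaveHealth>();

        public void Add(IHaveHealth entity) { _entities.Add(entity); }

        public void ApplyDamage(IHaveHealth entity, int damage)
        {
            entity.health -= damage;
        }
    }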

Putting ECS, IoC and DIP all together

As we have seen, the flow is not inverted when, in order to find a solution, a bottom-up design approach is used to break down the problem, that is, when the specialized behaviours of the entities are modeled before the generic ones, or when the systems are designed as a result of the specialized entity problems.

In my vision of Inversion of Control, it’s necessary to break down the solutions using a top-down approach. We should think of the problems starting from the most abstracted classes. What are the common behaviours of the game entities? What are the most abstracted systems we should write? What once would have been solved by specializing classes through inheritance should now be solved by layering our systems within different levels of code abstraction and declaring the relative interfaces to be used by the less abstracted code. Generic systems should be written before the specialized ones.

I believe that in this way we could benefit from the following:

  • We will be sure that our systems have just one responsibility, modeling just one behaviour
  • We will basically never break the Open/Closed principle, since new behaviours mean creating new systems
  • We will inject far fewer dependencies, avoiding using an IoC container as a Singleton alternative
  • It will be simpler to write reusable code
  • We could potentially achieve real encapsulation

In the next article I will explain how I would put all these concepts together in practice.




The truth behind Inversion of Control – Part III – Entity Component System Design

In the previous article I explained what the Inversion of Control principle is, then I introduced the concept of Inversion of Flow Control. In this article I will illustrate how to apply it properly, even without using an IoC container. In order to do so, I will talk about Entity Component System design: while apparently it has nothing to do with Inversion of Control, I found it to be one of the best ways to apply the principle.

Once upon a time, there was the concept of the game engine: a game-specialized framework that was supposed to run any game. A game engine used to have some common classes, designed as sorts of “managers”, found more or less in all game engines, like the Render class. Every time a new object with a Renderer was created, the Renderer component of the object was added to a list of Renderers managed by the Render class. The same was true for other components, like the Collision component, the Culling component and so on. The engine dictated when to execute the culling, when to execute the collisions and when to execute the rendering. The less abstracted objects didn’t know when they would be rendered or culled; they only assumed that, at a given point, it would happen.

The game engine was taking control of the flow, resulting in the first form of Inversion of Flow Control. There is no difference between what I just explained and what the Unity engine does: Unity decides when it is time to call Awake, Start, Update and so on. The Unity framework is capable of achieving both Inversion of Creation Control and Inversion of Flow Control. MonoBehaviour instances cannot be directly created by the users; whether they are already present in the scene or created dynamically, it is Unity that creates them for us. The Inversion of Flow Control is instead achieved through the adoption of the Template Pattern: our MonoBehaviour classes must follow a specific template (through the Awake, Start, Update and similar functions) in order to be usable by the Unity framework “managers”.

Now, let’s have a look at what a modern Entity Component System is instead.

Modern Entity Component Systems

A more advanced way of managing entities was introduced in 2007 with the modern implementations of the Entity Component System:

In 2007, the team working on Operation Flashpoint: Dragon Rising experimented with ECS designs, including ones inspired by Bilas/Dungeon Siege, and Adam Martin later wrote a detailed account of ECS design[1], including definitions of core terminology and concepts. In particular, Martin’s work popularized the ideas of “Systems” as a first-class element, “Entities as ID’s”, “Components as raw Data”, and “Code stored in Systems, not in Components or Entities”.[2]

ECS design uses Inversion of Flow Control in its purest form. ECS is also a magnificent way to possibly never break the Open/Closed principle when it is time to add new behaviours.

The Open/Closed principle, which is also part of the SOLID principles, says:

software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification

OCP is the holy grail of perfect code design. If well adopted, it should never be necessary to go back to previously created classes and add new functions in order to add new behaviours.

The ECS design works in this way (a minimal sketch follows the list):

  • Entities are basically just IDs.
  • Components are ValueObjects[3]: they wrap data around an object that can be shared between Systems. Components do not have any logic to manage the data.
  • Systems are the classes where all the logic lies. Systems can directly access lists of components and execute all the logic the project needs.
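
To make the three points concrete, here is a minimal, framework-agnostic C# sketch (a generic illustration, not the API of Svelto or of any specific framework):

    using System.Collections.Generic;

    // entities are just IDs, components are plain data, all the logic lives in
    // systems that read the component lists directly, indexed by entity ID
    public struct PositionComponent { public float x, y; }
    public struct VelocityComponent { public float x, y; }

    public class MovementSystem
    {
        // assumes every entity with a velocity also has a position
        public void Update(Dictionary<int, PositionComponent> positions,
                           Dictionary<int, VelocityComponent> velocities,
                           float deltaTime)
        {
            foreach (var pair in velocities)
            {
                var position = positions[pair.Key];
                position.x += pair.Value.x * deltaTime;
                position.y += pair.Value.y * deltaTime;
                positions[pair.Key] = position;
            }
        }
    }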

I understand that the concept of a data component could sound weird at first; I had problems wrapping my head around it initially as well. I guess the key that cleared my doubts was to understand that there isn’t such a thing as a System being too small. A System can be anything that handles logic, just as the Presenter or the Controller handle logic in the MVP or MVC pattern respectively (if you don’t know what they are, never mind for the moment).

The really powerful aspect of this design is that the Systems must follow the Single Responsibility Principle. All the behaviours of our in-game entities are modeled inside Systems, and every single behaviour will have a single System, well defined in terms of domain, to manage it.

This is how we can apply the Open/Closed principle without ever breaking it: every time we need to introduce a new specific behaviour, we are forced to create a new System that simply uses components as data to execute it.

Systems are implicitly mediators as well; this is why Systems are great to let Entities communicate with each other and with other Systems (through the use of Components). However, Systems are not always able to cope with all the possible communication problems just through the use of Components; for this reason Systems, which should be instantiated in the Composition Root, can have other dependencies injected (I’ll give more practical examples in the next articles).

Also, pay attention: Systems model all the logic of your framework AND game. When ECS design is used, there won’t be any implementation difference between the framework logic (like RenderingSystem, PhysicSystem and so on) and the game logic (like AliensAISystem, EnemyCollisionSystem and so on), but there will still be a sharp difference in the code separation: framework and game systems will still lie in different layers of the application. This introduces another very important concept: the design of our game application with multiple layers of abstraction. Using just two layers of abstraction is not enough for a complex game; the framework layer and the game layer alone are not enough. We need to make our game layering more granular.

All that said, I noticed that there is some confusion when it’s time to define the design adopted by Unity. Although its “managers” can be considered “systems”, they are not systems according to the modern definition, since they cannot be defined or extended by the programmer.
Of course, when it is not possible to extend the “Systems” functionalities, the only way to extend the logic of our entities is to add logic inside Components. This is anyway a step forward from classic OOP techniques: Component Oriented frameworks (I call them Entity Component, without System) like the one in Unity push the coder to favor Composition over Inheritance, which is surely a better practice.
All the logic in Unity should be written inside focused MonoBehaviours. Every MonoBehaviour should have just one functionality, or responsibility, and shouldn’t operate outside its own GameObject. MonoBehaviours should be written with modularity in mind, in such a way that they can be reused independently on several GameObjects. MonoBehaviours also hold data, so their design clearly follows the basic concepts of OOP.
Modern design tends instead to separate data from logic. Just as Data, Views and Logic are separated when the Model View Controller pattern is implemented, the same happens with ECS design through Components and Systems, in order to achieve better code modularity.
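
For contrast, this is a hedged example of the component-oriented style just described: a focused, hypothetical Health MonoBehaviour that owns both its data and its logic. In an ECS design, the integer would live in a data-only component and ApplyDamage would live in a system.

    using UnityEngine;

    // data and logic live together in the same focused component, which never
    // operates outside its own GameObject
    public class Health : MonoBehaviour
    {
        public int current = 100;            // the data

        public void ApplyDamage(int damage)  // the logic
        {
            current -= damage;
            if (current <= 0)
                Destroy(gameObject);
        }
    }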

Before finally showing the benefits of the ECS design approach with real code, I will explain the concept of the Dependency Inversion Principle in the next article of this series.






The truth behind Inversion of Control – Part II – Inversion of Control

Note: this article assumes you already read my previous articles on IoC containers and the Part I

Inversion of Control is another concept that turns out to be very simple once it has been fully processed, or rather “absorbed”: absorbed as in going beyond being understood and becoming part of one’s own forma mentis. However, make no mistake: Inversion of Control is not the same as an Inversion of Control container (which is not a principle, but just a tool to simplify Dependency Injection)[1]. “Inversion of Control container” is a confusing name in this sense[2], which is why many use the name Dependency Injection library instead.

While using an IoC container is very simple, being able to invert the control of the code is another matter. The process of adaptation is not straightforward, since the entire code paradigm has to change. In order to explain this paradigm, I need to introduce the concept of code abstraction. The following definitions will be explored in more detail in the next articles, so come back here if they are not grasped immediately.

Inversion of control cannot really be applied successfully without designing the application in multiple layers of abstraction. The higher the abstraction, the more general the scope of the code. For example, a general framework belongs to the highest levels of abstraction, since it could be used by any type of application; a class that manages the health of the enemies in a game belongs to a lower level of abstraction.

If we think of our code structure in terms of layers of abstraction, Inversion of Control is all about giving the control to the more abstracted classes instead of the less abstracted ones. Of course you would ask: control of what? The idea is that general classes should control both the creation and the flow of the more specialised code.

How can object creation be inverted? It’s simple: code that follows the Inversion of Creation Control rules never uses the new keyword explicitly to create dependencies. All the dependencies are always injected, never created directly.

If we apply this reasoning to the injection waterfall discussed in the first article of this series, it is easy to see that ALL the dependencies, and therefore all the objects, must be created and initially passed from the Composition Root. The Composition Root effectively becomes the only place where all the initial relations between objects are made.

If you wonder how to create dynamic dependencies, like objects that must be spawned at run-time, you have asked yourself a good question. These objects are always created by factories, and factories are always created and passed as dependencies from the Composition Root.

Simply put, the new operator should be used only in the Composition Root or inside factories. An IoC container hides this process, creating and passing all the dependencies automatically instead of forcing the user to pass them by constructor or setter. The application-agnostic IoC container code takes control of the creation of all the dependencies. Of course, dynamic allocation is still freely used to allocate data structures, but data structures are not dependencies.

Why is inverting the creation control important? Mainly for the following reasons:

  1. the code becomes independent from the constructor implementation. A dependency can, therefore, be passed by interface, effectively removing the coupling between class implementations.
  2. because of 1, your code will depend only on the abstraction of a class and not on its implementation. In this way it’s possible to swap implementations without changing the code.
  3. the object injected comes already created, with its dependencies resolved. If the code had to create the instance of the object, it would have to know those dependencies as well.
  4. the flow of the code can change according to the context: without changing the code, it is possible to change its behaviour just by passing different implementations of the same interface.

The first point is fundamental to being able to mock implementations when unit tests are written but, if unit tests are not part of your development process, the fourth point can lead to cleaner code when it is time to implement different code paths.

Can Inversion of Creation Control be achieved without using an IoC container? Absolutely! Let’s see how, simply using manual dependency injection:
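
(What follows is a hedged, self-contained C# reconstruction based purely on the description below: the class names, like Player, Level, LevelManager, IEnemySpawner, EnemyA, EnemyB and EnemyFactory, come from the text, while the bodies and signatures are illustrative assumptions.)

    using System;
    using System.Collections.Generic;

    public class Player { public int Health = 100; }
    public interface IEnemy { void Attack(Player player); }

    // EnemyA and EnemyB depend on Random; the dependency is passed by constructor
    public class EnemyA : IEnemy
    {
        readonly Random _random;
        public EnemyA(Random random) { _random = random; }
        public void Attack(Player player) { player.Health -= _random.Next(1, 5); }
    }

    public class EnemyB : IEnemy
    {
        readonly Random _random;
        public EnemyB(Random random) { _random = random; }
        // type B can inflict more damage than type A
        public void Attack(Player player) { player.Health -= _random.Next(5, 10); }
    }

    // the factory encapsulates the new operator; Random is injected by setter
    public class EnemyFactory
    {
        Random _random;
        public Random random { set { _random = value; } }

        public IEnemy Build(char type)
        {
            return type == 'A' ? (IEnemy)new EnemyA(_random) : new EnemyB(_random);
        }
    }

    public interface IEnemySpawner { List<IEnemy> Spawn(); }

    // spawners decide which enemies are spawned, not how they are built
    public class EnemySpawnerLevel1 : IEnemySpawner
    {
        readonly EnemyFactory _factory;
        public EnemySpawnerLevel1(EnemyFactory factory) { _factory = factory; }
        public List<IEnemy> Spawn()
        { return new List<IEnemy> { _factory.Build('A'), _factory.Build('A') }; }
    }

    public class EnemySpawnerLevel2 : IEnemySpawner
    {
        readonly EnemyFactory _factory;
        public EnemySpawnerLevel2(EnemyFactory factory) { _factory = factory; }
        public List<IEnemy> Spawn()
        { return new List<IEnemy> { _factory.Build('A'), _factory.Build('B'), _factory.Build('B') }; }
    }

    public class Level
    {
        readonly string _name;            // not a dependency, just data
        readonly IEnemySpawner _spawner;  // dependency
        readonly Player _player;          // dependency

        public Level(string name, IEnemySpawner spawner, Player player)
        { _name = name; _spawner = spawner; _player = player; }

        // hook called by the LevelManager (Template Pattern)
        public void Run()
        {
            foreach (var enemy in _spawner.Spawn()) enemy.Attack(_player);
            Console.WriteLine(_name + " completed, player health: " + _player.Health);
        }
    }

    public class LevelManager
    {
        readonly List<Level> _levels = new List<Level>();
        public void AddLevel(Level level) { _levels.Add(level); }
        public void RunLevels() { foreach (var level in _levels) level.Run(); }
    }

    static class Program
    {
        // Main is the Composition Root: every dependency is created and wired here
        static void Main()
        {
            var factory = new EnemyFactory { random = new Random() };
            var player = new Player();
            var levelManager = new LevelManager();

            levelManager.AddLevel(new Level("level1", new EnemySpawnerLevel1(factory), player));
            levelManager.AddLevel(new Level("level2", new EnemySpawnerLevel2(factory), player));

            levelManager.RunLevels();
        }
    }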

While I am not really good with examples, it’s not simple to find a compact one that is also meaningful; however, I think the above example includes all the discussed points.

Main is our simple Composition Root. It’s where all the dependencies are created and initially injected. If you try to run this code, it will actually work: it will run a dumb, non-interactive simulation of a Player fighting Enemies.

All the game logic is encapsulated inside the Level class. Since the Level class uses the Template Pattern to implement the functions needed by the LevelManager class to manage a Level, it is possible to extend the logic of the game by creating new Level classes. However, adding Level instances to the LevelManager is not Dependency Injection, so it is irrelevant to our exercise (LevelManager doesn’t strictly need Level instances injected to work; the class is still functional even without any Level object added).

Each Level needs two dependencies: an implementation of IEnemySpawner and the Player object. Note that the level name is not actually a dependency; a dependency is always an instance of a class needed by another class.

level1 and level2 are different because of the number and type of enemies created: level1 contains only two enemies of type A, level2 contains one enemy of type A and two enemies of type B. Type B can be more powerful than type A when it inflicts damage on the Player. However, the Level implementation doesn’t actually change; the different behaviour is just due to the different implementation of the IEnemySpawner interface passed as parameter. Injecting two different IEnemySpawner objects changes the level gameplay, without changing the Level code.

EnemySpawner doesn’t build enemies directly, because that is not its responsibility. The EnemySpawner just decides which enemies are spawned and how, but doesn’t need to be aware of what an enemy needs in order to be created.

As you can see, both EnemyA and EnemyB depend on the implementation of the class Random to work, but EnemySpawner doesn’t need to know this dependency at all. Therefore we can use a factory both to encapsulate the new operator and to pass the Random dependency directly from the Composition Root.

My explanation is probably more complicated than the example itself, where it’s clear that all the dependencies are created and passed by constructor from inside the main function. The only exception is the Random implementation, which is injected into the EnemyFactory by setter.

In this example I haven’t used an Inversion of Control container, but the control of object creation has nevertheless been inverted: the context takes away from the other objects the responsibility of creating dependencies, dependencies can be passed by interface, and the flow of the code changes according to which implementation has been injected.

So the questions I have been asking myself lately are: do we really need an IoC container to implement Inversion of Creation Control? Are the side effects of using an IoC container less important than the benefits of using such a tool? Searching for an answer to these questions is what led me to start writing these articles.

I can give a first answer though: manual Dependency Injection is very hard to achieve with the Unity framework. As I have already widely explained in my past articles, due to the Unity framework’s nature, dependencies can be injected only through the use of singletons or the use of reflection. C#’s reflection abilities are what actually enable mine and other IoC containers to inject dependencies in an application made with Unity. So how can we possibly adopt manual dependency injection in Unity? One possible solution is actually to reinterpret the meaning of the MonoBehaviour class in order to never need to inject dependencies into it. Is it possible? As we will soon find out, if we change our coding paradigm, it’s not just possible, but also convenient to do.
For the time being, keep in mind that if the code is designed without really knowing what Inversion of Control is, an IoC container merely becomes a tool to simplify the tedious work of injecting dependencies; a tool that is very prone to being abused. An IoC container cannot be used efficiently without knowing how to design code that inverts creation and flow control.

So far I have talked mainly about Inversion of Creation Control, but I have mentioned the Inversion of Flow Control several times, so I need to give a first explanation before concluding this article.

What is Inversion of Flow Control? Quoting Wikipedia:

“inversion of control (IoC) describes a design in which custom-written portions of a computer program receive the flow of control from a generic, reusable library. A software architecture with this design inverts control as compared to traditional procedural programming: in traditional programming, the custom code that expresses the purpose of the program calls into reusable libraries to take care of generic tasks, but with inversion of control, it is the reusable code that calls into the custom, or task-specific, code.”

In this sense, an IoC container can be used to implement Inversion of Flow Control when it’s time to “plug in” implementations from the lower levels of abstraction into the higher levels without breaking the Dependency Inversion Principle. Inversion of Flow Control is even more important than Inversion of Creation Control, and I will explore the reasons in detail in my next article.


[2] (but read all of it)


The truth behind Inversion of Control – Part I – Dependency Injection

Note: this article series assumes you have already read my previous articles on IoC containers.

There is an evil truth behind the concept of the Inversion of Control container: an unspoken code tragedy taking place every day while passing unnoticed. Firm in my intention to stop this shame, I decided to stand up and start writing this series of articles, which will tell about the problem, the principles and the possible solutions.
I started to notice the symptoms of this “blasphemy” (against the code gods 🙂 ) quite a while ago, but I couldn’t pin down the reason for them. Nevertheless, the problem was pretty clear: the IoC container solution was, scarily, used too often as an alternative to the Singleton pattern to inject dependencies. With the code growing and the project evolving, many classes started to take the form of a blob of functions, with a common pitfall: dependencies, dependencies everywhere.
What it means, and what is wrong with, using an IoC container as a mere substitute for a Singleton is something that I am going to describe as best I can in this series of articles. Don’t get me wrong, IoC containers are great tools, but if they are used in the wrong way, they can actually lead to major issues. I realised that IoC containers cannot be used without understanding how to use them; that’s why I started to look for a safer solution that could be adopted even by inexperienced coders. Before looking at this solution, let’s take some steps back and explain what Dependency Injection actually is:

What Dependency Injection is

Dependency Injection isn’t anything fancy. A dependency is just an interface on which a class depends. Usually dependencies are resolved in two ways: injecting them or passing them around as Singletons. Singletons break encapsulation in the sense that, like global variables, they can be used without a scope. Singletons awkwardly hide your dependencies: there is nothing in the class interface showing that the dependency is used internally. Singletons strongly couple your implementations, eventually resulting in long and painful code refactoring. To be even more practical, Singletons, like all global variables holding references, are also often a source of memory leaks. For these reasons we use injection to solve dependencies.

Now, think for a moment what would happen if there weren’t an IoC container in place. How would we inject our dependencies? For example, by passing them as parameters in a constructor, as shown in the sketch after this paragraph. Would you pass 10 parameters by constructor? I surely wouldn’t, if only for how painful and inconvenient it is. The same reasoning applies when an IoC container is used: just because it’s more convenient, it doesn’t mean you can take advantage of it; you are just making a mess of object coupling again. To be honest, if the design of the code really followed the SOLID[1] principles, this problem wouldn’t arise, since the number of dependencies injected is directly linked to the number of responsibilities a class has: one responsibility only should lead to very few injected dependencies. However, without a proper paradigm to follow, we all know that coders tend to break the Open/Closed principle and add behaviours to existing classes instead of adopting a modular and extensible design. That’s when IoC containers start to be dangerous, since they actually help this process, making it less painful.
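
A hedged sketch of plain constructor injection (the class and interface names are hypothetical, not taken from the original example):

    // the dependencies are visible in the constructor signature; with ten of
    // them the pain becomes evident, and that pain is a design smell, not an
    // argument for hiding the dependencies behind a container
    public interface IInputHandler { }
    public interface IAudioPlayer { }

    public class WeaponController
    {
        readonly IInputHandler _input;
        readonly IAudioPlayer _audio;

        public WeaponController(IInputHandler input, IAudioPlayer audio)
        {
            _input = input;
            _audio = audio;
        }
    }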

When dependencies are injected into an instance, where do these objects come from? If the dependencies are injected by constructor, they obviously come from the scope where the object that needs them has been created. In turn, the class that is injecting the dependencies into the new object could also need dependencies injected, which are therefore passed by another class in the parent scope. This chain of dependency passages creates a waterfall of injections, and the relationship between these objects is called the Object Graph[1].

However, where does this waterfall start? It starts from the place where the initial objects are created. This place is called the Composition Root: “root” because it is where the context is initialised, “composition” because it is where the dependencies start to be created and injected and, therefore, the initial relations composed.

Now you can see what the real problem of the Unity framework is: the absence of a Composition Root. Unity doesn’t have a “main” class where the relations between objects can be composed. This is why the only way to create relationships with the bare Unity framework is to use Singletons or static classes/methods.

Why are relationships between objects created? Mainly to let them communicate with each other. All forms of communication involve dependency injection. The only pattern that allows communication without dependency injection is the one called Event Bus[2] in the Java environment. The Event Bus allows communication through events held by a Singleton, hence the Event Bus is one of the many anti-patterns out there. Note that you could think of creating something similar to an Event Bus without using a singleton (injecting it instead): that’s an example of what I mean by using injection as a mere substitute for a Singleton.

Object Communication and Dependency Injection

Communication can couple or not couple objects, but in all cases it involves injection. There are several ways to let objects communicate:

  • Interface injection: usually A is injected into B, and B is coupled with A [e.g.: inside a B method A is used, B.Something() { A.Something(); }]
  • Events: usually B is injected into A, and A is coupled with B [e.g.: inside A, B is injected to expose the event: B.onSomething += A.Something]
  • Commands: B and A are uncoupled; B calls a command that calls a method of A. Commands are great to encapsulate business logic that could potentially change often. A Command Factory is usually injected into B.
  • Mediators: usually B and A do not know each other, but both know their mediator. B and A pass themselves to the mediator, and the mediator wires the communication between them (i.e. through events or interfaces). Alternatively, B and A are passed to the mediator from outside B and A themselves, totally removing the dependency on the Mediator itself. This is my favourite flavour and the closest to dependency-less communication possible (see the sketch after this list).
  • Various other patterns like Observers, Event Queues[3] and so on.
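
As an illustration of the mediator flavour, here is a hedged sketch that wires A and B from the outside; the class names are placeholders:

    using System;

    public class A
    {
        public void Something() { /* react to whatever B signals */ }
    }

    public class B
    {
        public event Action onSomething;
        public void Fire() { if (onSomething != null) onSomething(); }
    }

    // A and B never reference each other; the wiring happens outside both,
    // typically in the Composition Root
    public class ABMediator
    {
        public ABMediator(A a, B b)
        {
            b.onSomething += a.Something; // after this, b.Fire() invokes a.Something()
        }
    }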

How do we pick the correct one? Without guidelines, one looks as good as another; that’s why we sometimes end up using one of them at random. Remember that the first two patterns are the worst, because they couple interfaces that could change over time.

We can anyway introduce the first sound practice of our code-design guidelines: our solution must minimize the number of dependencies.

Of course, the second sound practice is about the Single Responsibility Principle[4] (and the Interface Segregation Principle[5]). SRP is one of the five SOLID principles (ISP is another fundamental one), but the only one that must actually be taken as a rule: your class MUST have one responsibility only. Communicating could be considered a responsibility, therefore it’s better to delegate it.

How we are going to achieve SRP and solve the dependencies blob problem is something I am going to explain in the next articles of this series.


[1] SOLID (object-oriented design)

[2] Event Bus

[3] Event Queue

[4] Single Responsibility Principle

[5] Interface Segregation Principle


Svelto Inversion of Control Container

If it’s the first time you visit my blog, please don’t forget to read my main articles on the subject before continuing with this post.

It’s finally time to share the latest version of my Inversion of Control container, which I named Svelto IoC and which I will keep updated from now on. This new version is the IoC container currently used by Freejam for the Unity3D game Robocraft.

Thanks to the possibility of using the library in production, I could analyse in depth the benefits and disadvantages of using an IoC container extensively on a big project with a medium-sized team of programmers. I am preparing an exhaustive article on the subject, but I am not sure when I will be able to publish it, so stay tuned.

The new IoC container is structurally similar to the old one, but has several major differences. In order to use it, a specialised UnityRoot MonoBehaviour must still be created. The class that implements ICompositionRoot is the Composition Root of the project, and the game object holding the UnityRoot MonoBehaviour is the game context of the scene.

All the dependencies bound in the Composition Root will be injected during the Unity Awake period. Dependencies cannot be used until the OnDependenciesInjected function or the Start function (in the case of dependencies injected inside MonoBehaviours) is called. Be careful though: OnDependenciesInjected is called while the injection waterfall is still happening; however, injected dependencies are guaranteed to have, in their turn, their own dependencies injected. Dependencies are not guaranteed to be injected during the Awake period, therefore you shouldn’t use injected dependencies inside MonoBehaviour Awake calls.

Other changes include:

  • MonoBehaviours that are created by Unity after the scene is loaded don’t need to explicitly ask for their dependencies to be filled anymore: they will be injected automatically.
  • MonoBehaviours cannot be injected as dependencies anymore (that was a bad idea).
  • Dynamically created MonoBehaviours always have their dependencies injected through factories (MonoBehaviourFactory and GameObjectFactory are part of the framework).
  • Now all the contracts are always injected through “providers”; this simplifies the code, makes it more solid and highlights the importance of providers in this framework.
  • A type injection cache has been developed, therefore injecting dependencies of the same type is much faster than it used to be.
  • It’s now possible to create new instances for each dependency injected, if the MultiProvider factory is used explicitly.
  • You can create your own provider for special cases.
  • Improved type safety of the code: it’s not possible anymore to bind contracts to wrong types. For this reason AsSingle() has been replaced by BindSelf().
  • Various improvements and bug fixes.
  • Dependencies can be injected as weak references automatically.

What still needs to be done:

  • Improve the documentation and explain the bad practices.
  • Add the possibility to create hierarchical contexts and explain why they are necessary.
  • Add the possibility to inject dependencies by constructor, in order to reduce the necessity of holding references.
  • Explain how to exploit custom providers.

The new project can be found at this link: