How to use Svelto.ECS + Svelto.Tasks to optimize your game

Note: this is an ongoing article and will be updated with my new findings over time.

New exciting features are coming to the Svelto.ECS and Svelto.Tasks libraries. As I am currently focused on optimizing Robocraft for Xbox One, I added several functionalities that can help make our game faster, hence I decided to write a simple example to show some of them to you. I have to be honest, it’s very hard to come up with a simple example that makes sense. Every game has its own needs, and what makes sense for one may not be meaningful for others. However, everything boils down to the fact that I added features to exploit data locality (the CPU cache) and to easily write multithreaded code. I have already discussed how to use the multithreaded Svelto.Tasks ITaskRoutine, so I hope I can now show how to use it with Svelto.ECS. Spoiler alert: this article alone won’t be sufficient to show you the potential of the library, as being more thorough would make this post too long and complex, therefore I encourage you to experiment on your own or wait for my next articles (which come every six months or so :P). This article also assumes that you know the basic concepts behind Svelto.ECS and Svelto.Tasks.

Initially I wanted to write an example that would mimic the boids demo shown at the Unite roadmap talk, which you can briefly see in this Unity blog post. I soon decided to stop going down that route, because it’s obvious that Joachim took advantage of the new thread-safe transform class, or even wrote a new rendering pipeline on purpose for this demo, as the standard Transform class and the GameObject based rendering pipeline would otherwise be the biggest bottleneck, impossible to parallelize. Since I believe that massive parallelism makes more sense on the GPU and that multithreading should be exploited on the CPU in a different way, I decided not to invest more time on it. However, as I had some interesting results, I will use what is left of this useless example to show you what potential gain in milliseconds you may get using Svelto.ECS and Svelto.Tasks. I will eventually discuss how this approach can be used in a real-life scenario too.

Svelto.ECS and Svelto.Tasks have a symbiotic relationship. Svelto.ECS makes it natural to encapsulate logic running on several entities and to treat each engine flow as a separate “fiber“. Svelto.Tasks allows these independent “fibers” to run asynchronously, even on other threads. GPUs work in a totally different fashion than CPUs, where the operating system takes on the burden of deciding how processes must share the few cores available. While on a GPU we talk about hundreds of cores, on a CPU we usually have only 4-12 cores that have to run thousands of threads. Each core can run only one thread at a time, and it’s thanks to the OS scheduler that all these threads can actually share the CPU power. The more threads run, the harder it is for the OS to decide which threads to run and when. That’s why massive parallelism doesn’t make much sense on the CPU: at full capacity, you can’t physically run more threads than cores, hence CPU multithreading is not meant for massively parallel intensive operations. While with Svelto.ECS+Tasks it would be possible to run a thread per engine per entity, it’s probably better to run just one thread per engine or sequencer execution.

You can now download the new Svelto.ECS example, which has been updated to Unity 2017, and open the scene under the Svelto-ECS-Parallelism-Example folder. Since I removed everything from the original demo I was building, I can focus on the code instructions only, which is why this example is the simplest I have made so far: it has only one Engine and one Node. The demonstration will show four different levels of optimization, using different techniques, and how it is possible to make the same instructions run several times faster. In order to show the optimization steps, I decided to use some ugly defines (sorry, I can’t really spend too much time on these exercises), therefore we need to stop the execution of the demo in the editor and change the define symbols every time we want to proceed to the next step. Alternatively, you can build a separate executable for each combination of defines, so you won’t have to worry about the editor overhead during the profiling. The first set of defines will be: FIRST_TIER_EXAMPLE;PROFILER (never mind CROSS_PLATFORM_INPUT, it can stay there if needed).

Let’s open and analyse the code in Visual Studio. As usual, you can find our standard Context and relative composition root (MainContextParallel and ParallelCompositionRoot). Inside the context initialization, a BoidEngine is created (I didn’t bother renaming the classes) as well as 256,000 entities. It seems like a lot, but we are going to run very simple operations on them, nothing to do with a real flock algorithm. Actually, the instructions that run are basically random and totally meaningless. Feel free to change them as you wish.

The operations that must be executed are written inside the BoidEnumerator class. While the example code is ugly, I wrote it to be allocation free, to be as fast as possible (the only allocation present is due to the ToString() used for the GUI). A preallocated IEnumerator can be reused for each frame. This enumerator doesn’t yield at all, running synchronously, and operates on a set of nodes that must be defined beforehand. The set of operations to run, as already said, is totally meaningless and written just to show how much time very simple operations can take on a large set of entities.

The main loop logic, enclosed in the Update enumerator, will run until the execution is stopped. It runs inside a standard Update Scheduler because I am assuming that the result of the operations must be available every frame. The way it’s designed, the main thread cannot be faster than the execution of the operations, even if they run on other threads. This is very easily achievable with Svelto.Tasks.

So, assuming you didn’t get too confused by the things happening under the hood, running the example will show you that the operations take several milliseconds to execute (use the stats window if you use the editor). In my case, on my i7, running the FIRST_TIER_EXAMPLE operations takes around 120ms (interestingly, if you switch to .NET 4.6 it takes 140ms).

This is how the code looks at the moment:

This set of operations runs four times for each entity. As I stated before, they are meaningless and random; it’s just a set of common and simple operations to run.
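To give an idea, the loop inside BoidEnumerator is of this form (a sketch from memory, not the exact repository code; the component and field names are illustrative):

```csharp
// FIRST_TIER_EXAMPLE flavour: simple, meaningless math executed
// four times per entity through the Vector3 operators.
// `nodes` and the `position` component are illustrative names.
for (int index = 0; index < nodes.Count; index++)
{
    var component = nodes[index].position; // class component holding a Vector3 field

    for (int iteration = 0; iteration < 4; iteration++)
    {
        // every operator here is a (non inlined) static Vector3 method call
        component.position = (component.position + Vector3.one) * 0.9f;
    }
}
```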

Let’s now switch to SECOND_TIER_EXAMPLE, changing the defines in the editor and replacing FIRST_TIER_EXAMPLE. Let the code recompile. Exactly the same operations, but written with different instructions, now take around 72ms… what changed? I simply stopped using the Vector3 operators, running the operations directly on the single components.
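The difference between the two tiers boils down to something like this (illustrative sketch, assuming `position` is a plain Vector3 field on the component):

```csharp
// SECOND_TIER_EXAMPLE flavour: exactly the same math, but written on
// the single float components, so no Vector3 operator calls are emitted
for (int index = 0; index < nodes.Count; index++)
{
    var component = nodes[index].position;

    for (int iteration = 0; iteration < 4; iteration++)
    {
        component.position.x = (component.position.x + 1f) * 0.9f;
        component.position.y = (component.position.y + 1f) * 0.9f;
        component.position.z = (component.position.z + 1f) * 0.9f;
    }
}
```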

Let’s have a look at the MSIL code generated with FIRST_TIER_EXAMPLE:



While I know assembly, I am not an IL expert. I leave you to dig into it if you wish; however, at a glance, it appears to me that the main difference is the number of call/callvirt instructions executed, which obviously involve several extra operations. The fact that those operations are not inlined impacts the performance massively.

Let’s quickly switch to THIRD_TIER_EXAMPLE. The code now runs at around 63ms, but the code didn’t change at all. What changed then? I simply exploited the FasterList method that returns the internal array without trimming or copying it. This removed another extra callvirt, as the operator[] of a collection like FasterList (and the same goes for the standard List) is actually a method call. An array, instead, is known directly by the compiler as a contiguous portion of allocated memory, so it knows how to access it efficiently.
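The change looks more or less like this (the method name `ToArrayFast` is from memory and may differ in the current FasterList source; treat it as an assumption):

```csharp
// THIRD_TIER_EXAMPLE flavour: fetch the backing array once, then index
// it directly instead of going through FasterList's operator[] method
BoidNode[] nodesArray = nodes.ToArrayFast(); // internal buffer, no trim, no copy
int count = nodes.Count;                     // only the first `count` slots are valid

for (int index = 0; index < count; index++)
{
    var component = nodesArray[index].position;
    // ...same operations as before, but the indexer is now a plain
    // array access the compiler can optimize...
}
```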

Let’s continue with FOURTH_TIER_EXAMPLE. The code now runs at around 30ms, another big jump, but again it seems that the instructions didn’t change much. Let’s see them:

It’s true that the code is now call free, but does this justify the big jump? The answer is no. Just removing the last call would have saved only around 10ms, while now the procedure is more than 30ms faster. The big saving must be found in the new Svelto.ECS feature that FOURTH_TIER_EXAMPLE enables. With this define on, our BoidEngine becomes an IStructNodeEngine&lt;BoidNode&gt; and BoidNode moves from the classic class made out of components to a struct made of simple value fields only.

Classic Svelto.ECS BoidNode:

New Svelto.ECS BoidNode as struct:

The IStructNodeEngine will force you to implement the following method:

where _structNodes is

This pretty much becomes a pattern that will be copied and pasted for every IStructNodeEngine. _structNodes contains an up-to-date array of BoidNode structs.
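Putting the pieces of this section together, the pattern looks roughly like this (a pseudocode-level sketch based on the API names used in this article; the exact interfaces and signatures may differ in your version of the library):

```csharp
// the struct based node: simple value fields only, no implementors
public struct BoidNode : IStructNodeWithID
{
    public Vector3 position;
    public Vector3 velocity;
    public int ID { get; set; }
}

public class BoidEngine : IStructNodeEngine<BoidNode>
{
    // the copy/paste pattern: fetch the shared list of BoidNode structs,
    // the same list all the IStructNodeEngines will see
    public void CreateStructNodes(SharedStructNodeLists sharedStructNodeLists)
    {
        _structNodes = new StructNodes<BoidNode>(sharedStructNodeLists);
    }

    StructNodes<BoidNode> _structNodes; // up-to-date array of BoidNode structs
}
```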

IStructNodeEngines are managed differently than other engines in Svelto.ECS and it’s the coder’s responsibility to add and remove the nodes explicitly. The system is still in its early stages and may change a bit in the future, but it’s important to know that the nodes are shared between engines, so the list of nodes returned from SharedStructNodeList is the same for all the IStructNodeEngines. That’s why only one engine should take the responsibility to add and remove nodes through the MultiNodesEngine (or SingleNodeEngine) AddNode and RemoveNode methods. The other engines can instead read and write values inside the array.

StructNodes are designed to be different than the normal nodes. They don’t need implementors, although they still need to be built through a BuildEntity call and an EntityDescriptor. The EntityDescriptor itself will use a FastStructNodeBuilder during the declaration of the nodes to use. FastStructNodeBuilder assumes that the node doesn’t need reflection to fill its fields, which should be considered the normal case. Although I added the option through the StructNodeBuilder class, having a struct field that is not a value type would defeat the purpose of using an array of structs, which is exploiting spatial memory locality and the CPU cache.

The cache is what makes the procedure much faster. However, we can do better: the following code will run a bit faster!

This looks weird at first: how does removing the caching of the node make it faster? Well, this is where things get very tricky. Even I still have to figure out a couple of things, as I never really bothered to study how modern CPU architectures work. My current belief is that the struct must not be too big and must be readable in one memory access. Nowadays a CPU can read a contiguous block of 64 bytes in one memory access. Since our BoidNode struct is quite small, caching the variable here would actually have made the execution just slightly slower (a few ms, try it yourself). However, if you make the BoidNode struct larger (try adding another four Vector3 fields and caching the whole structure in a local variable), you will wreck the processor and the execution will become largely slower! Accessing the single component directly instead won’t trigger the struct-wide read, and since x, y and z are read and cached at once, these instructions will run at the same speed regardless of the size of the struct. Alternatively, you can cache just the position Vector3 in a local variable, which won’t make it faster, but it will still work fast regardless of the size of the struct.
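In code, the three access patterns discussed above look like this (illustrative sketch; `structArray` stands for the array of BoidNode structs):

```csharp
for (int index = 0; index < count; index++)
{
    // 1) direct component access: only the touched floats are read, and
    // x, y and z sit on the same cache line, so the speed is independent
    // of the total size of BoidNode
    structArray[index].position.x += 1f;
    structArray[index].position.y += 1f;
    structArray[index].position.z += 1f;

    // 2) caching just the Vector3: not faster, but still fast
    // regardless of the size of the struct
    Vector3 position = structArray[index].position;

    // 3) caching the whole struct: copies EVERY field of the struct;
    // fine while BoidNode fits in a 64-byte cache line, but much
    // slower once the struct grows beyond it
    BoidNode node = structArray[index];
}
```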

If you don’t pay attention to these details, the risk is to write code that runs even slower than the class based counterpart. This is why, until I figure everything out, the use of structs in Svelto.ECS must be considered experimental and used only if you know what you are doing.

To conclude, let’s keep the FOURTH_TIER_EXAMPLE define, but add a new one, called TURBO_EXAMPLE. The code now runs at around 19ms. This is because the new MultiThreadedParallelTaskCollection Svelto.Tasks feature is now enabled. The operations, instead of running on one thread, are split and run on 8 threads. As you have already figured out, 8 threads doesn’t mean 8 times faster, and this is the reality. Splitting the operations over threads doesn’t just give sub-linear gains, it also gives diminishing returns: the more threads, the smaller the additional speed-up, until adding threads won’t make any difference at all. This is due to what I was explaining earlier. Multithreading is no magic: physically your CPU cannot run more threads than its cores, and this is true for all the threads of all the processes running at the same time on your operating system. That’s why CPU multithreading makes more sense for asynchronous operations that can run over time, or operations that involve waiting for external sources (sockets, disks and so on), so that the thread can pause until the data is received, leaving space for other threads to run meanwhile.

This is how I use the MultiThreadedParallelTaskCollection in the example:

This collection allows adding enumerators to be executed later on multiple threads. The number of threads to activate is passed through the constructor. It’s interesting to note that the number of enumerators to run is independent of the number of threads, although in this case they are mapped 1 to 1. You can then run the collection on whatever scheduler you want, but the single operations will always run on their own, pre-activated, threads.
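The usage boils down to something like the following (a sketch; the constructor parameter and method names are from memory, so treat the exact signatures as assumptions):

```csharp
// 8 threads are pre-activated by the constructor
var parallelCollection = new MultiThreadedParallelTaskCollection(8);

// the number of enumerators is independent of the number of threads;
// here they happen to be mapped 1 to 1
for (int i = 0; i < 8; i++)
    parallelCollection.Add(new BoidEnumerator(/* its slice of the entities */));

// the collection itself can run on any scheduler (here the standard
// one), but the single operations always run on their own threads
TaskRunner.Instance.Run(parallelCollection);
```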

The way I am using threading in this example is not the way it should be used in real life. First of all, I am actually blocking the main thread to wait for the other threads to finish, so that I can measure how long the threads take to finish the operations. In a real-life scenario, the code shouldn’t be written to wait for the threads to finish. For example, AI could run independently of the frame rate. I am thinking about several ways to manage synchronization, so that it will be possible not just to exploit continuation between threads, but even to run tasks on different threads at the same time and synchronize them. WaitForSignalEnumerator is an example of what I have in mind; more will come.

All right, we are at the end of the article now. I need to repeat myself here: this article doesn’t really show what is possible to do with Svelto.ECS and Svelto.Tasks in their entirety; this is just a glimpse of the possibilities opened by this infrastructure. Also, the purpose of this article wasn’t to show the level of complexity that can be handled easily with the Svelto framework, but just to show you how important it is to know how to optimize our code. The most important optimizations are first the algorithmic ones, then the data-structure related ones and eventually the ones at instruction level. Multithreading is not about optimization, but about being able to actually exploit the power of the CPU in use. I also tried to highlight that CPU threading is not about massive parallelism; GPUs should be used for that purpose instead.

Please leave me your thoughts, I will probably expand this article with my new findings, especially the ones related to the cache optimizations.

Svelto TaskRunner – Run Serial and Parallel Asynchronous Tasks in Unity3D

Note: this is an on-going article and is updated with the new features introduced over the time. 

In this article I will introduce a better way to run coroutines using the Svelto.Tasks TaskRunner. I will also show how to run coroutines between threads, easily and safely. You can finally exploit the power of your processors, even if you don’t know much about multithreading. If you use Unity, you will be surprised by how simple it is to pass results computed in multithreaded coroutines to the main thread coroutines.

What we got already: Unity and StartCoroutine

If you are a Unity developer, chances are you already know how StartCoroutine works and how it exploits the powerful C# yield keyword to time-slice complex routines. A coroutine is a quite handy and clean way to execute procedures over time or, better, asynchronous tasks.

Lately Unity has improved its support for coroutines and new fancy things are now possible. For example, it was already possible to run tasks in serial by doing something like:
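That is, from inside a MonoBehaviour (TaskA and TaskB are placeholder IEnumerator methods):

```csharp
IEnumerator DoTasksInSerial()
{
    // each inner coroutine fully completes before the next one starts
    yield return StartCoroutine(TaskA());
    yield return StartCoroutine(TaskB());
}
```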

And apparently* it was also possible to exploit a basic form of continuation starting a coroutine from another coroutine:

*apparently because I never tried this on Unity 4 and I wasn’t aware at that time that it was possible.

However, lately it’s also possible to return an IEnumerator directly from another IEnumerator without running a new coroutine, which is actually almost 3 times faster, in terms of overhead, than the previous method:
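Which means simply yielding the enumerator itself (again, TaskA and TaskB are placeholders):

```csharp
IEnumerator DoTasksInSerial()
{
    // no new coroutine is started: Unity iterates the nested
    // enumerators directly, with roughly a third of the overhead
    yield return TaskA();
    yield return TaskB();
}
```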

Running parallel routines is also possible; however, there is no elegant way to exploit continuation when multiple StartCoroutine calls happen at once. Basically, there is no simple way to know when all the coroutines are completed.

I should add that Unity has tried to extend the functionality of coroutines by introducing new concepts like CustomYieldInstruction; however, it fails to provide a tool that can solve more complex problems in a simple and elegant way, problems like, but not limited to, running several sets of parallel and serial asynchronous tasks.

Introducing Svelto.Tasks

Limited by what Unity can achieve, a couple of years ago I started to work on my TaskRunner library and spent the last few months evolving it into something more powerful and interesting. The set of use cases that the TaskRunner can now solve elegantly is quite broad, but before showing a subset of them as examples, I will list the main reasons why TaskRunner should be used in place of StartCoroutine:

  • you can use the TaskRunner everywhere, you don’t need to be in a MonoBehaviour. The whole Svelto framework focuses on shifting the Unity programming paradigm away from the use of the MonoBehaviour class towards more modern and flexible patterns.
  • you can use the TaskRunner to run Serial and Parallel tasks, exploiting continuation without needing to use callbacks.
  • you can pause, resume and stop whatever set of tasks running.
  • you can catch exceptions from whatever set of tasks running.
  • you can pass parameters to whatever set of tasks running.
  • you can exploit continuation between threads (!).
  • Whatever the number of tasks you are running is, the TaskRunner will always run just one Unity coroutine (with some exceptions).
  • you can run tasks on different schedulers (including schedulers on different threads!).
  • you can transform whatever asynchronous operation into a task, thanks to the ITask interface.

A subset of the use cases that the TaskRunner is capable of handling is what I am going to show you now, and I am sure you will be surprised by some of them :). TaskRunner can be used in multiple ways and, while the performance doesn’t change much between methods, you will fully exploit the power of the library only by knowing when to use what. Let’s start:

The simplest way to use the TaskRunner is to use the function Run, passing whatever IEnumerator in it.

This simply does what it says on the tin. It’s very similar to the StartCoroutine function, but it can be called from anywhere. Just pay attention to the fact that TaskRunner uses the non-generic IEnumerator underneath, so using a generic IEnumerator&lt;T&gt; with a value type as parameter will always result in boxing (as shown in the example above).
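A minimal sketch of the basic usage (MyTask is a placeholder enumerator):

```csharp
// runs on the standard (main thread) scheduler and, unlike
// StartCoroutine, can be called from anywhere
TaskRunner.Instance.Run(MyTask());

IEnumerator MyTask()
{
    for (int i = 0; i < 60; i++)
        yield return null; // resume on the next iteration of the scheduler
}
```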

TaskRunner can also be used to run every combination of serial and parallel tasks, in this way:

But why not exploit IEnumerator continuation? It’s more elegant than using callbacks and we don’t need to use a SerialTaskCollection explicitly (with no loss of performance). We won’t even need to use two ParallelTasks:
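A sketch of what continuation buys us (the task names are placeholders):

```csharp
IEnumerator ComposedTask()
{
    var parallelTasks = new ParallelTaskCollection();
    parallelTasks.Add(TaskA());
    parallelTasks.Add(TaskB());

    yield return parallelTasks; // continues only when both A and B are done

    yield return TaskC();       // then C runs, serially, with no callbacks
}

TaskRunner.Instance.Run(ComposedTask());
```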

If you feel fancy, you can also use the extension methods provided:

Svelto.Tasks and Unity compatibility

You are used to yielding special objects like WWW, WaitForSeconds or WaitForEndOfFrame. These objects are not enumerators; they work because Unity is able to recognize them and run special functions accordingly. For example, when you return a WWW, Unity will run a background thread to execute the http request. If the WWW object cannot reach the Unity framework, it will never be able to run properly. For this reason, the MainThreadRunner is compatible with all the Unity functions: you can yield them. However, there are limitations: you cannot yield them, as they are, from a ParallelTaskCollection. If you do, the ParallelTaskCollection will stop executing and will wait for Unity to return the result, effectively losing the benefits of the parallel process. Whenever you return a Unity special async function from inside a ParallelTaskCollection, you’ll need to wrap it inside an IEnumerator if you want to take advantage of the parallel execution. This is the reason why WWWEnumerator, WaitForSecondsEnumerator and AsyncOperationEnumerator exist.

TaskRoutines and Promises

When C# coders think about asynchronous tasks, they think about the .NET Task Library. The Task Library is an example of an easy-to-use tool that can solve very complex problems. The main reason why the Task Library is so flexible is that it’s Promises compliant. While the Promises design has proven effective in many libraries, it can be implemented in several ways; in every case, what makes promises powerful is the idea of implementing continuation without using messy events all over the code.

In Svelto.Tasks, the promises pattern is implemented through the ITaskRoutine interface. Let’s see how it works: an ITaskRoutine is a coroutine already prepared and ready to start at your command. To create a new ITaskRoutine, simply run this function:

Since an allocation actually happens, it’s best to preallocate and prepare a routine during the initialization phase and run it during the execution phase. A task routine can also be reused, changing all the parameters before running it again. Running an empty ITaskRoutine will result in an exception being thrown, so we need to prepare it first. You can do something like:

In this case I used the function SetEnumeratorProvider instead of SetEnumerator. This way the TaskRunner is able to recreate the enumerator in case you want to start the same function multiple times. Let’s see what we can do:
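Put together, the preparation phase looks something like this (a sketch; check the library source for the exact signatures):

```csharp
ITaskRoutine _taskRoutine;

void Setup()
{
    // allocate once, during initialization
    _taskRoutine = TaskRunner.Instance.AllocateNewTaskRoutine();

    // the provider lets the runner recreate the enumerator, so the
    // same routine can be started again from scratch
    _taskRoutine.SetEnumeratorProvider(MyTask);
}
```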

We can Start the routine using

We can Pause the routine using

We can Resume the routine using

We can Stop the routine using (it’s getting tedious)

We can Restart the routine using
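In short, the control surface is (a sketch; the method names follow the article’s wording, so verify them against the library):

```csharp
_taskRoutine.Start();   // start the prepared routine
_taskRoutine.Pause();   // pause it
_taskRoutine.Resume();  // resume it
_taskRoutine.Stop();    // stop it
_taskRoutine.Restart(); // restart it from scratch
```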

You can check the ExampleTaskRoutine example out to see how it works.

Let’s see how ITaskRoutines are compliant with Promises. As we have seen, we can pipe serial and/or parallel tasks and exploit continuation. We can get the result from the previous enumerator as well, using the Current property. We can pass parameters through the enumerator function itself. The only feature we haven’t seen yet is how to handle failures, which is obviously possible too.

For the failure case I used an approach similar to the .NET Task library. You can either stop a routine from within a routine by yielding Break.It, or by throwing an exception. All the exceptions, including the ones thrown on purpose, will interrupt the execution of the current coroutine chain. Let’s see how to handle both cases with some, not so practical, examples.

In the example above we can see several new concepts. First of all, it shows how to use the Start() method, providing what to execute when the ITaskRoutine is stopped or when an exception is thrown from inside the routine. It shows how to yield Break.It to emulate the Race function of the promises pattern. Break.It is not like returning yield break: it will actually break the whole coroutine chain from where the current enumerator has been generated. Lastly, it shows how to yield an array of IEnumerators as syntactic sugar in place of the normal parallel task generation. Just to be precise, OnStop will NOT be called when the task routine completes; it will be called only when ITaskRoutine Stop() or yield Break.It are used.

Update: Break.It will now break the currently running task collection. This means that if you yield Break.It inside a ParallelTaskCollection or SerialTaskCollection, it will break the current collection only and not the whole ITaskRoutine. In this case Stop() won’t be called, but the TaskCollection completes. This is how to use Break.It in a real-life scenario:


Now let’s talk about something quite interesting: the schedulers. So far we have seen our tasks always running on the standard scheduler, which is the Unity main thread scheduler. However, you are able to define your own scheduler and run tasks whenever you want! For example, you may want to run tasks during the LateUpdate or the physics update. In this case you may implement your own IRunner scheduler, or even inherit from MonoRunner and run StartCoroutineInternal as a callback inside the MonoBehaviour that drives the LateUpdate or physics update. Using a different scheduler than the default one is pretty straightforward:
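For example (the method and runner names here are from memory, treat them as an assumption and check the library):

```csharp
// run the same task on a dedicated runner instead of the default
// main thread scheduler
TaskRunner.Instance.RunOnSchedule(new MultiThreadRunner(), MyTask());
```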


Multithread and Continuation between threads

But what if I told you that you can run tasks on other threads too? Yep, that’s right: your scheduler can run on another thread as well and, in fact, one multithreaded scheduler is already available. However, you may wonder, what would be a practical way to use a multithreaded scheduler? Well, let’s spend some time on it, since what I came up with is actually quite intriguing. Caution: we are now stepping into the twilight zone.

First of all, all the features mentioned so far work on whatever scheduler you put them on. This is fundamental in the design; however, some limitations may be present due to Unity’s not-thread-safe nature. For example, the MultiThreadRunner won’t be able to detect the special Unity coroutines, like WWW, AsyncOperation or YieldInstruction, which is obvious, since they cannot run on anything other than the main thread. You may wonder what the point of using a MultiThreadRunner is, if it eventually cannot be used with Unity functions. The answer is continuation! With continuation you can achieve pretty sweet effects.

Let’s see an example, enabling the PerformanceMT GameObject from the scene included in the library code. It compares the same code running on a normal StartCoroutine (SpawnObjects) and on another thread (SpawnObjectsMT). Enable only the MonoBehaviour you want to test to compare the performance. What’s happening? Both MBs spawn 150 spheres that will move along random directions. In both cases, a slow coroutine runs. The coroutine’s goal is to compute the largest prime number smaller than a given random value between 0 and 1000. The result will be used to compute the current sphere color, which will be updated as soon as it’s ready. The following is the multithreaded version:
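The multithreaded coroutine is along these lines (a simplified sketch of the example code; FindPrimeNumber and SetColor stand for the example’s own methods, and the exact continuation extension name may differ):

```csharp
IEnumerator CalculateAndShowNumber()
{
    var random = new System.Random(); // UnityEngine.Random belongs to the main thread

    while (true)
    {
        // heavy work: runs on the MultiThreadRunner thread
        int largestPrime = FindPrimeNumber(random.Next(1000));

        // continuation: SetColor executes on the main thread scheduler,
        // where touching the Renderer is legal; meanwhile this thread
        // keeps yielding, so other tasks on it are still processed
        yield return SetColor(largestPrime)
            .RunOnSchedule(StandardSchedulers.updateScheduler);
    }
}
```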

Well, I hope it’s clear to you at a glance. First we run CalculateAndShowNumber on the multiThreadScheduler. We use the same MultiThreadRunner instance for all the game objects, because otherwise we would spawn a new thread for each sphere, and we don’t want that. One extra thread is more than enough (I will spend a few words on it in a bit).

FindPrimeNumber is supposed to be a slow function, which it is. As a matter of fact, if you run the single-threaded version (enabling the SpawnObject MonoBehaviour instead of SpawnObjectMT) you will notice that the frame rate is fairly slow. In fact, the GPU must wait for the CPU to compute the prime number.

The multithreaded version runs the main enumerator on another thread, but how can the color be set, since it’s impossible to use the Renderer component from anything other than the main thread? This is where a bit of magic happens. Returning an enumerator running on another scheduler from a task will actually continue its execution on that scheduler. You may think that at this point the thread will wait for the enumerator running on the main thread to continue. This is only partially true: differently from a Thread.Join(), the thread is not stalled, it will continue yielding, so if other tasks are running on the same thread, they will still be processed. At the end of the main thread enumerator, the execution path will return to the other thread and continue from there. Quite fascinating, I’d say.

So, we have seen some advanced applications of the TaskRunner using different threads, but since Unity will soon support C# 6, you could wonder why to use the Svelto TaskRunner instead of the .NET Task library. Well, they serve two different purposes. The .NET Task library has not been designed for applications that run heavy routines on threads. The Task library and the await/async keywords heavily exploit the thread pool to use as many threads as possible, with the condition that, most of the time, these threads are very short-lived. Basically, it’s designed to serve applications that run hundreds of short-lived and light asynchronous tasks. This is usually true when we talk about server applications.

For games, instead, what we generally need are a few threads where we can run heavy operations in parallel with the main thread, and this is what the TaskRunner has been designed for. You will also be sure that all the routines running on the same MultiThreadRunner scheduler instance won’t run into any concurrency issues. In fact, you may see every MultiThreadRunner as a fiber (if you feel brave, you can also check this very interesting video). It’s also worth noticing that the MultiThreadRunner will keep the thread alive only as long as it is actually used, effectively letting the thread pool (used underneath) manage the threads properly.

Other stuff…

To complete this article, I will spend a few words on two other minor features. As previously mentioned, the TaskRunner will identify IAbstractTask implementations as well. Usually you will need to implement the ITask interface for this to be useful. ITask is meant to transform whatever class into a task that can be executed through the TaskRunner. For example, it could be used to run web services whose results will be yielded on the main thread.

The ITaskChain interface is something still at an early stage and it could be changed or deprecated in the future. It can be useful to pass parameters through a chain of tasks, like a sort of Chain of Responsibility pattern.

The very last thing to take into consideration is the compatibility with the Unity WWW, AsyncOperation and YieldInstruction objects. As long as you are not using parallel tasks, you can yield them from your enumerator and they will work as you expect! However, you cannot use them from a parallel collection unless you wrap them in another IEnumerator; that’s why the WWWEnumerator and AsyncOperationEnumerator classes exist.

The source code and examples are available from here. Any feedback is more than welcome. Please let me know if you find any bugs!

Notes on Optimizations

The TaskRunner has been designed with optimization in mind. Without counting the multithreaded runner, extensive use of the TaskRunner will actually result in a performance increase over the normal use of StartCoroutine and the normal Update functions. MonoRunner works using one single Unity coroutine (except in the few cases where the tasks are handed over to Unity) for all the running tasks; this eliminates all the overhead needed to run hundreds of separate Updates. The TaskRunner is also very useful to run time-slicing tasks.

03/11/16 Notes

I realised that MonoRunner was behaving differently than the Unity StartCoroutine function, since the latter runs the enumerator immediately, while MonoRunner was waiting for the next available slot. I have changed its behaviour now, but this meant introducing a not so elegant thread-safe version of every Run function.

08/01/17 Notes

Some minor changes and improvement of examples

  • Added an Editor Profiler to keep track of the tasks’ performance
  • WWW is not handed to Unity anymore, always yield it through WWWEnumerator
  • To avoid confusion, IEnumerable cannot be yielded directly anymore

22/10/17 Notes

A ton of features have been introduced and I have not been diligent enough to keep track of them properly. I will list the highlights:

  • I have surely introduced some breaking changes, but they will be simple to fix
  • Massively improved the multi-threading related features; I know I need to write a good example of how to use them
  • The unit tests now run through the official Unity test runner
  • Improved examples and unit tests
  • Pausing/Resuming/Stopping/Starting ITaskRoutines now works better and makes more sense
  • The profiler can now recognize tasks running on other threads and profile them (very cool)
  • Rewritten the MultiThreadedParallelTaskCollection, which is now usable
  • yield break stops the current enumeration only; yield Break.It stops the current TaskCollection; yield Break.AndStop stops the whole ITaskRoutine and triggers the stop callback
  • Several optimizations
  • Important: the TaskCollection logic has been rewritten so that collections can be reused after the enumeration ends (they can restart). In order to clear them, the new Clear function has been added, as Reset won't clear them anymore
  • Added LateMonoRunner and UpdateMonoRunner (you can guess when they run 🙂 )
  • Added StaggeredMonoRunner, which allows spreading tasks over N frames
  • It's now possible to create several MonoRunners. In this way you can manage your own and leave the "standard" ones alone when you need to do weird stuff
  • Breaking change: the TaskCollection doesn't accept Enumerables anymore. This is because it was bad to hide the GetEnumerator() allocation, which had to happen anyway.


Svelto ECS is now production ready

Note: this is an on-going article and is updated with the new features introduced over time.

Six months have passed since my last article about Inversion of Control and Entity Component System, and meanwhile I have had enough time to test and refine my theories, which have now proven to work quite well in a production environment. During these months of putting my ideas into practice, I had the chance to consolidate my analysis and improve the code. As a result, the Svelto ECS framework is now ready.

Let's start from the basics, explaining why I use the word framework, which is very important. My previous work, Svelto IoC, cannot be defined as a framework; it's instead a tool. As a tool, it helps to perform Dependency Injection and to design your code following the Inversion of Control principle. Svelto IoC is nothing else; therefore whatever you do with it, you could actually do without it in a very similar way, without changing the design of your code much. As such, Svelto IoC doesn't dictate a way of coding and doesn't give a paradigm to follow, leaving to the coder the burden of deciding how to design the game infrastructure.

Svelto ECS won't take the place of Svelto IoC, but it can surely be used without it. My intention is to add some more features to my IoC container in the future and also write another article about how to use it properly, but as of now I would not suggest using it, unless you are aware of what IoC actually means and you are actually looking for an IoC container that integrates well into your existing design (for example, to create an MVP-based GUI framework). Without understanding the principles behind IoC, using an IoC container can result in very dangerous practices. In fact, I have seen code smells and bad design implementations that were actually aided by the use of an IoC container. These ugly results would have been much harder to achieve without it.

Svelto ECS is instead a framework, and with it you are forced to design your code following its specifications. It's the framework that dictates the way you have to write your game infrastructure. The framework is also relatively rigid, forcing the coder to follow a well-defined pattern. Using this kind of framework is important for a medium-large team, because coders have different levels of experience and it is impossible to expect everyone to be fully able to design well-structured code. Using a framework that implicitly solves the code design problem lets all the coders focus only on the implementation of algorithms, knowing with a given degree of certainty that the amount of refactoring needed in the future will be more limited than it would have been had they been left on their own.

The first error I made, with the previous implementation of this framework, was not defining the concept of Entity well. I noticed that it is actually very important to have clear in mind what the elements of the project are before writing the relative code. Entities, Components, Engines and Nodes must be properly identified in order to write code faster. An Entity is a fundamental and well-defined element in the game. The Engines must be designed to manage Entities, not Components or Nodes. Characters, enemies, bullets, weapons, obstacles, bonuses, GUI elements: they are all Entities. It's easier, and surely common, to identify an Entity with a view, but a view is not necessary for an Entity. As long as the entity is well defined and makes sense for the logic of the game, it can be managed by engines regardless of its representation.

In order to give the right importance to the concept of Entity, Nodes cannot be injected into Engines anymore until an Entity is explicitly built. An Entity is now built through the new BuildEntity function, and the same function is used to give an ID to the Entity. The ID doesn't need to be globally unique; this is a decision left to the coder. For example, as long as the IDs of all the bullets are unique, it doesn't matter if the same IDs are used for other entities managed by other engines.

The Setup

Following the example, we start by analysing the MainContext class. The concept of Composition Root, inherited from the Svelto IoC container, is still of fundamental importance for this framework to work as well. It's needed to give back to the coder the responsibility of creating objects (which is currently the greatest problem of the original Unity framework, as explained in my other articles), and that's what the Composition Root is there for. By the way, before proceeding, just a note about the terminology: while I call it the Svelto Entity Component System framework, I don't use the term System; I prefer the term Engine, but they are the same thing.

In the composition root we can create our ECS framework main Engines, Entities, Factories and Observers instances needed by the entire context and inject them around (a project can have multiple contexts and this is an important concept).

In our example, it’s clear that the amount of objects needed to run the demo is actually quite limited:

The EngineRoot is the main Svelto ECS class and it's used to manage the Engines. When seen as an IEntityFactory, it is used to build Entities. The IEntityFactory can be injected inside Engines or other Factories in order to create Entities dynamically at run-time.

It’s easy to identify the Engines that manage the Player Entity, the Player Gun Entity and the Enemies Entities. The HealthEngine is an example of generic Engine logic shared between specialized Entities. Both Enemies and Player have Health, so their health can be managed by the same engine. DamageSoundEngine is also a generic Engine and deals with the sounds due to damage inflicted or death. HudEngine and ScoreEngine manage instead the on screen FX and GUI elements.

I guess so far the code is quite clear and would invite you to learn more about it. Indeed, while I explained what an Entity is, I haven't given a clear definition of Engine, Node and Component yet.

An Entity is defined through its Implementers and Components. A Svelto ECS Component defines a shareable, contextualized collection of data and events. A Component can exclusively hold data (through properties) and events (through the Svelto ECS Dispatcher classes) and nothing else. How the data is contextualized (and then grouped in components) depends on how the entity can be seen externally. While it's important to define an Entity conceptually, an Entity does not exist as an object in the framework; it's seen externally only through its components.

The more generic the components are, the more reusable they will be. If two Entities need health, you can create one IHealthComponent, which will be implemented by two different Implementers. Other examples of generic Components are IPositionComponent, ISpeedComponent, IPhysicAttributeComponent and so on. Component interfaces can also help to reinterpret, or to access in different ways, the same data. For example, one Component could define only a getter, while another defines a setter for the same data on the same Implementer. All this helps to keep your code more focused and clean. Once the shareable data options are exhausted, you can start to declare more entity-specific components. The bigger the project becomes, the more shareable components will be found, and after a while this process will start to become quite intuitive. Don't be afraid to create components even for one property only, as long as they make sense and help to share components between entities. The more shareable components there are, the fewer of them you will need to create in the future. However, it's wise to make this process iterative: if initially you don't know how to create components, start with specialised ones and then refactor them later.
As I said, Components do not hold just data; they also hold events. This is something I have never seen before in other ECS frameworks and it's a very powerful feature. I will explain why later, but for the time being, just know that Components can also dispatch events through the Dispatcher class.
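To make the idea concrete, here is a hypothetical pair of component interfaces following the rules above; the names come from the article, but the Dispatcher signature shown is illustrative, not quoted from the library:

```csharp
using UnityEngine;

// Hypothetical generic components: data is exposed through properties only.
// Two entities (e.g. Player and Enemy) can share these interfaces while
// having completely different Implementers behind them.
public interface IHealthComponent
{
    int currentHealth { get; set; }

    // events are part of the component too; the Dispatcher signature
    // here is indicative of the idea, not the exact library API
    Dispatcher<IHealthComponent> isDamaged { get; }
}

// A second interface can expose other data from the same Implementer,
// possibly read-only, giving engines a narrower view:
public interface IPositionComponent
{
    Vector3 position { get; }
}
```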

The Implementer concept already existed in the original framework, but I didn't formalize it at the time. One main difference between Svelto ECS and other ECS frameworks is that Components are not mapped 1:1 to C# objects. Components are always known through interfaces. This is quite a powerful concept, because it allows Components to be defined by objects that I call Implementers. This doesn't just allow creating fewer objects; most importantly, it helps to share data between Components. Several Components can actually be defined by the same Implementer. Implementers can be MonoBehaviours, since Entities can be related to GameObjects, but this link is not necessary. However, in our example, you will always find that Implementers are MonoBehaviours. This fact actually solves another important problem elegantly: the communication between the Unity framework and Svelto ECS.
Most of the Unity framework feedback comes from predefined MonoBehaviour functions like OnTriggerEnter and OnTriggerExit. One of the benefits of letting Components dispatch events is that the communication between the Implementer and the Engines becomes totally natural. The Implementer, as a MonoBehaviour, dispatches an IComponent event when OnTriggerEnter is called. The Engines will then be notified and act on it. When you need to communicate events between Implementers and Engines, as opposed to between Engines and Engines, you can also use normal events.

It's fundamental to keep in mind that Implementers MUST NOT DEFINE ANY LOGIC. They must ONLY implement component interfaces, hold the relative states and act as a bridge between Unity and the ECS framework in case they happen to be MonoBehaviours as well.
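A hypothetical Implementer could then look like the following sketch; the component interfaces are the ones named earlier in the article, while the Dispatcher usage is illustrative. The point is that it only holds state and bridges a Unity callback, with zero game logic:

```csharp
using UnityEngine;

// Illustrative Implementer: one MonoBehaviour defining two components.
// It stores state and forwards Unity callbacks as component events,
// but contains no game logic whatsoever.
public class EnemyImplementorSketch : MonoBehaviour,
                                      IHealthComponent, IPositionComponent
{
    public int currentHealth { get; set; }
    public Dispatcher<IHealthComponent> isDamaged { get; private set; }
    public Vector3 position { get { return transform.position; } }

    void Awake()
    {
        // Dispatcher construction is sketched; the real API may differ
        isDamaged = new Dispatcher<IHealthComponent>(this);
    }
}
```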

So, Components are a collection of data and events, and Implementers define the components without holding any logic. Where, then, will all the game logic be? All the game logic will be written in Engines and Engines only; that's what they are there for. As a coder, you won't need to create any other class. You may need to create new abstract data types to be used through components, but you won't need to use any other pattern to handle logic. In fact, while I still prefer MVP patterns to handle GUIs, without a proper MVP framework (which doesn't exist for Unity yet) it won't make much difference to use Engines instead. Engines are basically presenters, while Nodes hold views and models.

OK, so, before proceeding further with the theory, let's go back to the code. We have easily defined Engines; it's now time to see how to build Entities. Entities are injected into the framework through Entity Descriptors. An EntityDescriptor describes entities through their Implementers and presents them to the framework through Nodes. For example, every single Enemy Entity is defined by the following EnemyEntityDescriptor:

This means that an Enemy Entity and its components, will be introduced to the framework through the nodes EnemyNode, PlayerTargetNode and HealthNode and the method BuildEntity used in this way:
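Since the original snippets are not reproduced here, the following is a hedged reconstruction of what the descriptor and the BuildEntity call look like; the exact base-class and builder APIs changed between versions of the framework, so treat the shapes as indicative only:

```csharp
// Hedged reconstruction, not verbatim source: the descriptor lists the
// Nodes through which the Enemy Entity is presented to the Engines.
class EnemyEntityDescriptor : EntityDescriptor
{
    static readonly INodeBuilder[] _nodesToBuild =
    {
        new NodeBuilder<EnemyNode>(),
        new NodeBuilder<PlayerTargetNode>(),
        new NodeBuilder<HealthNode>()
    };

    public EnemyEntityDescriptor(params object[] implementers)
        : base(_nodesToBuild, implementers) { }
}

// Building the entity: an ID plus a descriptor is all that is needed.
// The implementers are typically the MonoBehaviours on the GameObject.
_entityFactory.BuildEntity(enemyGO.GetInstanceID(),
    new EnemyEntityDescriptor(enemyGO.GetComponents<MonoBehaviour>()));
```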

While the EnemyImplementor implements the EnemyComponents, the EnemyEntityDescriptor introduces them to the Engines through the relative Enemy Nodes. Wow, it seems that it's getting complicated, but it's really not. Once you wrap your head around these concepts, you will see how easy it is to use them. You understood that Entities exist conceptually, but they are defined only through components, which are implemented through Implementers. However, you may wonder why nodes are needed as well. While Components are directly linked to Entities, Nodes are directly linked to Engines. Nodes are a way to map Components inside Engines. In this way the design of the Components is decoupled from the Engines. If we had to use Components only, without Nodes, it would often become very awkward to design them: should you design Components to be shared between Entities, or design Components to be used in specific Engines? Nodes rearrange groups of components so they can be easily used inside Engines. Nodes are mapped directly to Engines: specialized Nodes are used inside specialized Engines, generic Nodes inside generic Engines. For example, an Enemy Entity is an EnemyNode for the Enemy Engines, but it's also a PlayerTargetNode for the PlayerShootingEngine. The Entity can also be damaged, and the HealthNode will be used to let the HealthEngine handle this logic. Since the HealthEngine is a generic engine, it doesn't need to know that the Entity is actually an Enemy. Different and totally unrelated engines can see the same entity in totally different ways thanks to the nodes. The Enemy Entity will be injected into all the Engines that handle the EnemyNode, PlayerTargetNode and HealthNode Nodes. It's actually quite important to find Node names that significantly match the names of the relative Engines.

As we have seen, entities can easily be created explicitly in code by passing a specific EntityDescriptor to the BuildEntity function. However, sometimes it's more convenient to exploit the polymorphism-like feature given by the IEntityDescriptorHolder interface. Using it, you can attach the information about which EntityDescriptor to use directly on the prefab, so that the EntityDescriptor will be taken directly from the prefab instead of being explicitly specified in code. This can save a ton of code when you need to build Entities from prefabs.

So we can define multiple IEntityDescriptorHolder implementations like:

and use them to create Entities implicitly, without needing to specify the descriptor to use, since it can be fetched directly from the prefab (i.e.: playerPrefab, enemyPrefab):
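A hedged sketch of this flow (the real IEntityDescriptorHolder signature may differ; the names here only illustrate the pattern):

```csharp
using UnityEngine;

// Illustrative holder: attached to the enemy prefab, it tells the
// framework which EntityDescriptor to build for this GameObject.
public class EnemyDescriptorHolderSketch : MonoBehaviour // , IEntityDescriptorHolder
{
    public EntityDescriptor RetrieveDescriptor()
    {
        // the implementers are the MonoBehaviours found on this GameObject
        return new EnemyEntityDescriptor(GetComponents<MonoBehaviour>());
    }
}

// Implicit build in the composition root or a factory: the descriptor
// comes from the instantiated prefab, not from the calling code.
// var enemy = Object.Instantiate(enemyPrefab);
// _entityFactory.BuildEntity(enemy.GetInstanceID(),
//     enemy.GetComponent<EnemyDescriptorHolderSketch>().RetrieveDescriptor());
```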

Once Engines and Entities are built, the setup of the game is complete and the game is ready to run. The coder can now focus simply on coding the Engines, knowing that the Entities will be injected automatically through the defined nodes. If new engines are created, it's possible that new Nodes must be added, which in turn will be added to the necessary Entity Descriptors.

The Logic in the Engines

The necessity of writing most of the original boilerplate code has been successfully removed from this version of the framework. An important feature that has been added is the concept of IQueryableNodeEngine. The engines that implement this interface will find injected the property

This new object removes the need to create custom data structures to hold the nodes. Nodes can now be efficiently queried, like in this example:
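An illustrative polling loop inside an engine implementing IQueryableNodeEngine might look like the following; `nodesDB` and `QueryNodes` mirror the description above, but the member names and node fields are sketched, not quoted:

```csharp
using UnityEngine;

// Illustrative engine tick: query all EnemyNodes and act on their data.
// No custom collection is needed; the injected nodesDB does the work.
void MoveEnemies(Vector3 playerPosition)
{
    var enemies = nodesDB.QueryNodes<EnemyNode>();

    for (int i = 0; i < enemies.Count; i++)
    {
        // the engine reads and writes exclusively through components
        enemies[i].movementComponent.destination = playerPosition;
    }
}
```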

This example also shows how Engines can periodically poll or iterate data from Entities. However, it is often very useful to manage logic through events. Other ECS frameworks use Observers or very awkward "EventBus" objects to solve communication via events. The event bus is an anti-pattern, and the extensive use of observers can lead to messy and redundant code. This is why I find the idea of dispatching events through Components quite ingenious. Most of the time, Engines need to communicate Entity-based events. If the Player hits an Enemy Entity, the PlayerGunShootingEngine needs to communicate to the HealthEngine that a specific Entity has been damaged. Doing this communication through the Entity Components is the most natural, and therefore least painful, way to do it:

The example above dispatches an isDamaged or isDead event through the Component. Other Engines will listen to the same event, through the same Component; therefore they will get the ID of the Entity in their callbacks, which can be used to query other nodes, like in this isDead listener:
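In sketch form, the dispatch and the listener pair up like this; the engine and node names follow the article, but the exact Dispatch/callback signatures are illustrative:

```csharp
// Illustrative dispatch from PlayerGunShootingEngine: the event travels
// through the damaged entity's own component, not through engine events.
void OnEnemyHit(EnemyNode enemyNode, int damage)
{
    enemyNode.healthComponent.currentHealth -= damage;
    enemyNode.healthComponent.isDamaged.Dispatch();
}

// Illustrative listener in another engine (e.g. DamageSoundEngine):
// the callback receives the entity ID and uses it to query its own node.
void OnDamaged(int entityID)
{
    var soundNode = nodesDB.QueryNode<DamageSoundNode>(entityID);
    soundNode.audioComponent.PlayDamage();
}
```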

The unique feature of being able to receive the added and removed nodes in Engines, through the interface functions Add and Remove, has been maintained in this version of the framework. This functionality is still of fundamental importance, both for filling custom data structures that hold nodes and for adding/removing listeners to/from component events. If the Add and Remove functions are used, the user must never assume when a node will be added to the engine and write code accordingly.


IRemoveEntityComponent needs a dedicated paragraph. This Component is a special Component partially managed by the framework. The user still needs to implement it through an implementer, but all that needs to be implemented is:

The removeEntity action is automatically set by the framework, and once called, it will remove from all the engines all the nodes related to the entity created by the BuildEntity function. This is the only way to trigger the Remove engine method. A standard example is to use it in a MonoBehaviour implementer like this:

but it can also be used in this way:

The removeEntity delegate will be injected automatically by the framework, so you will find it ready to use.
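Putting the pieces above together, a hedged sketch of the whole pattern; the interface body follows the description, while the surrounding usage is illustrative:

```csharp
using System;
using UnityEngine;

// All the user has to implement: a delegate slot the framework fills in.
public interface IRemoveEntityComponent
{
    Action removeEntity { get; set; }
}

// 1) Typical MonoBehaviour implementer: removal follows Unity's lifecycle.
public class EnemyRemoverSketch : MonoBehaviour, IRemoveEntityComponent
{
    public Action removeEntity { get; set; }

    void OnDestroy()
    {
        // removes every node of this entity from all the engines
        if (removeEntity != null) removeEntity();
    }
}

// 2) Or invoked explicitly from engine logic that holds the component:
// node.removeComponent.removeEntity();
```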


I understand that some time is needed to absorb all of this, and I understand that it's a radically different way to think of entities than what Unity has taught us so far. However, there are numerous relevant benefits to using this framework besides the ones listed so far. Engines achieve a very important goal: perfect encapsulation. The encapsulation is not even potentially broken by events, because the event communication happens through injected components, not through engine events. Engines must never have public members and must never be injected anywhere; they don't need to be! This allows a very modular and elegant design. Engines can be plugged in and out, and refactoring can happen without affecting other code, since the coupling between objects is minimal.

Engines follow the Single Responsibility Principle. Whether they are generic or specialized, they must focus on one responsibility only. The smaller they are, the better. Don't be afraid to create Engines, even if they handle one node only. Engines are the only place to write logic, so they must be used for everything. Engines can contain states, but these can never be entity-related states, only engine-related states.

Once the concepts explained above are well understood, the problem of designing code will become secondary. The coder can focus on implementing the code needed to run the game rather than on creating a maintainable infrastructure: the framework itself will force the creation of maintainable and modular code. Concepts like "Managers", "Containers" and Singletons will be just a memory of the past.

Some rules to remember:

  • Nodes must be defined by the Engines themselves; this means that every engine comes with a set of node(s). Reusing nodes often smells
  • Engines must never use nodes defined by other engines, unless they are defined in the same specialized namespace. Therefore an EnemyEngine will never use a HudNode.
  • Generic Engines define generic Nodes, and generic Engines NEVER use specialized nodes. Using specialized nodes inside generic engines is as wrong as down-casting pointers in C++

Please feel free to experiment with it and give me some feedback.

Notes from 21/01/2017 Update:

Svelto ECS has been successfully used at Freejam for a while now, on multiple projects. Hence, I have had the possibility to analyse some weaknesses that were leading to bad practices, and to fix them. Let's talk about them:

  1. First of all, my apologies for the mass renames you will be forced to undergo, but I renamed several classes and namespaces.
  2. The Dispatcher class was being severely abused, therefore I decided to take a step back and deprecate it. This class is not part of the framework anymore; it has been completely replaced by DispatcherOnSet and DispatcherOnChange. The rationale behind this is quite simple: Components must hold ONLY data. The difference is now in how this data is retrieved. It can be retrieved through polling (using the get property) or through pushing (using events). The event-based functionality is still there, but it must be approached differently. Events can't be triggered for the exclusive sake of communicating with another Engine, but always as a consequence of data being set or changed. In this way the abuse has been noticeably reduced, forcing the user to think about what the events are meant to be about. The class now exposes a get property for the value as well.
  3. The new Sequencer class has been introduced to take care of the cases where pushing data doesn't fit well for communication between Engines. It's also needed to easily create sequences of steps between Engines. You will notice that, as a consequence of a specific event, a chain of engine calls can be triggered. This chain is not simple to follow through the use of simple events, and it's also not simple to branch it or create loops. This is what the Sequencer is for:

This example covers all the basics except loops. To create a loop, it is enough to create a new step whose target is an engine already present as the starting point of an existing step.
A Sequencer should be created only inside the context or inside factories. In this way it will be simple to understand the logic of a sequence of events without investigating inside the Engines. The sequence is then passed into the engines, as already happens with observables, and used like this:
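As a hedged sketch of what a sequence definition inside the composition root can look like; the Steps/To structure below is indicative of the Sequencer idea, not a verbatim reproduction of the library API:

```csharp
// Hedged sketch of a sequence defined in the composition root.
var damageSequence = new Sequencer();

damageSequence.SetSequence(
    new Steps
    {
        // from the shooting engine, the flow moves to the health engine...
        { _playerGunShootingEngine, new To { _healthEngine } },
        // ...and from there it branches to the sound and HUD engines
        { _healthEngine, new To { _damageSoundEngine, _hudEngine } }
    });

// Inside an engine, the sequence is advanced explicitly:
// _damageSequence.Next(this, damageInfo);
```

Because the whole chain is declared in one place, the order of engine calls can be read without digging through the engines themselves.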

  4. The concept of EntityGroup has been added. This is a bit of an abstract concept, but useful in some specific cases. It allows defining an Entity as a group of Entities. It works exactly like a normal Entity, but the use case is different. It must not be seen as a single Entity, but as an Entity that groups the information of a set of Entities. Like a normal Entity, it is made out of nodes and implementers; however, the nodes are stored in a separate set inside the Nodes database. In this way, you can have an EntityGroup called, say, EnemiesOverMind (sorry, I just came up with the example while typing) that has an OverMindNode implemented through an implementer that is NOT a MonoBehaviour. The same node is then used in the normal EnemyEntity; however, the same implementer used for the group will be used for the EnemyEntity too (through the extraimplementers parameter of the EntityDescriptor constructor). In this way, effectively, the EnemiesOverMind and the EnemyEntity share the same OverMindNode through the same implementer. Doing so, you can now create the OverMindEngine, which will query only the OverMindNode present in the EntityGroup database. Changing a value in this node will change the value of all the nodes injected in the normal Enemy Engines. This is a simple way to handle data shared between entities without being forced to iterate over all the entities that share the same data.
  5. The IComponent interface was never used by the framework, so I removed it; you can keep on using it for the sake of optimizing the GetComponents call in BuildEntity.
  6. Added a new GenericEntityDescriptor class; it's now possible to create Descriptors without building a new class every time (it depends on the number of nodes used).
  7. Since the Tickable system has been deprecated, the TaskRunner library should be used to run loops. The TaskRunner comes with a profiler window that can be used to check how long tasks take to run. This has been copied and adapted from the Entitas source code. You can enable it from the menu Tasks -> Enable Profiler. Similarly, you can enable an Engine Profiler from Engine -> Enable Profiler to profile adding and removing nodes into Engines. It will look more or less like this:

In future, I may implement some of the following features (feedback is appreciated):

  • Entity pooling, so that a removed entity can be reused
  • Some standard solution to handle input events in engines
  • Using GC handles to keep hold of the Node references; all these references around can make the garbage collection procedure slower

Note, if you are new to the ECS design and you wonder why it could be useful, you should read my previous articles:

C# Murmur 3 hash anyone?

If you need a fast and reliable (read: very good at avoiding collisions) non-cryptographic hashing algorithm, look no further: Murmur 3 is here to the rescue.

Take it from here, it’s compatible with Unity3D.

Directly translated from the C++ version.

More info on Wikipedia and in this definitive answer on Stack Overflow.
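For reference, this is the core of the 32-bit x86 variant, written out from the public algorithm description (not the linked repository's exact code):

```csharp
using System;

// MurmurHash3, x86 32-bit variant, from the public algorithm description.
// Assumes a little-endian platform, since BitConverter reads the blocks.
public static class Murmur3
{
    public static uint Hash32(byte[] data, uint seed)
    {
        uint h = seed;
        int len = data.Length;

        // body: mix every complete 4-byte block into the hash
        for (int i = 0; i + 4 <= len; i += 4)
        {
            uint k = BitConverter.ToUInt32(data, i);
            k *= 0xcc9e2d51; k = (k << 15) | (k >> 17); k *= 0x1b873593;
            h ^= k; h = (h << 13) | (h >> 19); h = h * 5 + 0xe6546b64;
        }

        // tail: mix the remaining 1-3 bytes, if any
        uint t = 0;
        for (int i = len & ~3; i < len; i++)
            t |= (uint)data[i] << (8 * (i & 3));
        t *= 0xcc9e2d51; t = (t << 15) | (t >> 17); t *= 0x1b873593;
        h ^= t;

        // finalization: avalanche the bits
        h ^= (uint)len;
        h ^= h >> 16; h *= 0x85ebca6b;
        h ^= h >> 13; h *= 0xc2b2ae35;
        h ^= h >> 16;
        return h;
    }
}
```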

The truth behind Inversion of Control – Part V – Entity Component System design to achieve true Inversion of Flow Control

At this point, I can imagine someone wondering whether I still recommend using an IoC container. IoC containers are handy and quite powerful with Unity 3D, but they are dangerous. If used without Inversion of Control in mind, they can lead to messy code. Let me explain why:

When an IoC container is used with a standard procedural approach, thus without inversion of control in mind, dependencies are injected in order to be used directly by the specialised classes. Without inversion, there is no limit to the number of dependencies that can be injected, and therefore it doesn't come naturally to apply the Single Responsibility Principle to the code. Why should I split the class I am currently working on? It's just an extra dependency injected and a few more lines of code, what harm can it do? Or also: should I inject this dependency just to check its state inside my class? It seems the simplest thing to do, otherwise I would need to spend time refactoring my classes… Similar reasoning, plus the proverbial coder laziness, usually results in very awkward code. I have seen classes with more than ten dependencies injected, without anyone raising any sort of doubt about the legitimacy of such a shame.

Coders need constraints, and IoC containers unluckily don't give any. In this sense, using an IoC container is actually worse than manual injection, because manual injection would limit the problem, due to the effort of passing so many parameters by constructors or setters.
Dependencies end up being injected as a sort of global variables. Binding a type to a single instance becomes much more common than binding an interface to its implementation.

One way to limit this problem is to use multiple Composition Roots, according to the application contexts. It's also possible to have one Composition Root and multiple containers. In this way it would be possible to segregate classes and specify their scope, reflecting the internal layering of the application. An observer wouldn't be injectable everywhere, but only in the classes inside the object graph of the specific container. In this sense, hierarchical containers could be quite handy. I should definitely write an article on the best practices of using an IoC container, with more examples, but right now let's see how dangerous an IoC container can become without Inversion of Flow Control.

The following example is a classic class that doesn't have a precise context or responsibility. It very likely started with a simple functionality in mind but, being used as a sort of "Utility" class, there is no limit to the number of public functions it can have. Consequently, there is no limit to the number of dependencies it can have injected. "Look, this class already has all the information we need to add this functionality" … "OK, let's add it" … "oh wait, this function actually needs these other dependencies to work; all right then, let's add the new dependencies". This example is the worst of the lot, and unluckily it is pretty common when the concept of Inversion of Control is unknown or unclear.

My definition of “Utility” class is simple: every class that exposes many public functions ends up being a Utility class, used in several places. Utility classes should be static and stateless. Stateful Utility classes are always a design error.

The following class uses injection without Inversion of Flow Control in mind. It's a utility class and exposes way too many public functions. Public functions reflect the class "responsibilities"; thus this class has way too many responsibilities.

Using events is a good way to implement inversion of flow control. The class reacts to events fired from the injected classes; injecting dependencies in order to register to their events is a reasonable use of injection. What we need to achieve is encapsulation and, of course, public functions break encapsulation. What's the problem with breaking encapsulation? The biggest problem is that your class is not in control of the flow anymore. The class won't have any clue when and where its public functions will be called, and therefore cannot predict what will happen. Without being able to control the flow, it will be very simple to break the logic of the class. A classic example is a public function that assumes that some class states, loaded asynchronously, are actually set before the public function is used. Adding checks in this kind of scenario will quickly turn into some horrible and unmanageable code.

Events help in this scenario; however, we must be very careful when events are used. It's very important not to assume, when the events are created, what will listen to them. Obviously events must be generic and have generic names. If events become specific, assuming what will happen when they are dispatched, then there will be no difference compared to using public functions. Using events, the class will probably look like this:
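A hypothetical shape for such a class; the class name and internals are illustrative, only the event-naming idea comes from the article:

```csharp
using System;

// Illustrative: the class exposes generic events instead of being driven
// by public functions. It stays in control of its own flow and merely
// signals what happened, taking no responsibility for the consequences.
public class MachineManagerSketch
{
    public event Action<int> OnMachineDestroyed;

    void DestroyMachine(int machineID)
    {
        // ...internal, private logic runs first...

        // then the signal is raised; anything may be listening
        if (OnMachineDestroyed != null)
            OnMachineDestroyed(machineID);
    }
}
```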

A name like OnMachineDestroyed communicates clearly that the event will be fired when the machine is destroyed, and anything can listen to this event. Control is then inverted, as the class doesn't command a specific dependency or the flow of execution; it just triggers a signal without taking any responsibility for the consequences.

Finally, Inversion of Control can be optimally achieved through properly designed code. I found the Template Pattern to be a good companion. The following class is not injected into any class, but is instead registered inside a "manager" compatible with the template interface implemented. The manager class will hold a reference to this class and will use the object through the interface IFoo. The key is that the manager won't have any clue about the IFoo implementation, following the Liskov Substitution Principle (the L in the SOLID principles).

Template Pattern is a common method normally used by frameworks to take control of the application code.

Programmers tend to code according to their own design knowledge, based on their own experience. If a tool is too flexible, it can be used in ways that cause more harm than benefits. That’s why I love rigid frameworks: they must dictate the best way to be used, without being open to interpretation. With this in mind, I started to research possible alternatives to IoC containers and, as discussed in my Entity Component System architecture article, I realised that a proper ECS implementation for Unity could be what I was looking for.

In order to prove that an ECS solution could be viable in Unity, I had to spend several hours during my weekends to create a new framework and an example with it. The framework and the example can be found separately at these addresses:

Download both repositories and sort out the right folders for Svelto-ECS and Svelto-ECS-Example. Open the scene Svelto-ECS-Example\Scenes\Level 01.unity to run the Demo.

The example is the original Unity 3D survival shooter, rewritten and adapted to my ECS framework. However this framework is still not production ready: it misses some fundamental functions and it’s still rough (note: this is not true anymore, the framework is now production ready). Nevertheless, it has been really useful to prove and demonstrate my theories.

The intention was to create a framework able to push the user to write highly cohesive, loosely coupled code[1] with the Open/Closed and Single Responsibility Principles in mind. Following these principles naturally leads to fewer injected dependencies and, as will be shown with the simple example I wrote, the use of dependency injection will be limited to the scope of showing its compatibility with the framework.

Of course it must be clear that the use of such a framework makes sense only when it’s applied to a big project maintained by several coders. The survival demo itself is too simple to really appreciate the potential of the idea. In fact, for this specific example, I could say that using my ECS framework is borderline over-engineering. However I do suggest experimenting with the library anyway, so that you can understand its potential.

A real Entity Component System in Unity

Note: the following code examples are not updated to the new version of the library.

My implementation of ECS is very similar to many out there, although, after studying the most famous frameworks, I noticed that there isn’t a well-defined way to let systems communicate with each other, a problem that I solved in a novel way. But let’s proceed step by step.

I have already talked about what an ECS is, so let’s see how we can make it fit in Unity. Let’s start from a list of rules created to use my framework properly:

  1. All the logic must always be written inside Systems.
  2. Components don’t hold logic, they are just data wrappers (Value Objects).
  3. Components can’t have methods, only get and set properties.
  4. Systems do not hold or cache entities/components data or state. They can hold system states (not really enforced, but it’s a rule).
  5. Each System has one responsibility only (not really enforced, but it’s a rule).
  6. Systems cannot be injected.
  7. Systems communicate between each other mainly through components, but also through observers and similar patterns.
  8. Systems can have injected dependencies.
  9. Systems do not know each other.
  10. Systems should be defined inside sensible namespaces according to their context.
  11. Entities are not defined by objects, they are just IDs.

First we need a composition root. We already know how to effectively introduce a composition root in Unity from my IoC container example and, since the composition root is independent of the container itself, we can surely re-use the same mechanism without using any IoC container. The composition root becomes the place where the systems are defined. In my framework, Systems are actually called Engines and they all implement the IEngine interface. For the example, most of the time, the INodeEngine specialization is used instead.

Following the code, it makes sense to start from our root: inside MainContext.cs the setup of the engines is found:
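The setup looked roughly along these lines (a sketch: the exact API of that early version of the library may differ, and the engine names are only inspired by the demo):

```csharp
public class MainContext
{
    void SetupEngines()
    {
        // enginesRoot is the container of all the IEngine instances
        var enginesRoot = new EnginesRoot();

        enginesRoot.AddEngine(new PlayerMovementEngine());
        enginesRoot.AddEngine(new EnemyAttackEngine());
        enginesRoot.AddEngine(new HealthEngine());
        enginesRoot.AddEngine(new HudEngine());

        // tickEngine is not part of the ECS framework: it simply calls
        // Tick() on every registered class that implements ITickable
        var tickEngine = new UnityTicker();
    }
}
```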

enginesRoot is the new IEngine container, while tickEngine is not strictly part of the ECS framework: it’s used to add the Tick functionality to whatever class implements ITickable.

Entity Creation in Svelto ECS

Engines are a sort of “managers” meant to solve one single problem on a set of components grouped by entity nodes. An Entity can be defined by several components and an engine is constantly aware of specific components from all the entities in game. Thus an engine must be able to select the ones it’s interested in.
Other frameworks implement a query mechanism that lets the systems pick up the components to use, but as far as I understood this can limit the design of systems and has possible performance penalties (note: a fast query system has been introduced in the final version). My method is more flexible, but has the disadvantage of generating some boilerplate code. With this method, each INodeEngine can accept nodes (not components, I’ll talk about them later) through the following interface:
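In the version of the library this article refers to, that interface looked roughly like this (reconstructed from memory, so treat it as a sketch):

```csharp
public interface IEngine
{
}

// An engine receives the nodes it declared interest in,
// as entities are built and destroyed.
public interface INodeEngine<in TNodeType> : IEngine where TNodeType : class
{
    void Add(TNodeType node);
    void Remove(TNodeType node);
}
```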

Since the library is not complete yet, at the moment entities can be created only through GameObjects, and components through MonoBehaviours, but this is a limitation I will eventually remove. In fact, it would be a mistake to associate GameObjects with entities and MonoBehaviours with components. What I wrote is just a bridge to make the transition painless and natural.

Components are related to the design of the Entity itself and must not be created with the Engine logic in mind. They essentially are units of data that should be shareable between entities. The more general the design of a component, the more reusable it is. For example, the following components are entity-agnostic and reused among different entity types in our example:
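For instance, entity-agnostic components could look like the following (the names are illustrative, only inspired by the demo):

```csharp
using System;
using UnityEngine;

// Small, shareable units of data, defined through interfaces.
// Nothing here is specific to an enemy or to the player.
public interface IHealthComponent
{
    int currentHealth { get; set; }
}

public interface IPositionComponent
{
    Vector3 position { get; }
}

public interface IDamageEventComponent
{
    event Action<int> damageReceived;
}
```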

The coder must face an important decision when it’s time to write components. Should several small reusable components (often holding just one piece of data) be used, or fewer, non-reusable “fatter” components? I faced this decision myself several times during the creation of the example and, in its limited scope, I eventually found out that shareable components are better in the long term, since they have more chances to be reused (therefore avoiding writing new components for specific entities).

As you can see, components are defined through interfaces. This could seem like a weird decision, but it is instead very practical when it’s time to define components through MonoBehaviours. I often notice that ECS frameworks have the drawback of forcing the coder to dynamically allocate several objects when a new entity is created. While object pooling could help, the run-time creation of new entities always has a negative impact. I then realised that, as long as the components are defined through interfaces, they can all be implemented inside bigger objects that are never directly used. With Unity, this also comes in handy, since a few MonoBehaviours that implement multiple interfaces would do the job.
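Such a MonoBehaviour could be sketched like this (assuming hypothetical IHealthComponent and IPositionComponent interfaces; none of these names come from the actual demo):

```csharp
using UnityEngine;

// One MonoBehaviour implementing several component interfaces at once:
// a single allocation serves multiple components.
public class EnemyImplementor : MonoBehaviour, IHealthComponent, IPositionComponent
{
    [SerializeField] int _initialHealth = 100;
    int _currentHealth;

    void Awake() { _currentHealth = _initialHealth; }

    // Explicit implementations make clear which member belongs to which interface.
    int IHealthComponent.currentHealth
    {
        get { return _currentHealth; }
        set { _currentHealth = value; }
    }

    Vector3 IPositionComponent.position
    {
        get { return transform.position; }
    }
}
```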

The MonoBehaviour above is defined in my framework as an Implementor, as it implements the component interfaces but doesn’t have any other use, except creating a bridge between the Unity logic and the Engines logic. The explicit implementation of the interfaces is very convenient to remember which method implements which interface (note: since I wrote this article, I found out that explicitly implemented methods are actually always compiled as virtual functions, therefore they introduce a performance penalty. If performance is critical, ignore my suggestion).

The framework will actually pass the reference of the MonoBehaviour to the Engines through the nodes, but the engines will know the components only through their interfaces. The Engines are not aware that the object implementing the component interfaces is actually a MonoBehaviour, and it would be wrong otherwise.

Nodes, what are they for?

While components are designed to work with entities, nodes are defined by the engines. In this way the coder won’t be confused about how to design a component: should the component follow the nature of the Entity or fit the necessity of the Engine? This problem initially led me to very awkward code when I wrote the first draft of the demo. Separating the two concepts helped to define a simple, but solid, structure for the code.

With each Engine comes a relative set of nodes. It’s the Engine that defines its own Nodes, and Nodes shouldn’t be shared between Engines. An Engine must be designed as an independent and separated piece of code. However, I eventually decided to relax this rule when I found out that the ECS framework is also useful to separate the class logic into several levels of abstraction. Engines can be grouped into namespaces, according to what they do. For example, all the engines that manage enemy components are under the EnemyEngines namespace and all the engines that manage player components are under the PlayerEngines namespace. Likewise, all the nodes usable by enemy engines are defined under the EnemyEngines namespace and all the nodes usable by player engines are defined under the PlayerEngines namespace. The rule is that nodes defined in a namespace can be used only by the engines in that namespace, and the namespace itself will help the coder not to mix up classes from different environments.

Namespaces are logically layered: while enemy engines and player engines are relatively specific, since they can operate only on enemies and the player respectively, the HealthEngine is instead abstracted and can operate on both enemies and players. However, since the HealthEngine is neither in the Player nor in the Enemy namespace, it can know the entity components only through its own node, the HealthNode. Generic and reusable nodes can only belong to generic and reusable engines, whose logic can be applied to Entities regardless of their specialization. Nodes are simply new objects that group entity components in a way that is suitable to the engine logic.
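A node like the HealthNode is then little more than a grouping of component interfaces, shaped for the engine that consumes it. As a sketch (the base class name and the component interfaces follow the old library conventions and may differ from the real code):

```csharp
// Groups the components the HealthEngine needs, regardless of whether
// the entity is an enemy or the player.
public class HealthNode : NodeWithID
{
    public IHealthComponent      healthComponent;
    public IDamageEventComponent damageEventComponent;
}
```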

From a forum I also read this definition, which fits well:

It is common for “systems” (or “engines” as they are known here) to need to access multiple components per entity. For example an EnemyEngine might need to access a Health component, an Ammo component and a Positioning component. This creates at least a 1-to-many relationship between Engines and Components. (In fact once you share components among several engines that turns into a many-to-many relationship – but only the 1-to-many side of it need be modeled).

Their solution to this problem is the Nodes concept. They represent that 1-to-many relationship as a Node object and that object might hold several components within it. Then they simply have a 1-to-1 relationship between Engines and Nodes. It also means an Engine need only manage a single object which fully embodies the relevant aspects of an entity.
Going further, I would imagine that a Node could slightly abstract over the components if it so wished. For example an EnemyNode might have a shoot() method which deducts from the Ammo component and uses the Positioning component to decide where to spawn a bullet entity. This provides a higher-level and more engine-specific API for the EnemyEngine to use, rather than it having to dig in and drive the components directly. Now the EnemyEngine can concentrate on just the AI logic itself and the menial details of keeping components in-sync is offloaded to the Node concept.

It is possible to create entities through scene GameObjects; in this case a NodeHolder MonoBehaviour must be defined (note: the syntax has changed in later versions, it’s now called EntityDescriptorHolder). The GameObject can be created either statically, in the scene, as a child of the Context GameObject, or dynamically, using the GameObjectFactory.

Components and MonoBehaviours

Components cannot hold logic and, as I explained, our components are NOT MonoBehaviours, but can be implemented through MonoBehaviours. This is important not just to save dynamic allocations, but also to not lose the functionalities of the native Unity framework, which all work through MonoBehaviours.

Since at the end of the day it’s not very convenient to write a Unity game without GameObjects and MonoBehaviours, I need the framework to coexist with and improve the Unity functionalities, rather than fight them, which would just produce inefficient code.
This is the reason why I decided not to change the original implementation of the enemies’ player detection. In the original survival demo, the enemies detect if a player is in range through the OnTriggerEnter and OnTriggerExit MonoBehaviour functions.

The EnemyTrigger MonoBehaviour still uses these functions, but all it does is fire an entityInRange event. Now pay extreme attention: the object will actually NOT set the playerInRange value, it will just fire an event. The MonoBehaviour cannot decide if the player is actually in range or not; this decision must be delegated to the Engine. Sounds strange? Maybe, but if you think about it, you will realise that it actually makes sense. All the logic must be executed by the engines only.
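The resulting MonoBehaviour could be sketched like this (the event signature and the assumption that IEnemyTriggerComponent declares it are mine, not the demo’s exact code):

```csharp
using System;
using UnityEngine;

public class EnemyTrigger : MonoBehaviour, IEnemyTriggerComponent
{
    // true when something entered the trigger, false when it exited
    public event Action<bool> entityInRange;

    void OnTriggerEnter(Collider other)
    {
        // The MonoBehaviour does NOT decide that the player is in range,
        // it just signals that something entered the trigger volume.
        if (entityInRange != null) entityInRange(true);
    }

    void OnTriggerExit(Collider other)
    {
        if (entityInRange != null) entityInRange(false);
    }
}
```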

These values will then be used by the engine responsible for the enemy attack, and the job of the MonoBehaviour simply ends here.

Communication in Svelto ECS

I understand that all these concepts are not easy to absorb, that’s why I wrote four articles before to introduce this one. I also understand that if you haven’t worked on large projects, it’s hard to see the benefits of this approach. That’s why I invite you to remember the intrinsic limits of the Unity Framework and then the pitfalls of the IoC containers that try to overcome them. Of course using such a different approach makes sense if eventually everything becomes easier to manage and I believe this is the case.

One of the most problematic issues that the framework solves brilliantly is the communication between entities. Engines are natural entity mediators: they know all the relevant components of all the entities currently alive, therefore they can take decisions being aware of the global situation, instead of being limited to a single entity scope.

Let’s take into consideration the EnemyAttackEngine class, which is the engine that uses the IEnemyTriggerComponent object. It relies on the IEnemyTriggerComponent event to know if a player is potentially in range or not, but I decided to do this just for the sake of showing how the ECS framework can interact with the Unity framework. I can also guess that OnTriggerEnter performs better than C# code. What I could have simply done instead is store the enemy transform components in a List and iterate over them every frame, through the Tick function, to know if the player is in range or not. The engine could have set the component playerInRange value without waiting for the MonoBehaviour event.

Why do components have events?

Interesting question. Many ECS frameworks do not actually have this concept: events are usually sent through external observers or an EventBus. The decision I took about using events inside components is possibly one of the most important. In this simple demo I actually use an observer once, just to show that it’s possible to use them, but otherwise I wouldn’t have needed it. All the communication between entities and engines, and among engines, can usually be performed through component events alone.

It’s very important to design these events properly though. Engines know nothing about other engines. They cannot assume/operate/be aware of anything outside their scope. This is how we can have low-coupled, high-cohesive code. Practically speaking, this means that an Engine cannot fire a component event that semantically doesn’t make sense inside the engine itself. Let’s take into consideration the HudEngine class. The HudEngine has the responsibility to update the HUD (a little broad as a responsibility, but it’s OK for such a simple project). One of the HUD functions is to flash when the player is damaged.

The DamageNode uses the IDamageEventComponent to know when the player is damaged. Obviously the IDamageEventComponent has a generic damageReceived event; it wouldn’t make sense otherwise. However, let’s say that, in a moment of confusion, thinking about the HudEngine functionalities, I decided to have a flashHudEvent instead. This would have been wrong on many levels:

  • First, every time an event is used as a substitute for a public function (as explained at the beginning of this article), it means there is a code design flaw. Events always follow the inversion of flow control and are never designed to call a specific function.
  • I would have needed a new component that, from the Entity point of view, wouldn’t have made any sense.
  • The HealthEngine, which decides when an entity is damaged, would have known about the concept of a flashing HUD (waaaaat?)

Sometimes this reasoning is not so obvious; that’s why a self-contained engine, not aware of the outside world, will force the coder to think twice about the code design (which was my intent all along).
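To make the correct design concrete, the HudEngine side could be sketched as follows (besides HudEngine, DamageNode and damageReceived, which appear in the text, every name here is an assumption):

```csharp
public class HudEngine : INodeEngine<DamageNode>
{
    public void Add(DamageNode node)
    {
        // The event is generic ("damage was received"); flashing the HUD
        // is a consequence decided here, inside the HudEngine scope.
        node.damageEventComponent.damageReceived += OnDamageReceived;
    }

    public void Remove(DamageNode node)
    {
        node.damageEventComponent.damageReceived -= OnDamageReceived;
    }

    void OnDamageReceived(int damage)
    {
        FlashDamageOverlay();
    }

    void FlashDamageOverlay() { /* tint and fade the HUD damage image */ }
}
```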

Putting it all together

The framework is designed to push the coder to write highly cohesive, loosely coupled code based on engines that follow the Single Responsibility and the Open/Closed principles. In the demo the number of files is multiplied compared to the original version for two fundamental reasons: first, the old classes were not following the SRP, so the number of files naturally increased; second, due to the current state of the framework, annoying boilerplate code is needed. Any suggestion and feedback is welcome!

Note: Svelto.ECS is now production ready, read more about it here.




I strongly suggest to read all my articles on the topic:

How to perfectly synchronize workspaces with older versions of Perforce

The latest versions of Perforce finally support the long underestimated and useful option p4 clean. Using this command it is possible to delete all the local files that are not present in the depot and sync all the files that have been modified locally; to be more precise:

  1. Files present in the workspace, but missing from the depot, are deleted from the workspace.
  2. Files present in the depot, but missing from the workspace, are added to the workspace at the last synced revision.
  3. Files modified in the workspace that have not been checked in are restored to the last version synced from the depot.

This is very useful when it’s time to use Perforce on a build machine, since it’s of fundamental importance to be sure that the workspace totally matches the depot.

Those who, like me, are forced to use a previous version, though, are destined to swear ad infinitum. However, sick of the problem, I did some research and found some articles with very good solutions. After I put them together, I just wanted to share them with you.

The first part checks for all the files that are different in the workspace and, thanks to the piping facility, force syncs them. I found out that it is safer to call the commands separately instead of using the options -sd and -se on the same diff.
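The commands in question were along these lines (a sketch of the standard p4 recipe; run them from the workspace root and consider previewing first):

```sh
# Force-sync unopened files that differ from their depot revision (-se)
p4 diff -se //... | p4 -x - sync -f

# Force-sync unopened files that are missing from the workspace (-sd)
p4 diff -sd //... | p4 -x - sync -f
```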

The second part uses a batch to delete all the files present in the workspace but not in the depot, exploiting the reconcile command.
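Something along these lines (a sketch: reconcile -n -a previews the files that exist only locally, then the script deletes them; paths containing ‘#’ would need extra care):

```sh
# -n = preview only, -a = files to add (local-only files), -l = local path syntax
p4 reconcile -n -a -l //... | sed 's/#.*//' | while read -r f; do
    rm -f "$f"
done
```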

As an extra bonus, I add the following one:
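The bonus command was in the spirit of the following sketch:

```sh
# Revert opened files that are identical to their depot revision (-sr)
p4 diff -sr //... | p4 -x - revert
```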

I don’t understand why the revert unchanged files command often doesn’t actually revert identical files. This command will succeed where the standard command fails.

The truth behind Inversion of Control – Part IV – Dependency Inversion Principle

The Dependency Inversion Principle is part of the SOLID principles. If you want a formal definition of DIP, please read the articles written by Martin[1] and Schuchert[2].

We explained that, in order to invert the control of the flow, our specialized code must never directly call methods of more abstracted classes. The methods of our framework’s classes are not explicitly used; it is the framework that controls the flow of our less abstracted code. This is also (sarcastically) called the Hollywood Principle: “don’t call us, we’ll call you”.

Although the framework code must take control of less abstracted code, it would not make any sense to couple our framework with less abstracted implementations. A generic framework doesn’t have the faintest idea of what our game needs to do and, therefore, wouldn’t understand anything declared outside its scope.

Hence, the only way our framework can use objects defined in the game layer, is through the use of interfaces, but this is not enough. The framework cannot know interfaces defined in a less abstracted layer of our application, this would not make any sense as well.

The Dependency Inversion Principle introduces a new rule: our less abstracted objects must implement interfaces declared in the higher abstracted layers. In other words, the framework layer defines the interfaces that the game entities must implement.

In a practical example, RendererSystem handles a list of IRendering “Nodes”. The IRendering node is an interface that declares, as properties, all the components needed to render the Entities, such as GetWorldMatrix, GetMaterial and so on. Both the RendererSystem class and the IRendering interface are declared inside the framework layer. Our specialised code needs to implement IRendering in order to be usable by the framework.
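A hedged sketch of that arrangement (GetWorldMatrix and GetMaterial are taken from the text; the rest of the members are illustrative):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Both declared in the framework layer:
public interface IRendering
{
    Matrix4x4 GetWorldMatrix { get; }
    Material  GetMaterial    { get; }
}

public class RendererSystem
{
    readonly List<IRendering> _nodes = new List<IRendering>();

    public void Register(IRendering node) { _nodes.Add(node); }

    public void Render()
    {
        // The framework drives the flow and knows the game objects
        // only through the interface it declared itself.
        for (int i = 0; i < _nodes.Count; i++)
            Draw(_nodes[i].GetWorldMatrix, _nodes[i].GetMaterial);
    }

    void Draw(Matrix4x4 world, Material material) { /* submit the draw call */ }
}
```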

Designing layered code

So far I used the word “framework” to identify the most abstracted code and “game” to identify the least abstracted code. However “framework” and “game” don’t mean much: limiting our layers to just “the game layer” and “the framework layer” would be a mistake. Let’s say we have systems that handle very generic problems that can be found in every game, like the rendering of the entities, and we want to enclose this layer into a namespace. We have defined a layer that can even be compiled into a DLL and shipped with any game.

Now, let’s say we have to implement logic that is closer to the game domain. Let’s say that we want to create a HealthSystem that handles the health of the game entities with health. Is HealthSystem part of a very generic framework? Surely not. However, while HealthSystem will handle the common logic of the IHaveHealth entities, not all the game entities will have the same behaviors. Hence HealthSystem is more abstracted than the more specialized behavior implementations. While this abstraction probably wouldn’t justify the creation of another framework, I believe that thinking in terms of layered code helps in designing better systems and nodes.

Putting ECS, IoC and DIP all together

As we have seen, the flow is not inverted when a bottom-up design approach is used to break down the problem, that is, when the specialized behaviors of the entities are modeled before the generic ones, or when the systems are designed as a result of the specialized entity problems.

In my vision of Inversion of Control, the solutions need to be broken down using a top-down approach. We should think about the problems starting from the most abstracted classes. What are the common behaviors of the game entities? What are the most abstracted systems we should write? What once would have been solved by specializing classes through inheritance should now be solved by layering our systems within different levels of code abstraction and declaring the relative interfaces to be used by the less abstracted code. Generic systems should be written before the specialized ones.

I believe that in this way we could benefit from the following:

  • We will be sure that our systems have just one responsibility, modeling just one behavior.
  • We will basically never break the Open/Closed principle, since new behaviors mean creating new systems.
  • We will inject far fewer dependencies, avoiding using an IoC container as a Singleton alternative.
  • It will be simpler to write reusable code.
  • We could potentially achieve real encapsulation.

In the next article I will explain how I would put all these concepts together in practice.




The truth behind Inversion of Control – Part III – Entity Component System Design

In the previous article I explained what the Inversion of Control principle is, then I introduced the concept of inversion of flow control. In this article I will illustrate how to apply it properly, even without using an IoC container. In order to do so, I will talk about Entity Component System design. While apparently it has nothing to do with Inversion of Control, I found it to be one of the best ways to apply the principle.

Once upon a time, there was the concept of the game engine. A game engine was a game-specialized framework that was supposed to run whatever game. A game engine used to have some common classes, designed as sorts of “managers”, that were found more or less in all the game engines, like the Render class. Every time a new object with a Renderer was created, the Renderer component of the object was added to a list of Renderers managed by the Render class. This was also true for other components, like the Collision component, the Culling component and so on. The Engine dictated when to execute the culling, when to execute the collisions and when to execute the rendering. The less abstracted objects didn’t know when they would be rendered or culled; they assumed only that, at a given point, it would happen.

The game engine was taking control of the flow, resulting in the first form of inversion of flow control. There is no difference between what I just explained and what the Unity engine does. Unity decides when it is time to call Awake, Start, Update and so on. The Unity framework is capable of achieving both Inversion of Creation Control and Inversion of Flow Control. MonoBehaviour instances cannot be directly created by the users; whether they are already present in the scene or created dynamically, it’s Unity that creates them for us. The inversion of flow control is instead achieved through the adoption of the Template Pattern: our MonoBehaviour classes must follow a specific template (through the Awake, Start, Update and similar functions) in order to be usable by the Unity framework “managers”.

Now, let’s have a look at what a modern Entity Component System instead is.

Modern Entity Component Systems

A more advanced way of managing entities was introduced in 2007 with the modern implementations of Entity Component System:

In 2007, the team working on Operation Flashpoint: Dragon Rising experimented with ECS designs, including ones inspired by Bilas/Dungeon Siege, and Adam Martin later wrote a detailed account of ECS design[1], including definitions of core terminology and concepts. In particular, Martin’s work popularized the ideas of “Systems” as a first-class element, “Entities as ID’s”, “Components as raw Data”, and “Code stored in Systems, not in Components or Entities”.[2]

ECS design uses inversion of flow control in its purest form. ECS is also a magnificent way to possibly never break the Open/Closed principle when it is time to add new behaviors.

The Open/Closed principle, which is also part of the SOLID principles, says:

software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification

OCP is the holy grail of perfect code design. If well adopted, it should never be necessary to go back to previously created classes and add new functions in order to add new behaviors.

The ECS design works in this way:

  • Entities are basically just IDs.
  • Components are Value Objects[3]. Components wrap the data that can be shared between Systems. Components do not have any logic to manage the data.
  • Systems are the classes where all the logic lies. Systems can directly access lists of components and execute all the logic the project needs.
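Stripped to the bone, and outside any specific framework, the three elements can be sketched like this (an illustrative example, not a real implementation):

```csharp
using System.Collections.Generic;

// Entities are just IDs (the int keys below).
// Components are plain data, with no logic.
public struct PositionComponent { public float x, y; }
public struct VelocityComponent { public float dx, dy; }

// Systems hold all the logic and iterate over the component lists.
public class MovementSystem
{
    public void Tick(Dictionary<int, PositionComponent> positions,
                     Dictionary<int, VelocityComponent> velocities,
                     float deltaTime)
    {
        foreach (var pair in velocities)
        {
            var position = positions[pair.Key];
            position.x += pair.Value.dx * deltaTime;
            position.y += pair.Value.dy * deltaTime;
            positions[pair.Key] = position;
        }
    }
}
```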

I understand that the concept of a data component could sound weird at first. I had problems wrapping my head around it initially as well. I guess the key that cleared my doubts was understanding that there isn’t such a thing as a System being too small. A System can be anything that handles logic, just as the Presenter or the Controller handle logic in the MVP or MVC pattern respectively (if you don’t know what they are, never mind for the moment).

The really powerful aspect of this design is that the Systems must follow the Single Responsibility Principle. All the behaviors of our in-game entities are modeled inside Systems, and every single behavior will have a single System, well defined in terms of domain, to manage it.

This is how we can apply the Open/Closed principle without ever breaking it: every time we need to introduce a new specific behavior, we are forced to create a new System that simply uses components as data to execute it.

Systems are implicitly mediators as well; this is why Systems are great at letting Entities communicate with each other and with other Systems (through the use of the Components). However, Systems are not always able to cope with all the possible communication problems just through the use of Components, and for this reason Systems, which should be instantiated in the Composition Root, can have other dependencies injected (I’ll give more practical examples in the next articles).

Also, pay attention: Systems model all the logic of your framework AND game. When ECS design is used, there won’t be any implementation difference between the framework logic (like RenderingSystem, PhysicSystem and so on) and the game logic (like AliensAISystem, EnemyCollisionSystem and so on), but there will still be a sharp difference in the code separation: framework and game systems will still lie in different layers of the application. This introduces another very important concept: the design of our game application with multiple layers of abstraction. Using just two layers of abstraction is not enough for a complex game. The framework layer and the game layer alone are not enough; we need to make our game layering more granular.

All that said, I noticed that there is some confusion when it’s time to define the design adopted by Unity. Although its “managers” can be considered “systems”, they are not systems according to the modern definition, since they cannot be defined or extended by the programmer.
Of course, when it is not possible to extend the “Systems” functionalities, the only way to extend the logic of our entities is to add logic inside Components. This is anyway a step forward from classic OOP techniques. Component-oriented frameworks (I call them Entity Component, without System) like the one in Unity push the coder to favor composition over inheritance, which is surely a better practice.
All the logic in Unity should be written inside focused MonoBehaviours. Every MonoBehaviour should have just one functionality, or responsibility, and shouldn’t operate outside the GameObject itself. MonoBehaviours should be written with modularity in mind, in such a way that they can be reused independently on several GameObjects. MonoBehaviours also hold data, and their design clearly follows the basic concepts of OOP.
Modern design tends instead to separate data from logic. Just as Data, Views and Logic are separated when the Model View Controller pattern is implemented, the same happens with ECS design through Components and Systems, in order to achieve better code modularity.

Before finally showing the benefits of the ECS design approach with real code, I will explain the concept of the Dependency Inversion Principle in the next article of this series.





I strongly suggest reading all my articles on the topic:

The truth behind Inversion of Control – Part II – Inversion of Control

Note: this article assumes you have already read my previous articles on IoC containers and Part I of this series.

Inversion of Control is another concept that turns out to be very simple once it has been fully processed, or rather “absorbed”. Absorbed as in going beyond being understood and becoming part of one’s own forma mentis. However, make no mistake: Inversion of Control is not the same as an Inversion of Control container (which is not a principle, but just a tool to simplify Dependency Injection)[1]. “Inversion of Control container” is a confusing name in this sense[2], therefore many use the name Dependency Injection library instead.

While using an IoC container is very simple, being able to invert the control of the code is another matter. The process of adaptation is not straightforward, since the entire code paradigm has to change. In order to explain this paradigm, I will need to introduce the concept of code abstraction. The following definitions will be explored in more detail in the next articles, so come back here if they aren’t grasped immediately.

Inversion of control cannot really be applied successfully without designing the application with multiple layers of abstraction. The higher the abstraction, the more general the scope of the code. For example, a general framework is part of the highest levels of abstraction, since it could be used by any type of application. A class that manages the health of the enemies in a game belongs to a lower level of abstraction.

If we think of our code structure in terms of layers of abstraction, Inversion of Control is all about giving the control to the more abstract classes instead of the less abstract ones. Of course you would ask: control of what? The idea is that general classes should control both the creation and the flow of the more specialised code.

How can the object creation be inverted? It’s simple: code that follows the Inversion of Creation Control rules never uses the new keyword explicitly to create dependencies. All the dependencies are always injected, never created directly.

If we apply this reasoning to the injection waterfall discussed in the first article of this series, it is easy to see that ALL the dependencies, therefore all the objects, must be created and initially passed from the Composition Root. The Composition Root effectively becomes the only place where all the starting relations between objects are made.

If you wonder how to create dynamic dependencies, like objects that must be spawned at run-time, you have asked yourself a good question. These objects are always created by factories, and factories are always created and passed as dependencies from the Composition Root.
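A minimal sketch of this idea, assuming a hypothetical `Weapon` that must spawn projectiles at run-time (all names here are invented for illustration):

```typescript
// Hypothetical object spawned at run-time.
class Projectile {
  constructor(readonly speed: number) {}
}

// The factory encapsulates the `new` operator; it is itself created in the
// Composition Root and injected wherever run-time spawning is needed.
class ProjectileFactory {
  build(speed: number): Projectile {
    return new Projectile(speed);
  }
}

class Weapon {
  // The weapon never calls `new Projectile()` itself; it only uses the
  // injected factory.
  constructor(private readonly factory: ProjectileFactory) {}
  fire(): Projectile {
    return this.factory.build(30);
  }
}

// Composition Root: the only place, besides factories, where `new` appears.
const weapon = new Weapon(new ProjectileFactory());
console.log(weapon.fire().speed); // 30
```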

Simply put, the new operator should be used only in the Composition Root or inside factories. An IoC container hides this process, creating and passing all the dependencies automatically instead of forcing the user to pass them by constructors or setters. The application-agnostic IoC container code takes control of the creation of all the dependencies. Of course, dynamic allocation is still freely used to allocate data structures, but data structures are not dependencies.

Why is inverting the creation control important? Mainly for the following reasons:

  1. the code becomes independent from the constructor implementation. A dependency can, therefore, be passed by interface, effectively removing all the coupling between class implementations.
  2. because of 1, your code will depend only on the abstraction of a class and not on its implementation. In this way it’s possible to swap implementations without changing the code.
  3. the object injected comes already created with its dependencies resolved. If the code had to create the instance of the object, those dependencies would have to be known as well.
  4. the flow of the code can change according to the context. Without changing the code, it is possible to change its behaviour just by passing different implementations of the same interface.

The first point is fundamental to be able to mock up implementations when unit tests are written, but even if unit tests are not part of your development process, the fourth point can lead to cleaner code when it is time to implement different code paths.

Can Inversion of Creation Control be achieved without using an IoC container? Absolutely; let’s see how, simply using manual dependency injection:

While I am not really good with examples, and it’s not simple to find a compact one that is also meaningful, I think the above example includes all the discussed points.

Main is our simple Composition Root. It’s where all the dependencies are created and initially injected. If you try to run this code, it will actually work: it will run a dumb, non-interactive simulation of a Player fighting Enemies.

All the game logic is encapsulated inside the Level class. Since the Level class uses the Template Pattern to implement the functions needed by the LevelManager class to manage a Level, it is possible to extend the logic of the game by creating new Level classes. However, adding Level instances to the LevelManager is not Dependency Injection, so it is irrelevant to our exercise (LevelManager doesn’t strictly need Level instances injected to work; the class is still functional even without any Level object added).

Each Level needs two dependencies: an implementation of IEnemySpawner and the Player object. Note that the level name is not actually a dependency. A dependency is always an instance of a class needed by another class.

level1 and level2 are different because of the number and type of enemies created: level1 contains only two enemies of type A, while level2 contains one enemy of type A and two enemies of type B. Type B can be more powerful than type A when it inflicts damage on the Player. However, the Level implementation doesn’t actually change. The different behaviour is just due to the different implementations of the IEnemySpawner interface passed as a parameter. Injecting two different IEnemySpawner objects changes the level gameplay without changing the Level code.

EnemySpawner doesn’t build enemies directly, because that is not its responsibility. The EnemySpawner just decides which enemies are spawned and how, but doesn’t need to be aware of what an enemy needs in order to be created.

As you can see, both EnemyA and EnemyB depend on the implementation of the class Random to work, but EnemySpawner doesn’t need to know about this dependency at all. Therefore we can use a factory both to encapsulate the new operator and to pass the Random dependency directly from the Composition Root.

My explanation is probably more complicated than the example itself, where it’s clear that all the dependencies are created and passed through constructors from inside the main function. The only exception is the EnemyFactory, which injects the Random implementation by setter.

In this example I haven’t used an Inversion of Control container, but the control of the object creation has been nevertheless inverted. The context takes away the responsibility of creating dependencies from the other objects, dependencies can be passed by interface, and the flow of the code changes according to which implementation has been injected.

So the questions I have been asking myself lately are: do we really need an IoC container to implement Inversion of Creation Control? Are the side effects of using an IoC container less important than the benefits of using such a tool? Searching for an answer to these questions is what led me to start writing these articles.

I can give a first answer though: manual Dependency Injection is very hard to achieve with the Unity framework. As I have already widely explained in my past articles, due to the Unity framework’s nature, dependencies can be injected only through the use of singletons or the use of reflection. C#’s reflection abilities are what actually enable mine and other IoC containers to inject dependencies in an application made with Unity. So how can we possibly adopt manual dependency injection in Unity? One possible solution is actually to reinterpret the meaning of the MonoBehaviour class so that we never need to inject dependencies into it. Is it possible? As we will soon find out, if we change our coding paradigm, it’s not just possible, but also convenient to do so.
For the time being, keep in mind that if the code is designed without really knowing what Inversion of Control is, an IoC container merely becomes a tool to simplify the tedious work of injecting dependencies; a tool that is very prone to being abused. An IoC container cannot be used efficiently without knowing how to design code that inverts creation and flow control.

So far I have talked mainly about Inversion of Creation Control, but I mentioned the Inversion of Flow Control several times, therefore I need to give a first explanation before concluding this article.

What is Inversion of Flow Control? Quoting Wikipedia:

“Inversion of control (IoC) describes a design in which custom-written portions of a computer program receive the flow of control from a generic, reusable library. A software architecture with this design inverts control as compared to traditional procedural programming: in traditional programming, the custom code that expresses the purpose of the program calls into reusable libraries to take care of generic tasks, but with inversion of control, it is the reusable code that calls into the custom, or task-specific, code.”

In this sense, an IoC container could be used to implement Inversion of Flow Control when it’s time to “plug in” implementations from the lower levels of abstraction into the higher levels of abstraction without breaking the Dependency Inversion Principle. Inversion of Flow Control is even more important than Inversion of Creation Control, and I will explore the reasons in detail in my next article.


[2] (but read all of it)

I strongly suggest reading all my articles on the topic:

The truth behind Inversion of Control – Part I – Dependency Injection

Note: this article series assumes you have already read my previous articles on IoC containers.

There is an evil truth behind the concept of the Inversion of Control container. An unspoken code tragedy taking place every day while passing unnoticed. Firm in my intention to stop this shame, I decided to stand up and start writing this series of articles, which will tell about the problem, the principles and the possible solutions.
I started to notice the symptoms of this “blasphemy” (against the code gods 🙂 ) quite a while ago, but I couldn’t pin down the reason for them. Nevertheless, the problem was pretty clear: the IoC container solution was scarily often used as an alternative to the Singleton pattern to inject dependencies. With the code growing and the project evolving, many classes started to take the form of a blob of functions, with a common pitfall: dependencies, dependencies everywhere.
What it means to use an IoC container as a mere substitute for a Singleton, and what is wrong with it, is something that I am going to describe as best I can with this series of articles. Don’t get me wrong, IoC containers are great tools, but if they are used in the wrong way, they can actually lead to major issues as well. I realised that IoC containers cannot be used without understanding how to use them; that’s why I started to look for a safer solution that could be adopted even by inexperienced coders. Before looking at this solution, let’s take some steps back and explain what Dependency Injection actually is.

What Dependency Injection is

Dependency Injection isn’t anything fancy. A dependency is just an interface on which a class depends. Usually dependencies are solved in two ways: injecting them or passing them through Singletons. Singletons break encapsulation in the sense that, as global variables, they can be used without a scope. Singletons awkwardly hide your dependencies: there is nothing in the class interface showing that the dependency is used internally. Singletons strongly couple your implementations, eventually resulting in long and painful code refactoring. To be even more practical, Singletons, like all global variables holding references, are also often a source of memory leaks. For these reasons we use injection to solve dependencies.
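To illustrate the difference in visibility, here is a small sketch (the `ILogger`/`ScoreTracker` names are hypothetical): with constructor injection the dependency is declared in the class interface, so it can be seen, scoped and swapped, which a Singleton would hide entirely.

```typescript
// Hypothetical logger dependency.
interface ILogger { log(message: string): void; }
class ConsoleLogger implements ILogger {
  log(message: string): void { console.log(message); }
}

// Singleton style would be: LoggerSingleton.instance.log(...) inside the
// methods, with nothing in the public interface revealing the dependency.
// Injection style makes the dependency explicit in the constructor:
class ScoreTracker {
  private score = 0;
  constructor(private readonly logger: ILogger) {}
  add(points: number): number {
    this.score += points;
    this.logger.log(`score is now ${this.score}`);
    return this.score;
  }
}

// A test double can be injected without touching ScoreTracker's code.
const messages: string[] = [];
const tracker = new ScoreTracker({ log: m => messages.push(m) });
tracker.add(10);
tracker.add(5);
console.log(messages); // ["score is now 10", "score is now 15"]
```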

Now, think for a moment: if there wasn’t an IoC container in place, how would we inject our dependencies? For example, by passing them as parameters in a constructor. Would you pass 10 parameters by constructor? I surely wouldn’t, if only because of how painful and inconvenient it is. The same reasoning applies when an IoC container is used: just because it’s more convenient, it doesn’t mean you should abuse it. You are just making a mess with object coupling again. To be honest, if the design of the code really followed the SOLID[1] principles, this problem wouldn’t arise, since the number of dependencies injected is directly linked to the number of responsibilities a class has. One responsibility only should lead to very few dependencies injected. However, without a proper paradigm to follow, we all know that coders tend to break the Open/Closed principle and add behaviours to existing classes instead of adopting a modular and extendible design. That’s when IoC containers start to be dangerous, since they actually help this process, making it less painful.

When dependencies are injected into an instance, where do these objects come from? If the dependencies are injected by constructor, they obviously come from the scope where the object that needs the dependencies injected has been created. In turn, the class that is injecting the dependencies into the new object may itself need dependencies injected, which are therefore passed by another class in the parent scope. This chain of dependency passages creates a waterfall of injections, and the relationship between these objects is called an Object Graph.

But where does this waterfall start? It starts from the place where the initial objects are created. This place is called the Composition Root. Root, because it is where the context is initialised; Composition, because it is where the dependencies start to be created and injected and, therefore, the initial relations are composed.

Now you can see what the real problem of the Unity framework is: the absence of a Composition Root. Unity doesn’t have a “main” class where the relations between objects can be composed. This is why the only way to create relationships with the bare Unity framework is using Singletons or static classes/methods.

Why are relationships between objects created? Mainly to let them communicate with each other. All forms of communication involve dependency injection. The only pattern that allows communication without dependency injection is the one called Event Bus[2] in the Java environment. The Event Bus allows communication through events held by a Singleton, hence the Event Bus is one of the many anti-patterns out there. Note that you could think of creating something similar to an Event Bus without using a singleton (therefore injecting it). That’s an example of what I call using injection as a mere substitute for a Singleton.

Object Communication and Dependency Injection

Communication can couple or not couple objects, but in all cases it involves injection. There are several ways to let objects communicate:

  • Interface injection: usually A is injected in B, and B is coupled with A [e.g.: inside a B method, A is used: B.something() { A.something(); }]
  • Events: usually B is injected in A, and A is coupled with B [e.g.: inside A, B is injected to expose the event: B.onSomething += A.Something]
  • Commands: B and A are uncoupled, B could call a command that calls a method of A. Commands are great to encapsulate business logic that could potentially change often. A Command Factory is usually injected in B.
  • Mediators: usually B and A do not know each other, but know their mediator. B and A pass themselves into the mediator, and the mediator wires the communication between them (i.e.: through events or interfaces). Alternatively, B and A are passed to the mediator from outside B and A themselves, totally removing the dependency on the Mediator itself. This is my favourite flavour and the closest to dependency-less communication possible.
  • Various other patterns like: Observers, Event Queue[3] and so on.
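The mediator flavour I prefer, where the objects are wired from outside and never reference the mediator themselves, can be sketched like this (all class names are invented for illustration):

```typescript
// Two classes that must communicate without knowing each other.
class DoorView {
  opened = false;
  open(): void { this.opened = true; }
}
class ButtonLogic {
  private listeners: Array<() => void> = [];
  onPressed(listener: () => void): void { this.listeners.push(listener); }
  press(): void { for (const l of this.listeners) l(); }
}

// The mediator wires them together. Note that neither DoorView nor
// ButtonLogic references the mediator: the wiring happens entirely from
// the outside (e.g. in the Composition Root).
class ButtonDoorMediator {
  wire(button: ButtonLogic, door: DoorView): void {
    button.onPressed(() => door.open());
  }
}

const button = new ButtonLogic();
const door = new DoorView();
new ButtonDoorMediator().wire(button, door);
button.press();
console.log(door.opened); // true
```

Because neither class depends on the other, nor on the mediator, either side can be replaced or unit-tested in isolation.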

How to pick the correct one? Without guidelines, it looks like one is as good as another; that’s why, sometimes, we end up using one of them at random. Remember that the first two patterns are the worst, because they couple interfaces that could change over time.

We can anyway introduce the first sound practice of our code design guidelines: our solution must minimize the number of dependencies.

Of course, the second sound practice is about the concept of the Single Responsibility Principle[4] (and the Interface Segregation Principle[5]). It is one of the 5 principles of SOLID (ISP is another fundamental one), but the only one that must actually be taken as a rule: your class MUST have one responsibility only. Communicating could be considered a responsibility, therefore it’s better to delegate it.

How we are going to achieve SRP and solve the dependencies blob problem is something I am going to explain in the next articles of this series.


[1] SOLID (object-oriented design)

[2] Event Bus

[3] Event Queue

[4] Single Responsibility Principle

[5] Interface Segregation Principle
