The truth behind Inversion of Control – Part V – Entity Component System design to achieve true Inversion of Flow Control

At this point, I can imagine someone wondering if I still recommend using an IoC container. IoC containers are handy and quite powerful with Unity 3D, but they are also dangerous: if used without Inversion of Control in mind, they can lead to messy code. Let me explain why.

When an IoC container is used with a standard procedural approach, thus without inversion of control in mind, dependencies are injected in order to be used directly by the specialised classes. Without inversion, there is no limit to the number of dependencies that can be injected, therefore applying the Single Responsibility Principle doesn’t come naturally. Why should I split the class I am currently working on? It’s just one extra dependency injected and a few more lines of code, what harm can it do? Or also: should I inject this dependency just to check its state inside my class? It seems the simplest thing to do, otherwise I would need to spend time refactoring my classes… Similar reasoning, plus the proverbial coder laziness, usually results in very awkward code. I have seen classes with more than ten dependencies injected, without anyone raising any sort of doubt about the legitimacy of such a shame.

Coders need constraints and IoC containers unluckily don’t provide any. In this sense, using an IoC container is actually worse than manual injection, because manual injection naturally limits the problem through the sheer effort of passing so many parameters by constructor or setter.
Dependencies end up being injected as a sort of global variables. Binding a type to a single instance becomes much more common than binding an interface to its implementation.

One way to limit this problem is to use multiple Composition Roots according to the application contexts. It’s also possible to have one Composition Root and multiple containers. In this way it would be possible to segregate classes and specify their scope, reflecting the internal layering of the application. An observer wouldn’t be injectable everywhere, but only into the classes inside the object graph of the specific container. In this sense, hierarchical containers could be quite handy. I should definitely write an article on the best practices of using an IoC container with more examples, but right now let’s see how dangerous an IoC container can become without using Inversion of Flow Control.

The following example is a classic class that doesn’t have a precise context or responsibility. It very likely started with a simple functionality in mind but, being used as a sort of “Utility” class, there is no limit to the number of public functions it can have. Consequently there is no limit to the number of dependencies that can be injected into it. “Look, this class already has all the information we need to add this functionality” … “OK, let’s add it” … “oh wait, this function actually needs these other dependencies to work; all right then, let’s add this new dependency”. This example is the worst of the lot, unluckily pretty common when the concept of Inversion of Control is unknown or not clear.

My definition of “Utility” class is simple: every class that exposes many public functions ends up being a Utility class, used in several places. Utility classes should be static and stateless. Stateful Utility classes are always a design error.

The following class uses injection without Inversion of Flow Control in mind. It’s a utility class and exposes way too many public functions. Public functions reflect the class “responsibilities”, thus this class has way too many responsibilities.
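The original snippet did not survive the conversion of this post; here is a minimal sketch of the archetype described, with all the names invented for illustration (this is not code from the article):

```csharp
using System;

// Illustrative interfaces standing in for the many injected "managers".
public interface IScoreManager { void AddScore(int points); }
public interface ISoundManager { void Play(string clip); }
public interface IHudManager   { void Flash(); }

// A stateful "Utility" class: every new public function justified injecting
// one more dependency, and every injected dependency invited one more
// public function, until the class commands half of the application.
public class GameUtility
{
    readonly IScoreManager _score;
    readonly ISoundManager _sound;
    readonly IHudManager   _hud;
    // ...in real projects the list keeps growing well past ten entries.

    public GameUtility(IScoreManager score, ISoundManager sound, IHudManager hud)
    {
        _score = score;
        _sound = sound;
        _hud = hud;
    }

    // Each public function commands the dependencies directly: no inversion.
    public void EnemyKilled(int points)
    {
        _score.AddScore(points);
        _sound.Play("explosion");
    }

    public void PlayerDamaged()
    {
        _hud.Flash();
        _sound.Play("hurt");
    }
}
```

Nothing stops a caller from invoking these functions at any time and in any order, which is exactly the loss of flow control discussed below.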

Using events is a good way to implement inversion of flow control. The class reacts to events fired from the injected classes, so injecting dependencies just to register to their events is a reasonable use of injection. What we need to achieve is encapsulation, and of course public functions break encapsulation. What’s the problem with breaking encapsulation? The biggest problem is that your class is not in control of the flow anymore. The class won’t have any clue when and where its public functions will be called, therefore it cannot predict what will happen. Without being able to control the flow, it becomes very simple to break the logic of the class. A classic example is a public function that assumes that some class states, loaded asynchronously, are actually set before the public function is used. Adding checks in this kind of scenario will quickly turn into some horrible and unmanageable code.

Events help in this scenario, however we must be very careful when events are used. It’s very important not to assume what will listen to those events when the events are created. Obviously events must be generic and have generic names. If events become specific, assuming what will happen when they are dispatched, then there will be no difference compared to using public functions. Using events, the class will probably look like this:
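A sketch of what such a class could look like; only the event name OnMachineDestroyed comes from the article, the rest (IWeaponEvents, Machine, the integrity logic) is invented for illustration:

```csharp
using System;

// The dependency is injected only so the class can register to its events.
public interface IWeaponEvents
{
    event Action OnFired;
}

public class Machine
{
    // A generic event: it states what happened, not what should happen next.
    public event Action OnMachineDestroyed;

    int _integrity;

    public Machine(IWeaponEvents weapon, int integrity)
    {
        _integrity = integrity;
        weapon.OnFired += OnWeaponFired; // inversion: we react, we are not called
    }

    // The class stays in control of its own flow: nobody can command it
    // through public functions.
    void OnWeaponFired()
    {
        if (--_integrity <= 0 && OnMachineDestroyed != null)
            OnMachineDestroyed();
    }
}
```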

A name like OnMachineDestroyed communicates clearly that the event will be fired when the machine is destroyed, and anything can listen to it. Control is then inverted, as the class doesn’t command a specific dependency or the flow of the execution; it just triggers a signal without taking any responsibility for the consequences.

Finally, Inversion of Control can be optimally achieved through properly designed code. I found the Template Pattern to be a good companion. The following class is not injected into any class, but actually registered inside a “manager” compatible with the template interface implemented. The manager class will hold a reference to this class and will use the object through the interface IFoo. The key is that the manager won’t have any clue about the IFoo implementation, following the Liskov Substitution Principle (the L in the SOLID principles).
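A sketch of this registration pattern; only the interface name IFoo comes from the article, the manager and concrete class are illustrative:

```csharp
using System.Collections.Generic;

public interface IFoo
{
    void Execute();
}

// The "manager" owns the flow: it drives registered objects exclusively
// through IFoo and has no clue about the concrete implementations
// (Liskov Substitution Principle).
public class FooManager
{
    readonly List<IFoo> _foos = new List<IFoo>();

    public void Register(IFoo foo)
    {
        _foos.Add(foo);
    }

    public void Tick()
    {
        foreach (var foo in _foos)
            foo.Execute();
    }
}

// A concrete implementation is registered into the manager instead of
// being injected into other classes.
public class ConcreteFoo : IFoo
{
    public int timesExecuted;
    public void Execute() { timesExecuted++; }
}
```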

The Template Pattern is a common method, normally used by frameworks to take control of the application code.

Programmers tend to code according to their own code design knowledge, based on their own experience. If a tool is too flexible, it can be used in ways that cause more harm than benefit. That’s why I love rigid frameworks: they must dictate the best way to be used by the coder, without being open to interpretation. With this in mind, I started to research possible alternatives to IoC containers and, as discussed in my Entity Component System architecture article, I realised that a proper ECS implementation for Unity could be what I was looking for.

In order to prove that an ECS solution could be viable in Unity, I had to spend several hours during my weekends to create a new framework and an example with it. The framework and the example can be found separately at these addresses:

Download both repositories and sort out the right folders for Svelto-ECS and Svelto-ECS-Example. Open the scene Svelto-ECS-Example\Scenes\Level 01.unity to run the Demo.

The example is the original Unity 3D survival shooter rewritten and adapted to my ECS framework. However this framework is still not production ready: it is missing some fundamental functions and it’s still rough (note: this is not true anymore, the framework is now production ready). Nevertheless, it has been really useful to prove and demonstrate my theories.

The intention was to create a framework able to push the user to write highly cohesive, loosely coupled code[1] with the Open/Closed and Single Responsibility Principles in mind. Following these principles naturally leads to fewer dependencies being injected and, as will be shown with the simple example I wrote, the use of dependency injection is limited to the scope of showing its compatibility with the framework.

Of course, it must be clear that the use of such a framework makes sense only when it’s applied to a big project maintained by several coders. The survival demo itself is too simple to really appreciate the potential of the idea. In fact, for this specific example, I could say that using my ECS framework is borderline over-engineering. However, I suggest experimenting with the library anyway, so that you can understand its potential.

A real Entity Component System in Unity

Note: the following code examples are not updated to the new version of the library.

My implementation of ECS is very similar to many out there although, after studying the most famous frameworks, I noticed that there isn’t a well defined way to let systems communicate with each other, a problem that I solved in a novel way. But let’s proceed step by step.

I have already talked about what an ECS is, so let’s see how we can make it fit in Unity. Let’s start from a list of rules created to use my framework properly:

  1. All the logic must always be written inside Systems.
  2. Components don’t hold logic, they are just data wrappers (Value Objects).
  3. Components can’t have methods, only get and set properties.
  4. Systems do not hold or cache entities/components data or state. They can hold system states (not really enforced, but it’s a rule).
  5. Each System has one responsibility only. (not really enforced, but it’s a rule).
  6. Systems cannot be injected.
  7. Systems communicate between each other mainly through components, but also through observers and similar patterns.
  8. Systems can have injected dependencies.
  9. Systems do not know each other.
  10. Systems should be defined inside sensible namespaces according to their context.
  11. Entities are not defined by objects, they are just IDs.

First we need a composition root. We already know how to effectively introduce a composition root in Unity from my IoC container example and, since the composition root is independent of the container itself, we can surely re-use the same mechanism without using any IoC container. The Composition Root becomes the place where the systems can be defined. In my framework, Systems are actually called Engines and they all implement the IEngine interface. In the example, most of the time, the INodeEngine specialization is used instead.

Following the code, it makes sense to start from our root; inside MainContext.cs the setup of the engines can be found:
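The original listing was lost; a rough sketch of what this setup looks like follows. Only enginesRoot, tickEngine and the engine names come from the article; the exact API of the early framework may differ, so treat the method names as assumptions:

```csharp
// Paraphrased sketch of the MainContext.cs setup, not the exact source.
void SetupEnginesAndComponents()
{
    _enginesRoot = new EnginesRoot();   // the container of all the IEngines
    _tickEngine  = new TickEngine();    // not part of ECS: ticks ITickable classes

    // Each engine solves one single problem on a set of entity nodes.
    _enginesRoot.AddEngine(new HealthEngine());
    _enginesRoot.AddEngine(new EnemyAttackEngine());
    _enginesRoot.AddEngine(new HudEngine());
    // ...more engines are added the same way
}
```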

enginesRoot is the new IEngine container, while tickEngine is not strictly part of the ECS framework: it’s used to add the Tick functionality to whatever class implements ITickable.

Entity Creation in Svelto ECS

Engines are a sort of “manager” meant to solve one single problem on a set of components grouped by entity nodes. An Entity can be defined by several components and an engine is constantly aware of specific components from all the entities in game, thus an engine must be able to select the ones it’s interested in.
Other frameworks implement a query mechanism that lets the systems pick up the components to use but, as far as I understood, this can limit the design of systems and have possible performance penalties (note: a fast query system has been introduced in the final version). My method is more flexible, but has the disadvantage of generating some boilerplate code. With this method, each INodeEngine can accept nodes (not components, I’ll talk about them later) through the following interface:
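The interface listing was lost in conversion; this is a reconstruction of the add/remove contract described, and the exact signature in the original library may differ:

```csharp
using System.Collections.Generic;

// Marker interface for nodes (illustrative).
public interface INode { }

// Reconstruction of the described contract: an engine accepts nodes when
// entities are created and releases them when entities are destroyed.
public interface INodeEngine
{
    void Add(INode node);
    void Remove(INode node);
}

// Minimal engine showing the contract in action.
public class CountingEngine : INodeEngine
{
    public readonly List<INode> nodes = new List<INode>();

    public void Add(INode node)    { nodes.Add(node); }
    public void Remove(INode node) { nodes.Remove(node); }
}

public class DummyNode : INode { }
```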

Since the library is not complete yet, at the moment entities can be created only through GameObjects, and components through MonoBehaviours, but this is a limitation I will eventually remove. In fact, it would be a mistake to associate GameObjects to entities and MonoBehaviours to components. What I wrote is just a bridge to make the transition painless and natural.

Components are related to the design of the Entity itself and must not be created with the Engine logic in mind. They are essentially units of data that should actually be shareable between entities. The more general the design of a component is, the more reusable it is. For example, the following components are entity-agnostic and reused among different entity types in our example:
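The component listing was lost; hypothetical examples of what such entity-agnostic, data-only components look like (interfaces with only get/set properties, no methods; the names are invented for illustration):

```csharp
// Pure data wrappers, reusable by any entity type that has health,
// a position or a speed.
public interface IHealthComponent
{
    int currentHealth { get; set; }
}

public interface IPositionComponent
{
    float x { get; set; }
    float y { get; set; }
    float z { get; set; }
}

public interface ISpeedComponent
{
    float speed { get; }
}
```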

The coder must face an important decision when it’s time to write components. Should several reusable, small components (often holding just one datum) be used, or fewer, non-reusable, “fatter” components? I faced this decision myself several times during the creation of the example and, in its limited scope, I eventually found out that shareable components are better in the long term, since they have more chances to be reused (therefore avoiding writing new components for specific entities).

As you can see, components are defined through interfaces. This could seem like a weird decision, but it is instead very practical when it’s time to define components through MonoBehaviours. I often notice that ECS frameworks have the drawback of forcing the coder to dynamically allocate several objects when a new entity is created. While object pooling could help, the run-time creation of new entities always has a negative impact. I then realised that, as long as the components are defined through interfaces, they could all be implemented inside bigger objects that are never directly used. With Unity, this also comes in handy, since a few MonoBehaviours that implement multiple interfaces will do the job.
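The listing referred to below as "the MonoBehaviour above" was lost; here is a sketch of the idea, with invented component interfaces repeated so the snippet is self-contained (not code from the demo):

```csharp
using UnityEngine;

// Illustrative component interfaces.
public interface IHealthComponent { int currentHealth { get; set; } }
public interface ISpeedComponent  { float speed { get; } }

// A single MonoBehaviour implements several component interfaces at once,
// so creating an entity doesn't require one allocation per component.
// Explicit implementation documents which member belongs to which interface.
public class EnemyImplementor : MonoBehaviour, IHealthComponent, ISpeedComponent
{
    [SerializeField] int   _health = 100;
    [SerializeField] float _speed  = 3f;

    int IHealthComponent.currentHealth
    {
        get { return _health; }
        set { _health = value; }
    }

    float ISpeedComponent.speed
    {
        get { return _speed; }
    }
}
```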

The MonoBehaviour above is defined in my framework as an Implementor, as it implements the Component interfaces but doesn’t have any other use except creating a bridge between the Unity logic and the Engines logic. The use of explicit interface implementation is very convenient to remember which method implements which interface (note: since I wrote this article, I found out that explicitly implemented methods are actually always defined as virtual functions, therefore they introduce a performance penalty. If performance is critical, ignore my suggestion).

The framework will actually pass the reference of the MonoBehaviour to the Engines through the nodes, but the engines will know the components only through their interfaces. The Engines are not aware of the fact that the object implementing the component interfaces is actually a MonoBehaviour and it would be wrong otherwise.

Nodes, what are they for?

While components are designed to work with entities, nodes are defined by the engines. In this way the coder won’t be confused about how to design a component: should the component follow the nature of the Entity or fit the necessity of the Engine? This problem initially led me to write very awkward code in the first draft of the demo. Separating the two concepts helped to define a simple, but solid, structure for the code.

With each Engine comes the relative set of nodes. It’s the Engine that defines its own Nodes, and Nodes shouldn’t be shared between Engines. An Engine must be designed as an independent and separate piece of code. However, I eventually decided to relax this rule when I found out that the ECS framework is also useful to separate the class logic into several levels of abstraction. Engines can be grouped into namespaces, according to what they do. For example, all the engines that manage enemy components are under the EnemyEngines namespace and all the engines that manage player components are under the PlayerEngines namespace. Likewise, all the nodes usable by enemy engines are defined under the EnemyEngines namespace and all the nodes usable by player engines are defined under the PlayerEngines namespace. The rule is that nodes defined in a namespace can be used only by the engines in that namespace, and the namespace itself will help the coder not to mix up classes from different environments.

Namespaces are logically layered: while enemy engines and player engines are relatively specific, since they can operate only on enemies and the player respectively, the HealthEngine is instead abstract and can operate on both enemies and players. However, since the HealthEngine is neither in the Player nor in the Enemy namespace, it can know the entity components only through its own node, the HealthNode. Generic and reusable nodes can only belong to generic and reusable engines, whose logic can be applied to Entities regardless of their specialization. Nodes are simply new objects that group entity components in a way that is suitable to the engine logic.
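A sketch of what a node like HealthNode could look like; the component names follow the article's description, while the exact base type and fields of the real library are omitted here, so treat the shape as an assumption:

```csharp
using System;

// Illustrative component interfaces.
public interface IHealthComponent      { int currentHealth { get; set; } }
public interface IDamageEventComponent { event Action damageReceived; }

// A node just groups the components of one entity that a specific engine
// needs; nothing more.
public class HealthNode
{
    public int entityID; // entities are just IDs (rule 11)

    public IHealthComponent      healthComponent;
    public IDamageEventComponent damageEventComponent;
}
```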

From a forum I also read this definition, which fits well:

It is common for “systems” (or “engines” as they are known here) to need to access multiple components per entity. For example an EnemyEngine might need to access a Health component, an Ammo component and a Positioning component. This creates at least a 1-to-many relationship between Engines and Components. (In fact once you share components among several engines that turns into a many-to-many relationship – but only the 1-to-many side of it need be modeled).

Their solution to this problem is the Nodes concept. They represent that 1-to-many relationship as a Node object and that object might hold several components within it. Then they simply have a 1-to-1 relationship between Engines and Nodes. It also means an Engine need only manage a single object which fully embodies the relevant aspects of an entity.
Going further, I would imagine that a Node could slightly abstract over the components if it so wished. For example an EnemyNode might have a shoot() method which deducts from the Ammo component and uses the Positioning component to decide where to spawn a bullet entity. This provides a higher-level and more engine-specific API for the EnemyEngine to use, rather than it having to dig in and drive the components directly. Now the EnemyEngine can concentrate on just the AI logic itself and the menial details of keeping components in-sync is offloaded to the Node concept.

It is possible to create entities through scene GameObjects; in this case a NodeHolder MonoBehaviour must be defined (note: the syntax has been changed in later versions, it’s now called EntityDescriptorHolder). The GameObject can be created either statically, in the scene, as a child of the Context gameobject, or dynamically, using the GameObjectFactory.

Components and MonoBehaviours

Components cannot hold logic and, as I explained, our components are NOT MonoBehaviours, but can be implemented through MonoBehaviours. This is important not just to save dynamic allocations, but also to not lose the features of the native Unity framework, which all work through MonoBehaviours.

Since at the end of the day it’s not very convenient to write a Unity game without GameObjects and MonoBehaviours, I need the framework to coexist with and improve the Unity functionalities, not fight them, which would just produce inefficient code.
This is the reason why I decided not to change the original implementation of the enemies’ player detection. In the original survival demo, the enemies detect if a player is in range through the OnTriggerEnter and OnTriggerExit MonoBehaviour functions.

The EnemyTrigger MonoBehaviour still uses these functions, but all it does is fire an entityInRange event. Now pay extreme attention: the object will actually NOT set the playerInRange value, it will just fire an event. The MonoBehaviour cannot decide if the player is actually in range or not; this decision must be delegated to the Engine. Sounds strange? Maybe, but if you think about it, you will realise that it actually makes sense: all the logic must be executed by the engines only.
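The original listing was lost; a sketch of this bridge follows. Only the names EnemyTrigger and entityInRange come from the article, the event signature is an assumption:

```csharp
using System;
using UnityEngine;

// The MonoBehaviour decides nothing: it only signals that something entered
// or left the trigger. Whether the player is really "in range" is the
// engine's decision.
public class EnemyTrigger : MonoBehaviour
{
    public event Action<Collider, bool> entityInRange;

    void OnTriggerEnter(Collider other)
    {
        if (entityInRange != null)
            entityInRange(other, true);   // "something is here", nothing more
    }

    void OnTriggerExit(Collider other)
    {
        if (entityInRange != null)
            entityInRange(other, false);
    }
}
```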

These values will then be used by the engine responsible for the enemy attack, and the job of the MonoBehaviour simply ends here.

Communication in Svelto ECS

I understand that all these concepts are not easy to absorb; that’s why I wrote four articles before introducing this one. I also understand that, if you haven’t worked on large projects, it’s hard to see the benefits of this approach. That’s why I invite you to remember the intrinsic limits of the Unity framework and then the pitfalls of the IoC containers that try to overcome them. Of course, using such a different approach makes sense only if eventually everything becomes easier to manage, and I believe this is the case.

One of the most problematic issues that the framework solves brilliantly is the communication between entities. Engines are natural entity mediators: they know all the relevant components of all the entities currently alive, therefore they can take decisions being aware of the global situation, instead of being limited to a single entity scope.

Let’s take into consideration the EnemyAttackEngine class, which is the engine that uses the IEnemyTriggerComponent object. It relies on the IEnemyTriggerComponent event to know whether a player is potentially in range or not, but I decided to do this just for the sake of showing how the ECS framework can interact with the Unity framework. I can also guess that OnTriggerEnter performs better than C# code. What I could have simply done instead is store the enemy transform components in a List and iterate over them every frame, through the Tick function, to know if the player is in range or not. The engine could have set the component playerInRange value without waiting for the MonoBehaviour event.

Why do components have events?

Interesting question. Many ECS frameworks do not actually have this concept; events are usually sent through external observers or using an EventBus. The decision I took about using events inside components is possibly one of the most important. In this simple demo I actually use an observer once, just to show that it’s possible to use them, but otherwise I wouldn’t have needed it. All the communication between entities and engines, and among engines, can usually be performed through component events alone.

It’s very important to design these events properly though. Engines know nothing about other engines. They cannot assume, operate on, or be aware of anything outside their scope. This is how we can have low-coupled, high-cohesive code. Practically speaking, this means that an Engine cannot fire a component event that semantically doesn’t make sense inside the engine itself. Let’s take into consideration the HudEngine class. The HudEngine has the responsibility to update the HUD (a little broad as a responsibility, but it’s OK for such a simple project). One of the HUD functions is to flash when the player is damaged.
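The listing was lost; a sketch of the right design follows. Only IDamageEventComponent and the damageReceived event come from the article; the engine shape is illustrative, not the library's exact API:

```csharp
using System;

// The component exposes a *generic* event: it states that damage was
// received, with no assumption about who listens.
public interface IDamageEventComponent
{
    event Action damageReceived;
}

// The HudEngine decides, inside its own scope, that receiving damage
// means flashing the HUD; the "flash" semantic never leaks into the event.
public class HudEngineSketch
{
    public int hudFlashes;

    // Called when a damage node is added to the engine.
    public void Add(IDamageEventComponent damageEvent)
    {
        damageEvent.damageReceived += FlashHud;
    }

    void FlashHud() { hudFlashes++; }
}
```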

The DamageNode uses the IDamageEventComponent to know when the player is damaged. Obviously the IDamageEventComponent has a generic damageReceived event; it wouldn’t make sense otherwise. However, let’s say that, in a moment of confusion, thinking about the HudEngine functionalities, I had decided to have a flashHudEvent instead. This would have been wrong on many levels:

  • First, every time an event is used as a substitute for a public function (as explained at the beginning of this article), it means there is a code design flaw. Events always follow the Inversion of Flow Control and are never designed to call a specific function.
  • I would have needed a new component that, from the Entity point of view, wouldn’t have made any sense.
  • The HealthEngine, which decides when an entity is damaged, would have known about the concept of a flashing HUD (waaaaat?)

Sometimes this reasoning is not so obvious; that’s why a self-contained engine, not aware of the outside world, will force the coder to think twice about the code design (which was my intent all along).

Putting all together

The framework is designed to push the coder to write highly cohesive, loosely coupled code, based on engines that follow the Single Responsibility and the Open/Closed principles. In the demo the number of files is multiplied compared to the original version for two fundamental reasons: the old classes were not following the SRP, so the number of files naturally increased; and, due to the current state of the framework, annoying boilerplate code is needed. Any suggestion and feedback is welcome!

note: —->Svelto ECS is now production ready, read more about it here.<—–


Other famous ECS frameworks:


Ash:

ECS frameworks that work with Unity3D already:


I strongly suggest reading all my articles on the topic:

6 thoughts on “The truth behind Inversion of Control – Part V – Entity Component System design to achieve true Inversion of Flow Control”

  1. That’s a nice set of tutorials, explained really well.
    I see how it’s possible to make APIs better and reduce boilerplate code. Have a look at EgoCS. It’s using generics to get entities with the right components.
    Are you planning to support and update Svelto-ECS?

  2. How can I dynamically change the Nodes of an Entity in Svelto?

    Think of the following problem => I need to track a map with many Enemy entities, however only the entities that are near the player need to be visualized.

    An Enemy entity will have an EnemyAI node exposed if we are going to update its logic, but we will add an EnemyRender node only if the enemy is near enough to be visualized.

    Currently in Svelto ECS I have to add a new Entity which holds the rendering stuff and have it “linked” to the EnemyAI’s node so they share the same “lifetime”. If I missed some Svelto feature or I am designing it in the wrong way please tell me, otherwise I think that could be a cool Svelto feature to be added.

    1. I guess you could separate the logic from the rendering with Svelto.ECS. This is actually a designer’s choice.
      It strongly depends on the kind of title you are developing. Actually, I don’t think Svelto takes care of the rendering pipeline, but GameObjects’ MonoBehaviours with their render components do.
      The fact that you HAVE TO add a MonoBehaviour bridge interface onto the GameObject prefabs needed for the game gives you the freedom to do so, by eventually disabling or not disabling a renderer when the logic requires it.

      1. Thanks for your reply, I solved the problem now. The rendering part (a sprite in the case of a SpriteRenderer, a mesh buffer in the case of a MeshRenderer) can just be disabled with a flag, and the sprite/buffer can be set to “null” until the entity comes near enough to the camera. So far this is the simplest solution and the extra memory for “invisible” game objects has not been a problem.
