Svelto ECS is now production ready

Six months have passed since my last article about Inversion of Control and Entity Component System, and meanwhile I have had enough time to test and refine my theories, which have now proven to work quite well in a production environment. During these months of putting my ideas into practice, I had the chance to consolidate my analysis and improve the code. As a result, the Svelto ECS framework 1.0 is now ready.

Let’s start from the basics, explaining why I use the word framework, which is very important. My previous work, Svelto IoC, cannot be defined as a framework; it’s instead a tool. As a tool, it helps to perform Dependency Injection and to design your code following the Inversion of Control principle. Svelto IoC is nothing else; therefore whatever you do with it, you could actually do without it in a very similar way, without changing the design of your code much. As such, Svelto IoC doesn’t dictate a way of coding and doesn’t give a paradigm to follow, leaving to the coder the burden of deciding how to design the game infrastructure.

Svelto ECS won’t take the place of Svelto IoC, but it can surely be used without it. My intention is to add some more features to my IoC container in the future and also to write another article about how to use it properly, but as of now, I would not suggest using it, unless you are aware of what IoC actually means and you are looking for an IoC container that integrates well into your existing design. Without understanding the principles behind IoC, using an IoC container can result in very dangerous practices. In fact, I have seen code smells and bad design implementations that were actually aided by the use of an IoC container. These ugly results would have been much harder to achieve without it.

Svelto ECS is instead a framework, and with it you are forced to design your code following its specifications. It’s the framework that dictates the way you have to write your game infrastructure. The framework is also relatively rigid, forcing the coder to follow a well defined pattern. Using this kind of framework is important for a medium-large team, because coders have different levels of experience and it is impossible to expect everyone to be fully able to design well structured code. Using a framework which implicitly solves the code design problem lets all the coders focus only on the implementation of algorithms, knowing with a given degree of certainty that the amount of refactoring needed in the future will be more limited than it would have been had they been left on their own.

The first error I made with the previous implementation of this framework was to not properly define the concept of Entity. I noticed that it is actually very important to have clear in mind what the elements of the project are before writing the related code. Entities, Components, Engines and Nodes must be properly identified in order to write code faster. An Entity is a fundamental and well defined element in the game. The Engines must be designed to manage Entities, not Components or Nodes. Characters, enemies, bullets, weapons, obstacles, bonuses, GUI elements: they are all Entities. It’s easier, and surely common, to identify an Entity with a view, but a view is not necessary for an Entity. As long as the entity is well defined and makes sense for the logic of the game, it can be managed by engines regardless of its representation.

In order to give the right importance to the concept of Entity, Nodes cannot be injected into Engines anymore until an Entity is explicitly built. An Entity is now built through the new BuildEntity function, and the same function is used to give an ID to the Entity. The ID doesn’t need to be globally unique; this is a decision left to the coder. For example, as long as the IDs of all the bullets are unique among themselves, it doesn’t matter if the same IDs are used for other entities managed by other engines.

The Setup

Following the example, we start by analysing the MainContext class. The concept of Composition Root, inherited from the Svelto IoC container, is still of fundamental importance for this framework to work. It’s needed to give back to the coder the responsibility of creating objects (which is currently the greatest problem of the original Unity framework, as explained in my other articles), and that’s what the Composition Root is there for. By the way, before proceeding, just a note about the terminology: while I call it the Svelto Entity Component System framework, I don’t use the term System; I prefer the term Engine, but they are the same thing.

In the composition root we can create the main ECS framework Engine, Entity, Factory and Observer instances needed by the entire context and inject them around (a project can have multiple contexts, and this is an important concept).

In our example, it’s clear that the amount of objects needed to run the demo is actually quite limited:
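The original snippet is not reproduced here, but the composition might be sketched roughly as follows. The engine names come from the article; the `EnginesRoot` and `IEntityFactory` stand-ins below are simplified assumptions made for illustration, not the real Svelto ECS API:

```csharp
using System.Collections.Generic;

// --- minimal stand-ins, NOT the real Svelto ECS types ---
public interface IEngine { }

public interface IEntityFactory
{
    void BuildEntity(int entityId, object entityDescriptor);
}

public class EnginesRoot : IEntityFactory
{
    readonly List<IEngine> _engines = new List<IEngine>();

    public void AddEngine(IEngine engine) { _engines.Add(engine); }

    // in the real framework this injects the entity nodes into the engines
    public void BuildEntity(int entityId, object entityDescriptor) { }

    public int engineCount { get { return _engines.Count; } }
}

public class HealthEngine : IEngine { }
public class DamageSoundEngine : IEngine { }
public class HudEngine : IEngine { }
public class ScoreEngine : IEngine { }

public static class CompositionRootExample
{
    public static EnginesRoot Compose()
    {
        var enginesRoot = new EnginesRoot();

        enginesRoot.AddEngine(new HealthEngine());
        enginesRoot.AddEngine(new DamageSoundEngine());
        enginesRoot.AddEngine(new HudEngine());
        enginesRoot.AddEngine(new ScoreEngine());

        return enginesRoot;
    }
}
```

Seen as an IEntityFactory, the same enginesRoot instance can then be passed around to whatever needs to build entities at run-time.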

The EnginesRoot is the main Svelto ECS class, and it’s used to manage the Engines. When seen as an IEntityFactory, it is used to build Entities. IEntityFactory can be injected inside Engines or other Factories in order to create Entities dynamically at run-time.

It’s easy to identify the Engines that manage the Player Entity, the Player Gun Entity and the Enemy Entities. The HealthEngine is an example of generic Engine logic shared between specialized Entities. Both Enemies and the Player have health, so their health can be managed by the same engine. DamageSoundEngine is also a generic Engine and deals with the sounds due to damage inflicted or death. The HudEngine and ScoreEngine manage the on-screen FX and GUI elements.

I guess the code so far is quite clear and should invite you to learn more about it. Indeed, while I have explained what an Entity is, I haven’t given a clear definition of Engine, Node and Component yet.

An Entity is defined through its Implementers and Components. A Svelto ECS Component defines a shareable, contextualized collection of data and events. A Component can exclusively hold data (through properties) and events (through the Svelto ECS Dispatcher classes) and nothing else. How the data is contextualized (and then grouped in components) depends on how the entity can be seen externally. While it’s important to define an Entity conceptually, an Entity does not exist as an object in the framework and is seen externally only through its components.

The more generic the components are, the more reusable they will be. If two Entities need health, you can create one IHealthComponent, which will be implemented by two different Implementers. Other examples of generic Components are IPositionComponent, ISpeedComponent, IPhysicAttributeComponent and so on. Component interfaces can also help to reinterpret the same data in different ways, or to access the same data in different ways. For example, one Component could define only a getter, while another defines a setter for the same data on the same Implementer. All this helps to keep your code more focused and clean.

Once the shareable data options are exhausted, you can start to declare more entity-specific components. The bigger the project becomes, the more shareable components will be found, and after a while this process will start to become quite intuitive. Don’t be afraid to create components even for just one property, as long as they make sense and help to share components between entities. The more shareable components there are, the fewer you will need to create in the future. However, it’s wise to make this process iterative, so if initially you don’t know how to create components, start with specialised ones and then refactor them later.
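As a sketch of this idea, two component interfaces could expose the same data with different access rights, both defined by one Implementer. IHealthComponent is the example named in the text; the read-only variant and the implementer class below are illustrative assumptions:

```csharp
// Component: data only, exposed through properties.
public interface IHealthComponent
{
    int currentHealth { get; set; }
}

// A second component reinterpreting the same data: read access only.
public interface IHealthReadOnlyComponent
{
    int currentHealth { get; }
}

// One Implementer can define several Components on the same data.
public class EnemyImplementer : IHealthComponent, IHealthReadOnlyComponent
{
    int _health = 100;

    int IHealthComponent.currentHealth
    {
        get { return _health; }
        set { _health = value; }
    }

    int IHealthReadOnlyComponent.currentHealth
    {
        get { return _health; }
    }
}
```

An engine holding the object as IHealthReadOnlyComponent can observe the health, while only the engine holding it as IHealthComponent can change it.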
As I said, Components do not hold just data; they also hold events. This is something I have never seen before in other ECS frameworks, and it’s a very powerful feature. I will explain why later, but for the time being, just know that Components can also dispatch events through the Dispatcher class.

The Implementer concept existed already in the original framework, but I didn’t formalize it at that time. One main difference between Svelto ECS and other ECS frameworks is that Components are not mapped 1:1 to C# objects. Components are always known through interfaces. This is quite a powerful concept, because it allows Components to be defined by shared objects that I call Implementers. This doesn’t just allow creating fewer objects; most importantly, it helps to share data between Components. Several Components can actually be defined by the same Implementer. Implementers can be MonoBehaviours, as Entities can be related to GameObjects, but this link is not necessary. However, in our example, you will always find that Implementers are MonoBehaviours. This fact actually solves another important problem elegantly: the communication between the Unity framework and Svelto ECS.
Most of the Unity framework feedback comes from predefined MonoBehaviour functions like OnTriggerEnter and OnTriggerExit. One of the benefits of letting Components dispatch events is that, in this way, the communication between the Implementer and the Engines becomes totally natural. The Implementer, as a MonoBehaviour, dispatches an IComponent event when OnTriggerEnter is called. The Engines will then be notified and act on it. When you need to communicate events between Implementers and Engines, as opposed to between Engines and Engines, you can also use normal events.
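A sketch of this bridging could look like the following. The Dispatcher class here is a stand-in for the real Svelto ECS one, and the Unity callback is simulated by a plain method so the snippet is self-contained; in Unity the class would derive from MonoBehaviour:

```csharp
using System;

// Stand-in for the Svelto ECS Dispatcher class.
public class Dispatcher<T>
{
    event Action<T> _observers;
    public void AddObserver(Action<T> observer) { _observers += observer; }
    public void Dispatch(T value)               { if (_observers != null) _observers(value); }
}

public interface ITriggerComponent
{
    Dispatcher<int> entityEntered { get; } // carries the ID of the entity that entered
}

// In Unity this would be a MonoBehaviour and OnTriggerEnter the real
// Unity callback; no logic lives here, only the dispatch.
public class PlayerTargetImplementer : ITriggerComponent
{
    readonly Dispatcher<int> _entityEntered = new Dispatcher<int>();
    public Dispatcher<int> entityEntered { get { return _entityEntered; } }

    public void OnTriggerEnter(int otherEntityId)
    {
        _entityEntered.Dispatch(otherEntityId);
    }
}
```

The engines subscribe to entityEntered through the component interface; the Implementer never knows who is listening.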

It’s fundamental to keep in mind that Implementers MUST NOT DEFINE ANY LOGIC. They must ONLY implement component interfaces and act as a bridge between Unity and the ECS framework, in case they happen to be MonoBehaviours as well.

So Components are a collection of data and events, and Implementers define the components without holding any logic. Where, then, will all the game logic be? All the game logic will be written in Engines and Engines only; that’s what they are there for. As a coder, you won’t need to create any other class. You may need to create new abstract data types to be used through components, but you won’t need to use any other pattern to handle logic. In fact, while I still prefer MVP patterns to handle GUIs, without a proper MVP framework (which doesn’t exist for Unity yet), it won’t make much difference to use Engines instead. Engines are basically presenters, and Nodes hold views and models.

OK, so, before proceeding further with the theory, let’s go back to the code. We have easily defined the Engines; it’s now time to see how to build Entities. Entities are injected into the framework through Entity Descriptors. An EntityDescriptor describes an entity through its Implementers and presents it to the framework through Nodes. For example, every single Enemy Entity is defined by the EnemyEntityDescriptor class.

This means that an Enemy Entity and its components will be introduced to the framework through the nodes EnemyNode, PlayerTargetNode and HealthNode, with the method BuildEntity used in this way:
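A rough reconstruction of both the descriptor and the call, using stand-in types (the real EntityDescriptor base class and BuildEntity signature in Svelto ECS may differ):

```csharp
using System;
using System.Collections.Generic;

// --- stand-in node types ---
public class EnemyNode { }
public class PlayerTargetNode { }
public class HealthNode { }

public class EnemyEntityDescriptor
{
    // the descriptor declares which nodes present the entity to the engines
    public static readonly Type[] nodesToBuild =
    {
        typeof(EnemyNode), typeof(PlayerTargetNode), typeof(HealthNode)
    };

    public readonly object[] implementers;
    public EnemyEntityDescriptor(object[] implementers) { this.implementers = implementers; }
}

public class EntityFactoryExample
{
    public readonly Dictionary<int, EnemyEntityDescriptor> builtEntities =
        new Dictionary<int, EnemyEntityDescriptor>();

    // roughly how BuildEntity is used: an ID plus the entity descriptor
    public void BuildEntity(int entityId, EnemyEntityDescriptor descriptor)
    {
        builtEntities[entityId] = descriptor;
    }
}
```

Usage would be along the lines of `factory.BuildEntity(enemyId, new EnemyEntityDescriptor(implementers));`, where the implementers are the objects (MonoBehaviours, in the example) that define the enemy components.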

While the EnemyImplementator implements the Enemy Components, the EnemyEntityDescriptor introduces them to the Engines through the related Enemy Nodes. Wow, it seems that it’s getting complicated, but it’s really not. Once you wrap your head around these concepts, you will see how easy it is to use them. You have understood that Entities exist conceptually, but they are defined only through components, which are implemented through Implementers. However, you may wonder why nodes are needed as well.

While Components are directly linked to Entities, Nodes are directly linked to Engines. Basically, Nodes are a way to map Components inside Engines. In this way the design of the Components is decoupled from the Engines. If we had to use Components only, without Nodes, it would often become very awkward to design them. Should you design Components to be shared between Entities, or design Components to be used in specific Engines? Nodes rearrange groups of components so that they can easily be used inside Engines.

Nodes are mapped directly to Engines: specialized Nodes are used inside specialized Engines, generic Nodes are used inside generic Engines. For example, an Enemy Entity is an EnemyNode for the Enemy Engines, but it’s also a PlayerTargetNode for the PlayerShootingEngine. The Entity can also be damaged, and the HealthNode will be used to let the HealthEngine handle this logic. Since the HealthEngine is a generic engine, it doesn’t need to know that the Entity is actually an Enemy. Different and totally unrelated engines can see the same entity in totally different ways thanks to the nodes. The Enemy Entity will be injected into all the Engines that handle the EnemyNode, the PlayerTargetNode and the HealthNode. It’s actually quite important to find Node names that significantly match the names of the related Engines.

Entities can be easily created directly in code, but if the Implementers are MonoBehaviours, then it can be more convenient to use EntityDescriptorHolders to fetch the Implementers from GameObjects. For example, the GameObject that holds the Implementer MonoBehaviours that define a Player Entity can also hold the following MonoBehaviour:

This allows more generic and automatic code to create Entities implicitly when they are defined on GameObjects, like this:
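A self-contained sketch of that scanning code follows. In Unity the holders would be found with something like `Object.FindObjectsOfType`, while here the scene is simulated by a plain collection; all the type names are stand-ins:

```csharp
using System.Collections.Generic;

// Stand-in for the MonoBehaviour holding an entity descriptor; in Unity
// the instances would be fetched from the scene with
// Object.FindObjectsOfType<EntityDescriptorHolder>().
public interface IEntityDescriptorHolder
{
    object BuildDescriptor();
}

public class PlayerDescriptorHolder : IEntityDescriptorHolder
{
    public object BuildDescriptor() { return "PlayerEntityDescriptor"; }
}

public class SceneEntityBuilder
{
    public readonly List<object> builtDescriptors = new List<object>();

    public void BuildEntitiesFrom(IEnumerable<IEntityDescriptorHolder> holdersInScene)
    {
        foreach (var holder in holdersInScene)
            builtDescriptors.Add(holder.BuildDescriptor()); // one entity per holder
    }
}
```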

The code above simply looks for all the GameObjects that hold EntityDescriptorHolders and creates Entities out of them. Once Engines and Entities are built, the setup of the game is complete and the game is ready to run.

The coder can now focus simply on coding the Engines, knowing that the Entities will be injected automatically through the defined nodes. If new engines are created, it’s possible that new Nodes must be added, which in turn will be added to the necessary Entity Descriptors.


The Logic in the Engines

The necessity to write most of the original boilerplate code has been successfully removed from this version of the framework. An important feature that has been added is the concept of IQueryableNodeEngine. The engines that implement this interface will find the following property injected:
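The injected property might look roughly like this. The interface name comes from the article, but the node database type and property shape below are assumptions, not the exact released API:

```csharp
using System.Collections.Generic;

// Stand-in node database contract.
public interface INodesDB
{
    IEnumerable<T> QueryNodes<T>();
}

public interface IQueryableNodeEngine
{
    // the framework fills this property on engines implementing the interface
    INodesDB nodesDB { set; }
}

public class ScoreEngineExample : IQueryableNodeEngine
{
    INodesDB _nodesDB;
    public INodesDB nodesDB { set { _nodesDB = value; } }

    public bool hasDB { get { return _nodesDB != null; } }
}

// Trivial implementation used only to make the snippet runnable.
public class EmptyNodesDB : INodesDB
{
    public IEnumerable<T> QueryNodes<T>() { yield break; }
}
```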

This new object removes the need to create custom data structures to hold the nodes. Nodes can now be efficiently queried, as in this example:
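A polling engine might then iterate its nodes like this. The in-memory NodesDB below is a stand-in used to make the snippet self-contained, and the engine name is invented; only the QueryNodes-style access mirrors what the article describes:

```csharp
using System.Collections.Generic;
using System.Linq;

public class EnemyNode
{
    public float health = 100f;
}

// Minimal in-memory stand-in for the injected nodes database.
public class NodesDB
{
    readonly List<object> _nodes = new List<object>();
    public void Add(object node)          { _nodes.Add(node); }
    public IEnumerable<T> QueryNodes<T>() { return _nodes.OfType<T>(); }
}

// Hypothetical engine polling its nodes every tick.
public class HealthDecayEngine
{
    public NodesDB nodesDB;

    public void Tick(float damagePerTick)
    {
        // iterate all the enemy nodes currently known and update them
        foreach (var node in nodesDB.QueryNodes<EnemyNode>())
            node.health -= damagePerTick;
    }
}
```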

This example also shows how Engines can periodically poll or iterate data from Entities. However, it is often very useful to manage logic through events. Other ECS frameworks use Observers or very awkward “Event Bus” objects to handle communication via events. The event bus is an anti-pattern, and the extensive use of observers can lead to messy and redundant code. This is why I find the idea of dispatching events through Components quite ingenious. Most of the time, Engines need to communicate Entity-based events. If the Player hits an Enemy Entity, the PlayerGunShootingEngine needs to communicate to the HealthEngine that a specific Entity has been damaged. Doing this communication through the Entity Components is the most natural, and therefore least painful, way to do it:
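A sketch of such a dispatch (the Dispatcher and component shapes are stand-ins, as before; in the real framework the dispatched value carries the entity ID to the listeners):

```csharp
using System;

// Stand-in for the Svelto ECS Dispatcher class.
public class Dispatcher<T>
{
    event Action<T> _observers;
    public void AddObserver(Action<T> observer) { _observers += observer; }
    public void Dispatch(T value)               { if (_observers != null) _observers(value); }
}

public interface IHealthComponent
{
    int currentHealth { get; set; }
    Dispatcher<int> isDamaged { get; } // both dispatch the entity ID
    Dispatcher<int> isDead { get; }
}

public class HealthImplementer : IHealthComponent
{
    public int currentHealth { get; set; }
    public Dispatcher<int> isDamaged { get; } = new Dispatcher<int>();
    public Dispatcher<int> isDead { get; } = new Dispatcher<int>();
}

// Hypothetical version of the shooting logic: apply the damage, then
// signal what happened through the entity's own component.
public class PlayerGunShootingEngineExample
{
    public void ApplyDamage(int entityId, IHealthComponent health, int damage)
    {
        health.currentHealth -= damage;

        if (health.currentHealth <= 0)
            health.isDead.Dispatch(entityId);
        else
            health.isDamaged.Dispatch(entityId);
    }
}
```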

The example above dispatches an isDamaged or isDead event through the Component. Other Engines will listen to the same event through the same Component, therefore they will get the ID of the Entity in their callbacks, which can be used to query other nodes, like this isDead listener:
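Such a listener might be sketched as follows; the engine name and the node lookup are illustrative stand-ins for the mechanism described:

```csharp
using System.Collections.Generic;

public class HealthNode
{
    public bool isDead;
}

// Hypothetical listener engine: callbacks receive the entity ID and use
// it to query the related node.
public class DamageSoundEngineExample
{
    public readonly Dictionary<int, HealthNode> nodes = new Dictionary<int, HealthNode>();
    public string lastSoundPlayed;

    // callback registered on the component's isDead dispatcher
    public void OnDead(int entityId)
    {
        HealthNode node;
        if (nodes.TryGetValue(entityId, out node))
        {
            node.isDead = true;
            lastSoundPlayed = "death";
        }
    }
}
```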

The framework also introduces a special Component called IRemoveEntityComponent. When this interface is implemented, it will be automatically filled with a delegate that allows removing the Entity from all the engines. It’s enough to add it to one node to gain the ability to remove an Entity, as in this example:
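The mechanism might be sketched like this; the delegate type and node layout below are stand-ins for what the article describes:

```csharp
using System;

public interface IRemoveEntityComponent
{
    Action removeEntity { get; set; } // filled in by the framework
}

public class RemoveImplementer : IRemoveEntityComponent
{
    public Action removeEntity { get; set; }
}

// A node exposing the remove component: any engine holding this node
// can remove the entity from all the engines with one call.
public class EnemyNode
{
    public IRemoveEntityComponent removeEntityComponent;
}
```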

node.removeEntityComponent.removeEntity() will remove the entity from all the engines. The removeEntity delegate is injected automatically by the framework, so you will find it ready to use.

The unique feature of being able to receive the added and removed nodes in Engines, through the interface functions Add and Remove, has been maintained in this version of the framework. This functionality is still of fundamental importance, both to use custom data structures to hold nodes and to add/remove listeners to component events.


I understand that some time is needed to absorb all of this, and I understand that it’s a radically different way of thinking about entities than what Unity has taught us so far. However, there are numerous relevant benefits in using this framework besides the ones listed so far. Engines achieve a very important goal: perfect encapsulation. Encapsulation is not broken even by events, because the event communication happens through injected components, not through engine events. Engines must never have public members and must never be injected anywhere; they don’t need to be! This allows a very modular and elegant design. Engines can be plugged in and out, and refactoring can happen without affecting other code, since the coupling between objects is minimal.

Engines follow the Single Responsibility Principle. Whether they are generic or specialized, they must focus on one responsibility only. The smaller they are, the better. Don’t be afraid of creating Engines, even if they must handle only one node. Engines are the only place to write logic, so they must be used for everything. Engines can contain states, but these can never be entity-related states, only engine-related states.

Once the concepts explained above are well understood, the problem of designing code will become secondary. The coder can focus on implementing the code needed to run the game, rather than on creating a maintainable infrastructure. The framework will force you to create maintainable and modular code. Concepts like “Managers”, “Containers” and Singletons will just be a memory of the past.

Please feel free to experiment with it and give me some feedback.

Note, if you are new to the ECS design and you wonder why it could be useful, you should read my previous articles:

C# Murmur 3 hash anyone?

If you need a fast and reliable (read: very good at avoiding collisions) non-cryptographic hashing algorithm, look no further, Murmur 3 is here to the rescue.

Take it from here, it’s compatible with Unity3D.

Directly translated from the C++ version.

More info on Wikipedia and in this definitive answer on Stack Overflow.

The truth behind Inversion of Control – Part V – Entity Component System design to achieve true Inversion of Flow Control

I can imagine that someone, at this point, could wonder whether I still recommend using an IoC container. IoC containers are handy and quite powerful with Unity 3D, but they are dangerous. If used without Inversion of Control in mind, they can lead to messy code. Let me explain why.

When an IoC container is used with a standard procedural approach, thus without inversion, dependencies are injected in order to be used directly by the specialised classes. Without inversion, there is no limit to the number of dependencies that can be injected, therefore it doesn’t come naturally to apply the Single Responsibility Principle to the code written. Why should I split the class I am currently working on? It’s just an extra dependency injected and a few more lines of code, what harm can it do? Or also: should I inject this dependency just to check its state inside my class? It seems the simplest thing to do, otherwise I would need to refactor my classes…

Similar reasoning, plus the proverbial coder laziness, usually results in very awkward code. I have seen classes with more than ten dependencies injected, without this raising any sort of doubt about the legitimacy of such a shame.

Coders need constraints, and IoC containers unluckily don’t give any. In this sense, using an IoC container is actually worse than manual injection, because manual injection would limit the problem, due to the effort of passing so many parameters through constructors or setters.
Dependencies end up being injected in a sort of singleton fashion. Binding a type to a single instance becomes much more common than binding an interface to its implementation.

One way to limit this problem is to use multiple Composition Roots according to the application contexts. It’s also possible to have one Composition Root and multiple containers. In this way it would be possible to segregate classes and specify their scope, reflecting the internal layering of the application. An observer wouldn’t be injectable everywhere, but only into the classes inside the object graph of the specific container. In this sense, hierarchical containers could be quite handy.

I should definitely write an article on the best practices of using an IoC container, with more examples, but right now let’s see how dangerous an IoC container can become without Inversion of Flow Control in mind.

The following example is a classic class that doesn’t have a precise context or responsibility. It very likely started with a simple functionality in mind, but, being used as a sort of “Utility” class, there is no limit to the number of public functions it can have. Consequently, there is no limit to the number of dependencies it can have injected. “Look, this class already has all the information we need to add this functionality” … “OK, let’s add it” … “oh wait, this function actually needs these other dependencies to work; all right then, let’s add a new dependency”. This example is the worst of the lot, and unluckily pretty common when the concept of Inversion of Control is unknown or not clear.

My definition of “Utility” class is simple: whatever class that exposes many public functions ends up being a Utility class, used in several places. Utility classes should be static and stateless. Stateful Utility classes are always a design error.

The following class uses injection without Inversion of Flow Control in mind. It’s a utility class and exposes way too many public functions. Public functions reflect the class “responsibilities”, thus this class has way too many responsibilities.
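A condensed, hypothetical version of such a class could look like this; all the names are invented for illustration:

```csharp
// Anti-pattern: a stateful "utility" class accreting dependencies.
public interface IEnemySpawner { void Spawn(); }
public interface IScoreBoard   { void AddScore(int points); }
public interface ISoundPlayer  { void Play(string clip); }
public interface ISaveSystem   { void Save(); }

public class GameUtility
{
    readonly IEnemySpawner _spawner;
    readonly IScoreBoard _scoreBoard;
    readonly ISoundPlayer _soundPlayer;
    readonly ISaveSystem _saveSystem;
    // ...and the list keeps growing with every new "handy" function

    public GameUtility(IEnemySpawner spawner, IScoreBoard scoreBoard,
                       ISoundPlayer soundPlayer, ISaveSystem saveSystem)
    {
        _spawner = spawner;
        _scoreBoard = scoreBoard;
        _soundPlayer = soundPlayer;
        _saveSystem = saveSystem;
    }

    // every public function is one more "responsibility"
    public void SpawnEnemyWithSound() { _spawner.Spawn(); _soundPlayer.Play("spawn"); }
    public void AddKillScore()        { _scoreBoard.AddScore(10); }
    public void SaveAndNotify()       { _saveSystem.Save(); _soundPlayer.Play("saved"); }
}
```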

Using events is a good way to implement inversion of flow control: the class reacts to events from the injected classes. Injecting dependencies in order to register to their events is a reasonable use of injection. What we need to achieve is encapsulation, and public functions, of course, break encapsulation. What’s the problem with breaking encapsulation? The biggest problem is that your class is not in control of the flow anymore. The class won’t have any clue when and where its public functions will be called, therefore it cannot predict what will happen. Without being able to control the flow, it becomes very easy to break the logic of the class. A classic example is a public function that assumes that some class states, set asynchronously, are actually set before the public function is used. Adding checks in this kind of scenario will quickly turn into some horrible and unmanageable code.

Thus, events help in this scenario; however, we must be very careful when we use events. It’s very important not to assume what will listen to those events when the events are created. Events must be generic and have generic names. If events become specific, assuming what will happen when they are dispatched, then there will be no difference compared to using public functions. Using events and callbacks, the class will probably look like this:
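Its event-driven counterpart might look roughly like this, again with invented names; the point is that the class registers to a generic event and exposes no public functions:

```csharp
using System;

public interface IEnemyEvents
{
    event Action<int> EnemyKilled; // generic event: carries the enemy ID
}

public interface IScoreBoard
{
    void AddScore(int points);
}

// The class reacts to events of the injected dependency; having no
// public functions, it stays in control of its own flow.
public class KillScoreObserver
{
    readonly IScoreBoard _scoreBoard;

    public KillScoreObserver(IEnemyEvents enemyEvents, IScoreBoard scoreBoard)
    {
        _scoreBoard = scoreBoard;
        enemyEvents.EnemyKilled += OnEnemyKilled;
    }

    void OnEnemyKilled(int enemyId)
    {
        _scoreBoard.AddScore(10);
    }
}

// Trivial implementations used only to make the snippet runnable.
public class EnemyEvents : IEnemyEvents
{
    public event Action<int> EnemyKilled;
    public void RaiseKilled(int id) { if (EnemyKilled != null) EnemyKilled(id); }
}

public class ScoreBoard : IScoreBoard
{
    public int score;
    public void AddScore(int points) { score += points; }
}
```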

Finally, Inversion of Control can be optimally achieved through properly designed code. I found the Template Pattern to be a good companion. The following class is not injected into any class, but is instead registered inside a “manager” compatible with the template interface implemented. The manager class will hold a reference to this class and will use the object through the interface IFoo. The key is that the manager won’t have any clue about the IFoo implementation, following the Liskov Substitution Principle (the L in the SOLID principles).
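The pattern might be sketched like this; IFoo comes from the article, while the manager shape is an assumption:

```csharp
using System.Collections.Generic;

public interface IFoo
{
    void Execute();
}

// The manager knows its registered objects only through IFoo
// (Liskov Substitution Principle): any IFoo implementation will do.
public class FooManager
{
    readonly List<IFoo> _foos = new List<IFoo>();

    public void Register(IFoo foo) { _foos.Add(foo); }

    public void ExecuteAll()
    {
        foreach (var foo in _foos)
            foo.Execute();
    }
}

public class ConcreteFoo : IFoo
{
    public int executedCount;
    public void Execute() { ++executedCount; }
}
```

The concrete class never exposes itself to the rest of the code: it is driven entirely by the manager, which owns the flow.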

Programmers tend to code according to their own code design knowledge, based on their own experience. If a tool is too flexible, it can be used in ways that cause more harm than benefit. That’s why I love rigid frameworks: they must dictate the best way to be used, without being open to interpretation. With this in mind, I started to research possible alternatives to IoC containers and, as discussed in my Entity Component System architecture article, I realised that a proper ECS implementation for Unity could be what I was looking for.

In order to prove that an ECS solution could be viable in Unity, I had to spend several hours during my weekends to create a new framework and an example with it. The framework and the example can be found separately at these addresses:

Download both repositories and sort out the right folders Svelto-ECS and Svelto-ECS-Example. Open the scene Svelto-ECS-Example\Scenes\Level 01.unity to run the Demo.

The example is the original Unity 3D survival shooter, rewritten and adapted to my ECS framework. However, this framework is still not production ready: it misses some fundamental functions and it’s still rough (note: this is not true anymore; the framework is now production ready). Nevertheless, it has been really useful to prove and demonstrate my theories.

The intention was to create a framework able to push coders to write highly cohesive, loosely coupled code[1] with the Open/Closed and Single Responsibility Principles in mind. Following these principles naturally leads to fewer injected dependencies and, as will be shown with the simple example I wrote, the use of dependency injection will be limited to the scope of showing its compatibility with the framework.

Of course, it must be clear that the use of such a framework makes sense only when it’s applied to a big project maintained by several coders. The survival demo itself is too simple to really appreciate the potential of the idea. In fact, for this specific example, I could say that using my ECS framework is basically over-engineering. However, I do suggest experimenting with the library anyway, so that you can understand its potential.

A real Entity Component System in Unity

Note: the following code examples are not updated to the new version of the library.

My implementation of ECS is very similar to many out there, although, after studying the most famous frameworks, I noticed that there isn’t a well defined way to let systems communicate with each other, a problem that I solved in a novel way. But let’s proceed step by step.

I have already talked about what an ECS is, so let’s see how we can make it fit in Unity. Let’s start from a list of rules created to use my framework properly:

  1. All the logic must be always written inside Systems.
  2. Components don’t hold logic, they are just data wrappers (Value Objects).
  3. Components can’t have methods, only getters and setters. (this is not enforced yet, but in the final version the framework will throw an exception otherwise)
  4. Systems do not hold components data or state. They can hold system states (not really enforced, but it’s a rule).
  5. Each System has one responsibility only. (not really enforced, but it’s a rule).
  6. Systems cannot be injected (not really enforced, but it’s a rule).
  7. Systems communicate between each other mainly through components, but also through observers and similar patterns.
  8. Systems can have injected dependencies.
  9. Systems do not know each other (not really enforced, but it’s a rule).
  10. Systems should be defined inside sensible namespaces according to their context.
  11. Entities are not defined by objects, they are just IDs.

First we need a composition root. We already know how to effectively introduce a composition root in Unity from my IoC container example and, since the composition root is independent of the container itself, we can surely re-use the same mechanism without using any IoC container.

The Composition Root becomes the place where the systems can be defined. In my framework, Systems are actually called Engines, and they all implement the IEngine interface. In the example, most of the time, the INodeEngine specialization is used instead.

Following the code, it makes sense to start from our root; inside the MainContext.cs the setup of the engines is found:

enginesRoot is the new IEngine container, while tickEngine is not strictly part of the ECS framework; it’s used to add the Tick functionality to whatever class implements ITickable.

Entity Creation in Svelto ECS

Engines are sort of “managers” meant to solve one single problem on a set of components grouped by entity nodes. An Entity can hold several components, and an engine is constantly aware of specific components from all the entities in the game; thus an engine must be able to select the ones it’s interested in.
Other frameworks implement a query mechanism that lets the systems pick up the components to use, but as far as I understood, this can limit the design of systems and incur possible performance penalties (note: a fast query system has been introduced in the final version). My method is more flexible, but has the disadvantage of generating some boilerplate code. With this method, each INodeEngine can accept nodes (not components; I’ll talk about them later) through the following interface:
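The interface shape, as described (Add and Remove are the function names used later in the article; the INode marker and exact signatures below are stand-ins):

```csharp
// Stand-in marker interface for nodes.
public interface INode { }

// Each engine accepts and releases nodes through Add/Remove; the exact
// signatures in the library may differ slightly.
public interface INodeEngine
{
    void Add(INode node);
    void Remove(INode node);
}

// Minimal engine used to show the mechanism.
public class CountingEngine : INodeEngine
{
    public int nodeCount;
    public void Add(INode node)    { ++nodeCount; }
    public void Remove(INode node) { --nodeCount; }
}

public class EnemyNode : INode { }
```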

Since the library is not complete yet, at the moment entities can be created only through GameObjects, and components through MonoBehaviours, but this is a limitation I will eventually remove. In fact, it would be a mistake to associate GameObjects with entities and MonoBehaviours with components. What I wrote is just a bridge to make the transition painless and natural.

Components are related to the design of the Entity itself, and they must not be created with the Engine logic in mind. They essentially are units of data that should be shareable between entities. The more general the design of a component, the more reusable it is. For example, the following components are entity-agnostic and reused among different entity types in our example:

The coder must face an important decision when it’s time to write components. Should several reusable small components (often holding just one piece of data) be used, or fewer, non-reusable “fatter” components? I faced this decision myself several times during the creation of the example and, within its limited scope, I eventually found out that shareable components are better in the long term, since they have more chances to be reused (therefore avoiding writing new components for specific entities).

As you can see, components are defined through interfaces. This could seem like a weird decision, but it is actually very practical when it’s time to define components through MonoBehaviours. I often notice that ECS frameworks have the drawback of forcing the coder to dynamically allocate several objects when a new entity is created. While object pooling can help, the run-time creation of new entities always has a negative impact. I then realised that, as long as the components are defined through interfaces, they can all be implemented inside bigger objects that are never directly used. With Unity this also comes in handy, since a few MonoBehaviours that implement multiple interfaces will do the job.

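A sketch of such an object follows. In Unity it would derive from MonoBehaviour; the component interfaces here are illustrative, and the explicit interface implementation mirrors the style discussed below:

```csharp
public interface IHealthComponent { int currentHealth { get; set; } }
public interface ISpeedComponent  { float speed { get; } }

// In Unity: public class EnemyComponentsHolder : MonoBehaviour, ...
// One object implements several component interfaces explicitly, so the
// engines only ever see it through the single interface they need.
public class EnemyComponentsHolder : IHealthComponent, ISpeedComponent
{
    int _health = 100;
    float _speed = 1.5f;

    int IHealthComponent.currentHealth
    {
        get { return _health; }
        set { _health = value; }
    }

    float ISpeedComponent.speed
    {
        get { return _speed; }
    }
}
```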
The MonoBehaviour above has no meaning for my framework, but instead of being destroyed, it can be reused to hold the data exposed by all the interfaces implemented. I found the use of explicit interface implementation very convenient to remember which method implements which interface (note: since I wrote this article, I have found out that explicitly implemented methods are actually always virtual calls, therefore they introduce a performance penalty. If performance is critical, ignore my suggestion).

The framework will actually pass the reference of the MonoBehaviour to the Engines, but the Engines will know the components only through their interfaces. The Engines are not aware of the fact that the object implementing the component interfaces is actually a MonoBehaviour, and it would be wrong otherwise.

Nodes, what are they for?

While components are designed to work with entities, nodes are defined by the engines. In this way the coder won’t be confused about how to design a component. Should the component follow the nature of the Entity or fit the necessity of the Engine? This problem initially led me to very awkward code when I wrote the first draft of the demo. Separating the two concepts helped to define a simple, but solid, structure for the code.

With each Engine comes the relative set of nodes. It’s the Engine that defines its own Nodes, and Nodes shouldn’t be shared between Engines. An Engine must be designed as an independent and separated piece of code. However, I eventually decided to relax this rule when I found out that the ECS framework is also useful to separate the class logic into several levels of abstraction. Engines can be grouped into namespaces, according to what they do. For example, all the engines that manage enemy components are under the EnemyEngines namespace, and all the engines that manage player components are under the PlayerEngines namespace. Likewise, all the nodes usable by enemy engines are defined under the EnemyEngines namespace and all the nodes usable by player engines are defined under the PlayerEngines namespace.

The rule is that nodes defined in a namespace can be used only by the engines in that namespace, and the namespace itself will help the coder not to mix up classes from different environments.

Namespaces are logically layered: while enemy engines and player engines are relatively specific, since they can operate only on enemies and the player respectively, the HealthEngine is instead abstract and can operate on both enemies and players. However, since the HealthEngine is neither in the Player nor in the Enemy namespace, it can know the entity components only through its own node, the DamageNode (I know, I should have called it HealthNode).

Nodes are simply new objects that group entity components in a way that is suitable to the engine logic. In order to create Nodes through GameObjects, a NodeHolder MonoBehaviour must be defined. While the amount of code needed to enable this functionality is very limited, the boilerplate code is again a problem, so it’s something I will surely address in the next versions of the framework.
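As an illustration (again with hypothetical names, mirroring the interfaces sketched earlier), a node is little more than a bag of component references shaped around one engine family’s needs:

```csharp
using System;

// Hypothetical component interfaces.
public interface IHealthComponent { int currentHealth { get; set; } }
public interface IDamageEventComponent { event Action<int> damageReceived; }

// A node groups exactly the components one engine family consumes; here,
// the components a health-management engine would need. It holds no logic.
public class DamageNode
{
    public IHealthComponent healthComponent;
    public IDamageEventComponent damageEventComponent;
}
```

In the real framework the NodeHolder MonoBehaviour would fill these references from the components implemented on the GameObject.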

Once a GameObject defines its set of components and node holders, the framework will automatically inject the nodes into the compatible engines when the GameObject is created. The GameObject can be created either statically in the scene, as a child of the GameContext GameObject, or dynamically, using the GameObjectFactory.


Components and MonoBehaviours

Components cannot hold logic and, as I explained, our components are NOT MonoBehaviours, but can be implemented through MonoBehaviours. This is important not just for saving dynamic allocations, but also to not lose the features of the native Unity framework, which all work through MonoBehaviours.

Since at the end of the day it’s impossible to write a game without GameObjects and MonoBehaviours, I need the framework to coexist with and improve the Unity functionalities, not fight them, which would just produce inefficient code.
This is the reason why I decided not to change the original implementation of the enemies’ player detection. In the original survival demo, the enemies detect if a player is in range through the OnTriggerEnter and OnTriggerExit MonoBehaviour callbacks.

The EnemyTrigger MonoBehaviour still uses these functions, but all it does is fire an entityInRange event. Now pay extreme attention: the object will actually NOT set the playerInRange value, it will just fire an event. The MonoBehaviour cannot decide if the player is actually in range or not; this decision must be delegated to the Engine. Sounds strange? Maybe, but if you think about it, you will realise that it actually makes sense. All the logic must be executed by the engines only.

These values will then be used by the engine responsible for the enemy attack, and the job of the MonoBehaviour simply ends here.
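The flow just described can be sketched like this (names are illustrative; in the real project EnemyTrigger is a MonoBehaviour and the event is fired from OnTriggerEnter/OnTriggerExit, simulated here with a plain method so the snippet runs standalone):

```csharp
using System;

public interface IEnemyTriggerComponent
{
    event Action<bool> entityInRange;
    bool playerInRange { get; set; }
}

// In Unity this would be a MonoBehaviour; the trigger callbacks are
// simulated with a plain method here.
public class EnemyTrigger : IEnemyTriggerComponent
{
    public event Action<bool> entityInRange;
    public bool playerInRange { get; set; }

    // Stands in for OnTriggerEnter/OnTriggerExit: it only fires the event,
    // it never sets playerInRange itself.
    public void SimulateTrigger(bool inRange)
    {
        if (entityInRange != null) entityInRange(inRange);
    }
}

// The engine alone decides what the event means and sets the value.
public class EnemyAttackEngine
{
    public void Add(IEnemyTriggerComponent component)
    {
        component.entityInRange += inRange => component.playerInRange = inRange;
    }
}
```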


Communication in Svelto ECS

I understand that all these concepts are not easy to absorb; that’s why I wrote four articles before this one to introduce them. I also understand that if you haven’t worked on large projects, it’s hard to see the benefits of this approach. That’s why I invite you to remember the intrinsic limits of the Unity framework, and then the pitfalls of the IoC containers that try to overcome them.
Of course, using such a different approach makes sense only if eventually everything becomes easier to manage and, besides the current problem of the boilerplate code, I believe this is the case.

One of the most problematic issues that the framework solves brilliantly is the communication between entities. Engines are natural entity mediators: they know all the relevant components of all the entities currently alive, therefore they can take decisions being aware of the global situation instead of being limited to a single entity scope.

Let’s take into consideration the EnemyAttackEngine class, which is the engine that uses the IEnemyTriggerComponent objects. It relies on the IEnemyTriggerComponent event to know if a player is potentially in range or not, but I decided to do this just for the sake of showing how the ECS framework can interact with the Unity framework. I can also guess that OnTriggerEnter performs better than C# code, but I am not totally sure this is the case.

What I could have done instead is store the enemies’ transform components in a List and iterate over them every frame, through the Tick function, to know if the player is in range or not. The engine could have set the component’s playerInRange value without waiting for the MonoBehaviour event.
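Such a polling alternative could look roughly like this (a sketch with invented names; plain positions stand in for Unity transforms so it runs standalone):

```csharp
using System;
using System.Collections.Generic;

// Minimal stand-in for the position data a polling engine would read.
public class PositionComponent { public float x, z; }

public class RangeCheckEngine
{
    readonly List<PositionComponent> _enemies = new List<PositionComponent>();
    public PositionComponent player;
    public bool playerInRange;
    const float Range = 2.0f;

    public void AddEnemy(PositionComponent enemy) { _enemies.Add(enemy); }

    // Called once per frame by the framework: the engine itself computes
    // whether any enemy is close enough to the player.
    public void Tick()
    {
        playerInRange = false;
        foreach (var enemy in _enemies)
        {
            float dx = enemy.x - player.x, dz = enemy.z - player.z;
            if (dx * dx + dz * dz < Range * Range) { playerInRange = true; break; }
        }
    }
}
```

In the real design the in-range flag would live on a per-enemy component rather than on the engine, but the point stands: the logic runs in the engine’s Tick, not in a MonoBehaviour.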

Why do components have events?

Interesting question. Many ECS frameworks do not actually have this concept; events are usually sent through external observers or an EventBus. The decision I took about using events inside components is possibly one of the most important ones. In this simple demo I actually use an observer once, just to show that it’s possible, but otherwise I wouldn’t have needed it. All the communication between entities and engines, and among engines, can usually be performed through component events alone.

It’s very important to design these events properly though. Engines know nothing about other engines. They cannot assume/operate/be aware of anything outside their scope. This is how we can have low-coupled, high-cohesive code.

Practically speaking, this means that an Engine cannot fire a component event that semantically doesn’t make sense inside the engine itself. Let’s take into consideration the HudEngine class. The HudEngine has the responsibility to update the HUD (a little broad as responsibilities go, but it’s OK for such a simple project). One of the HUD functions is to flash when the player is damaged.

The DamageNode uses the IDamageEventComponent to know when the player is damaged. Obviously the IDamageEventComponent has a generic damageReceived event; it wouldn’t make sense otherwise. However, let’s say that, in a moment of confusion, thinking about the HudEngine functionalities, I decided to have a flashHudEvent instead. This would have been wrong on many levels:

  • first, every time an event is used as a substitute for a public function, it means there is a code design flaw. This rule holds regardless of this article: events always follow the inversion of flow control and are never designed to call a specific function.
  • I would have needed a new component that, from the Entity’s point of view, wouldn’t have made any sense.
  • The HealthEngine, which decides when an entity is damaged, would have known about the concept of a flashing HUD (waaaaat?)

Sometimes this reasoning is not so obvious; that’s why a self-contained engine, not aware of the outside world, will force the coder to think twice about the code design (which was my intent all along).
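The correct shape of this communication can be sketched as follows (hypothetical names; the ApplyDamage method stands in for whatever engine actually inflicts damage in the real project). The damage side fires only the generic damageReceived event, and the HudEngine alone translates it into a HUD flash:

```csharp
using System;

public interface IDamageEventComponent { event Action damageReceived; }

// Implementor side: fires the generic event, knows nothing about HUDs.
public class PlayerImplementor : IDamageEventComponent
{
    public event Action damageReceived;
    public void ApplyDamage() { if (damageReceived != null) damageReceived(); }
}

// The HudEngine decides, inside its own scope, that "damage" means "flash".
public class HudEngine
{
    public int flashCount; // stands in for triggering the flash effect
    public void Add(IDamageEventComponent component)
    {
        component.damageReceived += () => flashCount++;
    }
}
```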

Putting it all together

The framework is designed to push the coder to write highly cohesive, loosely coupled code based on engines that follow the Single Responsibility and the Open/Closed principles. In the demo the number of files has multiplied compared to the original version for two fundamental reasons: the old classes were not following the SRP, so the number of files naturally increased; moreover, due to the current state of the framework, annoying boilerplate code is needed. Any suggestion and feedback is welcome!

note: Svelto ECS is now production ready, read more about it here.


Other famous ECS frameworks include Ash, and several ECS frameworks that work with Unity3D exist already.
I strongly suggest to read all my articles on the topic:

How to perfectly synchronize workspaces with older versions of Perforce

The latest versions of Perforce finally support the useful and for too long underestimated command p4 clean. Using this command it is possible to delete all the local files that are not present in the depot and sync all the files that have been modified locally, or, to be more precise:

  1. Files present in the workspace, but missing from the depot, are deleted from the workspace.
  2. Files present in the depot, but missing from the workspace, are added to the workspace at the last synced revision.
  3. Files modified in the workspace that have not been checked in are restored to the last version synced from the depot.

This is very useful when it is time to use Perforce on a build machine, since it is of fundamental importance to be sure that the workspace totally matches the depot.

Those who, like me, are forced to use a previous version, though, are destined to swear ad infinitum. However, sick of the problem, I did some research and found some articles with very good solutions. After putting them together, I just wanted to share them with you.

The first part checks for all the files that are different in the workspace and, thanks to the piping facility, force-syncs them. I found out that it is safer to call the commands separately instead of using the options -sd and -se on the same diff.
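The original snippet is not reproduced here; a reconstruction under standard p4 behaviour (`p4 diff -se` lists unopened files whose content differs from the depot, `p4 diff -sd` lists unopened files missing from the workspace, and `p4 -x -` reads file arguments from stdin) would look like this:

```shell
# Force-sync unopened files whose content differs from the depot...
p4 diff -se //... | p4 -x - sync -f
# ...then, separately, force-sync unopened files missing from the workspace.
p4 diff -sd //... | p4 -x - sync -f
```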

The second part uses a batch script to delete all the files present in the workspace but not in the depot, exploiting the reconcile command.
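Again a reconstruction rather than the original batch file: `p4 status` (a preview alias of `p4 reconcile -n`) lists local files that would be opened for add, i.e. files that exist locally but not in the depot. The exact output format varies between Perforce versions, so verify with a dry run before deleting anything:

```shell
# Preview which local files are unknown to the depot, then delete them.
# p4 status prints lines like: "path/file.ext - reconcile to add //depot/...#1"
p4 status | grep ' - reconcile to add ' | sed 's/ - reconcile to add .*//' |
while read -r f; do
    echo "deleting $f"
    rm -f "$f"   # comment this line out first to do a dry run
done
```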

As an extra bonus, I add the following one:

I don’t understand why the revert-unchanged-files command often doesn’t actually revert identical files. This command will succeed where the standard command fails.
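The recipe in question is presumably the classic one based on `p4 diff -sr`, which lists opened files whose content does not differ from the depot revision:

```shell
# Revert opened-for-edit files that are byte-identical to the depot revision.
p4 diff -sr //... | p4 -x - revert
```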

The truth behind Inversion of Control – Part IV – Dependency Inversion Principle

The Dependency Inversion Principle is part of the SOLID principles. If you want a formal definition of DIP, please read the articles written by Martin[1] and Schuchert[2].

We explained that, in order to invert the control of the flow, our specialized code must never directly call methods of more abstracted classes. The methods of our framework classes are not explicitly used; it is the framework that controls the flow of our less abstracted code. This is also (sarcastically) called the Hollywood principle: “don’t call us, we’ll call you”.

Although the framework code must take control of less abstracted code, it would not make any sense to couple our framework with less abstracted implementations. A generic framework doesn’t have the faintest idea of what our game needs to do and, therefore, wouldn’t understand anything declared outside its scope.

Hence, the only way our framework can use objects defined in the game layer, is through the use of interfaces, but this is not enough. The framework cannot know interfaces defined in a less abstracted layer of our application, this would not make any sense as well.

The Dependency Inversion Principle introduces a new rule: our less abstracted objects must implement interfaces declared in the higher abstracted layers. In other words, the framework layer defines the interfaces that the game entities must implement.

In a practical example, RendererSystem handles a list of IRendering “Nodes”. The IRendering node is an interface that declares, as properties, all the components needed to render the Entities, such as GetWorldMatrix, GetMaterial and so on. Both the RendererSystem class and the IRendering interface are declared inside the framework layer. Our specialised code needs to implement IRendering in order to be usable by the framework.
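A standalone sketch of this arrangement (simplified types replace real matrices and materials; the names follow the ones used above, but the method bodies are invented):

```csharp
using System.Collections.Generic;

// --- framework layer: both the system and the interface live here ---
public interface IRendering
{
    float[] GetWorldMatrix { get; }
    string GetMaterial { get; }
}

public class RendererSystem
{
    readonly List<IRendering> _nodes = new List<IRendering>();
    public void Add(IRendering node) { _nodes.Add(node); }

    public int Render()
    {
        int drawn = 0;
        foreach (var node in _nodes)
            if (node.GetWorldMatrix != null && node.GetMaterial != null)
                drawn++; // stand-in for an actual draw call
        return drawn;
    }
}

// --- game layer: the specialised entity implements the framework interface ---
public class Spaceship : IRendering
{
    public float[] GetWorldMatrix { get { return new float[16]; } }
    public string GetMaterial { get { return "hull"; } }
}
```

Note the direction of the dependency: the game-layer Spaceship depends on the framework-layer IRendering, never the other way around.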

Designing layered code

So far I have used the word “framework” to identify the most abstracted code and “game” to identify the least abstracted code. However, “framework” and “game” don’t mean much, and limiting our layers to just “the game layer” and “the framework layer” would be a mistake. Let’s say we have systems that handle very generic problems that can be found in every game, like the rendering of the entities, and we want to enclose this layer in a namespace. We have defined a layer that could even be compiled into a DLL and shipped with whatever game.

Now, let’s say we have to implement logic that is closer to the game domain: a HealthSystem that handles the health of the game entities with health. Is HealthSystem part of a very generic framework? Surely not. However, while HealthSystem will handle the common logic of the IHaveHealth entities, not all the game entities will have the same behaviors. Hence HealthSystem is more abstracted than the more specialized behavior implementations. While this abstraction alone probably wouldn’t justify the creation of another framework, I believe that thinking in terms of layered code helps in designing better systems and nodes.

Putting ECS, IoC and DIP all together

As we have seen, the flow is not inverted when a bottom-up design approach is used to break down the problem, that is, when the specialized behaviors of the entities are modeled before the generic ones, or when the systems are designed as a result of the specialized entity problems.

In my vision of Inversion of Control, the solutions must be broken down using a top-down approach. We should think of the problems starting from the most abstracted classes. What are the common behaviors of the game entities? What are the most abstracted systems we should write? What once would have been solved by specializing classes through inheritance should now be solved by layering our systems within different levels of code abstraction and declaring the relative interfaces to be used by the less abstracted code. Generic systems should be written before the specialized ones.

I believe that in this way we could benefit from the following:

  • We will be sure that our systems will have just one responsibility, modeling just one behavior
  • We will basically never break the Open/Closed principle, since new behaviors mean creating new systems
  • We will inject far fewer dependencies, avoiding using an IoC container as a Singleton alternative
  • It will be simpler to write reusable code
  • We could potentially achieve real encapsulation

In the next article I will explain how I would put all these concepts together in practice.




The truth behind Inversion of Control – Part III – Entity Component System Design

In the previous article I explained what the Inversion of Control principle is, then I introduced the concept of Inversion of Flow Control. In this article I will illustrate how to apply it properly, even without using an IoC container. In order to do so, I will talk about Entity Component System design. While apparently it has nothing to do with Inversion of Control, I found it to be one of the best ways to apply the principle.

Once upon a time, there was the concept of the game engine. A game engine was a game-specialized framework that was supposed to run whatever game. These game engines used to have some common classes, designed as sorts of “managers”, that were found more or less in all of them, like the Render class. Every time a new object with a Renderer was created, the Renderer component of the object was added to a list of Renderers managed by the Render class. This was also true for other components, like the Collision component, the Culling component and so on. The engine dictates when to execute the culling, when to execute the collisions and when to execute the rendering. The less abstracted objects don’t know when they will be rendered or culled; they only assume that, at a given point, it will happen.

The game engine was taking control of the flow, resulting in the first form of Inversion of Flow Control. There is no difference between what I just explained and what the Unity engine does. Unity decides when it is time to call Awake, Start, Update and so on. The Unity framework is capable of achieving both Inversion of Creation Control and Inversion of Flow Control. MonoBehaviour instances cannot be directly created by the users; whether they are already present in the scene or created dynamically, it’s Unity that creates them for us. The Inversion of Flow Control is instead achieved through the adoption of the Template Pattern: our MonoBehaviour classes must follow a specific template (through the Awake, Start, Update and similar functions) in order to be usable by the Unity framework “managers”.
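A stripped-down illustration of this mechanism (plain C#, with a hypothetical Behaviour base class standing in for MonoBehaviour and a toy framework loop standing in for Unity):

```csharp
using System.Collections.Generic;

// The framework defines the template the custom code must follow.
public abstract class Behaviour
{
    public virtual void Start() { }
    public virtual void Update() { }
}

// The framework, not the user code, decides when the template methods run.
public class ToyEngine
{
    readonly List<Behaviour> _behaviours = new List<Behaviour>();
    public void Register(Behaviour behaviour) { _behaviours.Add(behaviour); }

    public void RunFrame(bool firstFrame)
    {
        foreach (var b in _behaviours)
        {
            if (firstFrame) b.Start();
            b.Update();
        }
    }
}

// Custom, less abstracted code: it never calls Start/Update itself.
public class Spinner : Behaviour
{
    public int updates;
    public override void Update() { updates++; }
}
```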

Now, let’s have a look at what a modern Entity Component System instead is.

Modern Entity Component Systems

A more advanced way of managing entities was introduced in 2007 with the modern implementations of Entity Component System design.

In 2007, the team working on Operation Flashpoint: Dragon Rising experimented with ECS designs, including ones inspired by Bilas/Dungeon Siege, and Adam Martin later wrote a detailed account of ECS design[1], including definitions of core terminology and concepts. In particular, Martin’s work popularized the ideas of “Systems” as a first-class element, “Entities as ID’s”, “Components as raw Data”, and “Code stored in Systems, not in Components or Entities”.[2]

ECS design uses Inversion of Flow Control in its purest form. ECS is also a magnificent way to possibly never break the Open/Closed principle when it’s time to add new behaviors.

The Open/Closed principle, which is also part of the SOLID principles, says:

software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification

OCP is the holy grail of perfect code design. If well adopted, it should never be necessary to go back to previously created classes and add new functions in order to add new behaviors.

The ECS design works in this way:

  • Entities are just basically IDs.
  • Components are ValueObjects[3]. Components wrap data in objects that can be shared between Systems. Components do not have any logic to manage the data.
  • Systems are the classes where all the logic lies. Systems can directly access lists of components and execute all the logic the project needs.
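These three bullet points can be condensed into a minimal, self-contained sketch. The storage strategy here, a dictionary per component type, is just the simplest thing that works, not how a production ECS lays out its memory:

```csharp
using System.Collections.Generic;

// Entities are just IDs (plain ints).
// Components are plain data with no logic.
public struct Position { public float x, y; }
public struct Velocity { public float dx, dy; }

// Systems hold all the logic and iterate over the component lists.
public class MovementSystem
{
    public readonly Dictionary<int, Position> positions = new Dictionary<int, Position>();
    public readonly Dictionary<int, Velocity> velocities = new Dictionary<int, Velocity>();

    public void Update(float dt)
    {
        // Move every entity that has both a Position and a Velocity.
        foreach (var id in new List<int>(velocities.Keys))
        {
            if (!positions.ContainsKey(id)) continue;
            var p = positions[id];
            var v = velocities[id];
            p.x += v.dx * dt;
            p.y += v.dy * dt;
            positions[id] = p;
        }
    }
}
```

Adding a new behavior (say, gravity) means adding a new system over the same data, leaving MovementSystem untouched, which is the Open/Closed property discussed above.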

I understand that the concept of a data component could sound weird at first; I had problems wrapping my head around it initially as well. I guess the key that cleared my doubts was understanding that there isn’t such a thing as a System being too small. A System can be anything that handles logic, just as the Presenter or the Controller handles logic in the MVP or MVC pattern respectively (if you don’t know what they are, never mind for the moment).

The really powerful aspect of this design is that Systems must follow the Single Responsibility Principle. All the behaviors of our in-game entities are modeled inside Systems, and every single behavior will have a single System, well defined in terms of domain, to manage it.

This is how we can apply the Open/Closed principle without ever breaking it: every time we need to introduce a new specific behavior, we are forced to create a new System that simply uses components as data to execute it.

Systems are implicitly mediators as well; this is why Systems are great for letting Entities communicate with each other and with other Systems (through the use of the Components). However, Systems are not always able to cope with all the possible communication problems just through the use of Components, and for this reason Systems, which should be instantiated in the Composition Root, can have other dependencies injected (I’ll give more practical examples in the next articles).

Also, pay attention: Systems model all the logic of your framework AND game. When ECS design is used, there won’t be any implementation difference between the framework logic (like RenderingSystem, PhysicSystem and so on) and the game logic (like AliensAISystem, EnemyCollisionSystem and so on), but there will still be a sharp difference in the code separation: framework and game systems will still lie in different layers of the application. This introduces another very important concept: the design of our game application with multiple layers of abstraction. Using just two layers of abstraction is not enough for a complex game; the framework layer and the game layer alone are not enough. We need to make our game layering more granular.

All that said, I noticed that there is some confusion when it’s time to define the design adopted by Unity. Although its “managers” can be considered “systems”, they are not systems according to the modern definition, since they cannot be defined or extended by the programmer.
Of course, when it is not possible to extend the “Systems” functionalities, the only way to extend the logic of our entities is to add logic inside Components. This is anyway a step forward from the classic OOP techniques. Component Oriented frameworks (I call them Entity Component, without System) like the one in Unity push the coder to favor Composition over Inheritance, which is surely a better practice.
All the logic in Unity should be written inside focused MonoBehaviours. Every MonoBehaviour should have just one functionality, or responsibility, and it shouldn’t operate outside the GameObject itself. MonoBehaviours should be written with modularity in mind, in such a way that they can be reused independently on several GameObjects. MonoBehaviours also hold data, and their design clearly follows the basic concepts of OOP.
Modern design tends instead to separate data from logic. Just as Data, Views and Logic are separated when the Model View Controller pattern is implemented, the same happens with ECS design, through Components and Systems, in order to achieve better code modularity.

Before finally showing the benefits of the ECS design approach with real code, I will explain the concept of the Dependency Inversion Principle in the next article of this series.






The truth behind Inversion of Control – Part II – Inversion of Control

Note: this article assumes you already read my previous articles on IoC containers and the Part I

Inversion of Control is another concept that turns out to be very simple once it has been fully processed, or rather “absorbed”: absorbed as in going beyond being understood and becoming part of one’s own forma mentis. However, make no mistake: Inversion of Control is not the same as an Inversion of Control Container (which is not a principle, but just a tool to simplify Dependency Injection)[1]. Inversion of Control container is a confusing name in this sense[2], which is why many use the name Dependency Injection Library instead.

While using an IoC container is very simple, being able to invert the control of the code is another matter. The process of adaptation is not straightforward, since the entire code paradigm has to change. In order to explain this paradigm, I need to introduce the concept of code abstraction. The following definitions will be explored in more detail in the next articles, so come back here if they aren’t grasped immediately.

Inversion of control cannot really be applied successfully without designing the application with multiple layers of abstraction. The higher the abstraction, the more general the scope of the code. For example, a general framework is part of the highest levels of abstraction, since it could be used by whatever type of application. A class that manages the health of the enemies in a game belongs to a lower level of abstraction.

If we think of our code structure in terms of layers of abstraction, Inversion of Control is all about giving the control to the more abstracted classes instead of the less abstracted ones. Of course you would ask: control of what? The idea is that general classes should control both the creation and the flow of the more specialised code.

How can the object creation be inverted? It’s simple: code that follows the Inversion of Creation Control rules never uses the new keyword explicitly to create dependencies. All the dependencies are always injected, never created directly.

If we apply this reasoning to the injection waterfall discussed in the first article of this series, it is easy to see that ALL the dependencies, therefore all the objects, must be created and initially passed from the Composition Root. The Composition Root effectively becomes the only place where all the starting relations between objects are made.

If you wonder how to create dynamic dependencies, like objects that must be spawned in run-time, you asked yourself a good question. These objects are always created by factories and factories are always created and passed as dependency from the Composition Root.

Simply put, the new operator should be used only in the Composition Root or inside factories. An IoC container hides this process, creating and passing all the dependencies automatically instead of forcing the user to pass them through constructors or setters. The application-agnostic IoC container code takes control of the creation of all the dependencies. Of course, dynamic allocation is still freely used to allocate data structures, but data structures are not dependencies.

Why is inverting the creation control important? Mainly for the following reasons:

  1. the code becomes independent from the constructor implementation. A dependency can, therefore, be passed by interface, effectively removing all the coupling between the implementations of classes.
  2. because of 1, your code will depend only on the abstraction of a class and not on its implementation. In this way it’s possible to swap implementations without changing the code.
  3. the injected object comes already created, with its dependencies resolved. If the code had to create the instance of the object, those dependencies would have to be known as well.
  4. the flow of the code can change according to the context. Without changing the code, it is possible to change its behaviour just by passing different implementations of the same interface.

The first point is fundamental to being able to mock up implementations when unit tests are written but, even if unit tests are not part of your development process, the fourth point can lead to cleaner code when it’s time to implement different code paths.

Can Inversion of Creation Control be achieved without using an IoC container? Absolutely. Let’s see how, simply using manual dependency injection:
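The original listing is not reproduced here; what follows is a compact reconstruction based on the description in the paragraphs below (class names such as Level, LevelManager, IEnemySpawner, EnemyFactory, EnemyA and EnemyB come from that description, while the method bodies are my own guesses):

```csharp
using System;
using System.Collections.Generic;

public class Player { public int Health = 100; }

public interface IEnemy { void Attack(Player player); }

public class EnemyA : IEnemy
{
    readonly Random _random;
    public EnemyA(Random random) { _random = random; }
    public void Attack(Player player) { player.Health -= _random.Next(1, 4); }
}

public class EnemyB : IEnemy // type B can inflict more damage than type A
{
    readonly Random _random;
    public EnemyB(Random random) { _random = random; }
    public void Attack(Player player) { player.Health -= _random.Next(3, 8); }
}

// The factory encapsulates the new operator and the Random dependency,
// which the spawners never need to know about.
public class EnemyFactory
{
    Random _random;
    public void SetRandom(Random random) { _random = random; } // injected by setter
    public IEnemy BuildEnemyA() { return new EnemyA(_random); }
    public IEnemy BuildEnemyB() { return new EnemyB(_random); }
}

public interface IEnemySpawner { List<IEnemy> Spawn(); }

public class Level1Spawner : IEnemySpawner
{
    readonly EnemyFactory _factory;
    public Level1Spawner(EnemyFactory factory) { _factory = factory; }
    public List<IEnemy> Spawn() // two enemies of type A
    { return new List<IEnemy> { _factory.BuildEnemyA(), _factory.BuildEnemyA() }; }
}

public class Level2Spawner : IEnemySpawner
{
    readonly EnemyFactory _factory;
    public Level2Spawner(EnemyFactory factory) { _factory = factory; }
    public List<IEnemy> Spawn() // one A and two B: harder, yet the Level code is unchanged
    { return new List<IEnemy> { _factory.BuildEnemyA(), _factory.BuildEnemyB(), _factory.BuildEnemyB() }; }
}

public class Level
{
    readonly string _name;           // plain data, not a dependency
    readonly IEnemySpawner _spawner; // dependency, passed by interface
    readonly Player _player;         // dependency
    public Level(string name, IEnemySpawner spawner, Player player)
    { _name = name; _spawner = spawner; _player = player; }

    public void Run() // template-style hook invoked by the LevelManager
    {
        foreach (var enemy in _spawner.Spawn()) enemy.Attack(_player);
        Console.WriteLine(_name + " completed, player health: " + _player.Health);
    }
}

public class LevelManager
{
    readonly List<Level> _levels = new List<Level>();
    public void AddLevel(Level level) { _levels.Add(level); }
    public void RunAll() { foreach (var level in _levels) level.Run(); }
}

public static class Program
{
    // The Composition Root: the only place, besides the factory, where new is used.
    public static void Main()
    {
        var player = new Player();
        var factory = new EnemyFactory();
        factory.SetRandom(new Random());
        var manager = new LevelManager();
        manager.AddLevel(new Level("level1", new Level1Spawner(factory), player));
        manager.AddLevel(new Level("level2", new Level2Spawner(factory), player));
        manager.RunAll();
    }
}
```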

While I am not really good with examples, and it’s not simple to find a compact one that is also meaningful, I think the above example includes all the discussed points.

Main is our simple Composition Root. It’s where all the dependencies are created and initially injected. If you try to run this code, it will actually work: it will run a dumb, non-interactive simulation of a Player fighting Enemies.

All the game logic is encapsulated inside the Level class. Since the Level class uses the Template Pattern to implement the functions needed by the LevelManager class to manage a Level, it is possible to extend the logic of the game by creating new Level classes. However, adding Level instances inside the LevelManager is not Dependency Injection, so it is irrelevant to our exercise (LevelManager doesn’t strictly need Level instances injected to work; the class is still functional even without any Level object added).

Each Level needs two dependencies. An implementation of an IEnemySpawner and the Player object. Note that the level name is not actually a dependency. A dependency is always an instance of a class needed by another class.

level1 and level2 are different because of the number and type of enemies created: level1 contains only two enemies of type A, level2 contains one enemy of type A and two enemies of type B. Type B can be more powerful than type A when inflicting damage on the Player. However, the Level implementation actually doesn’t change; the different behaviour is just due to the different implementation of the IEnemySpawner interface passed as parameter. Injecting two different IEnemySpawner objects changes the level gameplay without changing the Level code.

EnemySpawner doesn’t build enemies directly, because that is not its responsibility. The EnemySpawner just decides which enemies are spawned and how, but doesn’t need to be aware of what an enemy needs in order to be created.

As you can see, both EnemyA and EnemyB depend on the implementation of the class Random to work, but EnemySpawner doesn’t need to know this dependency at all. Therefore we can use a factory both to encapsulate the new operator and to pass the dependency Random directly from the Composition Root.

My explanation is probably more complicated than the example itself, where it’s clear that all the dependencies are created and passed through constructors from inside the main function. The only exception is when the EnemyFactory gets the Random implementation injected by setter.

In this example I haven’t used an Inversion of Control container, but the control of object creation has been inverted nevertheless. The context takes away the responsibility of creating dependencies from the other objects, dependencies can be passed by interface, and the flow of the code changes according to which implementation has been injected.

So the questions I am asking myself lately are: do we really need an IoC container to implement Inversion of Creation Control? Are the side effects of using an IoC container less important than the benefits of using such a tool? Searching for an answer to these questions is what led me to start writing these articles.

I can give a first answer though: manual Dependency Injection is very hard to achieve with the Unity framework. As I have already widely explained in my past articles, due to the Unity framework’s nature, dependencies can be injected only through the use of singletons or the use of reflection. C# reflection is what actually enables mine and other IoC containers to inject dependencies in an application made with Unity. So how can we possibly adopt manual dependency injection in Unity? One possible solution is actually to reinterpret the meaning of the MonoBehaviour class in order to never need to inject dependencies into it. Is it possible? As we will soon find out, if we change our coding paradigm, it’s not just possible, but also convenient to do so.
For the time being, keep in mind that if the code is designed without really knowing what Inversion of Control is, an IoC container merely becomes a tool to simplify the tedious work of injecting dependencies; a tool that is very prone to being abused. An IoC container cannot be used efficiently without knowing how to design code that inverts creation and flow control.

So far I have talked mainly about Inversion of Creation Control, but I have mentioned Inversion of Flow Control several times, so I need to give a first explanation before concluding this article.

What is Inversion of Flow Control? Quoting Wikipedia:

“Inversion of control (IoC) describes a design in which custom-written portions of a computer program receive the flow of control from a generic, reusable library. A software architecture with this design inverts control as compared to traditional procedural programming: in traditional programming, the custom code that expresses the purpose of the program calls into reusable libraries to take care of generic tasks, but with inversion of control, it is the reusable code that calls into the custom, or task-specific, code.”

In this sense an IoC container could be used to implement Inversion of Flow Control when it's time to "plug in" implementations from the lower levels of abstraction into the higher levels of abstraction without breaking the Dependency Inversion Principle. Inversion of Flow Control is even more important than Inversion of Creation Control and I will explore the reasons in detail in my next article.
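The Wikipedia definition can be condensed into a few lines of code. In this hypothetical sketch (all the names are mine, not from any real framework), the reusable framework owns the flow and calls back into the custom game code, instead of the other way around:

```csharp
using System;
using System.Collections.Generic;

// Reusable "framework" code: it owns the control flow.
public class GameFramework
{
    readonly List<Action> _tickables = new List<Action>();

    public void Register(Action onTick)
    {
        _tickables.Add(onTick);
    }

    // The framework decides when the custom code runs.
    public void RunOneFrame()
    {
        foreach (var tick in _tickables)
            tick();
    }
}

// Custom, task-specific code: it never drives the framework;
// it only registers itself and waits to be called.
public class EnemySpawner
{
    public int Spawned { get; private set; }

    public void OnTick()
    {
        Spawned++;
    }
}
```

The custom EnemySpawner expresses the purpose of the program, yet it is the generic GameFramework that calls into it: the flow of control is inverted.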



I strongly suggest reading all my articles on the topic:

The truth behind Inversion of Control – Part I – Dependency Injection

Note: this article series assumes you have already read my previous articles on IoC containers.

There is an evil truth behind the concept of the Inversion of Control container. An unspoken code tragedy taking place every day while passing unnoticed. Firm in my intention to stop this shame, I decided to stand up and start writing this series of articles, which will tell about the problem, the principles and the possible solutions.
I started to notice the symptoms of this “blasphemy” (against the code gods 🙂 ) quite a while ago, but I couldn't pin down the reason for them. Nevertheless the problem was pretty clear: the IoC container solution was scarily often used as an alternative to the Singleton pattern to inject dependencies. With the code growing and the project evolving, many classes started to take the form of a blob of functions, with a common pitfall: dependencies, dependencies everywhere.
What it means, and what is wrong with, using an IoC container as a mere substitute for a Singleton is something that I am going to describe as best I can with this series of articles. Don't get me wrong, IoC containers are great tools, but if they are used in the wrong way, they can actually lead to major issues as well. I realised that IoC containers cannot be used without understanding how to use them; that's why I started to look for a safer solution that could be adopted even by inexperienced coders. Before looking at this solution, let's take some steps back and explain what Dependency Injection actually is:

What Dependency Injection is

Dependency Injection isn't anything fancy. A dependency is just an interface on which a class depends. Usually dependencies are solved in two ways: injecting them or accessing them through Singletons. Singletons break encapsulation in the sense that, like any global variable, they can be used without a scope. Singletons awkwardly hide your dependencies: there is nothing in the class interface showing that the dependency is used internally. Singletons strongly couple your implementations, eventually resulting in long and painful code refactoring. To be even more practical, Singletons, like all global variables holding references, are also often a source of memory leaks. For these reasons we use injection to solve dependencies.
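To make the difference tangible, here is a minimal sketch; all the class and interface names are invented for illustration:

```csharp
// A hypothetical dependency hidden behind a Singleton: nothing in the
// public interface of PlayerWithSingleton reveals that it uses it.
public class AudioManagerSingleton
{
    public static readonly AudioManagerSingleton Instance = new AudioManagerSingleton();
    public void Play(string clip) { /* play the sound */ }
}

public class PlayerWithSingleton
{
    public void Jump()
    {
        AudioManagerSingleton.Instance.Play("jump"); // hidden, strongly coupled dependency
    }
}

// The same dependency injected: the constructor declares exactly what the
// class needs, and any IAudioPlayer implementation can be supplied.
public interface IAudioPlayer
{
    void Play(string clip);
}

public class Player
{
    readonly IAudioPlayer _audio;

    public Player(IAudioPlayer audio) // the dependency is explicit and scoped
    {
        _audio = audio;
    }

    public void Jump()
    {
        _audio.Play("jump");
    }
}
```

The injected version can be wired to a different implementation (a mock in tests, a null player on a dedicated server) without touching Player at all.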

Now, think for a moment what would happen if there wasn't an IoC container in place. How would we inject our dependencies? For example, by passing them as parameters in a constructor. Would you pass 10 parameters by constructor? I surely wouldn't, if only because of how painful and inconvenient it is. The same reasoning applies when an IoC container is used: just because it's more convenient doesn't mean you should take advantage of it. You are just making a mess of object coupling again. To be honest, if the design of the code really followed the SOLID[1] principles, this problem wouldn't arise, since the number of dependencies injected is directly linked to the number of responsibilities a class has. A single responsibility should lead to very few injected dependencies. However, without a proper paradigm to follow, we all know that coders tend to break the Open/Closed principle and add behaviours to existing classes instead of adopting a modular and extensible design. That's when IoC containers start to be dangerous, since they actually aid this process, making it less painful.

When dependencies are injected into an instance, where do these objects come from? If the dependencies are injected by constructor, they obviously come from the scope where the object that needs the dependencies injected has been created. In turn, the class that is injecting the dependencies into the new object may also need dependencies injected, which are therefore passed by another class in the parent scope. This chain of dependency passages creates a waterfall of injections, and the relationship between these objects is called the Object Graph[1].

Albeit, where does this waterfall start from? It starts from the place where the initial objects are created. This place is called the Composition Root. Root because it is where the context is initialised; composition because it is where the dependencies start to be created and injected and, therefore, where the initial relations are composed.

Now you can see what the real problem of the Unity framework is: the absence of a Composition Root. Unity doesn’t have a “main” class where the relations between objects can be composed. This is why the only way to create relationships with the bare Unity framework is using Singletons or static classes/methods.
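A bare-bones sketch of what such a Composition Root looks like (the classes are invented for the example; in Unity this role has to be played by a root object, since a main function is not available):

```csharp
public interface ISaveSystem { void Save(string data); }

public class DiskSaveSystem : ISaveSystem
{
    public string LastSaved; // stand-in for a real disk write, kept in memory here
    public void Save(string data) { LastSaved = data; }
}

// ScoreTracker needs a save system injected...
public class ScoreTracker
{
    readonly ISaveSystem _saveSystem;
    public ScoreTracker(ISaveSystem saveSystem) { _saveSystem = saveSystem; }
    public void Commit(int score) { _saveSystem.Save(score.ToString()); }
}

// ...and GameLoop, in turn, needs the ScoreTracker injected.
public class GameLoop
{
    readonly ScoreTracker _tracker;
    public GameLoop(ScoreTracker tracker) { _tracker = tracker; }
    public void Run() { _tracker.Commit(42); }
}

// The Composition Root: the only place where concrete types are chosen
// and the injection waterfall starts.
public static class CompositionRoot
{
    public static GameLoop Compose()
    {
        ISaveSystem saveSystem = new DiskSaveSystem();
        var tracker = new ScoreTracker(saveSystem);
        return new GameLoop(tracker);
    }
}
```

No class except the Composition Root knows which concrete implementations are in use, and no Singleton is needed anywhere in the graph.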

Why are relationships between objects created? Mainly to let them communicate with each other. All forms of communication involve dependency injection. The only pattern that allows communication without dependency injection is the one called Event Bus[2] in the Java environment. The Event Bus allows communication through events held by a Singleton, hence the Event Bus is one of the many anti-patterns out there. Note that you could think of creating something similar to an Event Bus without using a singleton (therefore injecting it). That's an example of what I define as using injection as a mere substitute for a Singleton.
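To see why the Event Bus is a Singleton in disguise, here is a deliberately naive sketch of the pattern (the implementation is invented, but representative):

```csharp
using System;
using System.Collections.Generic;

// A naive Event Bus: a Singleton holding events by name. Publisher and
// subscriber appear decoupled from each other, but both are now coupled
// to this hidden global object.
public class EventBus
{
    public static readonly EventBus Instance = new EventBus();

    readonly Dictionary<string, Action<object>> _handlers =
        new Dictionary<string, Action<object>>();

    public void Subscribe(string eventName, Action<object> handler)
    {
        if (_handlers.ContainsKey(eventName))
            _handlers[eventName] += handler;
        else
            _handlers[eventName] = handler;
    }

    public void Publish(string eventName, object payload)
    {
        Action<object> handler;
        if (_handlers.TryGetValue(eventName, out handler))
            handler(payload);
    }
}
```

Any class, anywhere, can call EventBus.Instance: the communication works, but the dependency is global, invisible in every class interface, and unscoped, exactly like any other Singleton.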

Object Communication and Dependency Injection

Communication can couple or not couple objects, but in all the cases involves injection. There are several ways to let objects communicate:

  • Interface injection: usually A is injected into B, and B is coupled with A [e.g.: inside a B method, A is used: B.Something() { A.Something(); }]
  • Events: usually B is injected into A, and A is coupled with B [e.g.: inside A, B is injected to expose the event: B.onSomething += A.Something]
  • Commands: B and A are uncoupled; B could call a command that calls a method of A. Commands are great for encapsulating business logic that could potentially change often. A Command Factory is usually injected into B.
  • Mediators: usually B and A do not know each other, but know their mediator. B and A pass themselves into the mediator and the mediator wires the communication between them (i.e.: through events or interfaces). Alternatively, B and A are passed to the mediator outside B and A themselves, totally removing the dependency on the Mediator itself. This is my favourite flavour and the closest to dependency-free possible.
  • Various other patterns, like Observers, Event Queues[3] and so on.
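As an illustration of the mediator flavour I prefer (all names invented), A and B below never reference each other, and they are wired together from the outside:

```csharp
using System;

// A raises an event; it knows nothing about who listens.
public class Bomb
{
    public event Action Exploded;
    public void Explode() { if (Exploded != null) Exploded(); }
}

// B exposes a method; it knows nothing about who calls it.
public class ScreenShaker
{
    public bool IsShaking { get; private set; }
    public void Shake() { IsShaking = true; }
}

// The mediator wires them together. Bomb and ScreenShaker are passed in
// from outside (e.g. from the Composition Root), so neither class depends
// on the mediator, nor on each other.
public class ExplosionMediator
{
    public ExplosionMediator(Bomb bomb, ScreenShaker shaker)
    {
        bomb.Exploded += shaker.Shake;
    }
}
```

If the communication rules change, only the mediator changes; Bomb and ScreenShaker stay untouched.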

How do we pick the correct one? Without guidelines it looks like one is as good as another. That's why, sometimes, we end up using one of them at random. Remember that the first two patterns are the worst, because they couple interfaces that could change over time.

We can anyway introduce the first sound practice of our guideline for our code design: our solution must minimize the number of dependencies.

Of course the second sound practice concerns the Single Responsibility Principle[4] (and the Interface Segregation Principle[5]). It is one of the five principles of SOLID (ISP is another fundamental one), but the only one that must actually be taken as a rule: your class MUST have one responsibility only. Communicating could be considered a responsibility, therefore it's better to delegate it.

How we are going to achieve SRP and solve the dependencies blob problem is something I am going to explain in the next articles of this series.


[1] SOLID (object-oriented design)

[2] Event Bus

[3] Event Queue

[4] Single Responsibility Principle

[5] Interface Segregation Principle


Svelto Inversion of Control Container

If it's the first time you visit my blog, please don't forget to read my main articles on the subject before continuing with this post:

It's finally time to share the latest version of my Inversion of Control container, which I named Svelto IoC and which I will keep updated from now on. This new version is the IoC container that Freejam is currently using for the Unity3D game Robocraft.

Thanks to the possibility of using the library in production, I could analyse in depth the benefits and disadvantages of extensively using an IoC container on a big project with a medium-sized team of programmers. I am preparing an exhaustive article on the subject, but I am not sure when I will be able to publish it, so stay tuned.

The new IoC container is structurally similar to the old one, but has several major differences. In order to use it, a specialised UnityRoot monobehaviour must still be created. The class that implements ICompositionRoot is the Composition Root of the project. The game object holding the UnityRoot monobehaviour is the game context of the scene.

All the dependencies bound in the Composition Root will be injected during the Unity Awake period. Dependencies cannot be used until the OnDependenciesInjected function or the Start function (in the case of dependencies injected inside monobehaviours) is called. Be careful though: OnDependenciesInjected is called while the injection waterfall is still happening; however, injected dependencies are guaranteed to have, in turn, their own dependencies injected. Dependencies are not guaranteed to be injected during the Awake period, therefore you shouldn't use injected dependencies inside Monobehaviour Awake calls.

Other changes include:

  • Monobehaviours that are created by Unity after the scene is loaded don't need to explicitly ask for their dependencies to be filled anymore. They will be injected automatically.
  • Monobehaviours cannot be injected as a dependency anymore (that was a bad idea).
  • Dynamically created monobehaviours always have their dependencies injected through factories (MonoBehaviourFactory and GameObjectFactory are part of the framework).
  • Now all the contracts are always injected through “providers”; this simplifies the code and makes it more solid. It also highlights the importance of providers in this framework.
  • A type injection cache has been developed; therefore, injecting dependencies of the same type is much faster than it used to be.
  • It's now possible to create a new instance for each dependency injected, if the MultiProvider factory is used explicitly.
  • You can create your own provider for special cases.
  • Improved type safety of the code. It's no longer possible to bind contracts to wrong types. For this reason AsSingle() has been replaced by BindSelf().
  • Various improvements and bug fixes.
  • Dependencies can be injected as weak references automatically.
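I won't reproduce the real framework code here, but the provider idea can be sketched in a few lines (these names and signatures are illustrative, not Svelto IoC's actual API):

```csharp
using System;
using System.Collections.Generic;

// The provider decides how an instance of a contract is produced.
public interface IProvider
{
    object Create();
}

// Standard single binding: always returns the same instance.
public class SingleProvider<T> : IProvider where T : new()
{
    readonly T _instance = new T();
    public object Create() { return _instance; }
}

// A MultiProvider-like binding: a new instance per injection.
public class MultiProvider<T> : IProvider where T : new()
{
    public object Create() { return new T(); }
}

// A toy container that resolves contracts through providers only.
public class TinyContainer
{
    readonly Dictionary<Type, IProvider> _bindings = new Dictionary<Type, IProvider>();

    public void Bind<TContract>(IProvider provider)
    {
        _bindings[typeof(TContract)] = provider;
    }

    public TContract Build<TContract>()
    {
        return (TContract)_bindings[typeof(TContract)].Create();
    }
}

// A sample contract/implementation pair for illustration.
public interface IEnemyFactory { }
public class EnemyFactory : IEnemyFactory { }
```

Because every contract goes through a provider, swapping the instancing strategy (single, per-injection, custom) never touches the classes receiving the dependency.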

What still needs to be done:

  • Improve the documentation and explain the bad practices.
  • Add the possibility to create hierarchical contexts and explain why they are necessary.
  • Add the possibility to inject dependencies by constructor, in order to reduce the necessity of holding references.
  • Explain how to exploit custom providers.

The new project can be found at this link:


What I have learned while developing a game launcher in Unity

This post can be considered partially outdated, as my mind has changed quite a bit since I originally wrote it.
First of all, I would never suggest writing a game launcher in Unity now. Use .net or the latest version of mono with xamarin instead. The mono library embedded with Unity 4 is too outdated and its use results in several problems when using advanced HTTP functions. However, if you decide to use .net, please be aware that your installer must be able to download and install .net from the internet if it is not already installed on the machine. It is also possible to embed mono, more or less like Unity does, but it is quite tricky under Windows.
We also changed our minds about the admin rights. Many of our users use Windows without admin rights, so we wanted our launcher to never ask for any.
I also think Unity has now fixed the check of the SSL certificates, but I am not 100% sure about it.
Lastly, I would search around for libraries that can generate diff patches of binary files, since hashing and downloading files one by one is neither convenient nor efficient.

In case you didn't know yet, it has been a while since I co-founded a company called Freejam in the UK and started to work on a new game named Robocraft. For this indie product we are extensively adopting the Lean startup approach, even for the development cycles, so features come as they are actually requested by our early adopters.

The last feature our early adopters were eager to see was a proper game launcher. A game launcher is basically a must for every online game since, as you all know, it makes it possible to patch and update the game without forcing the user to install it over and over.

I had never implemented a launcher before, so I was completely ignorant of some tricky issues that could arise and their relative workarounds, which I now want to share with you to save you from wasting days trying to solve similar problems.

I started to develop the launcher in Unity just because it needed a graphical interface and I did not want to spend time learning/using new libraries on other development platforms (either C++ or pure C#). Size-wise, considering that Unity applications embed the mono libraries and runtimes, the resulting compressed 6MB installer wasn't too bad.

The graphical interface was easily developed with NGUI, and the information shown is a mix of RSS news taken directly from the game blog and images configurable by the artists through an external XML file.

The update process instead was a little more convoluted, with a couple of unforeseen, tedious obstacles that made my life a bit miserable for a few days.

The update process is split into several predefined tasks:

  • check if the game is already running and ask the user to close it before the update can be launched
  • check if another launcher is running and ask to close the other one
  • check if a new version of the patcher is available and, in that case, force the patcher to update
  • check if a new game build is available
  • download the list of files built together with the new game build; this list contains the name, size and a checksum hash for each file
  • download an asymmetrically encrypted digital signature of the game
  • verify that the digital signature is valid using the public key embedded in the client
  • verify that all the files on the server are ready to be downloaded
  • verify which files on the hard disk must be updated, using the file size and a generated hash (comparing the value with the hash from the previously downloaded file list)
  • if needed, download, uncompress and save the files (they are stored as gzip on the server)
  • delete obsolete files if there are any
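The hash-verification step above can be sketched like this (the manifest format and names are invented, and the real launcher's hash algorithm may differ; SHA256 is used here as an example):

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

public static class PatchChecker
{
    // Computes the checksum of a local file; SHA256 here, though any
    // stable hash agreed with the build machine works.
    public static string HashFile(string path)
    {
        using (var sha = SHA256.Create())
        using (var stream = File.OpenRead(path))
        {
            return BitConverter.ToString(sha.ComputeHash(stream)).Replace("-", "");
        }
    }

    // A file needs to be downloaded again when it is missing, its size
    // differs, or its hash does not match the manifest entry.
    public static bool NeedsUpdate(string path, long expectedSize, string expectedHash)
    {
        if (!File.Exists(path)) return true;
        if (new FileInfo(path).Length != expectedSize) return true;
        return !string.Equals(HashFile(path), expectedHash, StringComparison.OrdinalIgnoreCase);
    }
}
```

Checking the size first is a cheap shortcut: hashing happens only for files whose size already matches the manifest.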

On our side, our Jenkins build machine handles two separate jobs (actually more, but I'll keep it simple for the sake of this article): one builds the patcher, generates a new patcher version and deploys it to the CDN; the other builds the game, generates a new game version, generates the file list with hashes, creates the digital signature using the private key that only our build machine knows, compresses the files to upload and deploys everything to the CDN.

The whole development process has been long but, thanks to the .net framework, relatively easy. Nevertheless, there are two specific features I have to go into detail about, since they are very important to know and not so intuitive:

The first one is the reason why I implemented asymmetrically encrypted digital signature verification. A launcher without this kind of protection is vulnerable to man-in-the-middle attacks, which can take the form of DNS spoofing. When hackers successfully spoof a DNS node, they simply create a deviation of the normal TCP/IP routing that the client cannot recognize. In this way the client does not know that it is downloading files from unknown sources, and since the game includes executables as well, it could be relatively easy for a hacker to attack a specific pool of users and let them download malicious files.

This is one of the reasons why HTTPS was invented; however, HTTPS is effective against this attack only if the client can verify the certificate provided by the HTTPS server. To my surprise, I found out that while Unity supports HTTPS connections, it does not verify the SSL certificates at all; therefore using HTTPS in Unity does not prevent man-in-the-middle attacks. Luckily, the implementation of a digital signature was already planned, so while I was disappointed by the Unity behaviour, we were already prepared to face the issue.

Implementing a digital signature in C# and .net is very simple and a lot of code is already available around. Just look for RSACryptoServiceProvider on Google to know more.
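For reference, a minimal sign/verify round trip with RSACryptoServiceProvider could look like this (in the real launcher, of course, only the public key is embedded in the client, while the private key never leaves the build machine):

```csharp
using System.Security.Cryptography;

public static class SignatureExample
{
    // On the build machine: hash the data and sign the hash with the private key.
    public static byte[] Sign(byte[] data, RSAParameters privateKey)
    {
        using (var rsa = new RSACryptoServiceProvider())
        {
            rsa.ImportParameters(privateKey);
            return rsa.SignData(data, new SHA1CryptoServiceProvider());
        }
    }

    // On the client: only someone holding the private key could have produced
    // a signature that verifies against the embedded public key.
    public static bool Verify(byte[] data, byte[] signature, RSAParameters publicKey)
    {
        using (var rsa = new RSACryptoServiceProvider())
        {
            rsa.ImportParameters(publicKey);
            return rsa.VerifyData(data, new SHA1CryptoServiceProvider(), signature);
        }
    }
}
```

Verifying the signature of the downloaded file list is enough: since the list contains the hash of every other file, a tampered file would fail its hash check against the trusted list.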

Once all this stuff was implemented I thought I was finally done with the launcher, but a dark and vicious evil bugger was awaiting me around the corner: the UAC implementation of Windows Vista+!

After I understood what the UAC implementation of Windows actually does, I realized why most online games suggest using a C:\Games folder to install the game instead of the standard C:\Program Files. The Program Files folder is seen by modern Windows operating systems (excluding XP) as a protected folder and only administrators can write to it.

Our launcher is installed by Inno Setup, which asks the user to run it in administrative mode, so Inno Setup is able to write wherever the user wants the game to be installed. However, once it is installed, the problems start.

If the launcher is launched directly from Inno Setup, it inherits the administrative rights of the installer and is then able to update the game folder under Program Files. However, once the launcher is started again by the user, it will not start as administrator, but as a normal user, which changes the behaviour of the file writing.

This is when things start to get idiotic. If a normal user application tries to write inside a folder under Program Files, the writing of the file does not fail as I initially expected. Instead, Windows creates a Virtual Store folder under C:\Users\[username]\AppData\Local\VirtualStore that virtualises the game folder. The result is that all the files the launcher tries to write under Program Files are actually stored in a specific, predefined folder in the virtual store.

Hence, the first lesson is: if your Unity application needs to write new files, never write them into the folder where the application has been installed. Use the application data folder instead! However, this cannot be applied to a launcher, since the launcher must be able to update the game wherever the user decided to install it.

The first solution that came to my mind was to embed a manifest in the application to ask Vista to run the launcher in administrative mode. This is easier than it sounds, at least once I understood that a command line tool found inside the Windows SDK can do it. Once I followed the instructions, every time the launcher starts, Windows UAC prompts a message box to the user, asking whether to give administrative rights to the application or not.
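For reference, this is the relevant fragment of such an application manifest (the command line tool in question is mt.exe; the requireAdministrator level is what triggers the UAC prompt):

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
    <security>
      <requestedPrivileges>
        <!-- requireAdministrator makes UAC prompt before the application starts -->
        <requestedExecutionLevel level="requireAdministrator" uiAccess="false" />
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>
```

Marking the executable this way also disables the Virtual Store redirection described above, since the process now genuinely runs with administrative rights.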

If the user authorizes the application, the launcher will be able to update the game; otherwise it will throw an error.

However, unluckily, this was not the final solution. The tricky situation here is that the launcher must be able to launch the game as well, but a process launched by another process inherits its rights. This means that the game launched by the launcher would start as administrator, while if the user decides to start the game on their own, without using the launcher, it runs in normal mode (if the user is not an administrator or the UAC is fully enabled).

Launching the game in administrative mode can be a bad idea for several reasons, but the most annoying one is that users are not used to authorizing a game every time it is launched, so we decided to get rid of this issue.

After some research, and after trying all the possible solutions I could find on Google and StackOverflow, I realized that the only working one is to use a bootstrapper to launch the launcher.

The bootstrapper must be a tiny application that runs in normal mode and must be able to launch the launcher as an administrator user. This is pretty straightforward, since .net allows an application to be started with elevated rights (but never allows downgrading them 🙁 ). Once the launcher has done its dirty job, it closes itself and communicates to the bootstrapper that it is now time to launch the game. The bootstrapper is then able to launch the game as a normal user, because the bootstrapper itself was not started with elevated rights.
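The essence of the bootstrapper is just two Process.Start calls; here is a simplified sketch (the paths are placeholders and the launcher-to-bootstrapper signalling is reduced to a bare WaitForExit):

```csharp
using System.Diagnostics;

public static class Bootstrapper
{
    // Starts the launcher with elevated rights: the "runas" verb makes
    // Windows show the UAC prompt (UseShellExecute must be true for verbs).
    public static ProcessStartInfo ElevatedLauncher(string launcherPath)
    {
        return new ProcessStartInfo(launcherPath)
        {
            UseShellExecute = true,
            Verb = "runas"
        };
    }

    // Starts the game without any verb: since the bootstrapper itself was
    // never elevated, the game inherits plain user rights.
    public static ProcessStartInfo NormalGame(string gamePath)
    {
        return new ProcessStartInfo(gamePath) { UseShellExecute = true };
    }

    public static void Run(string launcherPath, string gamePath)
    {
        using (var launcher = Process.Start(ElevatedLauncher(launcherPath)))
        {
            launcher.WaitForExit(); // the launcher updates the game, then quits
        }
        Process.Start(NormalGame(gamePath));
    }
}
```

The key detail is that elevation only flows downwards: the un-elevated bootstrapper can request elevation for the launcher, but the game it starts afterwards stays at normal user level.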

This solution sounds convoluted, but it is actually quite commonly adopted. Of course I could not use Unity to create the bootstrapper, since it must be just a few hundred kilobytes. For this reason I downloaded xamarin and mono, and I have to say I was quite impressed: I was able to set up a project and run it in a few minutes, and the bootstrapper itself was developed in a few minutes as well. In the end, though, we were forced to create a very simple C++ application instead, in order to be .net framework agnostic (otherwise the .net framework must already be installed on the machine).

I hope all of this can help you one day!