In this potentially controversial article, I will explain why I believe ECS is a great paradigm to write simpler-to-maintain (and therefore less expensive) code, and I will also explore the idea of formalising best practices through the application of the SOLID principles, which at first glance look incompatible with the ECS design. This article continues the train of thought of my previous ones (reading them could be useful, but it's not necessary) and expands them through the elaboration of the feedback I gathered over the last few years from my team, while developing two commercial products, and from the Svelto Community. It is also mainly formed by personal opinions that I am open to debate, as your feedback is part of the process I adopt to push my reasoning even further. I will keep iterating on its content as a result, so it could be a good idea to check back every now and then.
This article is targeted at people who already know the Entity Component System design and want to explore the benefits of adopting this paradigm beyond the mere performance implications of writing Data Oriented code. It is also targeted at people who are interested in exploring alternative approaches to Object Oriented Design to develop maintainable code.
TL;WR; (Too Long, Won’t Read)
The goal of this article is to lay the foundation of my reasoning on the ECS paradigm and explain why I am convinced it provides a simpler framework for writing maintainable code compared to the more bloated OOP solutions.
Based on anecdotal evidence from the commercial games as a service I developed with ECS, I hypothesize that giving coders a stricter but simpler set of code design rules would help them focus on developing the algorithms more than on how to write the code itself. Relieving coders of the burden of deciding how to abstract, reuse and maintain the code would hopefully lead to less technical debt piling up over time and a leaner production.
I warn the reader that the ECS paradigm alone is not enough to guarantee this outcome, as it can still be badly misused, especially if developers try to adapt their OOP know-how to ECS. For this reason I also give a personal ECS re-interpretation of the SOLID software engineering principles, which I have observed to be fundamentally useful for writing maintainable ECS code. I am not the only one who has attempted this, so I suggest watching this interesting video by Maxim Zacks (the creator of Entitas) too.
What maintainable code means to me
I define maintainable code as code that is easy to read, expand and refactor. It often belongs to a codebase that is in continuous evolution and never stops mutating. This kind of codebase is found especially in products like games as a service that keep serving players for several years. As these products must evolve to keep appealing to the player base, the code itself keeps changing through countless iterations. I can personally vouch that in this kind of environment, coding guidelines and code reviews alone are not enough to guarantee that the code won't implode over time, resulting in an expensive pile of technical debt that is hard to manage.
Historically, game development companies use(d) to underestimate the actual cost ($$$) of maintaining code, focusing only on the cost of developing it. However, it's now common knowledge that maintaining code (google it as well) is usually more expensive than writing it (this is even more true for games as a service), and that's why it is so important to write good code to start with. Never make the mistake of writing "prototypal" code without rewriting it properly before using it in production. That said, I hope people realise that the "rewriting" part itself involves a cost, so why not write it as well as possible the first time? It's common to think that writing efficient code, without proving its usefulness, is a high upfront investment, but this is true only if coders are forced to think about how to design their code, instead of focusing only on what the code must do.
Most coders are lazy by nature, and when laziness is correctly channeled it's actually a good thing, as a lazy programmer will strive to find the simplest way to solve a specific problem. An experienced lazy programmer will also know how to make this code efficient with minimum effort. However, one thing that a lazy coder won't often do is think about the consequences of the directions taken, as for them it is important to get things done as soon as possible, and the long-term implications on the maintainability of the codebase are therefore forgotten.
However, when it's too complex to predict these consequences, coders can fall into two traps: over-engineering their code to try to be prepared for all the possible outcomes, or just not caring at all until it's probably too late, when changing direction would be so expensive that ineffective compromises must be accepted.
These are the reasons why I believe that coders would be better off using rigid frameworks. Some people are confused by the way I pair the adjective rigid with framework. By rigid I mean that the framework (or even a programming language) must provide one or just a few ways to solve a specific problem. I use rigid as a synonym of strict, as "incapable of compromise and flexibility" or "not able to be changed or adapted": it must not bend, adapt or "compromise" to avoid misuse due to misinterpretations dictated by the user's current knowledge and assumptions.
In this sense, asking a coder to develop a game with a multi-paradigm language such as C++ or C# would result in a melting pot of solutions adopted all over the codebase. Knowing also that the lazy programmer is famous for copying and pasting already working code, it's not hard to imagine what the ramifications of not giving rigid directions could be.
In search of the root of the problem
All my reasoning started a long time ago, when I realised that, while I was happy with the solutions I used to develop, I wasn't able to reuse them, because I wasn't able to identify the patterns shared among those problems. I was reinventing the wheel every time, which is a common problem in the industry. To be clear, I am not talking about patterns to implement algorithms; I am talking about how to write the relative code so that it is easy to read, expand and refactor.
I eventually concluded that code design problems, in Object Oriented Programming, always orbit around inter-object communication. Whether object communication happens through events, messages, mediators or other means, an object always needs, somehow, to be aware of the object it communicates with. Object communication, in my opinion, is the cause of every piece of spaghetti code out there. All the forms of abstraction and the relative good practices to decouple code ultimately aim to control inter-object communication.
In C++, any form of inter-object communication eventually ends up with the direct use of object methods, so I started to wonder: is Encapsulation part of the problem?
Encapsulation, good or bad?
In my previous articles I mention Encapsulation multiple times, but I realised that my interpretation of encapsulation was too strict. The problem exists in the way I explained it, but it's not due to a flaw in the concept of Encapsulation.
In Object Oriented Programming, data encapsulation works well for Abstract Data Types. You create your data structure, and you provide public functions to manage it. Encapsulation works well to hide complexity: this is what Encapsulation and OOP were created for. You can "black box" the complexity of a class, without needing to know how the class works internally. This is what pushes, at a given point, a procedural programmer to switch to OOP.
However, Encapsulation doesn't prevent the object from being used in the wrong way. If it were possible to write an application without any form of inter-object communication, we would be able to write a perfectly encapsulated program. If we also designed the classes to solve just one single problem each, then we would have the perfect scenario for a maintainable codebase. The codebase would be lean, modular and totally decoupled.
In real life this scenario is unrealistic, as it's unlikely that a useful application would work without communication and, as a result, the beautifully encapsulated code will possibly turn into spaghetti due to the coupling needed for objects to communicate.
But is Encapsulation involved in this problem at all?
Using an interesting analogy: Encapsulation enables us to build a perfectly functional car, but cannot prevent a driver from destroying it. OOP cannot prevent perfectly encapsulated objects from being used in the wrong way. Brakes() and Accelerates() can exist, but encapsulation alone cannot keep me from calling Accelerates() instead of Brakes(). An even simpler example to prove the point is that a class can implement First() and Second(), but nothing stops me from calling Second() before First().
Encapsulation is a concept born from the necessity to limit the scope of globally accessible states; however, following the standard definition of Encapsulation, Singletons, which are globally accessible objects, do not break it. The paradox is that it is possible to globally access a perfectly encapsulated object.
The truth is that public methods belonging to well encapsulated classes prevent invalid states from being set, but they can't prevent valid states from being set at the wrong time. The problem is not about data encapsulation any more, but about object control and ownership.
The logic inside the public methods is meant to control the object's states, but can objects really control these states when their public functions can be used from everywhere in the code? How can an object prevent a state from switching to a perfectly valid state at unexpected times if the use of public functions is indiscriminate?
While the unwanted consequences of this misuse are clear in a concurrent and multi-threaded scenario, the issue is of fundamental importance to writing maintainable code too.
All the good code design theories orbit around this problem: how to prevent the wild use of public methods. Let's think again about Singletons. Singletons are globally accessible objects with private states that can be mutated through public functions.
Now, without even needing to give you examples, how many times did you have to debug the states of a Singleton because they unexpectedly changed? However, even if we decide not to use Singletons, what could prevent objects from being injected indiscriminately all over the place? Overzealous Dependency Injection is what leads to highly coupled code that is hard to maintain. The reason why I first abandoned the use of an Inversion of Control container is that it's so easy to misuse and abuse when it's used without Inversion of Control in mind. Unique instances are so easily injected everywhere that, to all intents and purposes, they act as singletons, so the same consequences arise even if they are not technically static classes.
Hence, it's not the Singleton pattern that is a problem per se, but the fact that there isn't any control over its ownership, and this applies to any kind of pattern with similar issues, including the plain use of stateful static classes. I mentioned the Singleton to give you an idea of the worst kind of pattern that can be used for object control. I honestly never used Singletons in games, so I can't even give a bad real-life example, but I have seen an Event Bus (a Singleton-based messaging system) causing the worst effects due to similar problems. The real problem around Singletons is the fact that code must rely on their implementation and not their abstraction, but this topic won't be elaborated any further in this article.
The SOLID principles as the foundation of all the reasoning
It's clear to me that the majority of the GoF design patterns are, in one way or another, finding a way to control object ownership. The issue I have with the GoF design patterns is that they cannot be used without really comprehending the problems they are trying to solve (which often leads to wild interpretations of them! I am guilty too!), and the only way I found to realise why the GoF design patterns exist was to study and understand the much simpler SOLID principles, which in my opinion are the foundation of every well designed piece of OOP code.
I will list the SOLID principles later, when I test the validity of their application to ECS code; meanwhile, I want to talk first about the single principle that is meant to solve the ownership problem, and the consequences of its application: the Dependency Inversion Principle.
Abstraction, Inversion of Control and the Hollywood principle
The term abstraction can have different meanings according to the context. In OOP we usually "abstract" from the implementation details through the use of interfaces: the code using interfaces is not dependent on the details any more. In software architecture, when we talk about abstraction layers, abstraction is used to define how far the code is from the most specialized layer, which is usually the user interface (another example is the OSI model). While the word abstraction can have more meanings, keep the above difference in mind while reading the following.
While encapsulation is very useful to hide complexity, it introduces a new problem: OOP code is potentially dependent, in one way or another, on the classes' accessible methods. I say accessible and not public because, the moment a mediator or an observer is injected into a class with the purpose of registering a private method, this private method is used no differently than it would have been if it were public. Although more levels of abstraction have been added, eventually the method is called by another object.
You may not agree with this, but I reckon we may see the need to abstract the code as a way to solve the problems caused by the use of public functions. Obviously, abstraction is more about being able to write code that is not dependent on any implementation, but does it solve the problem of highly coupled code? Being dependent on an interface is not much different from being dependent on an implementation in terms of object ownership. It does make a huge difference in terms of being independent of specialized code, so that refactoring becomes much simpler, as it is enough to change just the implementations of our objects in order to change the behaviour of the application, but this doesn't mean that the code is not highly coupled any more.
That’s what the Dependency Inversion Principle tries to solve through its application. It says:
- High-level modules should not depend on low-level modules. Both should depend on abstractions.
- Abstractions should not depend on details. Details should depend on abstractions.
While abstraction is used here in its OOP definition, the trick to understanding what this says is to see an application in terms of abstracted layers. For example, in the simplest case, the low-level application code and the high-level framework code can be seen as two separate layers of abstraction. However, this coarse separation between layers is not always optimal, as a finer interpretation is often more beneficial, as we will soon see.
The moment a codebase is designed in layers, the DIP suggests that the higher levels (of abstraction) must not depend on the interfaces declared in the lower levels (of abstraction). What does it mean in simpler terms?
Let's assume we have a Renderer method in a Rendering class at the framework level; the Renderer method accepts an IRenderable interface. The IRenderable interface is provided by the framework and not by the application code. The application code must only implement an object with this interface (which very likely declares a Render method) so that the framework will be able to use it at its discretion.
The higher levels of abstraction must provide the interfaces that the less abstracted code must implement.
The first direct consequence of the DIP is the application of Inversion of Control: the framework takes over the control (and ownership) of the objects generated by the less abstracted code. Generally, there is no need for relationships between objects at the same level of abstraction, as the relationships exist only between application layers.
This already gives a great boost to code maintainability, as we are starting to provide rules for object ownership and control, but there is much more. IoC is not a principle, but rather a pattern, which at first can be confusing, as some articles may refer to different kinds of Inversion of Control. However, there are mainly two: what I call Inversion of Flow Control and Inversion of Creation Control.
Inversion of Creation Control is simpler; basically, its application means that the user never creates any dependency inside objects, as dependencies should always be created and injected from a composition root (an application can have more than one composition root, but this is another story; you can check my older articles or google it).
Inversion of Flow Control is instead what we are more interested in now. It's also funnily called the Hollywood Principle: "don't call us, we will call you". What does it mean?
It means that the higher levels of abstraction must not just provide the interfaces to implement, but will, consequently, take control over the objects, calling their public methods through their interfaces.
It's important to realize that the responsibility for the actual logic to perform is shifted upward. The implemented objects are still encapsulated and can be relatively complex, but their scope is limited by the specification of the interfaces provided by the framework. It's the framework that takes over deciding what these objects must be used for, and it eventually even lets them communicate with each other.
This is when one could wonder if talking about application and framework is too limiting. What if the user wants to adopt the same IoC principle for logic that is not implemented inside the framework? This is where the coarse separation between layers becomes counter-productive, as this principle should be adopted extensively, adding more and finer layers within the application itself.
For example, while your rendering framework can take over the shared logic of the rendering functionalities, it doesn't know anything about the logic shared by characters that can attack. This is when you may design a CharacterAttackManager that can take over the control of all the objects that implement the IAttack interface. This example leads directly to the next paragraph:
Inversion Of Flow Control in OOP
Inversion of Control contains all the ingredients of the recipe for maintainable code. It promotes abstraction and modularization. All the invariant modules are encapsulated at the higher levels and are independent of each other. Objects can implement multiple interfaces to compose multiple behaviours. It gives a clear direction on how these objects must be controlled and on who uses their interfaces.
There are multiple ways to implement inversion of control, but the best way I found to apply it is through the GoF Strategy Pattern or, even better, the Game Programming Component Pattern. In fact, while the Strategy Pattern usually has a 1:1 relationship between the Controller and the "Implementation", the Component Pattern has a more powerful 1:N relationship, where a Manager can control several components.
While the Strategy Pattern can be found in framework-level patterns like Model-View-Controller, the Component Pattern is more commonly used in game engines like Unreal (as far as I remember it) and Unity (the MonoBehaviour is a Component).
In this scenario, a Renderer class doesn't render just one object: it renders all the IRenderable objects, which is a much more useful scenario.
In a game engine, the component pattern is always used through "Managers" that iterate over several entities implementing the same interface at different times of the frame pipeline. For example, the PhysicManager, the UpdateManager and the RenderingManager can be three different managers whose Tick() methods are called one after the other. Each Tick() will then call the PhysicUpdate(), the Update() or the Render() of every single object registered in it.
This resembles a bit what happens in Unity with the MonoBehaviour. Each MonoBehaviour can optionally implement methods like Awake(), Start(), Update() and FixedUpdate(). However, it doesn't have a RenderUpdate(). Why is that? While Unity delegates to the user the implementation of the logic that may happen inside those customisable functions, it doesn't need the user to specify how to render the objects, because Unity already implements all the necessary logic to render them. From the MonoBehaviour it just needs the data.
Obviously, while Unity can come with as many "generic" managers as possible, it can never come with all the possible managers for all the possible logic a game can have, and this is why it delegates the implementation of this logic to the user.
However, if this sharp separation between the framework (Unity) and the user logic weren't present, would it have been possible to write all the game logic in the form of managers? Obviously it would, leading to the definitive inversion of control.
Once all the composable behaviours (game logic) are written in terms of managers, the components themselves won't need any logic specified any more. All the logic is written in the form of more or less abstracted (or generic) behaviours encapsulated in managers (it doesn't matter if a manager is written for a behaviour that is eventually used by one component only), which are then composed through the use of the interfaces that these managers provide. Hence the components will turn out to be just collections of data, leading to… ECS!
Finally I explained to you the whole path that brought me to ECS. It’s not just about performance (that too though!), it’s about total Inversion of Control!
With ECS there are three fundamental concepts:
- The Systems, which are our managers. A System is essentially a block of usually stateless logic executed over Entity Components. Systems contain all the logic necessary to execute a specific behaviour; therefore, from the entity they need only its data (the Entity Components).
- The Entity, which some define as just an ID, but which I advise to always conceptually visualize as a well defined set of components. It's the equivalent of the logic-less object.
- The Entity Components, which are usually structs of data stored sequentially in memory according to their type. The entity's data is effectively a composition of them.
Before continuing with the second part of this article, if you want to read a different but similar point of view, please check Brian Will's article and related video:
How the SOLID principles (may) apply to ECS
Is ECS a paradigm? If we define a paradigm as a style, or "way", of programming, then sure, ECS is a paradigm. If a paradigm must be able to solve all the possible programming problems, then I am not sure the ECS way to manage data can handle all the possible algorithms. I guess it may, with the right ECS implementation together with the tools provided by a simple procedural language. The important thing, though, is that ECS gives a strict direction to solve most of the problems, so that a game can possibly be developed using 100% ECS (which is what I proved at Freejam using Svelto.ECS).
Going back to less "philosophical" matters, let's talk about how to use ECS properly. In this second part of the article I will make an exercise of trying to understand if the SOLID principles, born for the OOP paradigm, can be applied to the ECS paradigm too. The reason being that, if ECS is used to write an entire application and not only the few parts that need to be "fast", we need some sort of guidelines to be sure that our code is kept maintainable over time.
Let's start from the most fundamental fact: exactly like an IoC container would be an absolutely evil tool if used without inversion of control in mind, any ECS based codebase will fail to be easily maintained if developed without layers of abstraction. In fact, an ECS codebase implemented without layers of abstraction is fundamentally procedural code with globally accessible data. A scenario like this would just lead to unmaintainable code. ECS alone is not enough to be able to write code that is easy to refactor and maintain (trust me, I have proof), but it shows us the right mindset to implement the tools that will still need to be used correctly.
Compared to OOP, ECS should provide a framework that allows less time to be spent on thinking about how to design the code and more time on actually writing it, since, as I am trying to demonstrate, the ECS paradigm facilitates the adoption of total Inversion of Control.
It may also seem that, with ECS, data encapsulation doesn't exist any more. While we can argue that once we solve the object ownership and control problem, encapsulation may turn out not to be so important any more, it's not totally true that data encapsulation doesn't exist in ECS. If we just access any kind of data component from any system, then encapsulation is gone; try to manage a codebase like that, though! So, let's approach this reasoning from another point of view before connecting all the dots: what is the responsibility of a system? A System is the encapsulated logic of a well defined, single entity behaviour that must be applied to a set of entities sharing specific entity components. The rule should be that Systems can read immutable data from all the entities they need to, but they can mutate the data of only a specific set of entities: the entities that require that specific behaviour.
This is the application of the Single Responsibility Principle, which we must reinterpret for systems by saying that:
THERE SHOULD NEVER BE MORE THAN ONE REASON FOR AN ENTITY BEHAVIOUR TO CHANGE.
GATHER TOGETHER THE ENTITY COMPONENTS THAT CHANGE FOR THE SAME REASONS. SEPARATE THOSE ENTITY COMPONENTS THAT CHANGE FOR DIFFERENT REASONS.
In applying this principle, our Systems will turn into very modular blocks of logic applied to the minimum sets of entities affected by the single behaviour. Behaviour composition is not achieved through interface implementation, but through data composition. The more components an entity holds, the more behaviours will be applied independently to it; however, a behaviour's code must change for a single reason only. For example, a system must not change both the HealthComponent and the PositionComponent of an entity, as these components would very likely change for different reasons. This principle reinforces the idea that while a System (Entity Behaviour) can read as many immutable entity components as it needs, it should define a specific, single reason to change a specific set of entity components.
The moment a System is seen as a single behaviour to execute, we can consequently say that the Open/Closed principle becomes:
ENTITY BEHAVIOURS MUST BE OPEN FOR EXTENSION BUT CLOSED FOR MODIFICATION
Through data composition, I can extend the behaviours of an entity by adding more components, but I should never need to modify a well written existing behaviour. If an entity needs a new behaviour, I should swap the old one for a new one through the modification of the list of entity components. Since the behaviours of the entities that still need the old logic must not be affected, a new system will be created instead. The new behaviour can also just extend the previous one, not through inheritance, but simply by applying both behaviours to the same entity through the relative systems (the existing one and the new one).
More thoughts on the possible consequences of the right application of these principles can be found in this video.
With ECS it is simpler to design the logic of the entire application in terms of layers of abstraction. Everything can be modelled in the form of systems, from the rendering logic to the character attack logic. There is no difference any more between framework and application code but, nevertheless, everything is still designed with multiple levels of abstraction, with the goal of encapsulating the shared entity behaviours within well defined systems.
We can say that entity specialization happens through component composition. An entity is defined as a more or less fine set of components. The number and size of these components can vary according to the entity granularity.
Granularity can either refer to the extent to which a larger entity is subdivided, or the extent to which groups of smaller indistinguishable entities have joined together to become larger distinguishable entities. — Dictionary definition of Granularity
Since entity specialization can be seen as composition of entity components instead of implementation of new interfaces, we may re-interpret the Liskov Substitution Principle by saying that:
AN ABSTRACTED ENTITY BEHAVIOUR MUST BE ABLE TO USE SPECIALIZED ENTITIES THROUGH THE USE OF A SUBSET OF THEIR COMPONENTS
It would be too simple to stop talking about my LSP interpretation here. LSP is obviously made for OOP and for objects specialized through inheritance. When objects are specialized through inheritance, LSP turns out to be very similar to the Design by Contract theory found in Domain Driven Design. However, when objects are just defined as implementations of interfaces, without inheritance, LSP applies too. In both cases the principle simply states that a class handling dependencies must never need to downcast a dependency to its specialized type to run its logic.
In the same way, it may not be such a stretch to say that systems must never need to be aware of the entirety of the components a specific entity is made of, but must be able to use this entity through the components that the system is responsible for.
The Interface Segregation Principle may require some imagination to be applied as well. It basically states that many "lean" interfaces are better than a few "fat" interfaces. The interfaces that an object implements should be as lean as possible, in such a way that modular code depending on unmodified interfaces wouldn't need to be "recompiled" if unrelated interfaces change. If modules depend on "fat" interfaces, the previous principles are not really broken, but the change of an interface, for whatever reason, would affect independent logic too. More importantly, objects that implement fat interfaces will be forced to implement all the methods of these interfaces, reducing modularity and hindering behaviour composition.
Again, in ECS we may apply this principle to the Entity Components. Entity components are structs of data that could grow indefinitely, adding more and more data to each single component. What would drive the user toward the creation of multiple lean components instead of fewer, fatter components? Theoretically, all the entity data could be written inside one component only. However, this would lead to writing entity components impossible to reuse in order to compose behaviours and, consequently, to writing entity behaviours (systems) that cannot be reused between different specialized entities.
We can say then that the ISP for ECS means:
NO ENTITY BEHAVIOUR SHOULD BE FORCED TO DEPEND ON DATA THAT IT DOESN’T USE
This principle works only if the entity components that define an entity are dictated by the systems. While in OOP abstracted managers provide the interfaces that the objects must implement, in ECS entity behaviours should dictate the entity components, so that if we design the behaviours of the entity first, the entity will become a composition of the entity components used by the relative systems.
In more practical terms, if a WalkingCharacterBehaviour needs a PositionComponent and a DamagingCharacterBehaviour needs a HealthComponent, the CharacterEntity is defined by the PositionComponent and the HealthComponent.
In ECS we could introduce a new definition of abstraction, which is linked to the isolation of the reusable behaviours (granularity). A more abstracted behaviour is then a behaviour more common between entities, and the components used by these behaviours would consequently be less specialised.
In my opinion, one of the strongest benefits of ECS is the possibility to design reusable entity components without any overhead. There is no difference, in terms of effort, between writing one fat component or many slim, and therefore reusable, components, as long as the systems related to these components make sense.
This article is not complete, as I still need to provide some examples of good and bad ECS practices and of how ECS can be used for many aspects of game development, but since this article is growing too long, I decided to limit it to the theory part only.
A couple of fundamental points still need to be discussed: how data ownership is managed when objects are not used at all, and how inter-system communication is solved in ECS. If I notice good interest in the topic, then I will proceed with writing a new article with more practical examples. If there is any aspect you would like me to discuss further, please let me know too.
Since I am not an academic person, some of the definitions above may seem too subjective, so please let me know if you think they should be discussed further.
More related articles: