The truth behind Inversion of Control – Part IV – Dependency Inversion Principle

The Dependency Inversion Principle is part of the SOLID principles. If you want a formal definition of DIP, please read the articles written by Martin[1] and Schuchert[2].

We explained that, in order to invert the control of the flow, our specialized code must never directly call methods of more abstracted classes. The methods of our framework classes are not explicitly used; instead, it is the framework that controls the flow of our less abstracted code. This is also (sarcastically) called the Hollywood Principle: “don’t call us, we’ll call you”.

Although the framework code must take control of the less abstracted code, it would not make any sense to couple our framework with less abstracted implementations. A generic framework doesn’t have the faintest idea of what our game needs to do and, therefore, cannot understand anything declared outside itself.

Hence, the only way our framework can use objects defined in the game layer is through interfaces. But this alone is not enough: the framework cannot know interfaces defined in a less abstracted layer of our application, as that would not make any sense either.

The Dependency Inversion Principle introduces a new rule: our less abstracted objects must implement interfaces declared in the more abstracted layers. In other words, the framework layer defines the interfaces that the game entities must implement.

In a practical example, RendererSystem handles a list of IRendering nodes. IRendering is an interface that declares, as properties, all the components needed to render the entities, such as GetWorldMatrix, GetMaterial and so on. Both the RendererSystem class and the IRendering interface are declared inside the framework layer.
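A minimal sketch of this arrangement, using the names from the article (the property types and the EnemyEntity class are assumptions for illustration):

```csharp
using System.Collections.Generic;

// Framework layer: both the system and the interface live here.
public interface IRendering
{
    float[] GetWorldMatrix { get; } // e.g. a 4x4 matrix flattened; the type is an assumption
    string  GetMaterial    { get; }
}

public class RendererSystem
{
    readonly List<IRendering> _nodes = new List<IRendering>();

    public void Register(IRendering node) { _nodes.Add(node); }

    public int NodeCount { get { return _nodes.Count; } }

    public void Render()
    {
        foreach (var node in _nodes)
        {
            // draw using node.GetWorldMatrix and node.GetMaterial
        }
    }
}

// Game layer: the entity implements the framework-defined interface.
public class EnemyEntity : IRendering
{
    public float[] GetWorldMatrix { get; } = new float[16];
    public string  GetMaterial    { get; } = "enemy_material";
}
```

The dependency is inverted: the game layer depends on the framework layer, never the other way around.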


Designing layered code

So far I have used the word “framework” to identify the most abstracted code and “game” to identify the least abstracted code. However, “framework” and “game” don’t mean much, and limiting our layers to just “the game layer” and “the framework layer” would be a mistake. Let’s say we have systems that handle very generic problems found in every game, like the rendering of the entities, and we want to enclose this layer in a namespace. We have defined a layer that could even be compiled into a DLL and shipped with any game.

Now, let’s say we have to implement logic that is closer to the game domain: for example, a HealthSystem that handles the health of the game entities that have health. Is HealthSystem part of a very generic framework? Surely not. However, while HealthSystem will handle the logic common to all the IHaveHealth entities, not all the game entities will share the same behaviours. Hence HealthSystem is more abstracted than the specialized behaviour implementations. While this abstraction alone probably wouldn’t justify the creation of another framework, I believe that thinking in terms of layered code helps in designing better systems and nodes.
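The intermediate layer can be sketched like this (the Alien entity and the damage logic are hypothetical examples, not from the article):

```csharp
using System.Collections.Generic;

// Game-framework layer: more abstracted than any single entity behaviour,
// less abstracted than the generic rendering framework.
public interface IHaveHealth
{
    int Health { get; set; }
}

public class HealthSystem
{
    readonly List<IHaveHealth> _entities = new List<IHaveHealth>();

    public void Register(IHaveHealth entity) { _entities.Add(entity); }

    // Logic common to every entity with health lives here.
    public void ApplyDamage(IHaveHealth entity, int damage)
    {
        entity.Health = System.Math.Max(0, entity.Health - damage);
    }
}

// Specialized layer: a concrete entity opts in by implementing the interface.
public class Alien : IHaveHealth
{
    public int Health { get; set; } = 100;
}
```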


Putting DI, IOC and DIP all together

As we have seen, the flow is not inverted when a top-down design approach is used to break down the problem, that is, when the specialized behaviours of the entities are modelled before the generic ones, or when the systems are designed as a result of the specialized entity problems.

In my vision of Inversion of Control, it’s necessary to break down the solutions using a bottom-up approach: we should think of the problems starting from the most abstracted classes. What are the common behaviours of the game entities? What are the most abstracted systems we should write? What once would have been solved by specializing classes through inheritance should now be solved by layering our systems within different levels of code abstraction and declaring the relative interfaces to be used by the less abstracted code.

I believe that in this way we could benefit from the following:

  • We will be sure that our systems have just one responsibility, modelling just one behaviour
  • We will basically never break the Open/Closed principle, since new behaviours mean creating new systems
  • We will inject far fewer dependencies, avoiding the use of an IoC container as a Singleton alternative
  • It will be simpler to write reusable code

In the next article I will explain how I would put all these concepts together in practice.

References:

  1. http://www.objectmentor.com/resources/articles/dip.pdf
  2. http://martinfowler.com/articles/dipInTheWild.html

The truth behind Inversion of Control – Part III – Entity Component Systems

In the previous article I explained what the Inversion of Control principle is. Now I will illustrate how to apply it properly, even without using an Inversion of Control container. To do so, I will talk about Entity Component Systems: while apparently they have nothing to do with IoC, I found them to be one of the best ways to apply the principle.

Once upon a time, there was the concept of the game engine. A game engine was a game-specialised framework that was supposed to run any game. Game engines used to have some common classes, found more or less in all of them, like the Render class.

Every time a new object with a Renderer was created, the Renderer of the object was added to a list of renderers managed by the Render class. The same was true for other classes like the Collision class, the Culling class and so on. The engine dictated when to execute the culling, when to execute the collisions and when to execute the rendering. The less abstracted object didn’t know when it was going to be rendered or culled; it knew only that, at a given point, this was going to happen.

The game engine took control of the flow, resulting in the first form of Inversion of Control. There is no difference between what I just explained and what the Unity engine does: Unity decides when it’s time to call Awake, Start, Update and so on.

Game engines like Unity have an obsolete way of thinking about IoC: in fact, they are just a magnification of the classic Template Method pattern.
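The Template Method flavour of IoC can be sketched outside of Unity as well (Behaviour and MyBehaviour are hypothetical names; the real Unity callbacks are resolved by reflection rather than virtual methods):

```csharp
using System.Collections.Generic;

// Abstract layer: owns the flow and decides when the hooks run.
public abstract class Behaviour
{
    // The "template method": the specialized code never calls this logic,
    // it only fills in the hooks below.
    public void Run()
    {
        Awake();
        Update();
    }

    protected abstract void Awake();
    protected abstract void Update();
}

// Specialized layer: provides the hooks, never controls when they run.
public class MyBehaviour : Behaviour
{
    public readonly List<string> Calls = new List<string>();

    protected override void Awake()  { Calls.Add("Awake"); }
    protected override void Update() { Calls.Add("Update"); }
}
```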

Modern Entity Component Systems

A more advanced way of managing entities was introduced in 2007 with the modern implementations of Entity Component Systems.

In 2007, the team working on Operation Flashpoint: Dragon Rising experimented with ECS designs, including ones inspired by Bilas/Dungeon Siege, and Adam Martin later wrote a detailed account of ECS design[1], including definitions of core terminology and concepts. In particular, Martin’s work popularized the ideas of “Systems” as a first-class element, “Entities as ID’s”, “Components as raw Data”, and “Code stored in Systems, not in Components or Entities”.[2]

ECS design uses IoC in its purest form. ECS is also a magnificent way to possibly never break the Open/Closed principle when it is time to add new behaviours. The design works in this way:

  • Entities are basically just IDs; sometimes the concept of Entity doesn’t even exist anymore.
  • Components are Value Objects[3]. They are never plain strings or integers, but always objects that wrap single values (including matrices and other data structures) and that can be shared between Systems. Components do not contain any logic.
  • Systems are the classes where all the logic lies. Systems can directly access lists of components and execute all the logic your game and engine need.
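The three roles above can be sketched in a few lines (the component and system names are hypothetical examples):

```csharp
using System.Collections.Generic;

// Entities are basically just IDs.
public readonly struct Entity
{
    public readonly int Id;
    public Entity(int id) { Id = id; }
}

// Components are value objects: pure data, no logic.
public class PositionComponent
{
    public int EntityId;
    public float X, Y;
}

public class VelocityComponent
{
    public int EntityId;
    public float Dx, Dy;
}

// Systems hold all the logic and operate on lists of components.
public class MovementSystem
{
    public void Update(List<PositionComponent> positions,
                       List<VelocityComponent> velocities, float dt)
    {
        // assumes the two lists are aligned by index for simplicity
        for (int i = 0; i < positions.Count; i++)
        {
            positions[i].X += velocities[i].Dx * dt;
            positions[i].Y += velocities[i].Dy * dt;
        }
    }
}
```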

The really powerful aspect of this design is that the Systems must respect the Single Responsibility Principle. All the behaviours of our in-game entities are modelled inside systems, and every single behaviour has a single System, well defined in terms of domain, to manage it.

This is how we can apply the Open/Closed principle without ever breaking it: every time we need to introduce a new behaviour, we are forced to create a new System that simply uses components as the data to execute it.

Systems are implicitly Mediators as well, which is why Systems are also great at solving communication between entities and other systems (through the use of the components). However, Systems are not able to cope with every possible communication problem just through components; that’s why Systems, which should be instantiated in the Composition Root, can have other dependencies injected.

Also, pay attention: I said that Systems model all the logic of your framework AND game. When the ECS design is used, there is no longer any difference, in terms of implementation, between framework logic (like RenderingSystem, PhysicEngine and so on) and game logic (like AliensAI, EnemyCollisionEngine and so on), but there is still a sharp difference in terms of layering: framework and game systems will still lie in different packages of the application.

This introduces another very important concept: the layering of our game application. Using just two layers of abstraction is not enough for a complex game; the framework layer and the game layer alone are not enough. We need to make our game layer more granular.

However, before finally showing the benefits of this design approach, I will explain the Dependency Inversion Principle in the next article of this series.

[1]http://t-machine.org/index.php/2007/09/03/entity-systems-are-the-future-of-mmog-development-part-1/

[2]http://en.wikipedia.org/wiki/Entity_component_system

[3]http://c2.com/cgi/wiki?ValueObject

[4]http://www.richardlord.net/blog/what-is-an-entity-framework

The truth behind Inversion of Control – Part II – Inversion of Control

IoC is another concept that turns out to be very simple only once it has been fully processed, or rather “metabolised”: metabolised as in going beyond being fully understood and becoming part of one’s own forma mentis. However, make no mistake: Inversion of Control is not the same as an Inversion of Control container (which is not a principle, but just a framework to simplify Dependency Injection)[1]. “Inversion of Control container” is a confusing name in this sense[2].

IoC cannot really be applied successfully without designing the application with multiple different layers of abstraction: the higher the abstraction, the more generic the scope of the code. For example, the Svelto framework namespace is the highest level of abstraction, since it could be used by any type of game.

Every level of abstraction (ideally encapsulated in a namespace) should be self-contained and totally separated from the other levels. One namespace should be compilable into a separate assembly (.dll) and still be fully usable inside the application.

The level-of-abstraction idea must not be confused with concepts like the Presentation Layer and the Service Layer; those concepts come from Domain Driven Design principles and aim to solve other kinds of problems (decoupling business logic from data storage and transmission).

If we think of our code structure in terms of layers of abstraction, Inversion of Control is all about giving the control to the higher levels of abstraction instead of the lower ones. Of course you would ask: control of what? The idea is that the higher levels of abstraction should control both the creation and the flow of the less abstracted code.

How can object creation be inverted? It’s simple: code that follows the IoC rules never uses the new keyword explicitly. Your low-level initialisation code will never explicitly create a dependency; all dependencies are always injected, never created directly.

If we apply this reasoning to the injection waterfall discussed in the first part of this article series, it should be easy to realise that ALL the dependencies, and therefore all the objects, must be created and initially passed from the Composition Root. The Composition Root effectively becomes the only place where all the starting relations between objects are made.
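A minimal composition root, outside of any container (ILogger, ConsoleLogger and GameLoop are hypothetical names for illustration):

```csharp
// The Composition Root is the only place where concrete classes are
// newed up and wired together.
public interface ILogger { void Log(string message); }

public class ConsoleLogger : ILogger
{
    public void Log(string message) { System.Console.WriteLine(message); }
}

public class GameLoop
{
    readonly ILogger _logger;

    // the dependency is injected, never created with new inside GameLoop
    public GameLoop(ILogger logger) { _logger = logger; }

    public void Start() { _logger.Log("game started"); }
}

public static class CompositionRoot
{
    public static GameLoop Compose()
    {
        // all the starting relations between objects are made here
        ILogger logger = new ConsoleLogger();
        return new GameLoop(logger);
    }
}
```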

If you wonder how to create dynamic dependencies, like objects that must be spawned at run-time, you have asked yourself a good question. These objects are always created by factories, and the factories are always created and passed as dependencies from the Composition Root.
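A sketch of run-time spawning through an injected factory (IEnemy, Grunt and WaveSpawner are hypothetical names):

```csharp
public interface IEnemy { }
public class Grunt : IEnemy { }

// The factory abstraction is created in the Composition Root and injected
// wherever run-time spawning is needed.
public interface IEnemyFactory
{
    IEnemy Create();
}

public class GruntFactory : IEnemyFactory
{
    public IEnemy Create() { return new Grunt(); }
}

public class WaveSpawner
{
    readonly IEnemyFactory _factory;

    public WaveSpawner(IEnemyFactory factory) { _factory = factory; }

    // no new keyword here: creation is delegated to the injected factory
    public IEnemy Spawn() { return _factory.Create(); }
}
```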

An Inversion of Control container simply hides this process, creating and passing all the dependencies automatically instead of forcing the user to pass all the dependencies through constructors or setters.

Why is inverting the creation flow important? Because the code no longer depends on the constructor implementation. A dependency can, therefore, be passed by interface, effectively removing all the coupling between class implementations.

Using a simple example, if enemies can be spawned with different weapons, you don’t need to know exactly which weapon is used, as long as an interface abstracts the behaviour of the weapon itself: the explicit creation of a concrete weapon inside the enemy becomes an IWeapon injected through the enemy’s constructor.
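A minimal sketch of this before/after contrast (Enemy, IWeapon, Sword and Axe are hypothetical names):

```csharp
// Before: the enemy creates its own weapon, coupling it to Sword.
public class Sword { public int Damage = 10; }

public class CoupledEnemy
{
    readonly Sword _weapon = new Sword();  // explicit new: not inverted
    public int Attack() { return _weapon.Damage; }
}

// After: the weapon is injected by interface; the enemy no longer cares
// which concrete weapon it holds.
public interface IWeapon { int Damage { get; } }

public class Axe : IWeapon { public int Damage { get { return 15; } } }

public class Enemy
{
    readonly IWeapon _weapon;
    public Enemy(IWeapon weapon) { _weapon = weapon; }
    public int Attack() { return _weapon.Damage; }
}
```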

Now, what about inverting the logic flow? Quoting Wikipedia:

“Inversion of control (IoC) describes a design in which custom-written portions of a computer program receive the flow of control from a generic, reusable library. A software architecture with this design inverts control as compared to traditional procedural programming: in traditional programming, the custom code that expresses the purpose of the program calls into reusable libraries to take care of generic tasks, but with inversion of control, it is the reusable code that calls into the custom, or task-specific, code.”

In this sense, an IoC container could be used to implement Inversion of Control when it’s time to “plug in” implementations from the lower levels of abstraction (less generic) into the higher levels of abstraction (more generic, framework style) without breaking the Dependency Inversion Principle.

IoC is also a powerful concept that helps to follow the Open/Closed principle, which is also part of SOLID:

“software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification”

A better explanation of what inverting the flow actually means, and of how it is possible to keep code compliant with the Open/Closed principle in an evolving project using IoC, is what I will talk about in the next article of this series.

[1] http://www.sebaslab.com/ioc-container-for-unity3d-part-2/

[2] http://martinfowler.com/articles/injection.html#InversionOfControl (but read all of it)

The truth behind Inversion of Control – Part I – Dependency Injection

There is an evil truth behind the concept of the Inversion of Control container: an unspoken code tragedy taking place every day while passing unnoticed. Firm in my intention to stop this shame, I decided to stand up and start writing this series of articles, which will tell about the problem, the principles and the possible solutions.

I started to notice the symptoms of this “blasphemy” (against the code gods :) ) quite a while ago, but I couldn’t pin down a possible solution and, most of all, the cause. Nevertheless, the problem was pretty clear: the IoC container was being used as an alternative to the Singleton pattern to inject dependencies. With the code growing and the project evolving, many classes started to take the form of a blob of functions, with a common pitfall: dependencies, dependencies everywhere.
What it means, and what is wrong with, using an IoC container as a mere substitute for a Singleton is something that I am going to describe as best I can, starting with an explanation of what Dependency Injection is:


What Dependency Injection is

Dependency Injection isn’t anything fancy. A dependency is just an interface of methods that another class depends on. Usually dependencies are solved in two ways: injecting them or accessing them through Singletons. Singletons break encapsulation in the sense that, like global variables, they can be used without a scope. Singletons awkwardly hide your dependencies: there is nothing in the class interface showing that the dependency is used internally. Singletons strongly couple your entities, causing long and painful code refactorings. For these reasons we use injection to solve dependencies.

Now, think for a moment what would happen if we didn’t have an IoC container in place. How would we inject our dependencies? For example, by passing them as parameters in a constructor. Would you pass 10 dependencies by constructor? I surely wouldn’t, just for how painful and inconvenient it is. The same reasoning applies when an IoC container is used: just because it’s more convenient doesn’t mean you can abuse it. You would just be making a mess of object coupling again.

When dependencies are injected into an instance, where do these dependencies come from? The answer is a simple mental exercise: they come from the scope where the object that needs the dependencies injected has been created. In turn, the object that injects the dependencies needs its own dependencies injected, which are therefore passed by another object in the parent scope. This chain of passed dependencies creates a waterfall of injections, and the relationship between these objects is called an Object Graph[1].

However, where does this waterfall start? It starts from where the objects are created for the very first time, the so-called Composition Root: “root” because it’s where all the initial objects are created, “composition” because it’s where the dependencies start to be injected and, therefore, the initial relations are composed.
Now you can see what the real problem of the Unity framework is: the absence of a Composition Root. Unity doesn’t have a “main” class where the relations between objects can be composed. This is why the only way to create relationships with the bare Unity framework is to use Singletons or static classes.

Why are relationships between objects created? Mainly to let them communicate with each other. All forms of communication involve dependency injection. The only pattern that allows communication without dependency injection is the one called Event Bus in the Java environment. The Event Bus allows communication through events held by a Singleton, hence the Event Bus is one of the many anti-patterns out there.
Note that you could think of creating something similar to an Event Bus without using a singleton (therefore injecting it); that would be the Observer pattern. Think how awkward it would be to inject a single observer into all the classes that somehow need to communicate.

Object Communication and Dependency Injection

Communication can couple objects or not, but in all cases it involves injection. There are several ways to let objects communicate:

  • Interface injection: usually A is injected into B, and B is coupled with A [e.g.: inside a B method, A is called: void Something() { _a.Something(); }]
  • Events: usually B is injected into A, and A is coupled with B [e.g.: inside A, B is injected to expose the event: _b.onSomething += Something]
  • Commands: B and A are uncoupled; B can call a command that calls a method of A. Commands are great for encapsulating business logic that could potentially change often. A Command Factory is usually injected into B.
  • Mediators: usually B and A do not know each other, but both know their mediator. B and A pass themselves into the mediator, and the mediator wires the communication between them (i.e.: through events or interfaces). Alternatively, B and A are passed to the mediator from outside B and A themselves, totally removing their dependency on the Mediator itself. This is my favourite flavour and the closest to dependency-less possible.
  • Various other patterns like Observers, Pub/Sub, Event Queues[2] and so on.
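As an illustration of the mediator option above (Attacker, HealthOwner and DamageMediator are hypothetical names): the two objects are wired from outside and never reference each other, nor the mediator itself.

```csharp
using System;

public class Attacker
{
    public event Action<int> OnHit;
    public void Hit(int damage) { if (OnHit != null) OnHit(damage); }
}

public class HealthOwner
{
    public int Health = 100;
    public void TakeDamage(int damage) { Health -= damage; }
}

public static class DamageMediator
{
    // A and B are passed in from the composition code: neither class
    // holds a dependency on the other or on this mediator.
    public static void Wire(Attacker a, HealthOwner b)
    {
        a.OnHit += b.TakeDamage;
    }
}
```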

How do we pick the correct one? Without guidelines, it looks like one is as good as another; that’s why, sometimes, we end up using one of them at random. Remember that the first two patterns are the worst, because they couple interfaces that could change over time.

We can anyway introduce the first guideline for our code: our solution must minimize the number of dependencies. If you pick the pub/sub pattern to solve your communication problem and you need to inject it inside several subscribers, you are probably not using the best solution.

Of course the second guideline is the Single Responsibility Principle[3] (together with the Interface Segregation Principle[4]). SRP is one of the five SOLID principles (ISP is another one), but the only one that must actually be taken as a rule: your class MUST have one responsibility only. Communicating can be considered a responsibility, therefore it’s better to delegate it.

How we are going to achieve SRP and solve the dependencies blob problem is something I am going to explain in the next articles of this series.

Bibliography

[1] IoC Container for Unity3D, part 2 http://www.sebaslab.com/ioc-container-for-unity3d-part-2/

[2] Event Queue http://gameprogrammingpatterns.com/event-queue.html

[3] Single Responsibility Principle http://en.wikipedia.org/wiki/Single_responsibility_principle

[4] Interface Segregation Principle http://en.wikipedia.org/wiki/Interface_segregation_principle

Svelto Inversion of Control Container

It’s finally time to share publicly the latest version of my Inversion of Control container, which I will keep updated from now on. This new version is the IoC container that Freejam is currently using for the Unity3D game Robocraft (http://www.robocraftgame.com).

After all this time, I have had the occasion to analyse in depth the benefits and disadvantages of extensively using an IoC container on a big project with a medium-sized team of programmers. I am preparing an exhaustive article on the subject, but I am not sure when I will be able to publish it, so stay tuned.

The new IoC container is structurally similar to the old one, but has several major differences. In order to use it, a UnityRoot specialised MonoBehaviour must still be created. The class that implements ICompositionRoot is the Composition Root of the project. The game object holding the UnityRoot MonoBehaviour is the game context of the scene.

All the dependencies bound in the Composition Root will be injected during the Unity Awake period. Dependencies cannot be used until the OnDependenciesInjected function or the Start function (in the case of dependencies injected inside MonoBehaviours) is called. Be careful though: OnDependenciesInjected is called while the injections are still happening; however, injected dependencies are guaranteed to have their own dependencies injected in turn. Dependencies are not guaranteed to be injected during the Awake period.

Other changes include:

  • MonoBehaviours that are statically created in the scene no longer need to ask explicitly for their dependencies to be filled: they will be injected automatically.
  • MonoBehaviours cannot be injected as dependencies anymore (that was a bad idea).
  • Dynamically created MonoBehaviours always have their dependencies injected through factories (MonoBehaviourFactory and GameObjectFactory are part of the framework).
  • All the contracts are now always injected through factories; this simplifies the code and makes it more solid. It also highlights the importance of factories in this framework.
  • A type-injection cache has been developed, therefore injecting dependencies of the same type is much faster than it used to be.
  • It’s now possible to create a new instance for each dependency injected if the MultiProvider factory is used explicitly.
  • Improved compile-time error detection: it is no longer possible to bind contracts to wrong types. For this reason AsSingle() has been replaced by BindSelf().
  • Various improvements and bug fixes.
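Based purely on the names mentioned above (UnityRoot, ICompositionRoot, BindSelf), a composition root might look roughly like this; treat it as a hypothetical sketch, since the exact Svelto API and signatures may differ:

```csharp
// Hypothetical sketch: method names and signatures may not match the
// real Svelto API. IRenderer, ForwardRenderer and ScoreKeeper are
// placeholder names.
public class GameCompositionRoot : ICompositionRoot
{
    public void OnContextInitialized(IContainer container)
    {
        // contracts are bound in the Composition Root...
        container.Bind<IRenderer>().To<ForwardRenderer>();

        // ...and a concrete type can be bound to itself
        container.BindSelf<ScoreKeeper>();
    }
}
```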

The new project can be found at this link: https://github.com/sebas77/Svelto

 

What I have learned while developing a game launcher in Unity

This post can be considered partially outdated, as my mind has changed quite a bit since I originally wrote it.
First of all, I would no longer suggest writing a game launcher in Unity. Use .NET or the latest version of Mono with Xamarin instead. The Mono library embedded with Unity 4 is too outdated, and its use results in several problems when using advanced HTTP functions. However, if you decide to use .NET, please be aware that your installer must be able to download and install .NET from the internet if it is not already installed on the machine. It is also possible to embed Mono, more or less like Unity does, but it is quite tricky under Windows.
We also changed our minds about admin rights: many of our users run Windows without admin rights, so we wanted our launcher to never ask for any.
I also think Unity has now fixed the checking of SSL certificates, but I am not 100% sure about it.
Lastly, I would search for libraries that can generate diff patches of binary files, since hashing and downloading files one by one is neither convenient nor efficient.

In case you didn’t know yet, it has already been a while since I co-founded a company called Freejam, in the UK, and started to work on a new game named Robocraft (http://robocraftgame.com). For this indie product we are extensively adopting the Lean Startup approach, even for the development cycles, so features come as they are actually requested by our early adopters.

The last feature our early adopters were eager to see was a proper game launcher. A game launcher is basically a must for every online game since, as you all know, it makes it possible to patch and update the game without forcing users to install it over and over.

I had never implemented a launcher before, so I was completely ignorant of some tricky issues that could arise and their relative workarounds, which I now want to share with you to save you from wasting days trying to solve similar problems.

I started to develop the launcher in Unity just because it needed a graphical interface and I did not want to spend time learning/using new libraries for other development platforms (either C++ or pure C#). Size-wise, considering that Unity applications embed the Mono libraries and runtime, the resulting compressed 6MB installer wasn’t too bad.

The graphical interface was easily developed with NGUI, and the information shown is a mix of RSS news taken directly from the game blog and images configurable by the artists through an external XML file.

The update process, instead, was a little more convoluted, with a couple of unforeseen, tedious obstacles that made my life a bit miserable for a few days.

The update process is split into several predefined tasks:

  • check if the game is already running and ask the user to close it before the update can be launched
  • check if another launcher is running and ask to close the other one
  • check if a new version of the patcher is available and, in that case, force the patcher to update itself
  • check if a new game build is available
  • download the list of files built together with the new game build; this list contains the name, the size and a checksum hash of each file
  • download an asymmetrically encrypted digital signature of the game
  • verify that the digital signature is valid using the public key embedded in the client
  • verify that all the files on the server are ready to be downloaded
  • verify which files on the hard disk must be updated, using the file size and a generated hash (comparing the value with the hash from the file list previously downloaded)
  • if needed, download, uncompress and save the files (they are stored as gzip on the server)
  • delete obsolete files, if there are any
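The "verify which files must be updated" step above can be sketched as follows; the article does not say which hash algorithm was used, so this assumes SHA-256 hex digests in the manifest:

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

public static class PatchVerifier
{
    // Returns true when the local file matches the size and hash published
    // in the downloaded file list, i.e. no download is needed.
    public static bool IsUpToDate(string path, long expectedSize, string expectedHash)
    {
        var info = new FileInfo(path);
        if (!info.Exists || info.Length != expectedSize) return false;

        using (var sha = SHA256.Create())
        using (var stream = File.OpenRead(path))
        {
            var hash = BitConverter.ToString(sha.ComputeHash(stream))
                                   .Replace("-", "").ToLowerInvariant();
            return hash == expectedHash;
        }
    }
}
```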

On our side, our Jenkins build machine handles two separate jobs (actually more, but I’ll keep it simple for the sake of this article): one to build the patcher, generate a new patcher version and deploy it to the CDN; the other to build the game, generate a new game version, generate the file list with hashes, create the digital signature using the private key that only our build machine knows, compress the files and deploy everything to the CDN.

The whole development process was long but, thanks to the .NET framework, relatively easy. Nevertheless, there are two specific features I have to go into detail about, since they are very important to know and not so intuitive.

The first is the reason why I implemented asymmetrically encrypted digital signature verification. A launcher without this kind of protection is vulnerable to man-in-the-middle attacks, which can take the form of DNS spoofing. When a hacker successfully spoofs a DNS node, he simply creates a deviation of the normal TCP/IP routing that the client cannot recognize. In this way the client does not know that it is downloading files from unknown sources and, since the game includes executables as well, it could be relatively easy for a hacker to attack a specific pool of users and let them download malicious files.

This is one of the reasons why HTTPS was invented; however, HTTPS is effective against this attack only if the client can verify the certificate provided by the HTTPS server (http://en.wikipedia.org/wiki/Public_key_certificate). To my surprise, I found out that while Unity supports HTTPS connections, it does not verify the SSL certificates at all; therefore, using HTTPS in Unity does not prevent man-in-the-middle attacks. Luckily, the implementation of a digital signature was already planned, so while I was disappointed by Unity’s behaviour, we were ready to face the issue.

Implementing a digital signature in C# and .NET is very simple, and a lot of code is already available around. Just look for RSACryptoServiceProvider on Google to learn more.
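A sketch of the sign/verify pair using RSACryptoServiceProvider (this is not the launcher's actual code; the SHA-256 choice is an assumption):

```csharp
using System.Security.Cryptography;

public static class SignatureExample
{
    // The build machine signs with the private key...
    public static byte[] Sign(byte[] data, RSAParameters privateKey)
    {
        using (var rsa = new RSACryptoServiceProvider())
        {
            rsa.ImportParameters(privateKey);
            return rsa.SignData(data, CryptoConfig.MapNameToOID("SHA256"));
        }
    }

    // ...and the launcher, which embeds only the public key, verifies.
    public static bool Verify(byte[] data, byte[] signature, RSAParameters publicKey)
    {
        using (var rsa = new RSACryptoServiceProvider())
        {
            rsa.ImportParameters(publicKey);
            return rsa.VerifyData(data, CryptoConfig.MapNameToOID("SHA256"), signature);
        }
    }
}
```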

Once all this stuff was implemented, I thought I was finally done with the launcher, but a dark and vicious evil bugger was awaiting me around the corner: the UAC implementation of Windows Vista and later!

After I understood what the UAC implementation of Windows actually does, I realized why most online games suggest using the C:\Games folder to install the game instead of the standard C:\Program Files. The Program Files folder is seen by modern Windows operating systems (excluding XP) as a protected folder, and only administrators can write in it.

Our launcher is installed by Inno Setup, which asks the user to run it in administrative mode, so Inno Setup is able to write wherever the user wants the game to be installed. However, once it is installed, the problems start.

If the launcher is started directly from Inno Setup, it inherits the administrative rights of the installer and is therefore able to update the game folder under Program Files. However, once the launcher is started again by the user, it will not start as administrator, but as a normal user, changing the behaviour of the file writing.

This is when things start to get idiotic. If a normal user application tries to write inside a folder under Program Files, the writing of the file does not fail, as I initially expected. Instead, Windows creates a Virtual Store folder under C:\Users\&lt;username&gt;\AppData\Local\VirtualStore that virtualises the game folder. The result is that all the files the launcher tries to write under Program Files are actually stored in a specific, predefined folder in the virtual store.

Hence, the first lesson is: if your Unity application needs to write new files, never write them into the folder where the application has been installed; use the application data folder instead! However, this cannot be applied to a launcher, since the launcher must be able to update the game wherever the user decided to install it.

The first solution that came to my mind was to embed a manifest in the application asking Windows to run the launcher in administrative mode. This is easier than it sounds, at least once I understood that a command line tool found inside the Windows SDK can do it. With the manifest in place, every time the launcher starts, Windows UAC prompts the user with a message box, asking whether or not to grant administrative rights to the application.
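For reference, the relevant fragment of such an application manifest (standard Windows manifest schema) looks like this:

```xml
<trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
  <security>
    <requestedPrivileges>
      <!-- triggers the UAC prompt every time the application starts -->
      <requestedExecutionLevel level="requireAdministrator" uiAccess="false" />
    </requestedPrivileges>
  </security>
</trustInfo>
```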

If the user authorizes the application, the launcher will be able to update the game, otherwise it will throw an error.

Unluckily, however, this was not the final solution. The tricky situation here is that the launcher must also be able to launch the game, but a process launched by another process inherits its rights. This means that the game launched by the launcher would start as administrator, while if the user starts the game on their own, without using the launcher, it runs in normal mode (if the user is not an administrator or the UAC is fully enabled).

Launching the game in administrative mode can be a bad idea for several reasons, but the most annoying one is that the user is not used to authorizing a game every time it is launched, so we decided to get rid of this issue.

After some research, and after I tried all the possible solutions I could find on Google and Stack Overflow, I realized that the only working one is to use a bootstrapper to launch the launcher.

The bootstrapper must be a tiny application that runs in normal mode and must be able to launch the launcher as an administrator user. This is pretty straightforward, since .NET allows raising the application rights (but never allows downgrading them :( ). Once the launcher has done its dirty job, it closes itself and communicates to the bootstrapper that it is now time to launch the game. The bootstrapper is then able to launch the game as a normal user, because the bootstrapper itself was not started with elevated rights.
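A minimal sketch of the bootstrapper idea in plain C# (the executable names are placeholders, and the launcher-to-bootstrapper handshake is omitted): starting a process with the "runas" verb triggers the UAC elevation prompt, while a plain start keeps the bootstrapper's own non-elevated rights.

```csharp
using System.Diagnostics;

static class Bootstrapper
{
    // Start info that triggers the UAC prompt and runs the target elevated.
    public static ProcessStartInfo Elevated(string exePath)
    {
        return new ProcessStartInfo(exePath)
        {
            UseShellExecute = true, // mandatory for the "runas" verb
            Verb = "runas"
        };
    }

    // Start info that runs the target with the bootstrapper's own
    // (non-elevated) rights.
    public static ProcessStartInfo Normal(string exePath)
    {
        return new ProcessStartInfo(exePath) { UseShellExecute = true };
    }

    // The overall flow: run the launcher elevated, wait for it to finish
    // updating the game, then start the game as a normal user.
    public static void Run(string launcherPath, string gamePath)
    {
        using (var launcher = Process.Start(Elevated(launcherPath)))
            launcher.WaitForExit();

        Process.Start(Normal(gamePath));
    }
}
```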

This solution sounds convoluted, but it is actually quite commonly adopted. Of course I could not use Unity to create the bootstrapper, since it must be just a few hundred kilobytes. For this reason I downloaded Xamarin and Mono, and I have to say I was quite impressed: I was able to set up a project and run it in a few minutes, and the bootstrapper itself was developed in a few minutes as well. In the end, though, we were forced to create a very simple C++ application in order to be .NET framework agnostic (the .NET framework would otherwise have to be installed on the machine).

Hope all of this can help you one day!

What’s wrong with Unity SendMessage and BroadcastMessage and what to do about it

Unity 3D is a well designed tool, I can say that. It is clearly less painful to use than UDK, no matter what its limitations are. However, as I keep saying, the C# framework is still full of bad design choices, probably unfortunate inheritances from UnityScript (why don't they abolish it and just switch to C#?).

An emblematic example of these bad choices is the communication system built around SendMessage and BroadcastMessage. If you use them, you should just stop already!

In fact, SendMessage and BroadcastMessage are wrong mostly because they heavily rely on reflection to find the functions to call. This of course has an impact on performance, but performance is not the issue here; the issue is the quality of the resulting code.

What can be so bad about it? First (and foremost), using a string to identify a function is way worse than using a string to identify an event. Just think about it: what happens if someone in the team decides to refactor the code where the called function is defined and renames or deletes it?

I will tell you what happens: the compiler will not warn the poor programmer of the mistake he is making and, even worse, the code will just run as if nothing happened. By the time the error is found, it could already be too late.

Even worse is the fact that the called function can be declared private, since the system uses reflection. You know what I usually do when I find a private function that is not used inside the declaring class? I just delete it, because the code is telling me: this function cannot be used outside this class and it is not used inside it, so why keep useless code?

OK, in C# I mostly use delegate events, so, to be honest, I basically do not use SendMessage; but I still find BroadcastMessage useful when it is time to implement GUI widget communication.

Since GUIs are usually implemented in a hierarchical way (e.g. if you use NGUI), being able to push a message down a hierarchy can have several advantages, since it is basically not necessary to know the target of the message. This is actually similar to the Chain of Responsibility pattern.

For this reason I decided to implement a little framework to send a message through a GameObject hierarchy and it works like this:

If the root (or a parent) of the target to be reached is known, you can use it to send a signal through the hierarchy in a top-down fashion. All the nodes of the hierarchy will be searched until a compatible “listener” is found. The code is pretty trivial and, as usual for me, relies on the implementation of interfaces.
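The actual code lives in the repository; as a Unity-free sketch of the idea (Node stands in for a Transform, and the interface shape is assumed from the text, not taken from the library):

```csharp
using System.Collections.Generic;

// A listener receives the event identifier and an optional parameter.
public interface IChainListener
{
    void Listen(object eventId, object parameter);
}

// Stand-in for a GameObject/Transform hierarchy node.
public class Node
{
    public readonly List<Node> Children = new List<Node>();
    public IChainListener Listener; // null if this node does not listen

    public Node AddChild(Node child) { Children.Add(child); return child; }
}

// Top-down dispatch: depth-first search from the root, signalling
// every compatible listener found along the way.
public class SignalChain
{
    readonly Node _root;

    public SignalChain(Node root) { _root = root; }

    public void DispatchDown(object eventId, object parameter)
    {
        Dispatch(_root, eventId, parameter);
    }

    static void Dispatch(Node node, object eventId, object parameter)
    {
        if (node.Listener != null)
            node.Listener.Listen(eventId, parameter);

        foreach (var child in node.Children)
            Dispatch(child, eventId, parameter);
    }
}
```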

CubeAlone is a MonoBehaviour present in a GameObject that is outside the hierarchy to search. It could have been inside as well, but I will reserve that case for the next example.

Through the SignalChain object, two events are sent to two different targets. I decided to do so to show you the flexibility of the interface. In fact, it is possible to identify events using any kind of object, from a string to a more complicated type that could hold parameters.

In the hierarchy of the example, which can be found at https://github.com/sebas77/SignalChain, there are two targets listening to the CubeAlone: the Sphere and the Cylinder.

In order to be recognized as listeners, these two MonoBehaviours must implement the IChainListener interface.

From the code it should be pretty clear how it works. Should I add anything? If the leftmost cube is clicked, the Sphere and the Cylinder will change color.

Now let’s see the case where the dispatching object is already inside the hierarchy. In this case we can dispatch events up the hierarchy without knowing the root. However, the root must be marked with the IChainRoot interface.

The dispatching object can then use the BubbleSignal object.
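Here is a sketch of the bubbling variant under the same assumptions as before (Node stands in for a Transform with a parent reference; IChainRoot is just a marker telling the signal where to stop climbing — this is one plausible reading, not the library's actual code):

```csharp
// Marker interface for the hierarchy root: the bubbling stops here.
public interface IChainRoot { }

public interface IChainListener
{
    void Listen(object eventId);
}

// Stand-in for a Transform with a parent pointer.
public class Node
{
    public Node Parent;
    public object Behaviour; // may implement IChainListener and/or IChainRoot
}

// Bottom-up dispatch: climb the parents, signalling listeners,
// until a node marked with IChainRoot is reached.
public class BubbleSignal
{
    readonly Node _start;

    public BubbleSignal(Node start) { _start = start; }

    public void Dispatch(object eventId)
    {
        for (var node = _start; node != null; node = node.Parent)
        {
            var listener = node.Behaviour as IChainListener;
            if (listener != null) listener.Listen(eventId);

            if (node.Behaviour is IChainRoot)
                break; // reached the marked root, stop bubbling
        }
    }
}
```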

Try to click on the capsule now: the sphere will change color again, but this time it will turn blue!

TaskRunner library and the SingleTask class

Among the classes of my TaskRunner library there is one called SingleTask, which is a simple implementation of the IEnumerator interface and accepts an IEnumerator as constructor parameter. I’d like to show you why I decided to introduce this class:

Sometimes it is necessary to run asynchronous (yet single-threaded) routines and wait for their execution before letting anything else happen. For a long time I used events to solve these problems, but in C# and Unity it is often nicer to exploit the coroutine functionality.

An example could be this:

The difference is subtle, yet crucial. If the first method is used, DoSomethingAsynchronously does not wait for the function SomethingAsyncHappens to complete, but returns right after the yield. Instead, with the SingleTask class from the TaskRunner library, everything written after the yield is executed only when the inner async routines are completed.

This means that the Debug.Log will output 0 in the first case and 100 in the second case.
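Since the original embedded snippet is not reproduced here, below is a Unity-free sketch of the idea (this is not the library's actual code): SingleTask drains nested enumerators, so the outer routine resumes only after the inner one has fully completed.

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

// Wraps an IEnumerator and, while iterated, also runs to completion
// any nested IEnumerator the routine yields.
public class SingleTask : IEnumerator
{
    readonly Stack<IEnumerator> _stack = new Stack<IEnumerator>();

    public SingleTask(IEnumerator routine) { _stack.Push(routine); }

    public object Current { get; private set; }

    public bool MoveNext()
    {
        while (_stack.Count > 0)
        {
            var top = _stack.Peek();

            // routine finished: drop it and resume the one below
            if (!top.MoveNext()) { _stack.Pop(); continue; }

            // a nested routine was yielded: run it to completion first
            var nested = top.Current as IEnumerator;
            if (nested != null) { _stack.Push(nested); continue; }

            Current = top.Current;
            return true;
        }
        return false;
    }

    public void Reset() { throw new NotSupportedException(); }
}

public static class Demo
{
    public static int value;

    static IEnumerator SomethingAsyncHappens()
    {
        for (int i = 0; i < 100; i++) { value++; yield return null; }
    }

    static IEnumerator DoSomethingAsynchronously()
    {
        // resumes only after SomethingAsyncHappens has fully completed
        yield return new SingleTask(SomethingAsyncHappens());
        Console.WriteLine(value); // prints 100
    }

    public static void Run()
    {
        var task = new SingleTask(DoSomethingAsynchronously());
        while (task.MoveNext()) { } // drive the routine to completion
    }
}
```

Without the SingleTask wrapper, DoSomethingAsynchronously would log 0, because the yield would return immediately instead of waiting for the 100 increments.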

How to compress and decompress binary streams in Unity

Recently I needed to compress a binary stream that was getting too big to be serialized over the internet. I could not find any built-in way to compress files easily, so eventually I used a third party library that works very well in Unity3D (web player included): SharpZipLib.

I suppose the following code could be handy for someone:
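The original embedded snippet is not reproduced here; as a self-contained illustration of the same compress/decompress round trip, here is a sketch using the standard GZipStream (SharpZipLib exposes very similar stream classes, e.g. GZipOutputStream):

```csharp
using System.IO;
using System.IO.Compression;

static class StreamCompressor
{
    public static byte[] Compress(byte[] data)
    {
        using (var output = new MemoryStream())
        {
            // the gzip stream must be closed before reading the result,
            // otherwise the trailing block is not flushed
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
                gzip.Write(data, 0, data.Length);
            return output.ToArray();
        }
    }

    public static byte[] Decompress(byte[] data)
    {
        using (var input = new MemoryStream(data))
        using (var gzip = new GZipStream(input, CompressionMode.Decompress))
        using (var output = new MemoryStream())
        {
            var buffer = new byte[4096];
            int read;
            while ((read = gzip.Read(buffer, 0, buffer.Length)) > 0)
                output.Write(buffer, 0, read);
            return output.ToArray();
        }
    }
}
```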


Unity3D Task Runner Update: Pause and Resume Runner

Sooner or later I will implement a multithreaded job scheduler for my TaskRunner library, but lately I spent a few minutes implementing an idea I had had in mind for a long time.

Being pretty sure that the core functionality of StartCoroutine is to iterate over an IEnumerator routine (although it actually does more), I decided to test the theory by doing the same inside a FixedUpdate function. I chose FixedUpdate over Update just because I am not entirely sure whether the polling frequency matters.

The main benefit of this approach is being able to pause and resume the task sequencer, a feature that I recently needed for the project I am working on.
The new RunManaged function supports enumerable routines and WWW, but does not support YieldInstruction yet.
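A Unity-free sketch of the pause/resume idea (illustrative names, not the library's actual API): the runner advances its routine one step per tick, exactly as a FixedUpdate-driven iteration would, and simply skips the step while paused.

```csharp
using System.Collections;

public class PausableRunner
{
    readonly IEnumerator _routine;

    public bool Paused { get; set; }
    public bool Finished { get; private set; }

    public PausableRunner(IEnumerator routine) { _routine = routine; }

    // Call once per tick (e.g. from FixedUpdate).
    // Does nothing while paused or once the routine has completed.
    public void Tick()
    {
        if (Paused || Finished) return;
        if (!_routine.MoveNext()) Finished = true;
    }
}
```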

The updated  code is available here: http://sebas77.github.com/TaskRunner/

Edit: a needed overhaul of the logic made me get rid of the Update/FixedUpdate approach; the new version now lets you pause routines managed through StartCoroutine. Simpler and more effective.