How to perfectly synchronize workspaces with older versions of Perforce

The latest versions of Perforce finally support `p4 clean`, a useful option that has been underestimated for far too long. Using this command it is possible to delete all the local files that are not present in the depot and to sync all the files that have been modified locally. To be more precise:

  1. Files present in the workspace, but missing from the depot are deleted from the workspace.
  2. Files present in the depot, but missing from your workspace, are added back to the workspace at the last synced revision.
  3. Files modified in your workspace that have not been checked in are restored to the last version synced from the depot.

This is very useful when it's time to use Perforce on a build machine, since it is of fundamental importance to be sure that the workspace exactly matches the depot.

Those who, like me, are forced to use an older version, though, are destined to swear ad infinitum. However, sick of the problem, I did some research and found some articles with very good solutions. After putting them together, I wanted to share them with you.

The first part checks for all the files that are different in the workspace and, thanks to the piping facility, force syncs them. I found out that it is safer to call the commands separately rather than using the options -sd and -se on the same diff.
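A sketch of the idea, using the standard `p4 diff` flags (`-se` lists unopened files that differ from the depot, `-sd` lists unopened files missing from the workspace) and piping the result back into a forced sync; `//depot/...` stands in for your actual depot path:

```shell
# Force-sync unopened files whose content differs from their depot revision:
p4 diff -se //depot/... | p4 -x - sync -f

# Force-sync unopened files that are missing from the workspace:
p4 diff -sd //depot/... | p4 -x - sync -f
```

`p4 -x -` tells the client to read its file arguments from standard input, which is what makes the piping trick work.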

The second part uses a batch script to delete all the files present in the workspace but not in the depot, exploiting the reconcile command.
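A sketch of how the batch can work, written here as a shell script for readability; the exact output parsing is an assumption and may need adjusting to your p4 version (it also breaks on paths containing spaces):

```shell
# 'p4 reconcile -n -a' previews (-n) the files that would be opened
# for add (-a): i.e. files present in the workspace but not in the depot.
# Strip the "#rev - opened for add" suffix, map each depot path to a
# local path with 'p4 where', and delete the local file.
p4 reconcile -n -a //depot/... | sed 's/#[0-9]* - opened for add.*//' |
while read -r depotFile; do
  # the third field of 'p4 where' output is the local filesystem path
  localFile=$(p4 where "$depotFile" | awk '{print $3}')
  rm -f "$localFile"
done
```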

As an extra bonus, I add the following one:
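This is the classic combination of `p4 diff -sr`, which lists opened files that are identical to their depot revision, with a piped revert:

```shell
# Revert opened files whose content is identical to the depot revision:
p4 diff -sr | p4 -x - revert
```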

I don’t understand why the revert-unchanged-files command often doesn’t actually revert identical files. This command will succeed where the standard one fails.

The truth behind Inversion of Control – Part IV – Dependency Inversion Principle

The Dependency Inversion Principle is part of the SOLID principles. If you want a formal definition of DIP, please read the articles written by Martin[1] and Schuchert[2].

We explained that, in order to invert the control of the flow, our specialized code must never directly call methods of more abstracted classes. The methods of the classes of our framework are not explicitly used; it is the framework that controls the flow of our less abstracted code. This is also called (sarcastically) the Hollywood Principle, stated as “don’t call us, we’ll call you.”

Although the framework code must take control of the less abstracted code, it would not make any sense to couple our framework with less abstracted implementations. A generic framework doesn’t have the faintest idea of what our game needs to do and, therefore, cannot understand anything declared outside its scope.

Hence, the only way our framework can use objects defined in the game layer is through interfaces, but this is not enough. The framework cannot know interfaces defined in a less abstracted layer of our application; that would not make any sense either.

The Dependency Inversion Principle introduces a new rule: our less abstracted objects must implement interfaces declared in the more abstracted layers. In other words, the framework layer defines the interfaces that the game entities must implement.

In a practical example, RendererSystem handles a list of IRendering “Nodes”. IRendering is an interface that declares, as properties, all the components needed to render the Entities, such as GetWorldMatrix, GetMaterial and so on. Both the RendererSystem class and the IRendering interface are declared inside the framework layer. Our specialised code needs to implement IRendering in order to be usable by the framework.
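As an illustrative sketch (the exact signatures, the stub types and the TankRenderingNode class are my assumptions, not code from any real framework):

```csharp
using System.Collections.Generic;

// Stand-ins for the framework's math and material types.
public struct Matrix4x4 { }
public class Material { }

// Framework layer: both the system and the interface it consumes
// are declared here.
public interface IRendering
{
    Matrix4x4 GetWorldMatrix { get; }
    Material GetMaterial { get; }
}

public class RendererSystem
{
    readonly List<IRendering> _nodes = new List<IRendering>();

    public void Add(IRendering node) { _nodes.Add(node); }

    public void RenderAll()
    {
        // The framework controls the flow: it decides when each node
        // is rendered, using only the abstraction it declared itself.
        foreach (var node in _nodes)
            Draw(node.GetWorldMatrix, node.GetMaterial);
    }

    void Draw(Matrix4x4 world, Material material) { /* submit to the GPU */ }
}

// Game layer: the specialised code implements the framework interface.
public class TankRenderingNode : IRendering
{
    public Matrix4x4 GetWorldMatrix { get { return new Matrix4x4(); } }
    public Material GetMaterial { get { return new Material(); } }
}
```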


Designing layered code

So far I have used the word “framework” to identify the most abstracted code and “game” to identify the least abstracted code. However, “framework” and “game” don’t mean much, and limiting our layers to just “the game layer” and “the framework layer” would be a mistake. Let’s say we have systems that handle very generic problems that can be found in every game, like the rendering of the entities, and we want to enclose this layer in a namespace. We have defined a layer that can even be compiled into a DLL and shipped with any game.

Now, let’s say we have to implement logic that is closer to the game domain. For example, we want to create a HealthSystem that handles the health of the game entities that have health. Is HealthSystem part of a very generic framework? Surely not. However, while HealthSystem will handle the common logic of the IHaveHealth entities, not all the game entities will have the same behaviours; hence HealthSystem is more abstracted than the more specialized behaviour implementations. While this abstraction alone probably wouldn’t justify the creation of another framework, I believe that thinking in terms of layered code helps design better systems and nodes.
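A sketch of what this intermediate layer could look like (all signatures here are my assumption):

```csharp
using System;
using System.Collections.Generic;

// Game-generic layer: more abstracted than the specialised entity
// behaviours, less abstracted than a fully generic framework.
public interface IHaveHealth
{
    int Health { get; set; }
}

public class HealthSystem
{
    readonly List<IHaveHealth> _entities = new List<IHaveHealth>();

    public void Add(IHaveHealth entity) { _entities.Add(entity); }

    // Common logic shared by every entity with health. Specialised
    // behaviours (shields, regeneration...) belong to other systems
    // in less abstracted layers.
    public void ApplyDamage(IHaveHealth entity, int damage)
    {
        entity.Health = Math.Max(0, entity.Health - damage);
    }
}
```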


Putting ECS, IoC and DIP all together

As we have seen, the flow is not inverted when a top-down design approach is used to break down the problem, that is, when the specialized behaviours of the entities are modeled before the generic ones, or when the systems are designed as a result of the specialized entity problems.

In my vision of Inversion of Control, the solutions need to be broken down using a bottom-up approach. We should think of the problems starting from the most abstracted classes. What are the common behaviours of the game entities? What are the most abstracted systems we should write? What once would have been solved by specializing classes through inheritance should now be solved by layering our systems within different levels of code abstraction and declaring the relative interfaces to be used by the less abstracted code.

I believe that in this way we could benefit from the following:

  • We will be sure that our systems have just one responsibility, modeling just one behavior
  • We will basically never break the Open/Closed principle, since new behaviors mean creating new systems
  • We will inject far fewer dependencies, avoiding the use of an IoC container as a Singleton alternative
  • It will be simpler to write reusable code
  • We could potentially achieve real encapsulation

In the next article I will explain how I would put all these concepts together in practice.

References:

  1. http://www.objectmentor.com/resources/articles/dip.pdf
  2. http://martinfowler.com/articles/dipInTheWild.html

The truth behind Inversion of Control – Part III – Entity Component Systems

In the previous article I explained what the Inversion of Control principle is, then I introduced the concept of Inversion of Flow Control. In this article I will illustrate how to apply it properly, even without using an IoC container. In order to do so, I will talk about Entity Component Systems. While they apparently have nothing to do with Inversion of Control, I found them to be one of the best ways to apply the principle.

Once upon a time, there was the concept of the game engine. A game engine was a game-specialised framework that was supposed to run any game. A game engine used to have some common classes that were found, more or less, in all game engines, like the Render class.

Every time a new object with a Renderer was created, the Renderer of the object was added to a list of renderers managed by the Render class. The same was true, for example, for other classes like the Collision class, the Culling class and so on. The Engine dictated when to execute the culling, when to execute the collisions and when to execute the rendering. The objects at the lower levels of abstraction didn’t know when they were going to be rendered or culled; they only assumed that, at a given point, it was going to happen.

The game engine was taking control of the flow, resulting in the first form of Inversion of Flow Control. There is no difference between what I just explained and what the Unity engine does. Unity decides when it is time to call Awake, Start, Update and so on.

The Unity framework is capable of achieving both Inversion of Creation Control and Inversion of Flow Control. MonoBehaviour instances cannot be directly created by the user: whether the MonoBehaviour is already present in the scene or created dynamically, it must be the framework that creates it for us. The Inversion of Flow Control is instead achieved through the adoption of the Template Pattern: our MonoBehaviour classes must follow a specific template in order to be usable by the framework.

However, Unity adopts an old concept of the Entity Component framework. This concept has the huge limitation of not letting us create our own framework or extend the existing one. It also has the limitation of not enabling us to inject dependencies into the components, which leads to all the problems I mentioned in the past.

Let’s have a look at what a modern Entity Component System is instead:

Modern Entity Component Systems

A more advanced way of managing entities was introduced in 2007 with the modern implementations of the Entity Component System.

In 2007, the team working on Operation Flashpoint: Dragon Rising experimented with ECS designs, including ones inspired by Bilas/Dungeon Siege, and Adam Martin later wrote a detailed account of ECS design[1], including definitions of core terminology and concepts. In particular, Martin’s work popularized the ideas of “Systems” as a first-class element, “Entities as ID’s”, “Components as raw Data”, and “Code stored in Systems, not in Components or Entities”.[2]

ECS design uses Inversion of Flow Control in its purest form. ECS is also a magnificent way to possibly never break the Open/Closed principle when it is time to add new behaviours.

The Open/Closed principle, which is also part of the SOLID principles, says:

software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification

OCP is the holy grail of the perfect code design. If well adopted, we should never need to go back and add functions to previously created classes in order to add new behaviours.

The design works in this way:

  • Entities are basically just IDs; sometimes the concept of Entity doesn’t even exist anymore.
  • Components are ValueObjects[3]. They are never bare strings or integers, but always objects that wrap single values (including matrices and other data structures) and that can be shared between Systems. Components do not have any code to manage logic.
  • Systems are the classes where all the logic lies. Systems can directly access lists of components and execute all the logic your game and engine need.
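A minimal sketch of these three points (names and structure are mine, not taken from any specific ECS framework):

```csharp
using System.Collections.Generic;

// Components are pure data wrapped in objects; no logic inside.
public sealed class PositionComponent { public float X, Y; }
public sealed class VelocityComponent { public float X, Y; }

// Systems hold all the logic; entities are just the int IDs used
// as keys to look up their components.
public class MovementSystem
{
    readonly Dictionary<int, PositionComponent> _positions;
    readonly Dictionary<int, VelocityComponent> _velocities;

    public MovementSystem(Dictionary<int, PositionComponent> positions,
                          Dictionary<int, VelocityComponent> velocities)
    {
        _positions = positions;
        _velocities = velocities;
    }

    public void Update(float deltaTime)
    {
        // Adding a new behaviour (gravity, friction...) means adding
        // a new system, not modifying this one: OCP is preserved.
        foreach (var pair in _velocities)
        {
            var position = _positions[pair.Key];
            position.X += pair.Value.X * deltaTime;
            position.Y += pair.Value.Y * deltaTime;
        }
    }
}
```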

The really powerful aspect of this design is that the Systems must respect the Single Responsibility Principle. All the behaviours of our in-game entities are modelled inside Systems, and every single behaviour will have a single System, well defined in terms of domain, to manage it.

This is how we can apply the Open/Closed principle without ever breaking it: every time we need to introduce a new behaviour, we are forced to create a new System that simply uses components as the data to execute it.

Systems are implicitly Mediators as well, which is why Systems are also great for solving communication between entities and other systems (through the use of the components). However, Systems are not able to cope with all the possible communication problems just through the use of components; that’s why Systems, which should be instantiated in the Composition Root, can have other dependencies injected.

Also, pay attention: I said that Systems model all the logic of your framework AND game. When the ECS design is used, there won’t be any difference anymore between framework logic (like RenderingSystem, PhysicEngine and so on) and game logic (like AliensAI, EnemyCollisionEngine and so on) in terms of implementation, but there will still be a sharp difference in terms of layerization: framework and game systems will still lie in different packages of the application.

This introduces another very important concept: the layerization of our game application. Using just two layers of abstraction is not enough for a complex game. The framework layer and the game layer alone are not enough; we need to make our game layer more granular.

Before finally showing the benefits of this design approach, I will explain the concept of the Dependency Inversion Principle in the next article of this series.

[1]http://t-machine.org/index.php/2007/09/03/entity-systems-are-the-future-of-mmog-development-part-1/

[2]http://en.wikipedia.org/wiki/Entity_component_system

[3]http://c2.com/cgi/wiki?ValueObject

[4]http://www.richardlord.net/blog/what-is-an-entity-framework

The truth behind Inversion of Control – Part II – Inversion of Control

Note: this article assumes you have already read my previous articles on IoC containers and Part I.

Inversion of Control is another concept that turns out to be very simple once it has been fully processed, or rather, “absorbed”: absorbed as in going beyond being understood and becoming part of one’s own forma mentis. However, make no mistake: Inversion of Control is not the same as an Inversion of Control container (which is not a principle, but just a framework to simplify Dependency Injection)[1]. “Inversion of Control container” is a confusing name in this sense[2], which is why many use the name Dependency Injection library instead.

While using an IoC container is very simple, being able to invert the control of the code is another matter. The process of adaptation is not straightforward, since the entire code paradigm has to change. In order to explain this paradigm, I need to introduce the concept of code abstraction. The following definitions will be explored in more detail in the next articles, so come back here if they are not grasped immediately.

Inversion of Control cannot really be applied successfully without designing the application with multiple layers of abstraction. The higher the abstraction, the more general the scope of the code. For example, a general framework belongs to the highest levels of abstraction, since it could be used by any type of application. A class that manages the health of the enemies in a game belongs to a lower level of abstraction.

If we think of our code structure in terms of layers of abstraction, Inversion of Control is all about giving the control to the more abstracted classes instead of the less abstracted ones. Of course you would ask: control of what? The idea is that the general classes should control both the creation and the flow of the more specialised code.

How can the object creation be inverted? It’s simple: code that follows the Inversion of Creation Control rules never uses the new keyword explicitly to create dependencies. All the dependencies are always injected, never created directly.

If we apply this reasoning to the injection waterfall discussed in the first part of this article series, it is easy to see that ALL the dependencies, and therefore all the objects, must be created and initially passed from the Composition Root. The Composition Root effectively becomes the only place where all the starting relations between objects are composed.

If you wonder how to create dynamic dependencies, like objects that must be spawned at run-time, you have asked yourself a good question. These objects are always created by factories, and the factories themselves are always created and passed as dependencies from the Composition Root.

Simply put, the new operator should be used only in the Composition Root or inside factories. An IoC container hides this process, creating and passing all the dependencies automatically instead of forcing the user to pass them through constructors or setters. The framework-like IoC container code takes control of the creation of all the dependencies. Of course, dynamic allocation is still freely used to allocate data structures, but data structures are not dependencies.

Why is inverting the creation control important? Mainly for the following three reasons:

  1. The code becomes independent from the constructor implementation. A dependency can, therefore, be passed by interface, effectively removing all the coupling between the implementations of the classes.
  2. The injected object comes already created, with its own dependencies resolved. If the code had to create the instance of the object itself, those dependencies would have to be known as well.
  3. The flow of the code can change according to the context. Without changing the code, it is possible to change its behaviour just by passing different implementations of the same interface.

The first point is fundamental to being able to mock implementations when unit tests are written, but even if unit tests are not part of your development process, the third point can lead to cleaner code when it’s time to implement different code paths.

Can Inversion of Creation Control be achieved without using an IoC container? Absolutely. Let’s see how, simply using manual dependency injection:
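The original listing is not reproduced here; the following is my own compact reconstruction based on the description in this article (class names such as Level1Spawner and method shapes are my assumptions), so take it as an approximation rather than the exact original code:

```csharp
using System;
using System.Collections.Generic;

public interface IEnemy { void Attack(Player player); }

public class Player
{
    public int Health = 100;
    public bool IsAlive { get { return Health > 0; } }
    public void Damage(int amount) { Health -= amount; }
}

// EnemyA and EnemyB depend on Random, injected by constructor.
public class EnemyA : IEnemy
{
    readonly Random _random;
    public EnemyA(Random random) { _random = random; }
    public void Attack(Player player) { player.Damage(_random.Next(1, 5)); }
}

public class EnemyB : IEnemy
{
    readonly Random _random;
    public EnemyB(Random random) { _random = random; }
    public void Attack(Player player) { player.Damage(_random.Next(3, 10)); }
}

// The factory encapsulates the new operator; Random is injected by setter.
public class EnemyFactory
{
    Random _random;
    public void SetRandom(Random random) { _random = random; }
    public IEnemy CreateA() { return new EnemyA(_random); }
    public IEnemy CreateB() { return new EnemyB(_random); }
}

public interface IEnemySpawner { List<IEnemy> Spawn(); }

// Each spawner decides which enemies are spawned, not how they are built.
public class Level1Spawner : IEnemySpawner
{
    readonly EnemyFactory _factory;
    public Level1Spawner(EnemyFactory factory) { _factory = factory; }
    public List<IEnemy> Spawn()
    { return new List<IEnemy> { _factory.CreateA(), _factory.CreateA() }; }
}

public class Level2Spawner : IEnemySpawner
{
    readonly EnemyFactory _factory;
    public Level2Spawner(EnemyFactory factory) { _factory = factory; }
    public List<IEnemy> Spawn()
    { return new List<IEnemy> { _factory.CreateA(), _factory.CreateB(), _factory.CreateB() }; }
}

public class Level
{
    readonly string _name;          // the name is data, not a dependency
    readonly IEnemySpawner _spawner;
    readonly Player _player;

    public Level(string name, IEnemySpawner spawner, Player player)
    { _name = name; _spawner = spawner; _player = player; }

    // Template-style hook driven by LevelManager.
    public void Play()
    {
        foreach (var enemy in _spawner.Spawn())
            if (_player.IsAlive) enemy.Attack(_player);
        Console.WriteLine(_name + " finished, player health: " + _player.Health);
    }
}

public class LevelManager
{
    readonly List<Level> _levels = new List<Level>();
    public void Add(Level level) { _levels.Add(level); }
    public void PlayAll() { foreach (var level in _levels) level.Play(); }
}

static class Program
{
    // Main is the Composition Root: the only place where dependencies
    // are created and initially injected.
    static void Main()
    {
        var random = new Random();
        var player = new Player();
        var factory = new EnemyFactory();
        factory.SetRandom(random);

        var manager = new LevelManager();
        manager.Add(new Level("level1", new Level1Spawner(factory), player));
        manager.Add(new Level("level2", new Level2Spawner(factory), player));
        manager.PlayAll();
    }
}
```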

While I am not really good with examples (it’s not simple to find a compact one that is also meaningful), I think the above example includes all the discussed points.

Main is our simple Composition Root. It’s where all the dependencies are created and initially injected. If you try to run this code, it will actually work: it will run a dumb, non-interactive simulation of a Player fighting Enemies.

All the game logic is encapsulated inside the Level class. Since the Level class uses the Template Pattern to implement the functions needed by the LevelManager class to manage a Level, it is possible to extend the logic of the game by creating new Level classes. However, adding Level instances to the LevelManager is not Dependency Injection, so it is irrelevant to our exercise (LevelManager doesn’t strictly need Level instances injected to work; the class is still functional even without any Level object added).

Each Level needs two dependencies: an implementation of an EnemySpawner and the Player object. Note that the level name is not actually a dependency; a dependency is always an instance of a class implementing an interface used by another object.

level1 and level2 are different because of the number and type of enemies created: level1 contains only two enemies of type A, while level2 contains one enemy of type A and two enemies of type B. Type B can be more powerful than type A when it inflicts damage on the Player. However, the Level implementation doesn’t actually change: the different behaviour is due solely to the different implementation of the IEnemySpawner interface passed as a parameter. Injecting two different EnemySpawner objects changes the level gameplay without changing the Level code.

EnemySpawner doesn’t build enemies directly, because that is not its responsibility. The EnemySpawner just decides which enemies are spawned and how, but doesn’t need to be aware of what an enemy needs in order to be created.

As you can see, both EnemyA and EnemyB depend on the implementation of the Random class to work, but EnemySpawner doesn’t need to know about this dependency at all. Therefore we can use a factory both to encapsulate the new operator and to pass the Random dependency directly from the Composition Root.

My explanation is probably more complicated than the example itself, where it’s clear that all the dependencies are created and passed through constructors only inside the main function. The only exception is the EnemyFactory, which has the Random implementation injected by setter.

In this example I haven’t used an Inversion of Control container, and yet it checks all three points I listed before: the context takes the responsibility of creating dependencies away from the other objects, dependencies can be passed by interface, and the flow of the code changes according to which implementation has been injected.

So the questions I have been asking myself lately are: do we really need an IoC container to implement Inversion of Creation and Flow Control? Are the side effects of using an IoC container (because there are side effects, as I’ll show later in my articles) less important than the benefits of such a tool? Searching for an answer to these questions is what led me to start writing this series.

I can give a first answer though: manual Dependency Injection is impossible to achieve in Unity. As I have already explained at length in my past articles, due to the nature of the Unity framework, dependencies can be injected only through the use of singletons or through reflection. C#’s reflection capabilities are what actually enables mine and other IoC containers to inject dependencies in an application made with Unity.

This answer is already enough to say that, yes, we do need an IoC container. But are there other possible alternatives not presented yet? Yes, there are, but this is not the time to discuss them.

For the time being, keep in mind that if the code is designed without really knowing what Inversion of Control is, an IoC container merely becomes a tool to simplify the tedious work of injecting dependencies; a tool that is very prone to being abused.

So far I have talked mainly about Inversion of Creation Control, but I also mentioned Inversion of Flow Control, and I need to give a first explanation of it before concluding this article.

What is Inversion of Flow Control? Quoting Wikipedia:

“Inversion of control (IoC) describes a design in which custom-written portions of a computer program receive the flow of control from a generic, reusable library. A software architecture with this design inverts control as compared to traditional procedural programming: in traditional programming, the custom code that expresses the purpose of the program calls into reusable libraries to take care of generic tasks, but with inversion of control, it is the reusable code that calls into the custom, or task-specific, code.”

In this sense, an IoC container can be used to implement Inversion of Flow Control when it’s time to “plug in” implementations from the lower levels of abstraction into the higher levels of abstraction without breaking the Dependency Inversion Principle. Inversion of Flow Control is even more important than Inversion of Creation Control, and I will explore the reasons in detail in my next article.

[1] http://www.sebaslab.com/ioc-container-for-unity3d-part-2/

[2] http://martinfowler.com/articles/injection.html#InversionOfControl (but read all of it)

The truth behind Inversion of Control – Part I – Dependency Injection

Note: this article series assumes you already read my previous articles on IoC containers.

There is an evil truth behind the concept of the Inversion of Control container: an unspoken code tragedy taking place every day while passing unnoticed. Firm in my intention to stop this shame, I decided to stand up and write this series of articles, which will describe the problem, the principles involved and the possible solutions.

I started to notice the symptoms of this “blasphemy” (against the code gods :) ) quite a while ago, but I couldn’t pin down a possible solution and, most of all, its cause. Nevertheless the problem was pretty clear: the IoC container was being used as an alternative to the Singleton pattern to inject dependencies. With the code growing and the project evolving, many classes started to take the form of a blob of functions, with a common pitfall: dependencies, dependencies everywhere.
What it means, and what is wrong with using an IoC container as a mere substitute for a Singleton, is something that I am going to describe as best I can, but first I need to take several steps back and start with an explanation of what Dependency Injection is:


What Dependency Injection is

Dependency Injection isn’t anything fancy. A dependency is just an interface that another class’s implementation depends upon. Usually dependencies are resolved in two ways: injecting them or passing them via Singletons. Singletons break encapsulation in the sense that, like global variables, they can be used without a scope. Singletons awkwardly hide your dependencies: there is nothing in the class interface showing that the dependency is used internally. Singletons strongly couple your entities, eventually resulting in long and painful code refactoring. For these reasons we use injection to resolve dependencies.

Now, think for a moment about what happens if there is no IoC container in place. How would we inject our dependencies? For example, by passing them as parameters in a constructor, as shown in the example. Would you pass 10 parameters by constructor? I surely wouldn’t, if only for how painful and inconvenient it is. The same reasoning applies when an IoC container is used: just because it’s more convenient doesn’t mean you should abuse it. You would just be making a mess with object coupling again.
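As a trivial sketch of injection by constructor (the class and interface names here are purely illustrative):

```csharp
public interface IAmmoInventory { bool Consume(int rounds); }
public interface IAudioPlayer { void Play(string clip); }

public class WeaponSystem
{
    readonly IAmmoInventory _ammo;
    readonly IAudioPlayer _audio;

    // The dependencies are declared explicitly in the constructor:
    // nothing is hidden behind a Singleton, and the class signature
    // tells the reader exactly what this object needs. If the list
    // grows to 10 parameters, that pain is a design smell, with or
    // without an IoC container.
    public WeaponSystem(IAmmoInventory ammo, IAudioPlayer audio)
    {
        _ammo = ammo;
        _audio = audio;
    }

    public void Fire()
    {
        if (_ammo.Consume(1))
            _audio.Play("shot");
    }
}
```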

When dependencies are injected into an instance, where do these objects come from? If the dependencies are injected by constructor, they obviously come from the scope where the object that needs them has been created. In turn, the object that is injecting the dependencies into the new object could itself need dependencies injected, which are therefore passed by another object in the parent scope. This chain of passed dependencies creates a waterfall of injections, and the relationship between these objects is called the Object Graph[1].

But where does this waterfall start? It starts from the place where the initial objects are created. This place is called the Composition Root: “root” because it is where the context is initialised, “composition” because it is where the dependencies start to be injected and, therefore, where the initial relations are composed.

Now you can see what the real problem of the Unity framework is: the absence of a Composition Root. Unity doesn’t have a “main” class where the relations between objects can be composed. This is why the only way to create relationships with the bare Unity framework is to use Singletons or static classes.

Why are relationships between objects created? Mainly to let them communicate with each other. All forms of communication involve dependency injection. The only pattern that allows communication without dependency injection is the one called Event Bus[2] in the Java environment. The Event Bus allows communication through events held by a Singleton, hence the Event Bus is one of the many anti-patterns out there.
Note that you could think of creating something similar to an Event Bus without using a singleton (therefore injecting it). That’s an example of what I define as using injection as a mere substitute for a Singleton.

Object Communication and Dependency Injection

Communication may or may not couple objects, but in all cases it involves injection. There are several ways to let objects communicate:

  • Interface injection: usually A is injected into B, and B is coupled with A [e.g.: inside a method of B, A is called: B.something() { A.something(); }]
  • Events: usually B is injected into A, and A is coupled with B [e.g.: inside A, B is injected to expose the event: B.onSomething += A.Something]
  • Commands: B and A are uncoupled; B can call a command that calls a method of A. Commands are great to encapsulate business logic that could potentially change often. A Command Factory is usually injected into B.
  • Mediators: usually B and A do not know each other, but know their mediator. B and A pass themselves to the mediator, and the mediator wires the communication between them (i.e.: through events or interfaces). Alternatively, B and A are passed to the mediator from outside B and A themselves, totally removing the dependency on the Mediator itself. This is my favourite flavour and the closest to dependency-less possible.
  • Various other patterns like: Observers, Event Queues[3] and so on.
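A sketch of the mediator flavour described above, where the two objects are wired from outside and never learn about each other (all names are illustrative):

```csharp
using System;

public class EnemyModel
{
    public event Action<int> OnDamaged = delegate { };
    public void Hit(int amount) { OnDamaged(amount); }
}

public class HudView
{
    public void ShowDamage(int amount)
    {
        Console.WriteLine("damage: " + amount);
    }
}

// The mediator wires the communication: it subscribes the view to the
// model's event. EnemyModel and HudView stay completely uncoupled, and
// neither of them even knows the mediator exists, since both are passed
// in from outside (e.g. from the Composition Root).
public class DamageMediator
{
    public void Wire(EnemyModel model, HudView view)
    {
        model.OnDamaged += view.ShowDamage;
    }
}
```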

How do we pick the correct one? If we don’t have guidelines, it looks like one is as good as another. That’s why, sometimes, we end up using one of them at random. Remember that the first two patterns are the worst, because they couple interfaces that could change over time.

We can anyway introduce the first sound practice of our code design guideline: our solution must minimize the number of dependencies.

Of course, the second sound practice is about the Single Responsibility Principle[4] (and the Interface Segregation Principle[5]). SRP is one of the five SOLID principles (ISP is another one), but the only one that must actually be taken as a rule: your class MUST have one responsibility only. Communicating could be considered a responsibility, therefore it’s better to delegate it.

How we are going to achieve SRP and solve the dependencies blob problem is something I am going to explain in the next articles of this series.

Bibliography

[1] IoC Container for Unity3D, part 2 http://www.sebaslab.com/ioc-container-for-unity3d-part-2/

[2] Event Bus http://doc.akka.io/docs/akka/snapshot/java/event-bus.html

[3] Event Queue http://gameprogrammingpatterns.com/event-queue.html

[4] Single Responsibility Principle http://en.wikipedia.org/wiki/Single_responsibility_principle

[5] Interface Segregation Principle http://en.wikipedia.org/wiki/Interface_segregation_principle

Svelto Inversion of Control Container

It’s finally time to share publicly the latest version of my Inversion of Control container, which I will keep updated from now on. This new version is the IoC container currently used by Freejam for the Unity3D game Robocraft (http://www.robocraftgame.com).

After all this time, I have had the occasion to analyse in depth the benefits and disadvantages of using an IoC container extensively on a big project with a medium-sized team of programmers. I am preparing an exhaustive article on the subject, but I am not sure when I will be able to publish it, so stay tuned.

The new IoC container is structurally similar to the old one, but has several major differences. In order to use it, a specialised UnityRoot MonoBehaviour must still be created. The class that implements ICompositionRoot is the Composition Root of the project. The game object holding the UnityRoot MonoBehaviour is the game context of the scene.

All the dependencies bound in the Composition Root will be injected during Unity’s Awake period. Dependencies cannot be used until the OnDependenciesInjected function or the Start function (in the case of dependencies injected inside MonoBehaviours) is called. Be careful though: OnDependenciesInjected is called while the injections are still happening; however, injected dependencies are guaranteed to have their own dependencies injected in turn. Dependencies are not guaranteed to be injected during the Awake period.

Other changes include:

  • MonoBehaviours that are statically created in the scene no longer need to explicitly ask for their dependencies to be filled; they will be injected automatically.
  • MonoBehaviours cannot be injected as dependencies anymore (that was a bad idea).
  • Dynamically created MonoBehaviours always have their dependencies injected through factories (MonoBehaviourFactory and GameObjectFactory are part of the framework).
  • All the contracts are now always injected through factories, which simplifies the code and makes it more solid. It also highlights the importance of factories in this framework.
  • A type injection cache has been developed, so injecting dependencies of the same type is much faster than it used to be.
  • It’s now possible to create a new instance for each dependency injected if the MultiProvider factory is used explicitly.
  • Improved compile-time error detection: it’s no longer possible to bind contracts to wrong types. For this reason AsSingle() has been replaced by BindSelf().
  • Various improvements and bug fixes.

The new project can be found at this link: https://github.com/sebas77/Svelto-IoC

Note: if this is the first time you visit my blog, please don’t forget to read my related articles for a better overview of the topic.

What I have learned while developing a game launcher in Unity

This post can be considered partially outdated, as my mind has changed quite a bit since I originally wrote it.
First of all, I would no longer suggest writing a game launcher in Unity. Use .NET instead, or the latest version of Mono with Xamarin. The Mono library embedded with Unity 4 is too outdated, and its use results in several problems when using advanced HTTP functions. However, if you decide to use .NET, please be aware that your installer must be able to download and install .NET from the internet if it is not already installed on the machine. It is also possible to embed Mono, more or less like Unity does, but it is quite tricky under Windows.
We also changed our minds about the admin rights. Many of our users use Windows without admin rights, so we wanted our launcher to never ask for any.
I also think Unity has now fixed the checking of SSL certificates, but I am not 100% sure about it.
Finally, I would search around for libraries that can generate diff patches of binary files, since hashing and downloading files one by one is neither convenient nor efficient.

In case you didn’t know yet, it has already been a while since I co-founded a company called Freejam, in the UK, and started to work on a new game named Robocraft (http://robocraftgame.com). For this indie product we are extensively adopting the Lean Startup approach, even for the development cycles, so features come as they are actually requested by our early adopters.

The last feature our early adopters were eager to see was a proper game launcher. A game launcher is basically a must for every online game since, as you all know, it makes it possible to patch and update the game without forcing users to install it over and over.

I had never implemented a launcher before, so I was completely ignorant of some tricky issues that could arise and their related workarounds, which I now want to share with you to save you from wasting days trying to solve similar problems.

I started to develop the launcher in Unity just because it needed a graphical interface and I did not want to spend time learning/using new libraries for other development platforms (either C++ or pure C#). Size-wise, considering that Unity applications embed the Mono libraries and runtimes, the resulting compressed 6MB installer wasn’t too bad.

The graphical interface was easily developed with NGUI, and the information shown is a mix of RSS news taken directly from the game blog and images configurable by the artists through an external XML file.

The update process, instead, has been a little more convoluted, with a couple of unforeseen, tedious obstacles that made my life a bit miserable for a few days.

The update process is split into several predefined tasks:

  • check if the game is already running and ask the user to close it before the update can be launched
  • check if another launcher is running and ask to close the other one
  • check if a new version of the patcher is available and, in that case, force the patcher to update
  • check if a new game build is available
  • download the list of files built together with the new game build; this list contains the name of each file, its size and a hash used as a checksum
  • download an asymmetrically encrypted digital signature of the game
  • verify that the digital signature is valid, using the public key embedded in the client
  • verify that all the files on the server are ready to be downloaded
  • verify which files on the hard disk must be updated, using the file size and the generated hash (comparing the value with the hash from the previously downloaded file list)
  • if needed, download, uncompress and save the files (they are stored as gzip on the server)
  • delete obsolete files, if there are any
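The "verify which files must be updated" step above can be sketched as follows. The FileEntry type and the choice of MD5 are assumptions for illustration; the article does not specify which hash algorithm is used.

```csharp
// Sketch: decide whether a local file needs re-downloading by comparing
// its size and hash against the manifest entry downloaded from the CDN.
using System;
using System.IO;
using System.Security.Cryptography;

class FileEntry { public string Name; public long Size; public string Hash; }

static class Patcher
{
    public static bool NeedsUpdate(string localRoot, FileEntry entry)
    {
        string path = Path.Combine(localRoot, entry.Name);
        if (!File.Exists(path)) return true;

        // the size comparison is cheap, so it runs before hashing
        var info = new FileInfo(path);
        if (info.Length != entry.Size) return true;

        using (var md5 = MD5.Create())
        using (var stream = File.OpenRead(path))
        {
            string hash = BitConverter.ToString(md5.ComputeHash(stream))
                                      .Replace("-", "").ToLowerInvariant();
            return hash != entry.Hash;
        }
    }
}
```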

On our side, our Jenkins build machine handles two separate jobs (actually more, but I will keep it simple for the sake of this article): one to build the patcher, generate a new patcher version and deploy it to the CDN; the other to build the game, generate a new game version, generate the file list with hashes, create the digital signature using the private key that only our build machine knows, compress the files to upload and deploy everything to the CDN.

The whole development process has been long but, thanks to the .NET framework, relatively easy. Nevertheless, there are two specific features I have to go into detail about, since they are very important to know and not so intuitive.

The first one is the reason why I implemented asymmetrically encrypted digital signature verification. A launcher without this kind of protection is vulnerable to man-in-the-middle attacks, which can take the form of DNS spoofing. When a hacker successfully spoofs a DNS node, they simply create a deviation from the normal TCP/IP routing that the client cannot recognize. In this way, the client does not know that it is downloading files from unknown sources, and since the game includes executables as well, it could be relatively easy for a hacker to attack a specific pool of users and make them download malicious files.

This is one of the reasons why HTTPS was invented; however, HTTPS is effective against this attack only if the client can verify the certificate provided by the HTTPS server (http://en.wikipedia.org/wiki/Public_key_certificate). To my surprise, I found out that while Unity supports HTTPS connections, it does not verify the SSL certificates at all; therefore using HTTPS in Unity does not prevent man-in-the-middle attacks. Luckily, the implementation of a digital signature was already planned, so while I was disappointed by Unity’s behaviour, we were already prepared to face the issue.

Implementing a digital signature in C# and .NET is very simple, and a lot of code is already available around. Just look for RSACryptoServiceProvider on Google to learn more.
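As a rough illustration of the verification side, a sketch along these lines could sit in the launcher. The XML key format and the use of SHA1 as the hash are assumptions; only the public key ships with the client, never the private one.

```csharp
// Sketch: verify a detached RSA signature over the downloaded file list
// using the public key embedded in the client.
using System.Security.Cryptography;

static class SignatureCheck
{
    // data: the bytes being verified (e.g. the file list)
    // signature: the detached signature downloaded from the CDN
    // publicKeyXml: the RSA public key exported as XML, embedded in the launcher
    public static bool Verify(byte[] data, byte[] signature, string publicKeyXml)
    {
        using (var rsa = new RSACryptoServiceProvider())
        {
            rsa.FromXmlString(publicKeyXml); // loads the public part only
            return rsa.VerifyData(data, new SHA1CryptoServiceProvider(), signature);
        }
    }
}
```

If Verify returns false, the launcher should refuse the downloaded build entirely, since the content may come from a spoofed source.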

Once all this stuff was implemented, I thought I was finally done with the launcher, but a dark and vicious evil bugger was waiting for me around the corner: the UAC implementation of Windows Vista and later!

After I understood what the UAC implementation of Windows actually does, I realized why most online games suggest using the C:\Games folder to install the game instead of the standard C:\Program Files. The Program Files folder is seen by modern Windows operating systems (excluding XP) as a protected folder, and only administrators can write in it.

Our launcher is installed by Inno Setup, which asks the user to run it in administrative mode, so Inno Setup is able to write wherever the user wants the game to be installed. However, once it is installed, the problems start.

If the launcher is started directly from Inno Setup, it inherits the administrative rights from the installer and is therefore able to update the game folder under Program Files. However, once the launcher is started again by the user, it will not start as administrator but as a normal user, which changes the behaviour of the file writing.

This is when things start to get idiotic. If a normal user application tries to write inside a folder under Program Files, the writing of the file does not fail, as I initially expected. Instead, Windows creates a Virtual Store folder under C:\Users\<username>\AppData\Local\VirtualStore that virtualises the game folder. The result is that all the files the launcher tries to write under Program Files are actually stored in a specific, predefined folder in the Virtual Store.

Hence, the first lesson is: if your Unity application needs to write new files, never write them into the folder where the application has been installed; use the application data folder instead! However, this cannot be applied to a launcher, since the launcher must be able to update the game wherever the user decided to install it.

The first solution that came to my mind was to embed a manifest in the application to ask Windows to run the launcher in administrative mode. This is easier than it sounds, at least once I understood that a command line tool found inside the Windows SDK can do it. Once I followed the instructions, every time the launcher started, Windows UAC prompted the user with a message box asking whether to give the application administrative rights or not.

If the user authorizes the application, the launcher will be able to update the game; otherwise it will throw an error.

However, unluckily, this was not the final solution. The tricky part here is that the launcher must be able to launch the game as well, but a process launched by another process inherits its rights. This means that the game launched by the launcher would start as administrator, while if the user decides to start the game on their own, without using the launcher, it runs in normal mode (if the user is not an administrator or UAC is fully enabled).

Launching the game in administrative mode can be a bad idea for several reasons, but the most annoying one is that users are not used to authorizing a game every time it is launched, so we decided to get rid of this issue.

After some research, and after I tried all the possible solutions I could find on Google and Stack Overflow, I realized that the only working one is to use a bootstrapper to launch the launcher.

The bootstrapper must be a tiny application that runs in normal mode and must be able to launch the launcher as an administrator user. This is pretty straightforward, since .NET allows raising the application rights (but never allows downgrading them :( ). Once the launcher has done its dirty job, it closes itself and communicates to the bootstrapper that it is now time to launch the game. The bootstrapper is then able to launch the game as a normal user, because the bootstrapper itself was not started with elevated rights.
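The elevation trick described above can be sketched with the standard Process API. The file names and the simple wait-then-launch signalling are illustrative assumptions; the real bootstrapper would need a more robust way to learn whether the launcher succeeded.

```csharp
// Sketch of the bootstrapper: start the launcher elevated via the
// "runas" shell verb, wait for it, then start the game with the
// bootstrapper's own (non-elevated) rights.
using System.Diagnostics;

static class Bootstrapper
{
    static void Main()
    {
        var launcher = new ProcessStartInfo("Launcher.exe")
        {
            UseShellExecute = true,
            Verb = "runas" // triggers the UAC prompt and elevates the child
        };

        using (var p = Process.Start(launcher))
            p.WaitForExit(); // the launcher updates the game, then exits

        // the bootstrapper was never elevated, so the game process
        // inherits plain user rights, exactly as if the user started it
        Process.Start("Game.exe");
    }
}
```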

This solution sounds convoluted, but it is actually quite commonly adopted. Of course, I could not use Unity to create the bootstrapper, since it must be just a few hundred kilobytes. For this reason I downloaded Xamarin and Mono, and I have to say I was quite impressed: I was able to set up a project and run it in a few minutes, and the bootstrapper itself was developed in a few minutes as well. However, since the bootstrapper must not depend on the .NET framework (which otherwise must be installed on the machine), we were eventually forced to rewrite it as a very simple C++ application.

Hope all of this can help you one day!

What’s wrong with Unity SendMessage and BroadcastMessage and what to do about it

Unity 3D is a well designed tool, I can say that. It is clearly less painful to use than UDK, no matter what its limitations are. However, as I keep saying, the C# framework is still full of bad design choices, probably unfortunate inheritances from UnityScript (why don’t they abolish it and just switch to C#?).

An emblematic example of these bad choices is the communication system built around SendMessage and BroadcastMessage. If you use them, you should just stop already!

In fact, SendMessage and BroadcastMessage are so wrong mostly because they rely heavily on reflection to find the functions to call. This of course has an impact on performance, but performance is not the issue here; the issue is the quality of the resulting code.

What can be so bad about it? First (and foremost), using a string to identify a function is far worse than using a string to identify an event. Just think about it: what happens if someone in the team refactors the code where the called function is defined and decides to rename the function or delete it?

I’ll tell you what happens: the compiler will not warn the poor programmer of the mistake they are making and, even worse, the code will just run as if nothing happened. By the time the error is found, it could already be too late.

Even worse is the fact that the called function can be declared private, since the system uses reflection. You know what I usually do when I find a private function that is not used inside the declaring class? I just delete it, because the code is telling me: this function cannot be used outside this class, and I am not using it, so why keep useless code?

Ok, in C# I mostly use delegate events, so, to be honest, I basically do not use SendMessage; but I still find BroadcastMessage useful when it is time to implement GUI widget communication.
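For comparison, this is roughly what the delegate-event alternative to SendMessage looks like: the compiler, not a string, ties caller and listener together, so a rename or deletion is caught at compile time. The Health class and event name are illustrative, not from any real project.

```csharp
// Sketch: a C# event replacing SendMessage("OnDamaged", amount).
public class Health
{
    // listeners subscribe with health.OnDamaged += handler;
    // renaming the event breaks the build instead of failing silently
    public event System.Action<int> OnDamaged;

    public void ApplyDamage(int amount)
    {
        if (OnDamaged != null)
            OnDamaged(amount);
    }
}
```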

Since GUIs are usually implemented in a hierarchical way (i.e. if you use NGUI), being able to push a message down a hierarchy can have several advantages, since there is basically no need to know the target of the message. This is actually similar to the Chain of Responsibility pattern.

For this reason I decided to implement a little framework to send a message through a GameObject hierarchy, and it works like this:

If the root (or a parent) of the target that must be reached is known, you can use it to send a signal through the hierarchy in a top-down fashion. All the nodes of the hierarchy will be searched until a compatible “listener” is found. The code is pretty trivial and, as usual for me, relies on the implementation of interfaces.

CubeAlone is a MonoBehaviour present in a GameObject that is outside the hierarchy to search. It could have been inside as well, but I will reserve that case for the next example.

Through the SignalChain object, two events are sent to two different targets. I decided to do so to show you the flexibility of the interface. In fact, it is possible to identify events using any kind of object, from a string to a more complicated type that could hold parameters.

In the hierarchy of the example, which can be found at https://github.com/sebas77/SignalChain, there are two targets listening to the CubeAlone: the Sphere and the Cylinder.

In order to be recognized as listeners, these two MonoBehaviours must implement the IChainListener interface:
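A minimal sketch of what such a listener might look like follows; the exact interface signature in the repository may differ, and ChangeColorEvent is a hypothetical event type used here only for illustration.

```csharp
// Hypothetical sketch of the IChainListener contract and one listener.
using UnityEngine;

public interface IChainListener
{
    void Listen(object message);
}

public class Sphere : MonoBehaviour, IChainListener
{
    public void Listen(object message)
    {
        // events can be identified by any kind of object (see above)
        if (message is ChangeColorEvent)
            GetComponent<Renderer>().material.color = Color.red;
    }
}
```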

and from the code it is pretty clear how it works. Should I add something? If the leftmost cube is clicked, the Sphere and the Cylinder will change color.

Well, let’s now see the case where the dispatching object is already in the hierarchy. In this case we can dispatch events in the hierarchy without knowing the root. However, the root must be marked with the IChainRoot interface:

and the dispatching object can use the BubbleSignal object in this way:
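As a hedged sketch of the idea (BubbleSignal is named in the repository, but its constructor and dispatch method shown here are assumptions, and ChangeColorEvent is a hypothetical event type):

```csharp
// Hypothetical sketch: dispatch an event upward until the IChainRoot is
// found, then let it flow down to the compatible listeners.
using UnityEngine;

public class Capsule : MonoBehaviour
{
    void OnClick()
    {
        // walks up the transform hierarchy to the IChainRoot,
        // then the signal travels down the chain of listeners
        new BubbleSignal(transform).Dispatch(new ChangeColorEvent());
    }
}
```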

Try to click on the capsule now: the sphere will change color again, but this time it will turn blue!

TaskRunner library and the SingleTask class

Among the classes of my TaskRunner library there is one called SingleTask, which is a simple implementation of the IEnumerator interface and accepts an IEnumerator as a constructor parameter. I’d like to show you why I decided to introduce this class.

Sometimes it is necessary to run asynchronous (yet single-threaded) routines and wait for their execution to finish before letting anything else happen. For a long time I used events to solve these problems, but in C# and Unity it is often nicer to exploit the coroutine functionality.

An example could be this:
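The following sketch illustrates the two approaches compared below. SingleTask comes from the TaskRunner library; the surrounding routine, the counter and SomethingAsyncHappens are illustrative assumptions.

```csharp
// Sketch: plain yield versus wrapping the inner routine in a SingleTask.
using System.Collections;
using UnityEngine;

public class Example : MonoBehaviour
{
    int _counter = 0;

    IEnumerator SomethingAsyncHappens()
    {
        // simulate work spread over frames that eventually sets the counter
        for (int i = 0; i < 100; i++)
        {
            _counter = i + 1;
            yield return null;
        }
    }

    IEnumerator DoSomethingAsynchronously()
    {
        // first method: execution continues right after this yield,
        // without waiting for the inner routine to complete
        yield return SomethingAsyncHappens();

        // second method: execution resumes only once the wrapped
        // routine has fully completed
        yield return new SingleTask(SomethingAsyncHappens());

        Debug.Log(_counter);
    }
}
```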

The difference is subtle, yet crucial. If the first method is used, DoSomethingAsynchronously does not wait for the function SomethingAsyncHappens to complete, but returns right after the yield. Instead, with the use of the SingleTask class and the TaskRunner library, everything written after the yield will be executed only when the inner async routines are completed.

This means that the Debug.Log will output 0 in the first case and 100 in the second case.

How to compress and decompress binary streams in Unity

Recently I needed to compress a binary stream that was getting too big to be serialized over the internet. I could not find any way to compress files easily, so eventually I used a third-party library that works very well in Unity3D (web player included): SharpZipLib.

I suppose the following code could be handy for someone:
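A minimal sketch along these lines, using SharpZipLib's GZip streams (the ICSharpCode.SharpZipLib.GZip namespace), could look like this; error handling is omitted for brevity.

```csharp
// Sketch: compress and decompress a byte buffer with SharpZipLib.
using System.IO;
using ICSharpCode.SharpZipLib.GZip;

static class StreamCompression
{
    public static byte[] Compress(byte[] data)
    {
        using (var output = new MemoryStream())
        {
            // GZipOutputStream must be closed to flush the gzip footer
            using (var gzip = new GZipOutputStream(output))
                gzip.Write(data, 0, data.Length);
            return output.ToArray();
        }
    }

    public static byte[] Decompress(byte[] compressed)
    {
        using (var input = new GZipInputStream(new MemoryStream(compressed)))
        using (var output = new MemoryStream())
        {
            var buffer = new byte[4096];
            int read;
            while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                output.Write(buffer, 0, read);
            return output.ToArray();
        }
    }
}
```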