Svelto Inversion of Control Container

If it’s the first time you visit my blog, please don’t forget to read my main articles on the subject before continuing with this post:

It’s finally time to share the latest version of my Inversion of Control container, which I have named Svelto IoC and which I will keep updated from now on. This new version is the IoC container currently used by Freejam for the Unity3D game Robocraft (http://www.robocraftgame.com).

Being able to use the library in production allowed me to analyse in depth the benefits and drawbacks of using an IoC container extensively on a big project with a medium-sized team of programmers. I am preparing an exhaustive article on the subject, but I am not sure when I will be able to publish it, so stay tuned.

The new IoC container is structurally similar to the old one, but has several major differences. In order to use it, a specialised UnityRoot MonoBehaviour must still be created. The class that implements ICompositionRoot is the Composition Root of the project. The GameObject holding the UnityRoot MonoBehaviour is the game context of the scene.

All the dependencies bound in the Composition Root are injected during Unity’s Awake phase. Dependencies cannot be used until the OnDependenciesInjected function, or the Start function (in the case of dependencies injected into MonoBehaviours), is called. Be careful though: OnDependenciesInjected is called while the injection waterfall is still happening; however, injected dependencies are guaranteed to have their own dependencies injected in turn. Dependencies are not guaranteed to be injected during the Awake phase, therefore you shouldn’t use injected dependencies inside MonoBehaviour Awake calls.
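To make the timing rules above concrete, here is a minimal sketch (not taken from the repository): IScoreCounter and its event are hypothetical names, and the [IoC.Inject] property annotation is assumed to be the one the framework exposes.

```csharp
using UnityEngine;

// Hypothetical example: IScoreCounter and OnScoreChanged are invented names;
// only the injection timing shown here follows the rules described above.
public class ScoreHud : MonoBehaviour
{
    [IoC.Inject] public IScoreCounter scoreCounter { get; set; }

    void Awake()
    {
        // Do NOT touch scoreCounter here: injection is not guaranteed during Awake.
    }

    void Start()
    {
        // Safe: by the time Start runs, the dependency has been injected
        // and its own dependencies have been injected too.
        scoreCounter.OnScoreChanged += UpdateLabel;
    }

    void UpdateLabel(int score)
    {
        // update some UI text here
    }
}
```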

Other changes include:

  • MonoBehaviours created by Unity after the scene is loaded no longer need to ask explicitly for their dependencies to be filled; they are injected automatically.
  • MonoBehaviours cannot be injected as dependencies anymore (that was a bad idea).
  • Dynamically created MonoBehaviours always have their dependencies injected through factories (MonoBehaviourFactory and GameObjectFactory are part of the framework).
  • All the contracts are now injected through “providers”, which simplifies the code and makes it more solid. It also highlights the importance of providers in this framework.
  • A type injection cache has been added, therefore injecting dependencies of the same type is way faster than it used to be.
  • It’s now possible to create a new instance for each dependency injected, if the MultiProvider factory is used explicitly.
  • You can create your own provider for special cases.
  • Improved type safety: it is no longer possible to bind contracts to the wrong types. For this reason AsSingle() has been replaced by BindSelf() (see the sketch after this list).
  • Various improvements and bug fixes.
  • Dependencies can be injected as weak references automatically.
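As a rough illustration of the new binding style, here is a hypothetical snippet; the container type and the exact fluent syntax are my assumptions, so check the repository for the real signatures.

```csharp
// Hypothetical binding setup: method names are assumptions, but the intent
// matches the list above, i.e. contracts bound to implementations and
// concrete classes bound to themselves with BindSelf().
void SetupContainer(IoC.IContainer container)
{
    container.Bind<IScoreCounter>().To<ScoreCounterHolder>(); // contract -> implementation (syntax assumed)
    container.BindSelf<LevelManager>();                       // concrete type bound to itself
}
```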

What still needs to be done:

  • Improve the documentation and explain bad practices
  • Add the possibility to create hierarchical contexts and explain why they are necessary
  • Add the possibility to inject dependencies through constructors, in order to reduce the need to hold references.
  • Explain how to exploit custom providers.

The new project can be found at this link: https://github.com/sebas77/Svelto-IoC

 

What I have learned while developing a game launcher in Unity

This post can be considered partially outdated, as my opinion has changed quite a bit since I originally wrote it.
First of all, I would no longer suggest writing a game launcher in Unity. Use .NET or the latest version of Mono with Xamarin instead. The Mono library embedded in Unity 4 is too outdated and its use results in several problems with advanced HTTP functions. However, if you decide to use .NET, be aware that your installer must be able to download and install .NET from the internet if it is not already present on the machine. It is also possible to embed Mono, more or less like Unity does, but it is quite tricky under Windows.
We also changed our mind about admin rights. Many of our users run Windows without admin rights, so we wanted our launcher to never ask for any.
I also think Unity has now fixed the SSL certificate check, but I am not 100% sure about it.
Finally, I would look for libraries that can generate binary diff patches, since hashing and downloading files one by one is neither convenient nor efficient.

In case you didn’t know yet, it has already been a while since I co-founded a company in the UK called Freejam and started to work on a new game named Robocraft (http://robocraftgame.com). For this indie product we extensively adopt the Lean startup approach, even for the development cycles, so features are built as they are actually requested by our early adopters.

The last feature our early adopters were eager to see was a proper game launcher. A launcher is basically a must for every online game since, as you all know, it makes it possible to patch and update the game without forcing the user to reinstall it over and over.

I had never implemented a launcher before, so I was completely ignorant of some tricky issues that could arise and their respective workarounds, which I now want to share with you so you can avoid wasting days solving similar problems.

I started to develop the launcher in Unity just because it needed a graphical interface and I did not want to spend time learning/using new libraries for other development platforms (either C++ or pure C#). Size-wise, considering that Unity applications embed the Mono libraries and runtime, the resulting 6MB compressed installer wasn’t too bad.

The graphical interface was easily developed with NGUI, and the information shown is a mix of RSS news taken directly from the game blog and images configurable by the artists through an external XML file.

The update process, instead, was a little bit more convoluted, with a couple of unforeseen, tedious obstacles that made my life miserable for a few days.

The update process is split into several predefined tasks:

  • check if the game is already running and ask the user to close it before the update can be launched
  • check if another instance of the launcher is running and ask the user to close it
  • check if a new version of the patcher is available and, in that case, force the patcher to update itself
  • check if a new game build is available
  • download the file list built together with the new game build; this list contains the name, size and checksum hash of each file
  • download the digital signature of the game, generated with an asymmetric key
  • verify that the digital signature is valid using the public key embedded in the client
  • verify that all the files on the server are ready to be downloaded
  • verify which files on the hard disk must be updated, using the file size and a generated hash compared against the hash from the previously downloaded file list (see the sketch after this list)
  • download, decompress and save the files that need updating (they are stored as gzip on the server)
  • delete obsolete files if there are any
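As a reference for the file-verification step, here is a simplified sketch of the kind of check involved; ManifestEntry and its fields are hypothetical names for an entry of the downloaded file list, and MD5 stands in for whatever hash the build machine actually generates.

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

// Hypothetical shape of an entry of the downloaded file list.
class ManifestEntry { public string Path; public long Size; public string Hash; }

static class FileChecker
{
    // Returns true when the local file is missing or differs from the manifest entry.
    public static bool NeedsUpdate(string installDir, ManifestEntry entry)
    {
        string localPath = Path.Combine(installDir, entry.Path);
        if (File.Exists(localPath) == false) return true;

        if (new FileInfo(localPath).Length != entry.Size) return true; // cheap size check first

        using (var md5 = MD5.Create())
        using (var stream = File.OpenRead(localPath))
        {
            string hash = BitConverter.ToString(md5.ComputeHash(stream)).Replace("-", "");
            return !string.Equals(hash, entry.Hash, StringComparison.OrdinalIgnoreCase);
        }
    }
}
```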

On our side, our Jenkins build machine handles two separate jobs (actually more, but I’ll keep it simple for the sake of this article): one builds the patcher, generates a new patcher version and deploys it to the CDN; the other builds the game, generates a new game version, generates the file list with hashes, creates the digital signature using the private key that only our build machine knows, compresses the files and deploys everything to the CDN.

The whole development process was long but, thanks to the .NET framework, relatively easy. Nevertheless, there are two specific features I have to describe in detail, since they are very important to know and not so intuitive.

The first one is the reason why I implemented digital signature verification based on asymmetric encryption. A launcher without this kind of protection is vulnerable to man-in-the-middle attacks, which can take the form of DNS spoofing. When a hacker successfully spoofs a DNS node, they simply create a deviation of the normal TCP/IP routing that the client cannot recognize. The client does not know it is downloading files from an unknown source and, since the game includes executables as well, it would be relatively easy for an attacker to target a specific pool of users and make them download malicious files.

This is one of the reasons why HTTPS was invented; however, HTTPS is effective against this attack only if the client can verify the certificate provided by the HTTPS server (http://en.wikipedia.org/wiki/Public_key_certificate). To my surprise, I found out that while Unity supports HTTPS connections, it does not verify SSL certificates at all; therefore using HTTPS in Unity does not prevent man-in-the-middle attacks. Luckily, the implementation of a digital signature was already planned, so while I was disappointed by Unity’s behaviour, we were ready to face the issue.

Implementing a digital signature in C# and .NET is very simple and a lot of code is already available around. Just look for RSACryptoServiceProvider on Google to know more.
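For reference, a minimal verification sketch with RSACryptoServiceProvider could look like the following; it assumes the public key is embedded in the client as an XML string and that the signature was computed over the file list with SHA1, which may differ from our actual setup.

```csharp
using System.Security.Cryptography;

// Minimal sketch: only the public key is needed on the client to verify the signature.
static class SignatureChecker
{
    public static bool Verify(byte[] signedData, byte[] signature, string publicKeyXml)
    {
        using (var rsa = new RSACryptoServiceProvider())
        {
            rsa.FromXmlString(publicKeyXml); // contains the public half only
            return rsa.VerifyData(signedData, CryptoConfig.MapNameToOID("SHA1"), signature);
        }
    }
}
```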

Once all this stuff was implemented I thought I was finally done with the launcher, but a dark and vicious evil bugger was waiting for me around the corner: the UAC implementation of Windows Vista+!

After I understood what Windows’ UAC implementation actually does, I realized why most online games suggest installing into a c:\games folder instead of the standard c:\Program Files. The Program Files folder is seen by modern Windows operating systems (excluding XP) as a protected folder, and only administrators can write into it.

Our launcher is installed by Inno Setup, which asks the user to run it in administrative mode, so Inno Setup is able to write wherever the user wants the game to be installed. However, once it is installed, the problems start.

If the launcher is launched directly from Inno Setup, it inherits the administrative rights from the installer and is therefore able to update the game folder under Program Files. However, once the launcher is started again by the user, it will not run as administrator but as a normal user, which changes the file-writing behaviour.

This is where things start to get idiotic. If a normal user application tries to write inside a folder under Program Files, writing the file does not fail as I initially expected. Instead, Windows creates a Virtual Store folder under C:\Users\[username]\AppData\Local\VirtualStore that virtualises the game folder. The result is that all the files the launcher tries to write under Program Files are actually stored in a specific, predefined folder in the virtual store.

Hence, the first lesson is: if your Unity application needs to write new files, never write them into the folder where the application has been installed; use the application data folder instead! However, this cannot be applied to a launcher, since the launcher must be able to update the game wherever the user decided to install it.
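For reference, these are the two user-writable locations I mean; nothing launcher-specific here, just the standard Unity and .NET calls.

```csharp
using System;
using UnityEngine;

// Quick reference for user-writable locations that never require admin rights.
static class SafePaths
{
    // Inside Unity: a per-user, per-application data folder.
    public static string UnityDataFolder()
    {
        return Application.persistentDataPath;
    }

    // In plain .NET (e.g. a separate tool): the local application data folder.
    public static string LocalAppDataFolder()
    {
        return Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);
    }
}
```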

The first solution that came to my mind was to embed a manifest in the application asking Vista to run the launcher in administrative mode. This is easier than it sounds, at least once I understood that a command-line tool found inside the Windows SDK can do it. Once the instructions are followed, every time the launcher starts, Windows UAC prompts the user with a message box asking whether or not to give administrative rights to the application.

If the user authorizes the application, the launcher will be able to update the game; otherwise it will throw an error.

Unluckily, however, this was not the final solution. The tricky part is that the launcher must be able to launch the game as well, but a process launched by another process inherits its rights. This means the game launched by the launcher would start as administrator, while if the user starts the game on their own, without using the launcher, it runs in normal mode (if the user is not an administrator or UAC is fully enabled).

Launching the game in administrative mode can be a bad idea for several reasons, but the most annoying one is that users are not used to authorizing a game every time it is launched, so we decided to get rid of this issue.

After some research, and after trying all the possible solutions I could find on Google and StackOverflow, I realized that the only one that works is to use a bootstrapper to launch the launcher.

The bootstrapper must be a tiny application that runs in normal mode and is able to launch the launcher as an administrator. This is pretty straightforward, since .NET allows an application to raise its rights (but never to downgrade them 🙁). Once the launcher has done its dirty job, it closes itself and communicates to the bootstrapper that it is now time to launch the game. The bootstrapper can then launch the game as a normal user, because the bootstrapper itself was not started with elevated rights.
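A minimal sketch of the bootstrapper idea follows; the file names are placeholders and the real bootstrapper coordinates with the launcher rather than simply waiting for it to exit, but the elevation mechanics are the same.

```csharp
using System.Diagnostics;

static class Bootstrapper
{
    static void Main()
    {
        // Start the launcher elevated: the "runas" verb triggers the UAC prompt.
        var launcher = new ProcessStartInfo("Launcher.exe")
        {
            Verb = "runas",
            UseShellExecute = true // required for the runas verb to work
        };
        Process.Start(launcher).WaitForExit();

        // The bootstrapper itself was never elevated, so the game started here
        // inherits plain user rights.
        Process.Start("Game.exe");
    }
}
```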

This solution sounds convoluted, but it is actually quite commonly adopted. Of course I could not use Unity to create the bootstrapper, since it must be just a few hundred kilobytes. For this reason I downloaded Xamarin and Mono, and I have to say I was quite impressed: I was able to set up a project and run it in a few minutes, and the bootstrapper itself was developed in a few minutes as well. Eventually, though, we were forced to create a very simple C++ application instead, in order to be agnostic of the .NET framework (which otherwise must be installed on the machine).

Hope all of this can help you one day!

What’s wrong with Unity SendMessage and BroadcastMessage and what to do about it

Unity 3D is a well-designed tool, I can say that. It is clearly less painful to use than UDK, whatever its limitations. However, as I keep saying, the C# framework is still full of bad design choices, probably unfortunate inheritances from UnityScript (why don’t they abolish it and just switch to C#?).

An emblematic example of these bad choices is the communication system built around SendMessage and BroadcastMessage. If you use them, you should just stop already!

SendMessage and BroadcastMessage are wrong mostly because they heavily rely on reflection to find the function to call. This of course has an impact on performance, but performance is not the issue here; the issue is the quality of the resulting code.

What can be so bad about it? First (and foremost), using a string to identify a function is far worse than using a string to identify an event. Just think about it: what happens if someone in the team refactors the code where the called function is defined and decides to rename or delete it?

I’ll tell you what happens: the compiler will not warn the poor programmer of the mistake they are making and, even worse, the code will just run as if nothing happened. By the time the error is found, it could already be too late.

Even worse is the fact that the called function can be declared as private, since the system uses reflection. You know what I usually do when I find a private function that is not used inside the declaring class? I just delete it, because the code is telling me: this function cannot be used outside this class and I am not using it, so why keep useless code?

Ok, in C# I mostly use delegates and events, so to be honest I basically never use SendMessage, but I still find BroadcastMessage useful when it is time to implement GUI widget communication.

Since GUIs are usually implemented in a hierarchical way (e.g. if you use NGUI), being able to push a message down a hierarchy has several advantages, since it is basically not necessary to know the target of the message. This is actually similar to the Chain of Responsibility pattern.

For this reason I decided to implement a little framework to send a message through a GameObject hierarchy and it works like this:

If the root (or a parent) of the target that must be reached is known, you can use it to send a signal through the hierarchy in a top-down fashion. All the nodes of the hierarchy will be searched until a compatible “listener” is found. The code is pretty trivial and, as usual for me, relies on the implementation of interfaces.

CubeAlone is a MonoBehaviour attached to a GameObject that lives outside the hierarchy to search. It could have been inside it as well, but I will reserve that case for the next example.

Through the SignalChain object, two events are sent to two different targets. I decided to do so to show you the flexibility of the interface: in fact, it is possible to identify events using any kind of object, from a string to a more complicated type that could hold parameters.

In the hierarchy of the example that can be found at this address https://github.com/sebas77/SignalChain there are two targets listening to the CubeAlone: the Sphere and the Cylinder.

In order to be recognized as listeners, these two MonoBehaviours must implement the IChainListener interface:
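Since the original snippet is not embedded in this page, the following is a sketch of what such a listener could look like; the Listen method name and the string event are my assumptions, so check the repository for the exact interface.

```csharp
using UnityEngine;

// Sketch only: the IChainListener signature shown here is assumed.
public class Sphere : MonoBehaviour, IChainListener
{
    public void Listen(object message)
    {
        // The event identifier can be any object; a plain string is assumed here.
        if ((message as string) == "ChangeColor")
            GetComponent<Renderer>().material.color = Color.red;
    }
}
```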

and from the code it is pretty clear how it works. Should I add anything? If the leftmost cube is clicked, the Sphere and the Cylinder will change color.

Now let’s see the case where the dispatching object is already inside the hierarchy. In this case we can dispatch events within the hierarchy without knowing the root; however, the root must be marked with the IChainRoot interface:

and the dispatching object can use the BubbleSignal object in this way:

Try clicking on the capsule now: the sphere will change color again, but this time it will be blue!

How to compress and decompress binary streams in Unity

Recently I needed to compress a binary stream that was getting too big to be serialized over the internet. I could not find any built-in way to compress files easily, so eventually I used a third-party library that works very well in Unity3D (web player included): SharpZipLib.

I suppose the following code could be handy for someone:
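The original snippet is no longer embedded in this page, so below is a minimal sketch of the kind of helper I mean, based on SharpZipLib’s GZip streams; adapt it to your own serialization code.

```csharp
using System.IO;
using ICSharpCode.SharpZipLib.GZip;

// Sketch of a gzip compress/decompress helper built on SharpZipLib.
public static class StreamCompressor
{
    public static byte[] Compress(byte[] data)
    {
        using (var output = new MemoryStream())
        {
            using (var gzip = new GZipOutputStream(output))
                gzip.Write(data, 0, data.Length);

            return output.ToArray();
        }
    }

    public static byte[] Decompress(byte[] compressed)
    {
        using (var input = new MemoryStream(compressed))
        using (var gzip = new GZipInputStream(input))
        using (var output = new MemoryStream())
        {
            var buffer = new byte[4096];
            int read;
            while ((read = gzip.Read(buffer, 0, buffer.Length)) > 0)
                output.Write(buffer, 0, read);

            return output.ToArray();
        }
    }
}
```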

 

 

 

My First Flash Multiplayer Game with Photon Cloud

For a long time I have wanted to experiment with the Photon network engine at home.
During my professional experience I have worked on several multiplayer games, so I was pretty curious about it.

Exit Games provides two versions of Photon Network Engine: the traditional Photon Server edition and the new Photon Cloud edition.

Photon is a server-client network engine dedicated to game development and, while the marketing department pushes the Unity3D version a lot, it has APIs for most of the popular development platforms.

Compared to its competitors, Photon Server has the advantage of being written in C++ and therefore being fast; however, the server version works on Windows only, which honestly could be a problem.

Surely I would not have bothered to install the server for a tutorial, but luckily Photon comes with an amazing Cloud version and a very smart pricing model, including a free tier that supports up to 20 simultaneous connections, more than enough to create the first prototype of any game, or a tutorial 😉

Of course Photon Cloud comes with some limitations as well. While Photon Server allows the server logic to be extended, the Cloud version does not allow any kind of extension, forcing you to use authoritative clients while the server becomes just a simple message router (and lobby manager).

Let’s start right away

First things first: you need to create your development account; once that is done, you can immediately create your first server application with a few straightforward steps.

Once the app is generated, you also get a unique App ID, which is the only information needed to link your application to the cloud server.

Now, before starting your first Photon project, download the Photon Flash SDK; this includes all the client code you need to create your game.

As usual, I use FlashDevelop as my IDE, but you can do everything with your favourite one as well. Download my code from https://github.com/sebas77/PhotonDemo and open the file mmo.as3proj (although it is not an MMO :P).

The project needs to include the library PhotonCoreAS3.swc as well as the source folder photon_loadbalancing_lib from the Photon SDK, so verify that everything is set correctly.

The load balancing version of the Photon client is the modern superstructure created on top of the Photon Core and, even if you do not need load balancing, I strongly suggest using it, since it abstracts the Photon API even further and makes everything easier to use.

Note that the code I released with this tutorial is based on my entity framework; however, I have never published it and the version used here is just a work in progress, so do not focus on it. I think using this framework helps to highlight just the multiplayer logic, as long as everything else is treated as a black box.

The code folder structure includes the startup files, the bullet folder, the character folder and the photon folder (plus my framework code). The photon folder includes all the files that customize the events needed to control the game. The code I started from is the chat example included in the SDK.

Where everything starts from

Looking at the code you will notice that all the server logic starts from:

Pay attention to the fact that while the game starts immediately, your avatar will not appear until the game is connected to the server. This is the standard logic of a multiplayer game: the graphical representation of your character appears on screen only once the server-client communication is set up.

The ConnectToServer class contains the APPLICATIONID value, which must be filled with the AppID you previously generated before running the game.

The ConnectToServer class also provides a series of useful callbacks that help in better understanding the network engine logic.

The first thing your application must do as a client is connect to the master server. The connection flow starts with a CONNECTING_TO_MASTER event, while the Photon Core handles the server connection. If all the data entered so far is correct, within a few (milli)seconds the client receives the answer from the server as a CONNECTED_TO_MASTER event.

Once connected to the server, our code sets, in a simple and naive way, the properties of our actor; in our demo the only property set is the name of the actor. The actorID will be set automatically later, but be aware that the actorID, unique for each player, is fundamental for every operation I will show you next.

Once the connection has been authorized, the client automatically connects to the lobby (all these operations are done “behind the scenes”, so you hardly need to worry about them). The Lobby is a special “room” that contains the list of all the active games available.

A Game must be seen as a “chat” room that users connect to and exchange messages in.

The Photon flow continues by retrieving the game list from the server; this is when our ConnectToServer class must decide again what to do:
if the list of games is empty, our client creates a new game; if the list is not empty, it simply joins the existing game. So, in total, our demo will never have more than one active Game running.

Even though I do not use special features for this demo, you will find out through your experiments that each game, as well as each actor, can come with special properties that can be set and broadcast to all the clients.

Having left the Lobby and joined our Game, the flow finally reaches the event we care about most, the LoadBalancedJoinEvent, but before continuing, let’s take a step back for a moment.

Getting deeper into the code

I have explained what happens just by calling server.connect(), but now that the user has joined the room it’s our turn to act.

An important class used to listen to network events is LoadBalancedPeer, which I extended in our demo and called PhotonPeer. For convenience it is used as a Singleton; however, as you know, I am not a fan of this pattern, and in fact the use of this singleton makes some parts of my code awkward, but this is not a big issue for the purposes of this tutorial (besides, LoadBalancedPeer has been designed as a Singleton).

PhotonPeer is used both to listen and dispatch data throughout the demo code. The classes that use it are:

  • ActorSpawner, used to create and destroy avatars
  • BulletSpawners, used to create and destroy bullets
  • CharacterEngine, used to manage avatars
  • BulletEngine, used to manage bullets

Let’s spawn this Actor

ActorSpawner listens to users joining and leaving games thanks to the following lines:

Once a user joins a game, and I mean any user (so either the local player or a remote player), a new character Entity is created. However, depending on whether the player who joined is the local player or a remote one, the code behaves differently, creating a local or a remote character entity.

As previously mentioned, the ActorID is important because it is used to recognise whether the joining user is the local user or a remote user, thanks to the following code:

However, since a user can join after other users have already joined, the code must be able to query the existing list of users and recreate them on the local client as well. This is done through this function:

That’s it: now the user avatar is shown and the client can run all the game logic!

Character Movement

CharacterEngine is a System of my entity framework; as far as possible, let’s treat it as a black box.

CharacterEngine both listens for and dispatches custom remote data. The way I handle this data is by exploiting the ActorEvent.TYPE event. Remember that our server does not have any logic, so all the custom events must come from another client. In order to dispatch a custom event, I use the function opRaiseEventWithCode as in the following example:

The function needs the event ID (GameConstants.EV_SENDPOS) to recognise the type of data sent and a Dictionary with the data itself. For example, to send the character position and rotation, I use the following dictionary:

At the same time, to receive the same data from other players, the corresponding event is listened to with this code:

CharacterEngine handles the death of the character in a similar way.

Bullet Shooting

The BulletSpawner and BulletEngine logic follows the same principles. When a character “shoots”, the GameConstants.EV_FIRED event is dispatched from one client and broadcast to all the others through the server.

Homework: how to improve the demo

This little demo can be a starting point for more interesting experiments; however, I would start by improving two issues:

  1. I sample the position update at 25 fps. Honestly, the sampling could be less frequent if an interpolation method were used to avoid jittering, saving precious bandwidth
  2. as said, the hit should be sent from the client that has been hit and not checked locally

In this game no client is authoritative; however, only local clients can dispatch “death” as well as “shoot” events, since remote clients cannot determine these events by themselves. This is not ideal, since the local client should dispatch the “hit” state too; otherwise the hit count could be mismatched between the local and remote clients.

Conclusion

With this demo I have shown the basic concepts of Photon Cloud. The client can connect to the server and then to a specific game. All the clients can receive and dispatch data through events. It is important to notice that the data can be handled differently depending on whether the sender is the local user or a remote user.

To conclude, two words about the Photon documentation: it is OK, but not great. However, if you have any questions, the Photon forum is a great way to get answers quickly.

 

 

My version of the game can be played from here (use WASD + mouse, kinda FPS style 😉 )

And now good luck with the rest of your experiments!

 

On Commands and Events

Events and Commands are two communication tools that I use really often. I have planned more exhaustive articles on the topic, but for now I want to write a short post about one rule I apply when I use them. Sometimes the question occurs to me: should the listener class know the dispatcher class, or should the dispatcher class know the listener class?

Generally I answer in this way:

If I use event listeners, I inject the dispatcher into the class that defines the callback method, i.e.:
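The original snippet is not embedded here, so this is a minimal sketch of the idea with invented class names: the dispatcher is injected into the listener, which subscribes its own private callback.

```csharp
// Hypothetical example: the dispatcher is injected, the callback stays private.
public class EnemyDispatcher
{
    public event System.Action<int> EnemyKilled;

    public void NotifyEnemyKilled(int enemyId)
    {
        if (EnemyKilled != null) EnemyKilled(enemyId);
    }
}

public class ScoreListener
{
    int _score;

    public ScoreListener(EnemyDispatcher dispatcher)   // dispatcher injected here
    {
        dispatcher.EnemyKilled += OnEnemyKilled;        // private callback never exposed
    }

    void OnEnemyKilled(int enemyId)
    {
        _score += 10;
    }
}
```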

In this way I do not need to expose the private members of the listener.

The other way around, if I use the Command pattern to achieve object communication (and I do it a lot), I usually invert the logic.

This is because the Command pattern adds an extra layer of abstraction that does not let the dispatcher and the listener know each other directly.

Using this rule I often end up with tidy code, so I hope it can help you as well.

Inversion of Control with Unity3D – part 2

In my previous article, I briefly introduced the problems related to injecting dependencies using the Unity 3D framework. I did not give an in-depth explanation because only those who understand the problem by themselves can really understand why a solution is needed. Otherwise you may think that all this theory is irrelevant as long as “things get done”… well, this may come as a shock to you, but in a big team/complex project environment, how things get done is actually important too. Getting things done for the sake of finishing them as soon as possible may lead to tragic consequences. Have you ever seen memes about the CSS !important value? With it you can surely get your CSS done, but then your team would not really be able to maintain or extend the CSS anymore. The truth is that keeping your code maintainable is as important as getting things done since, for large projects, the long-term cost of maintainability is much higher than the cost of developing features.

Completing a feature in two days and then spending five days fixing the related bugs found in production, due to the complexity of the solution adopted, won’t make you look good. Breaking good design practices to complete a feature in record time and then forcing someone else to spend several days refactoring it in order to extend its functionality won’t make you look good either.

I spent a long time, during my career, understanding how to write maintainable code, and I eventually realised that the most important condition is to write simple code: so simple that it doesn’t even need to be commented to be clearly understood by another coder. Writing simple code isn’t an easy task; in fact, those who just want to get things done quickly often end up writing very convoluted code that is hard to read, maintain and refactor. I also realised that the only way to be able to write simple code for every possible problem is to deeply understand all the aspects involved in good code design.

These aspects are fundamentally explained in the SOLID principles, principles that I will mention several times in my articles. Being able to inject dependencies efficiently plays a key role and that’s why I eventually decided to implement an IoC container.

Before discussing the example I built specifically to show the features of my IoC container, I want to share my experience with the other IoC containers I used before writing mine. As a game developer, the Inversion of Control concept was unknown to me, although I surely used IoC without knowing what it was or what it was called.

My experience with other IoC containers

When I was introduced to the concept of Dependency Injection and all the problems related to it, IoC containers were (and still are) extensively used to ease the coder’s work. That’s why, when I shifted from traditional procedural C++ programming to event-based coding in ActionScript, I started to use Robotlegs, an IoC container for the ActionScript language.

After using it for a couple of small personal projects, I decided to abandon it, because I ended up convincing myself that manual Dependency Injection was the better practice. However, some time later, when I moved to C#, I started to experiment with Ninject and, after discussing practices with its author, I realised that I had not been appreciating the use of an IoC container because I did not understand its principles.

To understand the principles, we need to introduce some concepts first:

Composition Root

The Composition Root is a key concept to understand: it is where the IoC container must be initialized before anything else starts to use it. Since an application can use several contextualised containers, it can also have multiple composition roots. A Composition Root is found in every application that has a main entry point; however, Unity doesn’t really have a single entry point where objects can be initialized, and this is quite simply the real cardinal issue of the whole Unity code design. Since it is so important, I will come back to this concept several times in my next articles.

Object Graph

Once objects are used inside other objects, they start to form a graph of dependencies. Let’s say that there is a class A and a class B, then there is a class C that uses A and a class D that uses B and C. D cannot work without B and C, C cannot work without A. This waterfall of dependencies is called Object Graph.

How to use an IoC Container

If, for the moment, we don’t consider objects that must be created dynamically after the startup phase, all the dependencies can be resolved right at the beginning of the application, within the Composition Root context.

When I started to use IoC containers, my first mistake was to misunderstand their use. I didn’t realise that the container would take the responsibility of creating dependencies away from me; I just couldn’t imagine a design where I would stop using the new keyword to create dependencies. Without an IoC container, all the dependencies would have to be created and initially passed from the Composition Root only. With an IoC container, instead, the creation and injection of dependencies work in a different way, as I will soon show you. If you haven’t grasped the meaning of this paragraph yet, don’t worry: I will return to it with more examples in my next articles.

Let’s have a look at what a Composition Root looks like, using the example that can be found on GitHub:
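The original listing is not embedded in this page; the following is a simplified sketch of the same idea, using the class names discussed below. The container type, the binding call and the build/start method names are approximations of mine, not the exact Svelto IoC signatures, so check the repository for the real code.

```csharp
using UnityEngine;

// Approximation only: method names are assumptions; the structure
// (SetupContainer plus a start method) is what matters here.
public class MainCompositionRoot : MonoBehaviour
{
    IoC.IContainer _container;

    void Awake()
    {
        SetupContainer();
        StartGame();
    }

    void SetupContainer()
    {
        _container = new IoC.Container();

        // every time an IMonsterCounter dependency is found,
        // resolve it with a MonsterCountHolder instance
        _container.Bind<IMonsterCounter>().To<MonsterCountHolder>();
    }

    void StartGame()
    {
        // the container builds the spawner and injects its dependencies
        var spawner = _container.Build<MonsterSpawner>();
        spawner.StartSpawning(); // hypothetical method name
    }
}
```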

The container object is not used in any other part of the example, except for factories; the Factory pattern is a special case that I will explain later.

What is happening? Inside the SetupContainer method all the dependency types are registered into the container. With the Bind method we are telling the container that every time a dependency of a known type is found, it must be resolved with an instance of the type bound to it. The dependencies are resolved lazily (that means only when needed) through the [IoC.Inject] annotation.

The application flow starts from this method:

After the MonsterSpawner is built, it will have the dependency IMonsterCounter injected, because it has been declared as:
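A sketch of the declaration meant here (the property form is assumed; check the repository for the exact syntax):

```csharp
[IoC.Inject]
public IMonsterCounter monsterCounter { get; set; }
```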

and because IMonsterCounter has previously been registered and bound to a valid implementation (MonsterCountHolder) inside the container. In turn, if the MonsterCountHolder class has other dependencies, they will be injected as well, and so on…

In a complicated project scenario, this system eventually gives the impression that the [Inject] annotation is a sort of magic keyword able to resolve our dependencies automatically. Of course there is nothing magic about it and, differently from Singleton containers, our dependencies will be correctly injected ONLY if they are part of the Object Graph.

Compared to the use of a Singleton, an IoC container gives the same flexibility, but without the risk of wildly using static classes everywhere. Dependencies will be correctly injected only if the object that uses them is part of the Object Graph. Dependencies can also be injected through interfaces, which makes it possible to change implementations in the future without changing the code that uses them. In fact, it would be enough to swap MonsterCountHolder with another type that implements IMonsterCounter to get new behaviours without changing code in all the places where the dependency is used.

In reality you can perform fancier tasks with an IoC container, for example through the concept of providers introduced by my library, but this is not relevant at the moment.

IoC container and the keyword new

It is called Inversion of Control because the flow of object creation is inverted: control of creation is no longer with the user, but with the framework that creates and injects objects for us. The most powerful benefit behind this concept is that the code no longer depends on the implementation of the classes, but always on their abstractions (if interfaces are used). Relying on abstractions gives many benefits that can be fully appreciated when practices like refactoring and unit testing are regularly used. All that said, one could ask: if I should not use new, how is it possible to create objects dynamically while the application runs, for example when spawning objects in the game world? As we have seen, the answer is to use factories. Factories, in my design, can use the container to inject dependencies into the objects that they create. Factories are also injected as dependencies themselves, so they can easily be used wherever they are needed:

Remembering that factories are the only classes that can use the Container outside the Composition Root, the IBulletFactory Create function would look like:
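Since the original listing is not embedded here, this is a sketch of the concept rather than the repository code; the container method used to build the bullet is an assumption of mine.

```csharp
// Sketch only: the factory is the one class allowed to hold the container,
// and it uses it to build fully injected bullets at run-time.
public class BulletFactory : IBulletFactory
{
    readonly IoC.IContainer _container;

    public BulletFactory(IoC.IContainer container)
    {
        _container = container;
    }

    public IBullet Create()
    {
        // the container creates the bullet and injects its dependencies
        return _container.Build<IBullet>(); // method name assumed
    }
}
```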

In this way the bullets are not just created, but also have all their dependencies properly injected at run-time.

If you wonder why your classes should not be able to create objects on their own, remember that this is not over-engineering, this is separation of concerns: creating bullets and handling them are two different responsibilities. The Interface Segregation Principle (part of the SOLID principles) is one of the most important design rules to follow in order to keep your classes small and simple.

IoC container and Unity

The use of an IoC container, similar to the one I am showing in this article, will help shift the coding paradigm from an intensive use of MonoBehaviours to the use of normal classes that implement interfaces.

A MonoBehaviour, in its pure form, is a useful tool and its use should be encouraged when appropriate. Still, our MonoBehaviours may need dependencies that must be injected. If the MonoBehaviours are created dynamically through a factory, as should happen most of the time, then the dependencies are resolved through the Object Graph. On the other hand, if MonoBehaviours are created implicitly by Unity, their dependencies cannot be resolved within the Composition Root because of the nature of the Unity framework. Moreover, thanks to the Composition Root, we don’t need to use MonoBehaviours to implement managers and other classes that do not naturally belong to a GameObject.

Although in my example the turrets are not created dynamically, their dependencies are automatically resolved by the framework during Awake and before any Start function is called. As you will see from the code, all and only the MonoBehaviours that are children of the UnityContext MonoBehaviour will have their dependencies automatically injected.

IoC Container and MVP

An IoC container also becomes very useful when patterns like Model-View-Presenter are used. Explaining the importance of using patterns like MVP is outside the scope of this article, so maybe I will write about it in the future. However, you should know that MVP is great for separating the logic from the view, and therefore for creating simpler and more focused MonoBehaviours. A MonoBehaviour would just handle the visual aspect of the entity, while a Presenter handles the logic. This would look more or less like this:
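The original listing is not embedded here, so this is a minimal sketch of the idea with hypothetical names; only the split between a thin view MonoBehaviour and an injected presenter follows the text above.

```csharp
using UnityEngine;

// Sketch only: ITurretPresenter and its methods are invented names.
public class TurretView : MonoBehaviour
{
    [IoC.Inject] public ITurretPresenter presenter { get; set; }

    void Start()
    {
        presenter.SetView(this);   // the presenter drives this view
    }

    public void ShowFiring()       // purely visual concern
    {
        // play muzzle flash, animation, sound...
    }

    void OnMouseDown()
    {
        presenter.OnClicked();     // forward input to the logic
    }
}

public interface ITurretPresenter
{
    void SetView(TurretView view);
    void OnClicked();
}
```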

I found the MVP pattern very useful for designing GUI widget code, and in the future I would actually love to write a dedicated MVP/GUI framework. However, MVP didn’t seem to work as well for other kinds of entities, which is why I started looking for other solutions, as you will see in the next articles.

IoC Container and hierarchical containers

A proper IoC container library should support hierarchical containers; without this fundamental feature, using an IoC library intensively can degenerate into spaghetti code, as I will explain in my next articles. It’s bad practice to just bind every possible dependency in a single container, as this doesn’t promote separation of concerns. Simply put, some dependencies make sense only in a given context and not application-wide. For example, if we create dependencies for an MVP-based GUI, these dependencies may make sense only for that GUI. This means that in order to create the GUI, we should use a specialized composition root (which could be defined in a factory). Hierarchical containers are powerful because, while they encapsulate the dependencies needed by a specific context, they also inherit dependencies from parent containers.

IoC Container and Unit Tests

The IoC container concept is often associated with unit tests. However, there is no direct link between the two practices; in fact, unit test frameworks often do not use IoC containers at all, although using an IoC container does help in writing unit-test-friendly classes.
Unit tests must test ONLY the code of the class under test and NEVER its dependencies. An IoC container makes it very simple to inject harmless mockups of dependencies that can never break or affect the functionality of the code under test. This is possible thanks to the use of interfaces: the mockup is just a different (often empty) implementation used for the scope of the tests only.

Conclusion

Before concluding, I want to quote two answers I found on StackOverflow that could help resolve some other doubts:

from http://stackoverflow.com/a/2551161

The important thing to realize here is that you can (and should) write your code in a DI-friendly, but container-agnostic manner.

This means that you should always push the composition of dependencies to a point where you can’t possibly defer it any longer. This is called the Composition Root and is often placed near the application’s entry point.

If you design your application in this way, your choice of DI Container (or no DI Container) revolves around a single place in your application, and you can quickly change strategy.

You can choose to use Poor Man’s DI if you only have a few dependencies, or you can choose to use a full-blown DI Container. Used in this fashion, you will have no dependency on any particular DI Container, so the choice becomes less crucial in terms of maintainability.

A DI Container helps you manage complexity, including object lifetime. Used like described here, it doesn’t do anything you couldn’t write by hand, but it does it better and more succinctly. As such, my threshold for when to start using a DI Container would be pretty low.

I would start using a DI Container once I get past a few dependencies. Most of them are pretty easy to get started with anyway.

and from http://stackoverflow.com/a/2066827

Pure encapsulation is an ideal that can never be achieved. If all dependencies were hidden then you wouldn’t have the need for DI at all. Think about it this way, if you truly have private values that can be internalized within the object, say for instance the integer value of the speed of a car object, then you have no external dependency and no need to invert or inject that dependency. These sorts of internal state values that are operated on purely by private functions are what you want to encapsulate always.

But if you’re building a car that wants a certain kind of engine object then you have an external dependency. You can either instantiate that engine — for instance new GMOverHeadCamEngine() — internally within the car object’s constructor, preserving encapsulation but creating a much more insidious coupling to a concrete class GMOverHeadCamEngine, or you can inject it, allowing your Car object to operate agnostically (and much more robustly) on for example an interface IEngine without the concrete dependency. Whether you use an IOC container or simple DI to achieve this is not the point — the point is that you’ve got a Car that can use many kinds of engines without being coupled to any of them, thus making your codebase more flexible and less prone to side effects.

DI is not a violation of encapsulation, it is a way of minimizing the coupling when encapsulation is necessarily broken as a matter of course within virtually every OOP project. Injecting a dependency into an interface externally minimizes coupling side effects and allows your classes to remain agnostic about implementation.

The Example

The framework and the example can be found on GitHub at: https://github.com/sebas77/Svelto-IoC. You are very welcome to modify it and share your improvements with other users.

Note: you can add more turrets, if you want, using the Editor; thanks to the IoC everything will work without touching code.

Final Notes

I have written more articles on the subject to investigate the concept of Inversion of Control further. If you are interested, read my new articles, starting with The Truth behind Inversion of Control.

I strongly suggest reading all my articles on the topic:

Inversion of Control with Unity3D – part 1

Unity is a good game development tool and, honestly, I like most of its features. However, the code framework is awkward to use on big projects where code maintainability is of fundamental importance. As you may have guessed, the considerations in this article do not apply to simple projects or projects developed by one or two coders; for those cases Unity is good enough. The arguments in this article apply to relatively complex projects developed by medium-to-big teams.

After years of analysing the intrinsic issues of the Unity framework, I realised that they all share the same root: the way Unity injects dependencies inside entities, or rather, how Unity makes this natural process very difficult.

A dependency can be defined as an object needed by another object to execute its code. If a class A needs an instance of a class B to execute its code, B is a dependency of A. When B is passed into A, we say that the dependency B is injected into the object A.
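A trivial illustration of this definition, with hypothetical classes, where B is injected through A’s constructor:

```csharp
public class B
{
    public void DoSomething() { }
}

public class A
{
    readonly B _b;

    public A(B b)          // the dependency B is injected here
    {
        _b = b;
    }

    public void Execute()
    {
        _b.DoSomething();  // A cannot execute its code without B
    }
}
```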

The problem

Unity is based on the concept of an “entity framework”. An entity framework brings many benefits: it pushes users to favour composition over inheritance, keeps classes small and clean by focusing on single responsibilities, and promotes modularity. This is in a perfect world, at least.

In Unity, the Entities are called GameObjects and the Components are based on MonoBehaviour implementations. Components add “behaviours” to Entities and theoretically they should be perfectly encapsulated. The Entity Component structure relies on the fact that components define the whole entity logic. In theory:

  1. MonoBehaviours must operate within the GameObjects they are attached to. This is needed to ensure that encapsulation is not broken, as it shouldn’t be. Components, by design, must handle only the Entity they are attached to.
  2. MonoBehaviours are modular, single-responsibility classes, reusable between GameObjects.
  3. The behaviour of a GameObject should be extended by adding MonoBehaviours instead of adding methods to a single MonoBehaviour. One component must add only one behaviour, to encourage modularity.
  4. GameObjects should not be created without a view, such as a mesh or a collider, or something that really is an entity in the game.
  5. MonoBehaviours can know about other components on the same GameObject using GetComponent; however, they cannot know about MonoBehaviours from other GameObjects. This is still part of the framework design.

The first three points are obvious, even to those who have never heard of Entity Component frameworks. The component design allows logic to be added to an entity without using inheritance. We can say that instead of specializing the behaviour of an Entity vertically (usually by specializing classes through polymorphism), Components allow behaviours to be extended horizontally. This is not the place to discuss it but, in case you didn’t know, composition over inheritance is a well-known pattern whose benefits have been proved extensively. The great thing about an Entity Component framework is that composition comes naturally, without any particular effort. The smaller the component, the easier it is to reuse; that’s why giving a single responsibility to each component promotes modularity.

The fourth point may seem an arbitrary rule, but Unity entities include the Transform component by default, implying that the object must be placed in the world. What’s the point of having a Transform by default if it is then not used? Having a complex hierarchy in Unity doesn’t come for free: all the transforms must be computed hierarchically, therefore an extra GameObject is an extra matrix multiplication to execute every frame.

The last point starts to uncover the real issue. MonoBehaviours are meant to add behaviours to the GameObject they are attached to and, consequently, Unity can’t provide a clean way to let GameObjects know about each other without resorting to awkward solutions. The other problem is that Unity asks the user to use MonoBehaviours even when the logic to code is not related to an Entity at all. For example: let’s say there is an object called MonsterManager in the game world which must know which monsters are still alive. When a monster dies, how can this object know about it?

Currently there are 2 simple solutions to this problem:

  1. MonsterManager is a MonoBehaviour created inside a GameObject that has no view. All the monsters look for the MonsterManager class using the GameObject.Find or Object.FindObjectOfType functions inside their Start or Awake methods and add themselves to the MonsterManager. MonsterManager could then listen to a Monster event to know when it dies (see the sketch after this list).
  2. The only other alternative is for MonsterManager to be a Singleton used directly inside the Monster MonoBehaviour. When a monster dies, a public MonsterManager function is called.
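A sketch of the first solution, with hypothetical class names, showing the run-time lookup that the rest of this article criticises:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical example of solution 1: the Monster finds its MonsterManager
// dependency at run-time through Object.FindObjectOfType.
public class Monster : MonoBehaviour
{
    MonsterManager _manager;

    void Start()
    {
        // run-time lookup: nothing fails at compile time if MonsterManager
        // is renamed, removed or simply not present in the scene
        _manager = (MonsterManager)Object.FindObjectOfType(typeof(MonsterManager));
        _manager.Register(this);
    }

    void OnDestroy()
    {
        if (_manager != null) _manager.Unregister(this);
    }
}

public class MonsterManager : MonoBehaviour
{
    readonly List<Monster> _aliveMonsters = new List<Monster>();

    public void Register(Monster monster)   { _aliveMonsters.Add(monster); }
    public void Unregister(Monster monster) { _aliveMonsters.Remove(monster); }
}
```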

Both cases exist to solve the same problem: injecting dependencies. In the first case, MonsterManager is a dependency of the Monster objects: the Monster objects cannot add themselves to the MonsterManager if they don’t know the MonsterManager object. In the second case, the dependency is resolved through the use of a global variable. You may have heard that global variables and singletons are very bad to use, even if you may not have fully understood why yet. For the moment, let’s focus on the practical aspects of these solutions:

What’s wrong with GameObject.Find

Besides being quite a slow function, GameObject.Find is one example of how awkward the Unity framework is for big project development. What happens if someone in the team decides to rename the GameObject? Should renaming or deleting GameObjects be forbidden? GameObject.Find can lead to several run-time errors that cannot be caught at compile time. This scenario can be really hard to manage when dozens of GameObjects are searched for through this function. GameObject.Find and similar functions should definitely be abolished from the Unity framework.

What’s wrong with the Object.FindObjectOfType

How should the MonsterManager object be injected into the Monster class, then? There is actually another solution: calling the function Object.FindObjectOfType. However, what is FindObjectOfType? Object.FindObjectOfType could be seen as a broken implementation of the Service Locator pattern, with the difference that it is not possible to abstract the implementation of the service behind an interface. This is another problem of the Unity framework, which I will only briefly hint at here because it would need an entire article of its own: Unity discourages the use of interfaces. The interface is the most powerful concept at the core of all well-designed code. Instead of pushing coders to use interfaces, Unity pushes them to use MonoBehaviours, even when the use of a MonoBehaviour is not necessary.

What’s wrong with the Singleton

The Singleton has been a controversial argument ever since the pattern was invented. I am personally against the use of Singletons: every single Singleton class I have encountered so far led only to design problems that were hard to fix through refactoring. However, if you ask me why I am against the Singleton, I will not answer with the usual arguments (they break encapsulation, hide dependencies, make it impossible to write proper unit tests, bind the code to the implementation of a service instead of its abstraction, make refactoring really awkward), but with the hindsight of practice: your code will become unmanageable in a short time. This is because the use of Singletons comes without any design constraint and, while it apparently makes the coder’s life easier, it will make it hell later on, when the code turns into an incomprehensible blob without a structured flow. Projects developed by one or two coders may not show the problem at first, or ever (for example, if the project doesn’t need to be maintained over time), but mid-size projects will pay the consequences of wild design when it’s probably too late for refactoring. Think about it: you can use Singletons everywhere, without limits, without rules. What would keep a coder from making a total mess out of them? Common sense? Let’s not fool ourselves, common sense doesn’t apply when release deadlines approach fast.

To be honest, this is not enough to dismiss Singletons. What is a Singleton if not just another way to inject dependencies? When you have no other choice than Singletons, the real problem becomes how to manage the state of these classes once encapsulation is broken. Singletons normally rely on public functions, public functions that change state inside the Singleton itself. When the Singleton is used recklessly in several parts of the code, it becomes very hard to track when and how this state changes. Refactoring becomes tremendously risky, because changing the order in which the public functions of the same Singleton are called could mean finding the Singleton itself in an unexpected state. The biggest problem with Singletons is then the fact that they cannot be designed with Inversion of Control in mind. You are always forced to write functions like IsSingletonReady() or DisableThisSpecificBehaviour() or AccessToThisGlobalData(), leaving the control of the internal state to external entities that are hard to track, especially due to the Singleton’s global-variable-like behaviour.

Let’s be clear: the main reason why dependencies are injected is to let objects communicate with each other. Communication is impossible without some form of injection. Therefore, while in some rare cases the use of a Singleton can be acceptable, you can’t work with a framework that won’t let you use any other kind of injection. While it’s feasible, it’s totally unreasonable to solve every kind of communication problem through Singletons.

Finally, being essentially static classes, Singletons usually become the most common source of memory leaks in a project.

The solution

Beyond Singletons, there are three ways to resolve dependencies: using the Service Locator pattern, injecting dependencies manually through constructors/setters, and exploiting reflection through an IoC container. Disregarding Singletons, I do not like the Service Locator pattern because the service locator itself is a Singleton (or a static class), and this can lead to severe limitations compared to the IoC container solution. Since Unity doesn’t have a starting place where dependencies can be created and injected, manual injection becomes quite hard and basically impossible without a good understanding of Unity’s limitations.

Therefore, a good solution (but not the only one, as we will see later in this article series!) is to use an IoC container. Another article would be needed to explain what an IoC container is, so I will give a first, simple definition: an IoC container is a… container that creates and contains the dependencies that must be injected into the application objects. It is called Inversion of Control because the design is conceived in such a way that objects are never created by the user, but are lazily created by the container when they are needed.

There are several IoC containers out there, many written in C# as well, but practically none of them work in Unity and, most of all, they are damn complicated*. For these reasons I decided to create an IoC container for Unity, trying to keep it as simple as possible. Actually, creating a basic IoC container is pretty straightforward and everybody can do it; my implementation just has a few tweaks that make it simple to use with Unity projects.

Conclusion (for this article)

The second part of this article (including source code) is available here: http://blog.sebaslab.com/ioc-container-for-unity3d-part-2

I strongly suggest reading all my articles on the topic.

*Since I first wrote this article, new IoC containers have been developed. The best ones out there are StrangeIoC, Zenject and Adic. Google them, it’s worth it.

Develop HTML5 games without JS and DOM for flash developers – Part 2: Jangaroo

The reason why I called the first tutorial of this series Part 1 is that I knew there were other alternatives around for developing HTML5 games while carefully avoiding JavaScript.

However, after some research it turned out that the only really working alternative suitable for ActionScript developers is Jangaroo.

Let’s talk a little bit about it. Jangaroo is an open-source project mainly used by the German company Coremedia. The smart guys at Coremedia hate JavaScript so much that they decided to create a compiler capable of converting ActionScript 3 to JavaScript, and they called it Jangaroo.

However, Jangaroo was not created with the intent of supporting the Flash libraries (including the display list); what the developers had in mind was simply a way to create web applications with a language other than JavaScript.
Jangaroo is for ActionScript 3 what HaxeJS is for Haxe, that is, a way to map one language entirely onto another. In fact Jangaroo is mainly used with pre-existing, remapped JavaScript libraries, which is more or less what happens with HaxeJS.

This part of Jangaroo is supposed to be quite stable; in fact, it has already been used for several internal projects at Coremedia. It is also interesting to note that several open-source applications have been made with it. Among these projects we can find ports of the 2D engine Flixel and the physics engine Box2D.

The main source code of Jangaroo and all the applications can be found in several GitHub repositories.

My first impression when I read the clear (but slightly outdated) tutorial was: woah woah woah, what? Do I have to install Maven to run it?! Seriously?! It must be a pain! Thus, since I had wanted to try Realaxy IDE for a long while (man, it is not really easy to remember how to spell it) and since I knew that the beta version 1.1 supports Jangaroo to convert ActionScript code to JS, I decided to install it, thinking it would be a smart decision.
Instead, a few minutes of using it were enough to understand that it was a big mistake! First, it is Java based = a pachydermic piece of software as slow as that other incredibly overrated IDE, Eclipse (and its cousin Flash Builder); second, it casually decided to delete my project folder, including all the code I had manually converted from the previous Haxe version of my experiment, which of course I had not backed up, urggghhh.

By the way, this could be interesting: I actually tried to convert the Haxe code to ActionScript 3 using the automatic method that Haxe provides, but it failed in several ways, so I had to spend more or less an hour manually converting the source code.

At this point I decided to go for Maven and, surprisingly, everything worked out smoothly and painlessly. All Jangaroo asks you to install in order to work is the latest version of the Java Runtime Environment and Maven, nothing else.
The only thing to pay attention to (as explained in the tutorial) is setting the JAVA_HOME environment variable, which must point to the folder where the JRE is installed.

Now everything is set to follow the aforementioned Jangaroo tutorial, which explains in a few steps how to create the first “hello world” application without using the Flash libraries. With the knowledge gathered so far, you will be able to create a JavaScript-based application using ActionScript 3, but you will not be able to use the display list to make games.

Although the goal of this tutorial is to show the reader how to use the display list in Jangaroo to make HTML5 games, let’s be clear: I am not a fan of the display list and I do not think it is a good data structure to use for HTML5 games. So even if Jangaroo, as we will find out, has some flaws in converting display-list related code, I would not say this is a great limitation. The real problem is that from now on the documentation becomes very scarce. Your main sources of information will be GitHub and the Jangaroo group on Google. They are both managed by Frank Wienberg, who is the most active Coremedia employee developing and maintaining the project.

Honestly, while Frank is a great guy willing to help as much as he can, the fact that he is alone is another relatively big limitation of the project. In my opinion, Jangaroo needs a bigger community to become really successful, and I think it deserves one. More people should at least give it a try; that’s also why I am happy to write this article.

In order to work with the display list, the next step is to extend the pom.xml file. The pom.xml is a config file needed by Maven to understand what to do and which libraries to use. I do not think it is necessary at this point to explain every single node inside the XML, since the Jangaroo tutorial is quite simple to understand. However, to get more information, two options are available: one is to use the template example on GitHub, the other is to use FlashDevelop and download the template to create a working Jangaroo project.

Once the template is installed and the project is created, the pom.xml will be generated automatically inside the project folder. Just be careful: FlashDevelop uses the JAVA_HOME variable as well, but it is NOT compatible with the 64-bit version of the JRE. If you have both the 32-bit and the 64-bit versions installed and you decided to use the 64-bit version to run Maven, I suggest editing the FD jvm.config file to instruct it to use the 32-bit JRE.

Do not be afraid to open the XML file, it is quite self-explanatory. The only nodes you really need to take care of are the jangaroo_version and jangaroo_libs_version nodes (see the sketch below). The values inside must be kept updated with the latest version of the libraries available; Maven will automatically download them the first time the project is compiled. Totally awesome!
I guess that the Maven repository is a good place to check the latest Jangaroo version (the preview versions seem not to support the Flash libraries yet).
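
For reference, this is roughly how those version nodes might appear inside the pom.xml, following the standard Maven properties section; the X.Y.Z values are placeholders to replace with the latest available release:

<properties>
   <!-- placeholder values: replace with the latest Jangaroo release available -->
   <jangaroo_version>X.Y.Z</jangaroo_version>
   <jangaroo_libs_version>X.Y.Z</jangaroo_libs_version>
</properties>

The ${jangaroo_libs_version} placeholder used in the dependencies node below is resolved from this section.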

Two other important nodes of the pom.xml file are the dependencies and the resources nodes.

The dependencies node tells Maven which other libraries to use while compiling the source code. For this article’s sake, it is important to know that the jooflash extension, which enables the Flash library, is not part of the standard Jangaroo library but is among the extra ones, so it must be included inside the dependencies node.

<dependencies>
   <dependency>
     <groupId>net.jangaroo</groupId>
     <artifactId>jooflash</artifactId>
     <version>${jangaroo_libs_version}</version>
     <type>jangaroo</type>
   </dependency>
</dependencies>

Jooflash is a side project developed exclusively by Frank Wienberg, so it is not as stable or as complete as the main Jangaroo library.

The resources node, instead, is useful if you want to embed resources using the [Embed(source = “…”)] metadata tag:

<resources>
     <resource>
       <!-- assets are taken from this folder... -->
       <directory>src/main/assets</directory>
       <includes>
         <include>**/*.png</include>
         <include>**/*.jpg</include>
       </includes>
       <!-- ...and copied into the joo/assets output path -->
       <targetPath>joo/assets</targetPath>
     </resource>
     <resource>
       <directory>webapp</directory>
     </resource>
</resources>

Ok, I think I have said everything that needs to be said to let you work with Jangaroo. Now let’s talk about my results. My intent was to compare the performance of Jangaroo against that of Jeash, using the incredibly bad code I wrote for the shoot’em’up I created for the first tutorial of this series. However, in this case bad code adds value to the experiment, because it can be used to gauge how good the compiler is.

The results are quite interesting. At first it seems that Jeash beats Jangaroo in every respect. Jeash is in fact much faster (60fps against 30fps) and more compatible across browsers (the game works on every modern browser I know of using Jeash, while the Jangaroo version behaves quite differently and in some cases wrongly). Jeash seems to use pretty neat CSS tricks to optimize performance impressively. However, there is a very important aspect to take into consideration: canvas blitting. In this case Jangaroo seems way better than Jeash. Both FlashPunk and Flixel, as Flash coders already know, use only bitmap blitting to render the entire screen, without using the Flash display list at all. It is pure old-school 2D blitting.

@teormech on Twitter is currently porting Flixel to Haxe and experimenting with Jeash as well. He found out that the Jangaroo Flx invasion demo runs 3 times faster than his Jeash port. According to my profiling, the Jangaroo version of the game takes approximately 3ms to render the 640×480 screen, while the Jeash version takes about 12ms!!! That means that for pure canvas applications, the current Jeash performance is very disappointing.

That is all! And this is very likely my last article regarding HTML5 technology. I have always been skeptical about HTML5 as a platform for videogame developers, and the current performance on desktop, but especially on mobile, demonstrates that HTML5 is not ready for professional game development.

Note: my first article and the considerations about Jeash written in this second one are based on the last version of Jeash (0.8.7) before it was merged into HaxeNME. Unfortunately, as of today, the Jeash version included in HaxeNME appears to be broken. Jeash seems to be noticeably worse than the previous version I worked with.

How to run your Flex/AS3 iOS App without buying a developer license

So, I wanted to experiment with ActionScript, Flex and the iPhone, but I do not have a Mac and I am not really keen on paying $99 just to run some plain experiments; besides, I have no intention of making a living out of it, so I do not think it would be a good investment right now.

So I wondered, are there alternatives?

Yes there are, and the one I found involves a jailbroken iPhone (which seems not to be illegal, and that makes a lot of sense), FlashDevelop, the Adobe AIR SDK 3.2 (which is downloaded automatically with FlashDevelop), a fake mobile provision (this could be borderline) and this nice app: i-FunBox.

It is very simple. You need to create a FlashDevelop project using the AIR mobile template (do not worry if there is the Android icon, it works for iOS too), add your ActionScript code and make it compile. If you have problems running it (especially if you want to use the new AIR 3.2 features), these are my two tips: edit the application.xml file and change the AIR version from 3.1 to 3.2 (a sketch of this change follows below), and add the line -swf-version=15 to the compiler options.
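
For reference, the AIR version targeted by the project is declared in the namespace of the root node of application.xml; a minimal sketch of the change could look like this (the id and filename values are placeholders, not taken from any real project):

<!-- application.xml: the namespace selects the targeted AIR runtime version -->
<application xmlns="http://ns.adobe.com/air/application/3.2">
    <id>com.example.myexperiment</id>        <!-- placeholder -->
    <filename>MyExperiment</filename>        <!-- placeholder -->
    <!-- the rest of the descriptor stays unchanged -->
</application>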

Once your application compiles, follow the FlashDevelop instructions by reading the file AIR_iOS_readme.txt that you can find inside the project structure. Once the fake certificate and provisioning file are copied, just set the right paths where asked to.

Now you are ready to build your first ipa with FlashDevelop; it is as easy as running the PackageApp.bat file.

Good! You should have your ipa now. The next step is just to upload it onto the iPhone. In order to do so I decided to use i-FunBox. I do not even know if it is possible to do it with iTunes, since I did not even try (you need to have it installed though), but uploading it was a piece of cake. If your iPhone has just been jailbroken, remember that you need to install Cydia before using i-FunBox.

A few seconds later I had a new application with an AIR icon that ran right away. My first Starling application was already on the screen, exploiting the new Stage3D feature as well. Enjoy 😉

By the way: buy your developer license if you want to upload the application to the App Store 😀