I was inspired to write this article by the news I read today – that (another) chief GUI architect is leaving Unity Technologies.
The previous architect was the developer of NGUI (an Asset Store bestseller).
Users are asking questions like: “What’s happening with Unity? Why such delays?”
Since I’ve been involved in GUI development for years, I believe I know GUI internals well enough to have a feeling for what’s going on.
So I’m hoping to explain (from my perspective) what could have happened during the creation of their GUI system, and why it has been such a lengthy (and self-destructive) process.
“Let’s scrap it!”
I’m wondering what the next Unite (2014) GUI presentation will look like.
Will the system being presented remain the current one?
Will the current system be scrapped in favor of starting from scratch?
Will a new chief architect be introduced?
Or maybe the acquisition of another GUI system?
Nobody knows for sure.
However, IMHO no ACS-type GUI could ever result in the kind of strong, elegant and robust system that GUI developers are used to on non-gaming platforms.
ACS: the Atlas-Collider-Script approach
In 3D game programming you clearly need to:
1. render your game objects
2. check for collisions between them
3. interact with them (touch, mouse, keyboard)
So – naturally – this is the shortest route that many people take for displaying their interactive 2D assets on screen.
The consequence of this approach is that a number of GUI frameworks for Unity are focused around exactly these issues.
A – Atlas
These types of systems usually focus on a texture atlas as the method of getting the minimal draw-call count out of the system.
Since draw calls are the greatest bottleneck of the current system (UnityGUI), minimizing them feels like a top priority – often the very criterion that categorizes a system as “usable” (at least for some platforms).
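To make the draw-call argument concrete, here is a minimal sketch (in Python, with invented names – this is a model of the idea, not any Unity API): a renderer must issue a new draw call every time the bound texture changes, so packing all GUI sprites into one atlas collapses the count to one.

```python
# Model of draw-call batching: sprites are (texture_name, quad) pairs.
# A new draw call is needed for each run of a distinct texture, so we
# sort by texture and count the groups.
from itertools import groupby

def count_draw_calls(sprites):
    """Return how many draw calls a simple batching renderer would issue."""
    ordered = sorted(sprites, key=lambda s: s[0])  # batch by texture
    return sum(1 for _tex, _run in groupby(ordered, key=lambda s: s[0]))

# 100 widgets, each with its own texture -> 100 draw calls
separate = [("tex_%d" % i, None) for i in range(100)]
# the same 100 widgets packed into one atlas -> a single draw call
atlased = [("gui_atlas", None) for _ in range(100)]

print(count_draw_calls(separate))  # 100
print(count_draw_calls(atlased))   # 1
```

(A real renderer also has to respect draw order for transparency, which is exactly why atlas packing – rather than reordering – is the usual trick.)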
C – Collider
Then, they are using colliders for interaction detection.
Although it looks very natural to use colliders in a game engine (they can handle all the “weird” angles and component shapes, even in 3D), it’s really overkill: not every GUI element needs to go through collision testing, and there are more optimized ways of handling this problem.
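One of those more optimized ways is a plain recursive point-in-rectangle test over the element tree – no physics engine involved. A minimal sketch (the `Element`/`hit_test` names are my own invention for illustration):

```python
# Hit-testing a GUI hierarchy without colliders: walk the tree and do a
# cheap point-in-rect check, testing front-most (last-drawn) children first.
class Element:
    def __init__(self, name, x, y, w, h, children=None):
        self.name = name
        self.rect = (x, y, w, h)
        self.children = children or []

def hit_test(element, px, py):
    """Return the deepest element under (px, py), or None if outside."""
    x, y, w, h = element.rect
    if not (x <= px < x + w and y <= py < y + h):
        return None
    for child in reversed(element.children):  # front-most first
        hit = hit_test(child, px, py)
        if hit is not None:
            return hit
    return element

root = Element("window", 0, 0, 200, 100, [
    Element("button", 10, 10, 50, 20),
    Element("label", 10, 40, 80, 20),
])
print(hit_test(root, 15, 15).name)   # "button"
print(hit_test(root, 500, 500))      # None
```

Because the test prunes whole subtrees as soon as the point falls outside a parent’s rectangle, it scales far better than registering every widget with a collision system.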
S – Script
The third component of this approach is a script – or rather: scripts.
Multiple scripts are attached to a game object (which already has a main component script attached), each of which is supposed to add a certain behaviour to the component itself.
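The pattern looks roughly like this – a sketch in Python of the attach-many-scripts idea (the class names here are mine, modelling Unity’s component style rather than reproducing it):

```python
# A game object that dispatches events to every attached behaviour script,
# each script contributing one piece of behaviour.
class GameObject:
    def __init__(self, name):
        self.name = name
        self.scripts = []

    def attach(self, script):
        self.scripts.append(script)
        return self  # allow chaining

    def on_click(self):
        # every attached script gets a chance to react
        return [script.on_click() for script in self.scripts]

class PlaySound:
    def on_click(self):
        return "play click.wav"

class Highlight:
    def on_click(self):
        return "tint sprite yellow"

button = GameObject("OkButton").attach(PlaySound()).attach(Highlight())
print(button.on_click())  # ['play click.wav', 'tint sprite yellow']
```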
However, although this seems to be the natural coding approach for games, it isn’t necessarily the best approach for building a GUI framework. In fact, I find it a pretty naive way of building a GUI system.
Simply because the interaction between GUI components, their children and their siblings is an order of magnitude more complex than what the ACS approach can handle.
Don’t get me wrong: ACS is great for low-level rendering, but there is a much “bigger picture” behind a GUI framework that must be handled by layers on top of the rendering layer.
The rendering layer is perhaps less than 30% of a complete GUI system (for comparison, take a look at the size of the Flex framework compared to the size of Flash Player).
Glamour and shine
Shaders, glows, bangs and explosions… this eye candy surely looks nice in in-game GUIs.
But the question is: do they add real value to the core of the GUI system?
I find the glamour very distracting, and I’ve learned to look behind it to see the real value (that is, the API).
The BIG picture
To create a GUI system, you need to hire a person who has seen things “from above” and is able to see “the big picture”.
There is also a number of architects who have actually built such frameworks, so why not hire them?
GUI systems are hard
GUI systems are hard and often underestimated.
One might think that they are pretty trivial compared to a full-blown 3D game engine – but this isn’t the case.
The reason is that they need to serve multiple needs and use-cases.
Let me quote forum user ShadowK:
Abstraction / cross compatibility … can be far more complicated than the engine itself.
The knowledge of GUI internals lies pretty much in:
1. Working with various GUI systems for years
2. Great passion for learning how they work “under the hood”
3. Tremendous motivation for learning from the source code of other (open-source) GUI frameworks
Since not everybody has these prerequisites, it remains a kind of hidden knowledge, not accessible to everyone.
A GUI architect simply cannot ignore the 30+ years of GUI system history.
This is because learning about problems from the past really helps you – at the very least – know what kinds of problems will appear.
It’s important to get a feeling for possible problems before building your own system.
The knowledge of a good GUI architect is often not testable.
Testing a job candidate requires a GUI guru to check his or her knowledge – but what if you’re supposed to become the (one and only) guru in the company?
Recruiters and technical leaders simply don’t have the proper questions (and answers) by which they could conclude who fits best.
So the guy whose asset sells the most gets the job.
However, it turns out the best seller isn’t necessarily the best solution.
One solution for all
But is there a solution so abstract and generic that it could accomplish each and every type of task?
Clearly – no.
There is a number of GUI genres, totally different in nature (programmable, designer-driven, raster-based, vector-based, HTML-rendering, skinnable, styleable, etc.).
Having all of them inside a single package would be… well… one hell of a package.
Not to mention the footprint (in MB) it would take.
So the wisdom here is programming in layers.
My guess is that all the attempts at a new GUI system failed because the approach taken was too naive and wasn’t seen as part of the “bigger picture”.
I believe that making clever decisions on what goes inside each layer is the key to success.
I think the main task for UT would be to create a low-level renderer – abstract enough that other systems could build upon it, and covering the majority of needs of each particular GUI system.
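The layering idea can be sketched as an abstract renderer interface that higher-level widget systems depend on, without knowing which backend actually draws. Again, a minimal Python model with invented names, not a proposal for Unity’s actual API:

```python
# Programming in layers: the low-level layer only knows how to draw
# textured quads; the widget layer builds on that interface and never
# touches the backend directly.
from abc import ABC, abstractmethod

class Renderer(ABC):
    """Low-level layer: the only thing it knows is drawing quads."""
    @abstractmethod
    def draw_quad(self, texture, x, y, w, h): ...

class RecordingRenderer(Renderer):
    """A stand-in backend that records what would be drawn."""
    def __init__(self):
        self.calls = []
    def draw_quad(self, texture, x, y, w, h):
        self.calls.append((texture, x, y, w, h))

class Button:
    """Higher-level layer: a widget that is backend-agnostic."""
    def __init__(self, x, y, w, h):
        self.x, self.y, self.w, self.h = x, y, w, h
    def render(self, renderer):
        renderer.draw_quad("button_skin", self.x, self.y, self.w, self.h)

backend = RecordingRenderer()
Button(10, 10, 50, 20).render(backend)
print(backend.calls)  # [('button_skin', 10, 10, 50, 20)]
```

The payoff is that a smarter backend (batching, atlasing, GPU-specific tricks) can be swapped in underneath without the widget layer changing at all – which is exactly the kind of separation the ACS approach blurs.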
I personally found UnityGUI (the immediate-mode GUI) great when used as a renderer (unfortunately, its downside is producing a large number of draw calls).
The 2D sprite system is shaping up to be a really great rendering system, and I hope it will someday become (at least) as flexible as UnityGUI currently is.
I’ll keep my fingers crossed that we won’t see another 2 years of the GUI creation-destruction cycle (it seems that 2 years is how long a single “GUI experiment cycle” lasts).