Bringing bits together

I’ve now got what’s shaping up to be a pretty good UI system:

  • The UI styling is defined via data in space-format (JSONish)
  • The layout of the UI is then defined by a second space-format file.
  • Then the “actions”, in the case above the single button, are defined in a space-script file.

    Basically, the UI Layout says the button will execute the function named “SeedButton”. That function creates a new button that will execute the function named “OutputNum”.

This all works well, surprisingly. The only gap is that every execution of the script, or of a function in the script, is completely independent of any other execution, meaning I can’t store information between button presses, at least not in the script.

To get around this, I’ll likely implement a key-store as a regular Engine Function so that scripts can store information for later use, even when it’s only ever needed by the script.
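A minimal sketch of that key-store idea, assuming a simple string-to-string map exposed to scripts through engine functions. The class and method names here are mine for illustration, not the engine’s actual API:

```cpp
#include <string>
#include <unordered_map>

// Hypothetical key-store the scripting engine could expose as a pair of
// Engine Functions (e.g. Store.Set / Store.Get), so state survives
// between otherwise-independent script executions.
class ScriptStore {
public:
    void Set(const std::string& key, const std::string& value) {
        values_[key] = value;
    }

    // Returns the stored value, or a fallback if the key was never set.
    std::string Get(const std::string& key,
                    const std::string& fallback = "") const {
        auto it = values_.find(key);
        return it != values_.end() ? it->second : fallback;
    }

private:
    std::unordered_map<std::string, std::string> values_;
};
```

A button-press handler could then read a counter, increment it, and write it back, without the script itself needing persistent state.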

I didn’t think of this before because I only ever imagined scripting things like enemy behaviour, which would just access and set what it needed to.

Coupled the scripting engine with the UI engine

I’ve now managed to get the scripting engine working with the UI engine. This means that the UI can now be completely defined outside of the game code, in run-time language files. This should make UI changes quick when the time comes, and even opens the door to modding.

Back onto the UI – Flowcharts

Now that my scripting language appears to be complete, it’s back onto the UI programming. Most of it is there, one thing remaining is the “Flowchart”.

It’s not exactly a flowchart widget, more like a “nodes” widget.


The idea is that eventually you’d be able to add new nodes, and connect and disconnect them as you see fit. Each node also contains a function; the purpose is that you can send data from one end to the other, much like my uncompleted noise-machine.

Eventually, you’d be able to use it for things like displaying a tech-tree; except the Player won’t be able to modify it.

Anyway, it’s not clear from a screenshot, but the above runs at about 25 fps on my machine. Completely unacceptable. To be fair, there are about 198 individual rectangle draw calls there… it doesn’t seem like that should be slowing things down, but it is.

I need to look into why it’s slowing things down so much; and if it’s just because I’m hitting a bandwidth limit, then it’s time to look into batching the draw calls (which I really should have done from day one). Hopefully the entire UI can be knocked down to a single draw call. Alternatively, I could render the whole UI to a texture, or individual widgets to their own textures, to save some calls.
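The batching idea can be sketched on the CPU side: instead of one draw call per rectangle, accumulate every rectangle’s two triangles into one vertex array and submit the whole thing in a single draw. This is an illustration under my own naming, not the engine’s code; the actual VBO upload and `glDrawArrays` call are indicated in comments only.

```cpp
#include <cstddef>
#include <vector>

struct Vertex { float x, y; };

// Accumulates many rectangles into one vertex buffer so they can all be
// drawn with a single glDrawArrays(GL_TRIANGLES, ...) call.
class RectBatch {
public:
    void Add(float x, float y, float w, float h) {
        // Two triangles, six vertices per rectangle.
        verts_.insert(verts_.end(), {
            {x, y}, {x + w, y}, {x + w, y + h},
            {x, y}, {x + w, y + h}, {x, y + h}
        });
    }

    // In a real renderer this is where verts_ would be uploaded to a VBO
    // and drawn once, then cleared for the next frame.
    std::size_t VertexCount() const { return verts_.size(); }
    void Clear() { verts_.clear(); }

private:
    std::vector<Vertex> verts_;
};
```

With 198 rectangles this turns 198 draw calls into one, at the cost of rebuilding (or partially updating) the buffer when widgets move.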


I’ve now managed to implement a SpaceEngine Format file to use as a UI layout file. In it, you use the JSON-like data format to create the individual top-level UI elements for the game.

Next is to give the UI layout files some ability to implement actions. At the moment you can design everything, but you can’t actually say what “button x” will do.

I’m thinking of separating out a “scripting” language in which you can create a function. Then, in the layout file, you reference that function.

This would give the UI three separate files: the Class Definitions, the Layout, and the Scripting, which I think is sort of akin to CSS, HTML, and JavaScript.

So scripting… I’ve thought about implementing Lua or something, but like everything else, I’d rather roll my own to learn how it’s done.

My initial thoughts are to use the regular lexer-and-parser approach, as per the Dragon Book. I don’t think I need it compiled, just interpreted. Something like Java.

How to actually make it all function together is something of a mystery to me. I imagine the scripting processor will have set engine functions mapped to identifiers that can be used in a script. So the script could call Game.CreatePlanet() or something like that; the interpreter will see this and invoke my engine’s random planet-spawning function, or whatever.
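That identifier-to-engine-function mapping could be as simple as a name-keyed table of callbacks. `Game.CreatePlanet` is the example name from above; the registry class itself and its return convention are hypothetical:

```cpp
#include <functional>
#include <string>
#include <unordered_map>

// Maps script-visible identifiers to engine callbacks. When the
// interpreter encounters a known identifier, it invokes the callback.
class EngineBindings {
public:
    using EngineFn = std::function<std::string()>;

    void Register(const std::string& name, EngineFn fn) {
        bindings_[name] = std::move(fn);
    }

    // Called by the interpreter; a real version would also marshal
    // arguments and return values between script and engine types.
    std::string Call(const std::string& name) {
        auto it = bindings_.find(name);
        return it != bindings_.end() ? it->second() : "<unknown function>";
    }

private:
    std::unordered_map<std::string, EngineFn> bindings_;
};
```

The engine registers `Game.CreatePlanet` at startup, and the script never needs to know which C++ function actually runs, which is exactly what leaves room for swapping or modding the implementation later.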

But at what level do I abstract these out? Maybe I should abstract out planet creation itself, so that the method of creation can be modded, expanded, and so on?

I imagine that in future I’d want random events and AI to be scripted as well. So it seems like the lower level would allow for more flexibility.

But it also seems like it’s going to mean I need to do a lot of coding to make a full-fledged scripting language.

UI needs reworking

As I was continuing to work on tools for the engine, I wondered why I’m not using an “in-engine” tool. Other engines often have things like the level editor in engine so as to avoid duplicate code-bases.

I looked at my work on Noise Factory and realised that I had built a wrapper around my noise library and then plugged that into a GUI API. Why do that when I could’ve just used the UI I have made in-engine?

Quickly I realised it’s because my UI wasn’t easy to use for putting together a program. The GUI framework tricks I learnt from wxWidgets really showed me the flaws in my own design. So, I’m now re-working the UI library to be friendlier to use and easier to adjust.

This means that I’ll most likely spend another few weeks working on the UI library. I guess if it’s at a point where I can use it quickly and easily, it will be that much simpler to finally implement it in a game.

Progress on Fleet Battles

Fleet Battles is what I’m naming the miniature game that I’m making to realise the fleet warfare aspect of the game. I’m also doing it so that I actually complete something and hopefully gain some learning from it to then proceed with the rest of The Last Boundary.

I’m sort of thinking that I may end up with several games, but that’s far into the future.

Anyway, I decided that the UI I had created looked a little sad; so I’ve been working in GIMP to make a new one. The result required some reworking of my UI code to incorporate the new bits and bobs but I think it looks pretty good so far.

[Image: Fleet Battles title screen — “New UI and Title Screen”]

The title could probably use some work; but I’m far from a designer. So it’ll probably stay like that.

More UI work completed

I’ve done a bit of refactoring of my UI code and brought it down by a few hundred lines of code.

I’ve also started noting the performance. Right now, a window with a table and a bit of text and another window with just text gets me around 500-550 fps. This is from a base of about 2500 (if nothing is being drawn to the screen).

I have no idea if this is acceptable or not. In contrast, my scene with a sun and an earth runs at about 200-250 fps. The thing is, I’m doing things in the most inefficient way.

So I put some work into speeding up the code. For one, everything in the UI already uses a single shader. Next is to get everything to use a single VBO; it’s not doing that now because my text quads are drawn with position info baked into the vertices, whereas everything else uses a translation-and-scale matrix to define position.

If they both use the translation matrix, I can remove any need to bind different buffers between calls. This should make things faster.
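The unit-quad idea can be shown with the matrix alone: every widget shares one VBO holding a unit rectangle, and its on-screen position and size come entirely from a per-widget translate-and-scale matrix. A minimal sketch, column-major as OpenGL expects (the function name is mine):

```cpp
#include <array>

// Builds a 4x4 column-major matrix that scales a unit quad to (w, h)
// and translates it to (x, y). Uploaded as the per-widget transform, it
// lets every widget share the same unit-quad VBO.
std::array<float, 16> TranslateScale(float x, float y, float w, float h) {
    return {
        w, 0, 0, 0,   // column 0: scale x
        0, h, 0, 0,   // column 1: scale y
        0, 0, 1, 0,   // column 2
        x, y, 0, 1    // column 3: translation (last column, column-major)
    };
}
```

Text quads would then differ from other widgets only in their texture coordinates, not in how their vertices are sourced, so no buffer rebinding is needed between calls.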

Also, everything at the moment binds its own texture. However, everything (except text) uses the same texture, so I may be able to skip the glBindTexture calls a lot of the time, probably via a texture manager or something.
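The texture-manager idea mostly amounts to remembering the last bound texture and skipping redundant binds. A sketch, with the actual GL call left as a comment so the snippet stands alone (the class name is mine; in GL, texture id 0 conventionally means “no texture”):

```cpp
// Caches the currently bound texture id so redundant glBindTexture
// calls can be skipped when consecutive widgets share a texture.
using GLuint = unsigned int;  // stand-in typedef; normally from GL headers

class TextureBinder {
public:
    // Returns true if a real bind was issued, false if it was skipped.
    bool Bind(GLuint texture) {
        if (texture == bound_) return false;  // already bound: skip
        // glBindTexture(GL_TEXTURE_2D, texture);  // real call goes here
        bound_ = texture;
        return true;
    }

private:
    GLuint bound_ = 0;  // 0 = nothing bound yet
};
```

If the UI is drawn with all same-texture widgets grouped together, this reduces binds to one per texture per frame.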

If it’s still not fast enough, I’ve been reading about instanced drawing, and also about using textures to hold position info (R,G is x,y and B,A is w,h). I think I can combine these so that gl_InstanceID in the vertex shader is my lookup into the texture of position info. I’d probably also need a second texture to hold UV coords, seeing as these don’t change much either (textures rather than a uniform lookup array, only because an array would need to be rebuilt and passed to the shader each frame anyway).
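The CPU side of that scheme is just packing each rectangle into one RGBA32F texel; the vertex shader would then use gl_InstanceID as the texel index to fetch its instance’s position and size. A sketch under those assumptions (struct and function names are mine):

```cpp
#include <vector>

struct Rect { float x, y, w, h; };

// Packs one rectangle per RGBA texel: R,G = x,y and B,A = w,h.
// The result would be uploaded as a rects.size() x 1 GL_RGBA32F texture;
// in the vertex shader, gl_InstanceID selects the texel to fetch.
std::vector<float> PackPositionTexture(const std::vector<Rect>& rects) {
    std::vector<float> texels;
    texels.reserve(rects.size() * 4);
    for (const Rect& r : rects) {
        texels.push_back(r.x);  // R
        texels.push_back(r.y);  // G
        texels.push_back(r.w);  // B
        texels.push_back(r.h);  // A
    }
    return texels;
}
```

A second texture packed the same way could hold each instance’s UV rectangle, since those also change rarely.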

Here’s a picture:

[Image: SpaceEngineUI screenshot, 2015-01-02]


I’ve been without the internet for a little while so I spent a good part of that time refactoring some of the UI code.

I quickly identified a distinction between widgets that hold a single other widget and widgets that contain many other widgets: Containers vs Layouts. A widget that holds one other widget is a Container; a widget that holds many others only does so for the sole purpose of positioning them, therefore it’s a Layout.

This gives me a Container class that the Window now inherits from. Layouts don’t seem to have a clear shared hierarchy, as each positions its children differently. The two I have now, a Tetris-like stack that I call Box (I might rename it to TetrisLayout) and TableLayout, don’t even store their widgets in the same manner.
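The Container/Layout split can be sketched as a small class hierarchy. This is an illustration only: the post notes that the real layouts don’t even store their children the same way, so the shared vector in the base Layout here is a simplification, and the method names are guesses.

```cpp
#include <cstddef>
#include <memory>
#include <vector>

class Widget {
public:
    virtual ~Widget() = default;
};

// Holds exactly one child widget (e.g. Window inherits from this).
class Container : public Widget {
public:
    void SetChild(std::unique_ptr<Widget> child) { child_ = std::move(child); }
    Widget* Child() const { return child_.get(); }

private:
    std::unique_ptr<Widget> child_;
};

// Holds many children solely to position them (e.g. Box, TableLayout).
class Layout : public Widget {
public:
    void Add(std::unique_ptr<Widget> w) { children_.push_back(std::move(w)); }
    std::size_t Count() const { return children_.size(); }

private:
    std::vector<std::unique_ptr<Widget>> children_;
};
```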

The UI code is really starting to look slick. It’s actually got more code than the current state of the graphics engine and more than 4x the code of any of the celestial mechanics and AI stuff I’ve already done.

Scrolling Box Complete

I’m pretty sure I’m now 99% complete with the scrolling boxes. There were a couple of bugs; one which was interesting.

Because the box essentially draws its contents onto a render texture, the render texture created is the size of the screen. Then, as the box is scrolled, the region of this texture that’s drawn to the box’s quad is updated.

What I found was that I was drawing the contents at their normal window position on this texture. So a box close to the edge of the screen would result in the contents being partially drawn offscreen. Which is fine, until you scroll; then the “region” of the texture you’re drawing is actually not on the texture. Graphical glitch.

The easy fix was to just draw to 0,0 on the texture and accommodate this in the region calculation. It also means that I can use a much smaller texture, probably the size of the widget contents. However, there would be a problem if that widget changed its size: I would then need to regenerate or resize the texture, and I’m concerned about the overhead. For now the screen dimensions cover all possible cases, so I’m using them until there are performance problems.
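With contents drawn at 0,0, the region calculation reduces to the scroll offset plus the box size, clamped so the sampled rectangle never leaves the texture. A sketch under those assumptions (names are mine):

```cpp
struct Region { float x, y, w, h; };

// Computes the rectangle of the render texture to draw onto the box's
// quad. Contents were drawn starting at (0,0), so the region is just
// the scroll offset, clamped to stay inside the content area.
Region VisibleRegion(float scrollX, float scrollY,
                     float boxW, float boxH,
                     float contentW, float contentH) {
    Region r{scrollX, scrollY, boxW, boxH};
    if (r.x + r.w > contentW) r.x = contentW - r.w;  // clamp right edge
    if (r.y + r.h > contentH) r.y = contentH - r.h;  // clamp bottom edge
    if (r.x < 0) r.x = 0;
    if (r.y < 0) r.y = 0;
    return r;
}
```

Scrolling past the end of the content now pins the region to the texture edge instead of sampling garbage.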

I’ve also enabled mouse wheel and touch-pad scrolling via SDL2. So that’s pretty handy; it makes the UI much more usable. I’m basically trying to recreate everything that people are used to.