Blender GUI design and implementation

I have some questions about Blender's GUI. I have heard that it doesn't use a common API or framework such as Qt. I have also heard something about GHOST, but I don't really know what it is or what it does on the GUI-rendering side.
I am curious to find out how the GUI was created, and whether the current implementation uses retained mode or immediate mode. I have noticed that Blender's GUI is surprisingly fast, even on weak machines. I would like to know what makes Blender's GUI what it is today.

Thanks in advance!


GHOST does very little for the UI. It deals with the platform-specific bits: creating windows, handling mouse/keyboard/tablet input, locating user preferences, and creating an OpenGL context.

Once that context exists the platform independent code in source/blender/editors/interface does the actual drawing.
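To make the division of labor concrete, here is a minimal sketch in Python. All names here (`PlatformLayer`, `draw_interface`, the event/window dictionaries) are made up for illustration; they are not Blender's actual API.

```python
# Hypothetical sketch: a platform layer (the GHOST role) hands a window
# and events to platform-independent drawing code (the interface role).

class PlatformLayer:
    """Stands in for GHOST: the OS-specific window/input/context layer."""

    def create_window(self, title, width, height):
        # A real implementation would call Win32/Cocoa/X11 here and
        # create an OpenGL context for the window.
        return {"title": title, "size": (width, height), "gl_context": object()}

    def poll_events(self):
        # Would translate native mouse/keyboard/tablet events into
        # platform-independent event records.
        return [{"type": "MOUSEMOVE", "x": 10, "y": 20}]


def draw_interface(window, events):
    """Stands in for source/blender/editors/interface: platform-independent
    code that handles events and would issue the actual draw calls."""
    handled = 0
    for _event in events:
        handled += 1  # hit-test buttons, update widget state, ...
    # ...then issue OpenGL draw calls against window["gl_context"].
    return handled


ghost = PlatformLayer()
win = ghost.create_window("Blender", 1920, 1080)
handled = draw_interface(win, ghost.poll_events())
```

The point is only the boundary: everything above `draw_interface` is platform-specific, everything inside it is not.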

So Blender uses its own GUI toolkit? Do you know if it's immediate or retained mode? And if I understood you correctly, OpenGL is used for drawing?

Yes, it draws its own UI; no third-party toolkit is used. As for retained vs. immediate, that's not really my area, but AFAIK it's some form of immediate drawing. You'd have to check the code if you want to know more.

The Blender GUI is established through an interplay of multiple parts of the code. At a high level:

  • GHOST – OS dependent code (windows, OpenGL context, device input, etc.)
  • Window-Manager – OS independent management of windows, events, keymaps, data-change notifiers, etc.
  • Interface (source/blender/editors/interface/) – button drawing, button event handling, layouts, UI tools (e.g. eyedroppers), menus/popups, …
  • Editors – screen-layouts (areas & regions – think of these as the sub-windows), editors (e.g. 3D View, Properties, Node Editor, …), gizmo libraries, tools for individual editors, etc. The Python scripts for layout definitions are executed as part of this too.

Of course this is a very simplified look. There are further things involved, like file read/write, undo/redo, translations, context, preferences, add-ons…

As for immediate vs. retained mode (I think this relates to the GUI code, not the graphics drawing): one could argue it's a mixture, but mostly retained. Buttons are defined, the layout engine runs, and later on buttons can capture events, respond with state changes, and ultimately tell our data system to update data based on that. It's roughly an MVC design.
However, buttons are still created on the fly. On each redraw, all (non-active) buttons are destroyed and the layout is reconstructed almost from scratch. That way, UI definitions (e.g. the Python scripts) can conditionally place items:

if condition:
  # Optionally create a button for the data.propname property.
  layout.prop(data, "propname")

This is a typical immediate mode characteristic. For retained mode you’d explicitly hide an existing button.
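The rebuild-every-redraw pattern could be sketched like this. The names (`build_layout`, `redraw`, the dictionary-based data) are hypothetical, not Blender's real API:

```python
# Hedged sketch: the button list is rebuilt from scratch on every redraw,
# so conditional placement is just ordinary control flow.

def build_layout(data, layout):
    # Runs on every redraw, like the Python layout scripts described above.
    if data["show_advanced"]:
        layout.append(("prop_button", "propname"))
    layout.append(("prop_button", "name"))

def redraw(data):
    layout = []                    # previous (non-active) buttons are gone
    build_layout(data, layout)     # reconstruct the layout from scratch
    return layout                  # handed to the layout engine, then drawn

layout_off = redraw({"show_advanced": False})  # one button
layout_on = redraw({"show_advanced": True})    # two buttons
```

Toggling the condition changes which buttons exist at all, rather than hiding a persistent widget as a retained-mode toolkit would.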

One thing the GUI code is quite "smart" about is reducing redraws: if data changes, a notifier categorizing that change can be sent. Different parts of the UI can listen to these notifiers and tag themselves for a redraw if they care about that category of data. That way, a change in the Dopesheet usually does not cause the Image Editor regions to redraw.
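The notifier idea boils down to a publish/subscribe pattern. A minimal sketch, with hypothetical categories and API (not Blender's actual implementation):

```python
# Hedged sketch of category-based redraw tagging: regions subscribe to
# data categories, and a change only tags the regions that subscribed.

from collections import defaultdict

listeners = defaultdict(list)   # category -> regions that care about it

def listen(category, region):
    listeners[category].append(region)

def notify(category):
    """Tag only the regions subscribed to this data category for redraw."""
    tagged = []
    for region in listeners[category]:
        region["needs_redraw"] = True
        tagged.append(region["name"])
    return tagged

dopesheet = {"name": "Dopesheet", "needs_redraw": False}
image_editor = {"name": "Image Editor", "needs_redraw": False}

listen("ANIMATION", dopesheet)
listen("IMAGE", image_editor)

# An animation change tags the Dopesheet but leaves the Image Editor alone.
tagged = notify("ANIMATION")
```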

There are some old 2.5 docs on these designs here:

All of this sounds like a carefully designed architecture, and in some ways it is. But much of it is historical. The UI code contains some of the messiest code I know of in Blender: a mixture of very old code (literally from the first days of Blender), newer 2.5 designs, and years and years of hacks. Nevertheless, I think the 2.5 design in particular brought useful concepts, although the way they were implemented is problematic in many cases.


So is GHOST something similar to GLAD, except it probably doesn't do as much? Would SDL do something like what GHOST does?