Vulkan Window System Integration Talk at SIGGRAPH 2015

Alon Or-bach of Samsung gave this presentation on Vulkan Window System Integration (WSI) at SIGGRAPH 2015. WSI is the mechanism by which images rendered with Vulkan appear on a screen, and the talk gives a fair bit of information about how the system works. I am posting it a bit late, but it is the latest WSI information available, and I have added my notes on it below.

Vulkan WSI has a notion of a platform, whose purpose is to abstract your app from the particular operating system or windowing system it is running on. Vulkan physical devices advertise properties of their queues, and with the WSI extension one of those properties is the ability to present images to the presentation engine. The presentation engine is an abstraction for the system compositor, or whatever process gets rendered pixels onto a screen.

A presentable image is a standard VkImage from the point of view of a Vulkan app, but it is created by the platform (probably by the presentation engine). Being presentable means that it can be fed into the presentation engine via WSI to eventually be displayed, as well as being a target for rendering within a Vulkan app. A set of presentable images is formed into a swapchain for reuse; at any given moment each image in the chain is under the control of either the app or the presentation engine, with explicit synchronization via a semaphore to control the handover of images between app and WSI.
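To make the queue property concrete, here is a minimal C sketch of finding a queue family that can present to a given surface. The function names are from the VK_KHR_surface extension as it later shipped; the talk predates the final specification, so treat the exact identifiers as assumptions.

```c
#include <vulkan/vulkan.h>

/* Return the index of a queue family that can present to the surface,
 * or UINT32_MAX if none can. */
uint32_t find_present_queue_family(VkPhysicalDevice phys, VkSurfaceKHR surface)
{
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(phys, &count, NULL);

    for (uint32_t i = 0; i < count; ++i) {
        VkBool32 supported = VK_FALSE;
        vkGetPhysicalDeviceSurfaceSupportKHR(phys, i, surface, &supported);
        if (supported)
            return i; /* queues of this family can hand images to the presentation engine */
    }
    return UINT32_MAX;
}
```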

Allocation of presentable VkImages is done up front, so the system knows every image that can ever be presented to it. The application asks for a minimum number of images and the presentation engine returns at least that number. There is also a mechanism to tell the application that it needs to recreate a swapchain, which comes into effect when the window is resized, for example.
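As an illustration of the up-front allocation, here is a sketch of swapchain creation in C. It uses the VK_KHR_swapchain names from the released extension; the particular format, present mode, and the `+ 1` spare image are assumptions for the example, not anything the talk specifies.

```c
#include <vulkan/vulkan.h>

VkSwapchainKHR create_swapchain(VkPhysicalDevice phys, VkDevice device, VkSurfaceKHR surface)
{
    VkSurfaceCapabilitiesKHR caps;
    vkGetPhysicalDeviceSurfaceCapabilitiesKHR(phys, surface, &caps);

    VkSwapchainCreateInfoKHR info = {
        .sType            = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR,
        .surface          = surface,
        /* Ask for one image more than the minimum; the presentation engine
         * may create more. Clamp against caps.maxImageCount in real code. */
        .minImageCount    = caps.minImageCount + 1,
        .imageFormat      = VK_FORMAT_B8G8R8A8_UNORM,       /* assumed; query supported formats */
        .imageColorSpace  = VK_COLOR_SPACE_SRGB_NONLINEAR_KHR,
        .imageExtent      = caps.currentExtent,
        .imageArrayLayers = 1,
        .imageUsage       = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT,
        .imageSharingMode = VK_SHARING_MODE_EXCLUSIVE,
        .preTransform     = caps.currentTransform,
        .compositeAlpha   = VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR,
        .presentMode      = VK_PRESENT_MODE_FIFO_KHR,       /* vsync-style queueing */
        .clipped          = VK_TRUE,
    };

    VkSwapchainKHR swapchain = VK_NULL_HANDLE;
    vkCreateSwapchainKHR(device, &info, NULL, &swapchain);
    return swapchain;
}
```

When the window is resized, a later acquire or present can return VK_ERROR_OUT_OF_DATE_KHR, which is the signal to recreate the swapchain.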

Presentable images are under the control of either the presentation engine or the application, and it is an error for the application to attempt to draw to an image that is being displayed. Acquiring the next image from the presentation engine for the application to render into, and presenting a completed image from the application to the presentation engine, are two separate operations. This separation allows the acquire to occur at the point in the app's render loop after it has done side work such as dynamic command buffer generation, just before it submits its first drawing commands, and allows the present to sit at the end of the loop. The CPU can therefore do other useful work after a present before it has to enter a potentially blocking acquire during the next iteration of its display loop.
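Here is a C sketch of one iteration of such a render loop, with semaphores making the handover explicit. Again, the function names are the released WSI ones, and the helper and its parameters are hypothetical scaffolding for the example.

```c
#include <vulkan/vulkan.h>

void draw_frame(VkDevice device, VkQueue queue, VkSwapchainKHR swapchain,
                VkSemaphore image_ready, VkSemaphore render_done,
                VkCommandBuffer *cmd_bufs /* one pre-built buffer per swapchain image */)
{
    /* CPU-side work (dynamic command buffer generation, etc.) happens here,
     * before we risk blocking in the acquire. */

    uint32_t index;
    vkAcquireNextImageKHR(device, swapchain, UINT64_MAX,
                          image_ready, VK_NULL_HANDLE, &index);

    VkPipelineStageFlags wait_stage = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
    VkSubmitInfo submit = {
        .sType                = VK_STRUCTURE_TYPE_SUBMIT_INFO,
        .waitSemaphoreCount   = 1,
        .pWaitSemaphores      = &image_ready,  /* no writes until WSI hands the image over */
        .pWaitDstStageMask    = &wait_stage,
        .commandBufferCount   = 1,
        .pCommandBuffers      = &cmd_bufs[index],
        .signalSemaphoreCount = 1,
        .pSignalSemaphores    = &render_done,
    };
    vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);

    VkPresentInfoKHR present = {
        .sType              = VK_STRUCTURE_TYPE_PRESENT_INFO_KHR,
        .waitSemaphoreCount = 1,
        .pWaitSemaphores    = &render_done,    /* present only once rendering completes */
        .swapchainCount     = 1,
        .pSwapchains        = &swapchain,
        .pImageIndices      = &index,
    };
    vkQueuePresentKHR(queue, &present);

    /* The CPU is now free to do other useful work until the acquire
     * in the next iteration. */
}
```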

Update 2015-10-29:

Tobias Hector of Imagination has spoken about this a little in a newer video.
