How to render to HDR displays on Windows 10


If you missed it, yesterday I wrote an artisanal blogue poaste about a tool I am working on called the HDR injector. Today, I will write about how to present an HDR image in Windows. Some of this will be a summary of the talk by Evan Hart at GDC 2018.

There are three steps, which I don't think are explained anywhere, except maybe within these ultra-rare DX12 samples that I had to catch midair from a hot tweet.

  1. Set your swapchain effect to DXGI_SWAP_EFFECT_FLIP_DISCARD. Emphasis on FLIP. The GDC talk above seems to think this isn't 100% required, but I couldn't get any HDR working without it.
  2. Set your swapchain buffer format to an HDR-compatible backbuffer format. The ones that work for me are DXGI_FORMAT_R16G16B16A16_FLOAT and DXGI_FORMAT_R10G10B10A2_UNORM, though I expect other formats might work. I haven't tested all of them.
  3. Depending on which format you picked in (2), you need to select the correct color space. If you selected DXGI_FORMAT_R10G10B10A2_UNORM, then you want to pick DXGI_COLOR_SPACE_RGB_FULL_G2084_NONE_P2020. If you picked float16, then you want to use DXGI_COLOR_SPACE_RGB_FULL_G10_NONE_P709.
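Putting the three steps together, here's a rough C++/DXGI sketch (this is my own reconstruction, not the code from those samples; `factory`, `queue`, and `hwnd` are stand-ins for objects your app already has, and error handling is omitted):

```cpp
#include <dxgi1_4.h>  // IDXGISwapChain3 lives here

DXGI_SWAP_CHAIN_DESC1 desc = {};
desc.Width = 1920;
desc.Height = 1080;
desc.Format = DXGI_FORMAT_R10G10B10A2_UNORM;      // step 2: HDR-capable format
desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD;  // step 1: FLIP model
desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
desc.BufferCount = 2;
desc.SampleDesc.Count = 1;

IDXGISwapChain1* swapChain1 = nullptr;
factory->CreateSwapChainForHwnd(queue, hwnd, &desc, nullptr, nullptr, &swapChain1);

// Step 3: the color space setter lives on IDXGISwapChain3.
IDXGISwapChain3* swapChain3 = nullptr;
swapChain1->QueryInterface(IID_PPV_ARGS(&swapChain3));

// Optional sanity check before committing to the color space.
UINT support = 0;
swapChain3->CheckColorSpaceSupport(DXGI_COLOR_SPACE_RGB_FULL_G2084_NONE_P2020, &support);
if (support & DXGI_SWAP_CHAIN_COLOR_SPACE_SUPPORT_FLAG_PRESENT)
    swapChain3->SetColorSpace1(DXGI_COLOR_SPACE_RGB_FULL_G2084_NONE_P2020);
```

On D3D11 you'd pass the device instead of the command queue to CreateSwapChainForHwnd.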
The color space selection and backbuffer formats have some implications that I'll go over now.


DXGI_FORMAT_R10G10B10A2_UNORM

This is the format that expects you to do your own color space conversion and PQ encoding. For those of you not familiar with the minutiae of HDR, the standard way to present HDR data is to use the Rec2020 color space and encode the output with the Perceptual Quantizer, or PQ. A really inaccurate way to say it would be that PQ is sRGB for HDR (please be kind, pedantic nerds). Here's the code for doing PQ encoding. Thanks Microsoft!
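In case that link ever rots, here's the same PQ encode sketched in C++. The constants come straight from the SMPTE ST 2084 spec; the input is a linear value normalized so 1.0 means 10,000 nits, applied per channel after converting to Rec2020 primaries:

```cpp
#include <cassert>
#include <cmath>

// SMPTE ST 2084 (PQ) encode: maps linear light (1.0 == 10000 nits)
// to a [0,1] signal value.
float PQEncode(float Y) {
    const float m1 = 2610.0f / 16384.0f;          // 0.1593017578125
    const float m2 = 2523.0f / 4096.0f * 128.0f;  // 78.84375
    const float c1 = 3424.0f / 4096.0f;           // 0.8359375
    const float c2 = 2413.0f / 4096.0f * 32.0f;   // 18.8515625
    const float c3 = 2392.0f / 4096.0f * 32.0f;   // 18.6875
    float Ym = std::pow(Y, m1);
    return std::pow((c1 + c2 * Ym) / (1.0f + c3 * Ym), m2);
}
```

Sanity check: PQEncode(1.0f) comes out exactly 1.0, and 100 nits (Y = 0.01) lands around 0.508, which matches the usual PQ curve tables.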

REC2020 colorspace is too powerful.

An aside: This weird horseshoe looking thing is all the visible colors. Rec2020 color space is the triangle inside this weird horseshoe, and it's real big. No monitors or TVs that normal people can buy today can display the entire color space. The primaries (corners of the triangle) are pure frequencies of light - only attainable via LASERS. That's right; the colors are so rare and powerful that you need lasers to make them. If you are interested in all this color theory junk, please read this amazing and super long article on Pointer's Gamut. That article is how I got started with a lot of this garbage.

Ok, back on track.
When you select DXGI_FORMAT_R10G10B10A2_UNORM as your backbuffer format, you should select DXGI_COLOR_SPACE_RGB_FULL_G2084_NONE_P2020 as your color space. Either render using Rec2020 primaries, or stretch your content into that color space, then apply the PQ encoding code I linked above, and it should work! This is what the HDR injector currently does, though there may be a performance cost, because Windows uses a 16-bit float buffer for compositing and may need to convert your output.
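For the "stretch your content into that color space" part: if your source is linear Rec709 (ordinary SDR content, before any gamma encoding), multiplying by the standard BT.2087 primary-conversion matrix gets you Rec2020 values ready for PQ encoding. A sketch (the function name is mine):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Linear Rec709 RGB -> linear Rec2020 RGB (BT.2087 matrix, D65 white).
std::array<float, 3> Rec709ToRec2020(const std::array<float, 3>& rgb) {
    static const float m[3][3] = {
        {0.6274f, 0.3293f, 0.0433f},
        {0.0691f, 0.9195f, 0.0114f},
        {0.0164f, 0.0880f, 0.8956f},
    };
    std::array<float, 3> out{};
    for (int i = 0; i < 3; ++i)
        out[i] = m[i][0] * rgb[0] + m[i][1] * rgb[1] + m[i][2] * rgb[2];
    return out;
}
```

Each row of the matrix sums to 1, so white stays white; a saturated Rec709 green like (0, 1, 0) ends up at roughly (0.33, 0.92, 0.09), i.e. well inside the bigger Rec2020 triangle.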


DXGI_FORMAT_R16G16B16A16_FLOAT

This format does things a little differently. It's what the GDC talk above recommends, because it's what Windows uses for compositing, but it's kind of unintuitive to me. When you pick a float16 backbuffer, you'll want to use DXGI_COLOR_SPACE_RGB_FULL_G10_NONE_P709. That means you're actually using linear colors (not PQ encoded) and Rec709 primaries - aka SDR/sRGB colors.
Rec709 overlaid on Rec2020 overlaid on the weird horseshoe

The diagram shows how much smaller 709 is than 2020. When you use the DXGI_COLOR_SPACE_RGB_FULL_G10_NONE_P709 color space option, you actually use negative values to access colors outside of Rec709. This whole weird scheme is called scRGB. I think there's some kind of gap in my intuition here; as my esteemed colleague Robin Green says (paraphrased), "thinking about colorspaces as triangles on horseshoes is the wrong idea."
Well, he's right, because I don't know how you get luminance out of an XY coordinate. The GDC talk says that the coordinate (1,1,1) is white at 80 nits brightness, and 12.5 would be 1000 nits. 12.5 what? Are these coordinates like, "how bright should I make R, G, B?" individually? Because that doesn't correspond to a location within the color space. Are they barycentric coordinates for the triangle? Then it seems like you'd never get luminance out of it. Are they (X,Y) for coordinates on the horseshoe, and the Z for brightness (essentially YCbCr), well that makes sense for some things, but not for scaling or negative colors. Someone please help me with this. In the meantime, I'll be using the other colorspace.

Because this is the color space that Windows uses for compositing, you don't need to do the PQ encoding yourself; Windows will do it for you, which might save you some cycles.
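One concrete consequence of the 1.0-equals-80-nits convention: to hit a target brightness with the float16/G10 path, you scale your linear Rec709 color by nits divided by 80, which is where that 12.5-for-1000-nits number comes from. A trivial sketch (helper name is mine, not an official API):

```cpp
#include <cassert>

// scRGB convention: 1.0f == 80 nits ("SDR white"), so a target
// brightness in nits maps to a float backbuffer value of nits / 80.
float NitsToScRGB(float nits) {
    return nits / 80.0f;
}
```

So 80 nits is 1.0f, 1000 nits is 12.5f, and values above 1.0f are how you ask the compositor for HDR brightness.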

I think that's all you'll need to get some eye-searing HDR up on the screen. I'm off to go research this color space conversion stuff - maybe I'll do a follow up post if I can figure it out.

Hit me up on the tweeto @pyromuffin if you got any questions, comments, hate-mail, sexts, etc.



