Dithering is a process; it's not something you get for free just by supporting 16-bit types. What you get for free is colour banding, due to the loss of colour precision. Note the distinction: a 16-bit float represents a single value with floating-point precision, whereas 16-bit colour refers to 16 bits representing an entire RGB triple (5, 6, and 5 bits for red, green, and blue respectively).
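To make the precision loss concrete, here's a minimal sketch of packing 8-bit channels into that 5/6/5 layout. The function name and values are just illustrative; the shifted-away low bits are exactly the precision whose loss shows up as banding.

```python
def pack_rgb565(r, g, b):
    """Pack 8-bit R, G, B channels into one 16-bit value:
    5 bits red, 6 bits green, 5 bits blue."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

# Two visually close greys collapse to the same 16-bit value,
# which is why smooth gradients turn into visible bands:
print(hex(pack_rgb565(100, 100, 100)))  # 0x632c
print(hex(pack_rgb565(103, 103, 103)))  # 0x632c — identical
```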
The vendor implements dithering; it could be done in software in the driver, or in hardware. Traditionally it had to be done at all because of the limited precision of the hardware's colour buffer. I can't imagine the rationale for re-implementing this feature on these kinds of modern architectures.
If it was there, then wasn't, then was again, my money would be on the driver implementing this mostly in software, perhaps partly hardware-accelerated (compute), but not entirely in hardware. But I don't know.
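Whether it lives in the driver or in hardware, the classic software technique is ordered (Bayer) dithering: add a small position-dependent threshold before quantising, so banding turns into a fine noise pattern. This is a generic sketch of the idea, not any particular vendor's implementation; the 4x4 Bayer matrix and the 5-bit target are the textbook case.

```python
# Standard 4x4 Bayer threshold matrix (values 0-15).
BAYER_4X4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def dither_to_5bit(value, x, y):
    """Quantise an 8-bit value (0-255) to 5 bits (0-31), biasing by a
    pixel-position threshold so flat regions alternate between the two
    nearest quantised levels instead of banding."""
    step = 255 / 31  # size of one 5-bit quantisation step
    threshold = (BAYER_4X4[y % 4][x % 4] / 16 - 0.5) * step
    biased = min(255, max(0, value + threshold))
    return int(round(biased / step))

# A flat grey row: plain truncation would give every pixel the same
# 5-bit level; dithering makes neighbours alternate between 12 and 13.
print([dither_to_5bit(104, x, 0) for x in range(8)])
```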
The Serpent Rider wrote:
Translation:
If you are working with values that don't need the higher precision, you can get performance gains because hardware with a native FP16 type can do twice the work in the same time as with 32-bit floats. You can also hold twice as much data in an array of the same size, which means copies and operations cover twice as much information in the same amount of time. There are many areas that need nothing more than 16-bit precision, but RGB colour representation is clearly something that does 😀