Standard Def Steve wrote: Yep, Win7/8/10 uses a very different method of 2D acceleration than XP. Many new apps designed with the new OS in mind can use Direct2D instead of GDI.
That is one thing...
Another thing is that the GDI acceleration works differently as well.
A huge difference between XP and later GUIs (say 'Aero-based') is that the later ones are designed for GPUs, and basically run on D3D9.
This means they assume your video card is capable of 3D texturing and z-buffering.
You see, in early days, video memory was expensive, so you often didn't have more video memory than what was visible on the screen, at least in high resolutions and high-colour modes.
This meant that everything had to be drawn directly on screen, because double-buffering was not an option.
So to solve the z-order overlapping, the GUI framework kept track of the bounding rectangle of each window (and in Windows, every component is a 'window', including buttons, checkboxes etc.) and their z-order, in a tree structure.
Using this tree, the framework could determine exactly which parts of which windows were visible, and which parts were not, and it would redraw only those parts of the windows, directly onto the screen.
You have probably seen this in action: when you drag a window quickly, you 'invalidate' the rectangles underneath, but they do not get redrawn instantly. You'll see remnants of your dragged window there until the OS has had enough time to redraw everything, and the underlying controls 'snap' back into place. Or of course, if the underlying application has frozen, it can no longer redraw, so its window remains corrupt.
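To make the clipping idea concrete, here's a toy sketch (my own simplification in Python, nothing like the actual Windows code) of what that tree-of-rectangles bookkeeping boils down to: given windows in z-order, subtract every window above from each window below, leaving exactly the rectangles that are visible and need redrawing.

```python
# Toy sketch of XP-style clipping: compute the visible rectangles of each
# window by subtracting everything above it in the z-order.

from typing import List, Tuple

Rect = Tuple[int, int, int, int]  # (left, top, right, bottom), right/bottom exclusive

def subtract(r: Rect, cut: Rect) -> List[Rect]:
    """Return the parts of r not covered by cut (0 to 4 rectangles)."""
    l, t, rr, b = r
    cl, ct, cr, cb = cut
    if cr <= l or cl >= rr or cb <= t or ct >= b:
        return [r]                                        # no overlap: r survives whole
    out = []
    if ct > t:
        out.append((l, t, rr, min(ct, b)))                # strip above the cut
    if cb < b:
        out.append((l, max(cb, t), rr, b))                # strip below the cut
    mt, mb = max(ct, t), min(cb, b)                       # middle band
    if cl > l:
        out.append((l, mt, min(cl, rr), mb))              # strip left of the cut
    if cr < rr:
        out.append((max(cr, l), mt, rr, mb))              # strip right of the cut
    return out

def visible_regions(windows: List[Rect]) -> List[List[Rect]]:
    """windows[0] is topmost; each window is clipped by everything above it."""
    result = []
    for i, win in enumerate(windows):
        region = [win]
        for above in windows[:i]:
            region = [piece for r in region for piece in subtract(r, above)]
        result.append(region)
    return result

# A small window sitting in the middle of a larger one underneath it:
top = (20, 20, 60, 60)
bottom = (0, 0, 100, 100)
regions = visible_regions([top, bottom])
print(regions[0])  # topmost window fully visible: [(20, 20, 60, 60)]
print(regions[1])  # bottom window survives as 4 rectangles around the hole
```

Dragging the top window just invalidates the exposed rectangles of the bottom one; only those four strips would be redrawn, directly on screen.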
By the time Vista came along, even the simplest integrated GPUs had basic Direct3D 9 acceleration capabilities, and video memory was no longer an issue either. You no longer wanted flickery, tearing GUIs; you wanted smooth, solid GUIs with double-buffering and v-sync.
So, Microsoft dropped the old tree-of-rectangles, and instead let the GPU solve it. It's very elegant really:
Each window gets its own offscreen video buffer, basically a texture. The desktop is 'composited' by rendering each window as a textured quad, with the z-value of that window. The z-buffer will then automatically handle the z-order for you.
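In toy form (a made-up ASCII 'framebuffer', nothing resembling the real compositor), the trick looks like this: each window is just a quad submitted with its z-value, and the per-pixel depth test sorts out overlap with no clipping tree at all.

```python
# Toy sketch of Aero-style composition: each window's offscreen buffer is
# drawn as a quad with a per-window z-value; the depth test resolves overlap.

W, H = 8, 6
framebuffer = [['.'] * W for _ in range(H)]
zbuffer = [[float('inf')] * W for _ in range(H)]

def draw_quad(rect, z, colour):
    """Rasterise a window quad with a depth test (smaller z = closer)."""
    l, t, r, b = rect
    for y in range(t, b):
        for x in range(l, r):
            if z < zbuffer[y][x]:      # depth test: keep the nearest window
                zbuffer[y][x] = z
                framebuffer[y][x] = colour

# Submission order is irrelevant; the z-buffer gives the same picture
# whether the background window is drawn first or last.
draw_quad((0, 0, 6, 4), z=2, colour='B')   # background window
draw_quad((3, 2, 8, 6), z=1, colour='F')   # foreground window overlapping it

for row in framebuffer:
    print(''.join(row))
```

Note what's missing compared to the XP sketch: no rectangle subtraction, no invalidation bookkeeping; each window's buffer stays intact and the GPU re-composites the quads every frame.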
A huge difference in this approach is that overlapping windows no longer invalidate each other: their offscreen buffers do not physically overlap, so their contents do not get corrupted. This means that there is a lot less redrawing going on.
This is the main reason why Windows Vista got away so well without GDI acceleration: sure, it had to redraw components on the CPU, but it didn't have to do it all that often.
There was a catch though: because CPU-access to GPU-managed buffers is expensive (contention issues and all that), Vista had both a system memory and a video memory buffer for each window. The CPU could update the system memory buffer, and it was then staged to the GPU when done.
With Windows 7, Microsoft reintroduced GDI-acceleration. However, it worked in a completely different way from how it was implemented in XP and earlier.
It built on the Vista system, but removed the need for the system memory copy. Instead it used 'aperture memory' (you know the AGP aperture? There's no real equivalent name for PCIe, but the idea is the same: memory shared by CPU and GPU). This means that the GPU can now draw directly to the backbuffers, opening up the possibility of hardware acceleration once more. It also means that you don't need a texture in video memory if you already have one in aperture memory, so Windows 7 is more efficient with its memory usage than Vista.
However, only a few things are actually GPU-accelerated: mainly blits, alphablend, colorfill and font rendering (ClearType). XP accelerated all sorts of operations, such as line drawing, rectangles, circles, polygons etc. These are not accelerated because GPUs render in a very different way from what these old bitplane-based operations require (then again, modern CPUs can do these very quickly, so it may not even be worth setting up specific commands for the GPU, which costs a lot of overhead). This is why Direct2D was introduced: a new API whose rendering operations map nicely onto GPU operations.
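It's easy to see why those particular operations made the cut: colorfill and alphablend are exactly 'draw a quad', which GPUs excel at. A toy single-channel software version (values 0–255; my own illustration, not GDI's actual code paths):

```python
# Toy versions of the two kinds of GDI operations Windows 7 accelerates:
# filling and blending rectangles of pixels, which map directly onto quads.

def colorfill(buf, width, rect, value):
    """Fill a rectangle with a solid value (on a GPU: one flat-coloured quad)."""
    l, t, r, b = rect
    for y in range(t, b):
        for x in range(l, r):
            buf[y * width + x] = value

def alphablend(dst, src, alpha):
    """Blend src over dst with a constant alpha (on a GPU: one textured
    quad with blending enabled)."""
    return [(s * alpha + d * (255 - alpha)) // 255 for s, d in zip(src, dst)]

W, H = 4, 4
buf = [0] * (W * H)
colorfill(buf, W, (1, 1, 3, 3), 200)           # 2x2 block of 200 in the middle
blended = alphablend(buf, [100] * (W * H), 128)  # ~50% grey over the buffer
```

Line drawing, circles and polygons, by contrast, are per-pixel scan-conversion work that doesn't decompose into a handful of quads, which is the mismatch described above.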
Another thing I seem to recall from GDI in Windows 7 is that they made GDI asynchronous. That is, drawing operations from different GDI contexts can overlap to speed up processing. I cannot find a reference to that at the moment though.
Anyway, TL;DR: There's a difference between 'GDI acceleration' and 'GDI acceleration'.