GPU and CPU rendering are pretty much mutually exclusive.
That is, the only logical point where you can split the work is at vertex processing. It used to be somewhat doable to let the CPU do the vertex processing and then push the data to the GPU for the actual triangle rasterization, per-pixel shading, texturing and whatnot.
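A minimal sketch of what that split looked like, just to make the idea concrete: the CPU multiplies every vertex by the model-view-projection matrix each frame, and the transformed positions are then streamed to a GPU buffer (e.g. via glBufferData with GL_STREAM_DRAW in OpenGL), after which the GPU only rasterizes and shades. The Vec4/Mat4 types and the transform_vertices() helper below are hypothetical, not from any particular engine.

```cpp
#include <cstddef>
#include <vector>

struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };  // row-major 4x4 matrix

// Multiply one vertex position by a 4x4 matrix: the CPU-side "vertex shader".
static Vec4 transform(const Mat4& mvp, const Vec4& v) {
    Vec4 r;
    r.x = mvp.m[0][0]*v.x + mvp.m[0][1]*v.y + mvp.m[0][2]*v.z + mvp.m[0][3]*v.w;
    r.y = mvp.m[1][0]*v.x + mvp.m[1][1]*v.y + mvp.m[1][2]*v.z + mvp.m[1][3]*v.w;
    r.z = mvp.m[2][0]*v.x + mvp.m[2][1]*v.y + mvp.m[2][2]*v.z + mvp.m[2][3]*v.w;
    r.w = mvp.m[3][0]*v.x + mvp.m[3][1]*v.y + mvp.m[3][2]*v.z + mvp.m[3][3]*v.w;
    return r;
}

// Transform the whole mesh on the CPU. The resulting buffer is what would be
// re-uploaded to video memory every frame; that per-frame upload is part of
// the overhead that makes this split unattractive today.
std::vector<Vec4> transform_vertices(const Mat4& mvp,
                                     const std::vector<Vec4>& positions) {
    std::vector<Vec4> out;
    out.reserve(positions.size());
    for (const Vec4& p : positions)
        out.push_back(transform(mvp, p));
    return out;
}
```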
But once you go into rasterization, there's no efficient way to let the CPU do 'part of the work'. The CPU doesn't have the same quick access to the textures, video memory and whatnot, so in practice that is not going to work.
You either want to do everything on the CPU then, or everything on the GPU. The overhead of switching from one to the other would defeat the point.
In recent years, games have moved to such heavy geometry loads that CPUs can no longer process them quickly enough either, so CPU-side vertex processing is no longer an option.
Which is why games have been 100% GPU-rendered for some years now. They do allow you to scale complexity and detail up and down somewhat to cater for GPUs of different speeds and capabilities, but nothing that involves the CPU. Offloading work to the CPU simply isn't an option anymore.
Even the slowest of today's integrated GPUs would render orders of magnitude faster than the fastest CPU running the most efficient software renderer. There is just so much more parallelism in GPUs, even low-end ones, and on top of that there are all sorts of specialized circuits, such as texture prefetching and filtering, that make GPUs far more efficient than CPUs at rendering tasks (and various other tasks at that).