Durability could be greatly improved with thermal throttling. It's hard to kill an Intel CPU through overheating (or excessive thermal cycling) because Intel takes this seriously. By the time Intel chips were dissipating enough heat for this to be a big concern, they were on top of it. But it's easy to kill a graphics card this way.
Supposedly there is a thermal limit in nVidia's drivers (I don't know about ATI), but the threshold is set so high that it's of no practical value.
Do modern cards detect fan failures yet? Older ones certainly don't.
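For what it's worth, on anything new enough to be covered by NVML you can check both of those yourself. Here's a minimal sketch using the nvidia-ml-py (pynvml) bindings; it assumes a card and driver that actually expose these counters, which GT200-era and older stuff may not:

import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

# Current core temperature and the driver's built-in thresholds
temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
slowdown = pynvml.nvmlDeviceGetTemperatureThreshold(
    handle, pynvml.NVML_TEMPERATURE_THRESHOLD_SLOWDOWN)
shutdown = pynvml.nvmlDeviceGetTemperatureThreshold(
    handle, pynvml.NVML_TEMPERATURE_THRESHOLD_SHUTDOWN)

# Note: this is the *commanded* fan speed in percent, not a tach
# reading, so a physically stalled fan can still report a normal value
fan = pynvml.nvmlDeviceGetFanSpeed(handle)

print(f"core {temp}C, slowdown {slowdown}C, shutdown {shutdown}C, fan {fan}%")
pynvml.nvmlShutdown()

Note that the fan number is the speed the driver is asking for, not a measured RPM, so even cards that expose it aren't necessarily detecting a seized fan.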
I had a Ti4200 die because a heatsink push pin popped loose. That's an older card, but build quality on 3D gaming cards was like that for a long time. Newer cards at least have screws, and not just two of them, so I guess that's progress.
My 9800 Pro started artifacting at boot. It probably ran too hot. I don't think it even had a temperature sensor. Next time I want to use it I'll install an aftermarket heatsink and see what happens, but it's clearly damaged and will probably be flaky from now on.
A relative's 9800GT died after about 2 years of heavy gaming. Typical story for those cards.
At the other extreme, my Geforce2 MX cards were/are just about unbreakable. I don't think they're any more failure-prone than older 2D-era cards, because they don't use enough power to put much stress on themselves. They don't even need a heatsink.
I still use GT200 cards on modern systems, but I modified the fan profile to keep them at or below 75C. As shipped, they want to run at 90C+. Cleaning them is also a PITA and most people (who don't value old hardware like we do here) wouldn't bother with it. I also don't play games nearly as often as I did when I was younger. One year with a heavy gamer is probably equal to 10-15 years for me.
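Going back to the fan profile: for anyone who wants to do something similar on Linux, here's a rough sketch of the approach. It polls the core temperature with nvidia-smi and nudges the fan through nvidia-settings. It assumes Coolbits is enabled in xorg.conf so manual fan control is allowed, and the attribute names vary by driver generation (GPUTargetFanSpeed on newer drivers, GPUCurrentFanSpeed on the legacy ones these cards need), so treat it as illustrative rather than copy-paste:

import subprocess
import time

TARGET_C = 75   # keep the core at or below this
MIN_FAN = 40    # don't drop below the stock idle speed

def core_temp():
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=temperature.gpu",
         "--format=csv,noheader,nounits"])
    return int(out.strip())

def set_fan(percent):
    # Requires Coolbits; swap GPUTargetFanSpeed for GPUCurrentFanSpeed
    # on legacy drivers
    subprocess.check_call(
        ["nvidia-settings",
         "-a", "[gpu:0]/GPUFanControlState=1",
         "-a", f"[fan:0]/GPUTargetFanSpeed={percent}"])

speed = MIN_FAN
set_fan(speed)
while True:
    t = core_temp()
    if t > TARGET_C and speed < 100:
        speed = min(speed + 5, 100)      # ramp up while over target
        set_fan(speed)
    elif t < TARGET_C - 10 and speed > MIN_FAN:
        speed = max(speed - 5, MIN_FAN)  # ease off once well under
        set_fan(speed)
    time.sleep(5)

Setting GPUFanControlState back to 0 hands control back to the driver's stock curve if you kill the script.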