I think VRAM has become an umbrella term for a variety of different memory technologies that happen to suit graphics well.
Indeed, in the early days there was dual-ported memory, which let the CPU and the video circuit access the memory simultaneously, avoiding wait states. This dual-ported memory was, I believe, the original 'VRAM'.
These days we have GDDR, which is memory optimized for high bandwidth, at the cost of generally longer latencies than conventional DDR memory. Since modern GPUs are highly pipelined, with very predictable (and therefore cacheable/prefetchable) memory access patterns, the latency can be hidden quite well, and bandwidth is all-important (it's your fillrate).
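To give a feel for why bandwidth translates into fillrate, here's a rough back-of-the-envelope sketch. The numbers (1080p, 32-bit color, 60 fps, 3x overdraw) are illustrative assumptions, not specs of any particular card:

```python
# Rough back-of-the-envelope: how memory bandwidth caps fillrate.
# All numbers here are illustrative assumptions, not real card specs.

def required_bandwidth_gb_s(width, height, bpp, fps, overdraw=1.0):
    """GB/s needed just for the color writes to the framebuffer each frame.

    'overdraw' models each pixel being written multiple times per frame;
    depth-buffer traffic, texture reads, etc. would come on top of this.
    """
    bytes_per_frame = width * height * (bpp // 8) * overdraw
    return bytes_per_frame * fps / 1e9

# 1920x1080, 32-bit color, 60 fps, assuming ~3x overdraw:
bw = required_bandwidth_gb_s(1920, 1080, 32, 60, overdraw=3.0)
print(f"{bw:.2f} GB/s")  # ~1.49 GB/s for color writes alone
```

Texture fetches and Z-buffer reads/writes multiply this figure several times over, which is why GPUs happily trade latency for the raw bandwidth GDDR provides.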
And ironically, there are machines where the CPU also uses GDDR, because they are shared-memory systems with a focus on graphics, such as the modern Xboxes and PlayStations.
And there have been many technologies in between, including the aforementioned SGRAM (which was actually a budget option as well), 'WRAM', etc.
DRAM-based cards were budget cards, and generally performed worse in accelerated tasks, but with pure software rendering, as in most DOS games, you wouldn't really notice.
Because the DRAM versions were budget cards, it's often difficult to compare them directly with their VRAM counterparts: the VRAM cards generally also ran the graphics chip and memory at higher clock speeds.
In general, 'VRAM' just means Video RAM: "the memory that is connected to your video circuit". The term is also used on machines that don't actually have a physically distinct type of video memory, where part of main memory is simply reserved for video use (as on a PCjr or Tandy 1000).