First post, by thecrankyhermit


Just something I'm curious about. I think CGA was simultaneously the worst and the best of the early computer graphics adapters. RGB output was the worst, and it's what most gamers remember, since it's the only mode that kept working on later, backward-compatible adapters. But composite output with a properly supported game looked better than anything on the 8-bit competitors, and I think they were all using composite too.

But on the surface, CGA doesn't appear to do very much. Competitors had hardware sprites and tiles, and the Commodore 64 even had hardware scrolling. IBM's CGA does none of that; the CPU has to do all the heavy lifting. So why are CGA cards enormous compared to everything else available at the time, which seemed to offer more features?

Windows 10
Core i5-6600
Geforce GTX 970

Reply 1 of 3, by Scali


The simple answer is: off-the-shelf components.
Commodore designed its own custom VIC-II chip, which integrated all the logic for the display, including a CRT controller, sprite support etc.
The CGA card uses a standard Motorola 6845 CRTC chip, and requires separate logic to make it into an actual graphics card. This is mostly implemented with discrete 74xx logic ICs (building in fancy features like sprite support would have been very complex this way).
Another thing that makes the card big is that its 16KB of memory is implemented with very low-capacity memory chips, so many of them were needed (this was probably a good way to keep costs down in 1981, less so in later years).
The C64 originally needed eight 64-kbit chips for its 64KB of memory; CGA needs eight 16-kbit chips (4516) for its 16KB. So the same number of chips, but four times the storage.
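The chip math above follows directly from the fact that these DRAMs are 1 bit wide, so eight chips together fill out each byte. A minimal sketch of the arithmetic (the function name is just illustrative, not from any real tool):

```python
def total_kb(num_chips: int, kbit_per_chip: int) -> int:
    """Total memory in KB from 1-bit-wide DRAM chips.

    Eight 1-bit chips are needed per byte, so total kilobits
    divided by 8 gives kilobytes.
    """
    return num_chips * kbit_per_chip // 8

# C64: eight 64-kbit parts -> 64 KB
print(total_kb(8, 64))  # 64
# CGA: eight 16-kbit parts (the 4516s mentioned above) -> 16 KB
print(total_kb(8, 16))  # 16
```

Same board area and soldering effort for the memory array in both machines, but a quarter of the capacity on the CGA side.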

I guess the CGA card had two design criteria:
1) Has to be done ASAP
2) Has to be as simple/cheap as possible while delivering somewhat competitive graphics and text modes

IBM was under extreme pressure from companies like Apple, Atari and Commodore grabbing the home/personal computer market, and the PC was rushed out so IBM would have something to compete with. As a result, there was no time to design fancy custom chips and integrate logic to make the card more compact. So they did what they could with off-the-shelf components.

Clone builders would later integrate the basic CGA logic into a single chip, similar to how Commodore did it. See the ATi Small Wonder for example.
The same goes for the rest of the system: The PC's motherboard is huge and filled with single-task components and glue logic. C64 has integrated chips for those as well. Later clones would also integrate most of the motherboard logic in just 1 or 2 chips, sometimes even including other functionality as well, such as HDD and floppy controllers, LPT and COM ports etc.

Last edited by Scali on 2016-11-01, 12:59. Edited 1 time in total.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 3 of 3, by noop


CGA was developed earlier. When it was designed, memory chip capacities were smaller, or at least devices built from multiple smaller chips were cheaper. Just before the C64 was released, RAM prices fell sharply, so it got the bigger chips. Also, IBM's engineers apparently weren't allowed (or able) to design specialized chips the way MOS engineers could. They also had a different design goal: quickly build a business computer that could be bigger and more expensive than the Apple II or Atari machines, produced in smaller numbers and sold at a higher margin.