Reply 40 of 41, by ragefury32

Rank: Member
diagon_swarm wrote on 2020-03-21, 19:45:

The VW 320 was a far more interesting product than the 540 (the 540 just had more PCI-X slots and double the number of CPU sockets). You could get a VW 320 for under $4,000 with 3D and geometry performance comparable to $10,000 workstations (the 3D core is exactly the same in the 320 and 540). That was not a bad deal. In addition, its texture fill rate was superior to any PC workstation card available at the time. Btw, I've measured many professional cards from the 90s - http://swarm.cz/gpubench/_GPUbench-results.htm (it's not in an easily readable form but provides multiple useful details)... hi-res texture performance was bad with the Intergraph and 3Dlabs products.

Rage LT - That's what I think too, but I have never found evidence to support it (other than that I haven't found any other laptop with this chip... but that could be like EGA-equipped laptops - I thought only a few were made, and then I found a lot of them from both well-known brands and OEMs).

Savage4 - I'm not sure about that. The info I found was always very fuzzy. I just know that when I tested the chip myself, the per-clock performance was perfectly comparable with the desktop Savage4. If you have relevant sources, I would be happy to read them.

From what I remember, the Cobalt architecture is similar to the O2's (also a UMA setup) - that is, the geometry is done on the CPUs, and then the onboard chips of the Cobalt chipset take care of texture mapping, mip-mapping and some codec decompression. That's why I brought up Pro/E versus, say, texture mapping onto video. If your task at hand is CAD/CAM-related work, you pretty much want as much CPU horsepower as possible (upgrading SGI O2s from an R5K to an R12K would significantly boost the performance of the rendering pipeline)... and that's why the 320 (1-2 sockets instead of 4 max) is such a niche product. If you want to do CAD/CAM you probably do not want it, since you want as many CPUs driving the 3D pipeline as possible. If you do broadcast graphics (where you are mapping video streams onto a simple 3D mesh) it'll probably be something you want.

I doubt that the Rage LT (the PCI version, not the Rage LT Pro, which is a different animal and really common) was found in many of the earlier laptops, at least not the mainstream ones. NeoMagic and S3 (and to a certain extent C&T/Trident) dominated the field back then. My guess about the appearance of the Rage LT in the Mainstreet/Wallstreet PowerBooks is that ATi (back then just a small graphics ASIC provider north of Toronto, certainly not the "red team" juggernaut going up against nVidia in later years) was willing to work with Apple to ship PowerPC/MacOS drivers for their machines (the Rage II+ was found in the original iMacs as well), while NeoMagic and S3 were not.

As for the Savage IX, here's a rather recent writeup regarding its capabilities -> https://b31f.wordpress.com/2019/10/24/s3-savage-ix-on-trial/. My own (not very scientific) comparisons between my Dell Latitude C600 (ATi Rage 128 Mobility/M4) and the ThinkPad T21 (Savage/IX 8MB) seem to suggest that the Savage was a little behind the M4 in most benchmarks but still a decent performer at 800x600 for most games made before 2001. For me the Savage was more valued for its decent compatibility with DOS VESA games.

Reply 41 of 41, by diagon_swarm

Rank: Newbie
ragefury32 wrote on 2020-03-23, 16:57:

From what I remember, the Cobalt architecture is similar to the O2's (also a UMA setup) - that is, the geometry is done on the CPUs, and then the onboard chips of the Cobalt chipset take care of texture mapping, mip-mapping and some codec decompression. That's why I brought up Pro/E versus, say, texture mapping onto video. If your task at hand is CAD/CAM-related work, you pretty much want as much CPU horsepower as possible (upgrading SGI O2s from an R5K to an R12K would significantly boost the performance of the rendering pipeline)... and that's why the 320 (1-2 sockets instead of 4 max) is such a niche product. If you want to do CAD/CAM you probably do not want it, since you want as many CPUs driving the 3D pipeline as possible. If you do broadcast graphics (where you are mapping video streams onto a simple 3D mesh) it'll probably be something you want.

I doubt that the Rage LT (the PCI version, not the Rage LT Pro, which is a different animal and really common) was found in many of the earlier laptops, at least not the mainstream ones. NeoMagic and S3 (and to a certain extent C&T/Trident) dominated the field back then. My guess about the appearance of the Rage LT in the Mainstreet/Wallstreet PowerBooks is that ATi (back then just a small graphics ASIC provider north of Toronto, certainly not the "red team" juggernaut going up against nVidia in later years) was willing to work with Apple to ship PowerPC/MacOS drivers for their machines (the Rage II+ was found in the original iMacs as well), while NeoMagic and S3 were not.

As for the Savage IX, here's a rather recent writeup regarding its capabilities -> https://b31f.wordpress.com/2019/10/24/s3-savage-ix-on-trial/. My own (not very scientific) comparisons between my Dell Latitude C600 (ATi Rage 128 Mobility/M4) and the ThinkPad T21 (Savage/IX 8MB) seem to suggest that the Savage was a little behind the M4 in most benchmarks but still a decent performer at 800x600 for most games made before 2001. For me the Savage was more valued for its decent compatibility with DOS VESA games.

Savage IX: Thanks for the link. I should have read it before doing my own research today. He is right about the 100/100 MHz core/mem clocks - that's what I identified years ago using low-level tests. Now I see that the Savage IX doesn't support multitexturing, so it seems to be more like the Savage3D. However, it does support S3TC (contrary to what he said). I've checked my notes and here are all the relevant extensions:

S3 Savage/IX OpenGL

HP Omnibook XE3 / P3 Celeron 850MHz (8.5x100) / 256MB PC133 RAM
Windows 2000 SP4

S3 Savage/IX+MV (86c294) with 8MB RAM
AGP 2x @ 2x

100 MHz core/mem
ROP:TMU = 1:1

8MB 64bit SDR

GL_VENDOR: S3 Graphics, Incorporated
GL_RENDERER: SavageMX
GL_VERSION: 1.1 2.10.77

GL_EXTENSIONS:
GL_ARB_texture_compression GL_EXT_texture_compression_s3tc GL_EXT_abgr GL_EXT_bgra GL_EXT_clip_volume_hint GL_EXT_compiled_vertex_array GL_EXT_fog_coord GL_EXT_packed_pixels GL_EXT_point_parameters GL_EXT_paletted_texture GL_EXT_separate_specular_color GL_EXT_shared_texture_palette GL_EXT_texture_lod_bias GL_EXT_vertex_array GL_KTX_buffer_region GL_S3_s3tc GL_WIN_swap_hint
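
For anyone who wants to reproduce that S3TC check, here's a minimal C sketch of the usual way to test for a token in the GL_EXTENSIONS string. This is generic OpenGL 1.1-era code, not anything Savage-specific, and it assumes a GL context is already current (window/context setup omitted):

#include <stdio.h>
#include <string.h>
#include <GL/gl.h>

/* Returns 1 if `name` appears as a whole space-delimited token in `list`.
   A plain strstr() is not enough, since one extension name can be a
   prefix of another. */
static int has_extension(const char *list, const char *name)
{
    size_t len = strlen(name);
    const char *p = list;
    while ((p = strstr(p, name)) != NULL) {
        int starts = (p == list) || (p[-1] == ' ');
        int ends = (p[len] == ' ') || (p[len] == '\0');
        if (starts && ends)
            return 1;
        p += len;
    }
    return 0;
}

/* Call with a current GL context: */
void report_s3tc(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    printf("EXT s3tc: %s\n", has_extension(ext, "GL_EXT_texture_compression_s3tc") ? "yes" : "no");
    printf("S3 s3tc:  %s\n", has_extension(ext, "GL_S3_s3tc") ? "yes" : "no");
}

On the Savage/IX above, both checks should report "yes", matching the extension dump.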

I assume they just fixed the easy bugs and reused the old 3D core to save as many transistors as possible - even so, the Savage/IX was the fastest mobile chip of its time and pretty efficient. It would be interesting to check whether its video features are on the same level as the Savage4's or the Savage3D's.

Btw, the Dell Latitude C600 has just the Mobility M3 (a low-cost version with only 8MB of embedded 64-bit video RAM). This is the same part that was used in one version of the PowerBook G3. That version was heavily limited by the 64-bit data bus: hi-res multitexturing, or hi-res single-texturing plus blending, slowed the chip down to Savage/IX level. I have a C600 and also a C800. The hi-end C800 has the Mobility M4 (16MB, 128-bit) and offers much better performance in games. It's sad that the 128-bit versions were available only in workstation-class laptops; the 128-bit interface was necessary to fully utilize both of the chip's pixel pipelines.
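
To put rough numbers on the bus-width difference, a quick back-of-the-envelope calculation in C. The 100 MHz clock is the measured Savage/IX figure from above; for the Mobility parts I'm assuming the same clock purely to isolate the bus-width effect, since exact memory clocks varied by laptop:

#include <stdio.h>

/* Peak SDR memory bandwidth in MB/s: clock (MHz) * bus width in bytes. */
static double sdr_bandwidth_mbs(double mem_mhz, int bus_bits)
{
    return mem_mhz * (bus_bits / 8.0);
}

int main(void)
{
    printf("Savage/IX   (64-bit):  %.0f MB/s\n", sdr_bandwidth_mbs(100.0, 64));   /* 800  */
    printf("Mobility M3 (64-bit):  %.0f MB/s\n", sdr_bandwidth_mbs(100.0, 64));   /* 800  */
    printf("Mobility M4 (128-bit): %.0f MB/s\n", sdr_bandwidth_mbs(100.0, 128));  /* 1600 */
    return 0;
}

At equal clocks the M3 has no bandwidth headroom over the Savage/IX, which is consistent with it falling to Savage/IX level under heavy texturing, while the 128-bit M4 has twice the bytes per clock to feed both pixel pipelines.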

Rage LT and Mac: I think ATI was not that small during the mid-to-late 90s. They ran big ads in magazines back then saying something like "every third computer graphics chip sold is made by ATI". I don't know what was behind the deal between ATI and Apple. On the other hand, Apple didn't have much to choose from - maybe the only other option was S3 (NVIDIA was too "young", CL didn't have a good 3D accelerator and was going down as a company, 3Dfx didn't have a 2D core...).

Cobalt: Although the main concept of the "Cobalt" GPU is similar to "Crime" (SGI O2), Cobalt is a different/better/faster design. Cobalt uses two pixel pipelines running at 100 MHz (instead of just one at 66 MHz). As a rasteriser, Cobalt has 3-5x better performance than Crime. Its fill rate was better than on the Indigo2 Maximum Impact or an Octane with MXI/MXE. Even the geometry performance was better than the Octane's, until the Octane2 was released with VPro V6/V8 (early 2000). Compared with the VPro V6/V8, Cobalt's fill rate and geometry performance are 40% lower. Still good for a system that was so much cheaper (the SGI 320 was cheaper than an O2 with the slow R5200).
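
The raw pipeline math behind that rasteriser comparison, as a tiny C snippet (peak theoretical rates only; real-world gains depend on bandwidth and drivers, which is presumably where the 3-5x range comes from):

#include <stdio.h>

int main(void)
{
    /* Peak pixel fill = pipelines * core clock (Mpix/s), per the figures above. */
    double cobalt = 2 * 100.0;  /* SGI 320 "Cobalt": 2 pipes @ 100 MHz -> 200 Mpix/s */
    double crime  = 1 * 66.0;   /* SGI O2 "Crime":   1 pipe  @ 66 MHz  ->  66 Mpix/s */
    printf("Cobalt/Crime peak fill ratio: %.1fx\n", cobalt / crime);  /* ~3.0x */
    return 0;
}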

Sadly, upgrading an O2 from the R5K to an R10K/R12K didn't help much with 3D performance. It helped only in cases where the rendering was draw-call limited. Geometry performance increased by no more than 50%, and that was still just a third of Cobalt's. Unlike the O2, where geometry was calculated using the CPU's vector units, the Cobalt ASIC handles all geometry processing and provides steady performance regardless of the CPU speed (I tested this on multiple SGI 320 machines). Geometry acceleration is even mentioned in the official SGI 320 white paper.

I would not say that the SGI 320 was for a niche market. SGI marketed the wrong/niche features (video textures and similar stuff), and together with a sales force that didn't want to sell the machine, it was doomed from the beginning. For CAD/CAM, having four CPUs was not mandatory; in general, most CAD/CAE workstations were configured with just one or two CPUs, and that was true even for hi-end software packages. SGI didn't try hard to sell the machine and didn't try hard to fix the bugs (like the OpenGL/VSync one) or improve driver performance. Even with all its flaws, the SGI VW 320 was the fastest sub-$5,000 workstation in the world for anyone who needed to handle extremely complex 3D models in CAD/CAM/CAE software.
