VOGONS


Reply 40 of 45, by ragefury32

User metadata
Rank Member
diagon_swarm wrote on 2020-03-21, 19:45:

The VW 320 was a far more interesting product than the 540 (the 540 just had more PCI-X slots and double the number of CPU slots). You could get a VW 320 for under $4,000 with 3D and geometry performance comparable to $10,000 workstations (the 3D core is exactly the same in the 320 and 540). That was not a bad deal. On top of that, its texture fill rate was superior to any PC workstation card available at the time. Btw, I've measured many professional cards from the 90s - http://swarm.cz/gpubench/_GPUbench-results.htm (it's not in an easily readable form, but it provides many useful details)... hi-res texture performance was bad with the Intergraph and 3Dlabs products.

Rage LT - That's what I think, but I have never found evidence to support it (other than that I haven't found any other laptop with this chip... but that could be like EGA-equipped laptops - I thought only a few were made and then I found a lot of them, from both well-known brands and OEMs).

Savage4 - I'm not sure about that. The info I found was always very fuzzy. I just know that when I tested the chip myself, the per-clock performance was perfectly comparable to the desktop Savage4. If you have relevant sources, I would be happy to read them.

From what I remember, the Cobalt architecture is similar to the O2 (also a UMA setup) - that is, the geometry is done on the CPUs, and then the onboard chips of the Cobalt chipset take care of texture mapping, mip-mapping and some codec decompression. That's why I brought up Pro/E versus, say, texture mapping onto a video stream. If the task at hand is CAD/CAM-related work, you pretty much want as much CPU horsepower as possible (upgrading an SGI O2 from an R4k to an R12K would significantly boost the performance of the rendering pipeline)... and that's why the 320 (1-2 sockets instead of 4 max) is such a niche product. If you want to do CAD/CAM you probably don't want it, since you want as many cores driving the 3D pipeline as possible. If you do broadcast graphics (where you are mapping video streams onto a simple 3D mesh), it's probably something you want.

I doubt that the RageLT (the PCI version, not the Rage Pro LT, which is a different animal and really common) was found in many of the earlier laptops, at least not the mainstream ones. Neomagic and S3 (and to a certain extent C&T/Trident) dominated the field back then. My guess as to why the RageLT appeared on the Mainstreet/Wallstreets is that ATi (back then just a small graphics ASIC provider north of Toronto, certainly not the "red team" juggernaut going up against nVidia in later years) was willing to work with Apple to ship PowerPC/MacOS drivers for their machines (the Rage II+ was found in the original iMacs as well), while Neomagic and S3 were not.

As for the Savage IX, here's a fairly recent writeup on its capabilities -> https://b31f.wordpress.com/2019/10/24/s3-savage-ix-on-trial/. My own (not very scientific) comparisons between my Dell Latitude C600 (ATi Rage 128 Mobility/M4) and the Thinkpad T21 (Savage/IX) seem to suggest that the Savage was a little behind the M4 in most benchmarks, but still a decent performer at 800x600 for most games made before 2001. For me, the Savage was valued more for its decent compatibility with DOS VESA games.

Reply 41 of 45, by diagon_swarm

User metadata
Rank Newbie
ragefury32 wrote on 2020-03-23, 16:57:

From what I remember, the Cobalt architecture is similar to the O2 (also a UMA setup) - that is, the geometry is done on the CPUs, and then the onboard chips of the Cobalt chipset take care of texture mapping, mip-mapping and some codec decompression. That's why I brought up Pro/E versus, say, texture mapping onto a video stream. If the task at hand is CAD/CAM-related work, you pretty much want as much CPU horsepower as possible (upgrading an SGI O2 from an R4k to an R12K would significantly boost the performance of the rendering pipeline)... and that's why the 320 (1-2 sockets instead of 4 max) is such a niche product. If you want to do CAD/CAM you probably don't want it, since you want as many cores driving the 3D pipeline as possible. If you do broadcast graphics (where you are mapping video streams onto a simple 3D mesh), it's probably something you want.

I doubt that the RageLT (the PCI version, not the Rage Pro LT, which is a different animal and really common) was found in many of the earlier laptops, at least not the mainstream ones. Neomagic and S3 (and to a certain extent C&T/Trident) dominated the field back then. My guess as to why the RageLT appeared on the Mainstreet/Wallstreets is that ATi (back then just a small graphics ASIC provider north of Toronto, certainly not the "red team" juggernaut going up against nVidia in later years) was willing to work with Apple to ship PowerPC/MacOS drivers for their machines (the Rage II+ was found in the original iMacs as well), while Neomagic and S3 were not.

As for the Savage IX, here's a fairly recent writeup on its capabilities -> https://b31f.wordpress.com/2019/10/24/s3-savage-ix-on-trial/. My own (not very scientific) comparisons between my Dell Latitude C600 (ATi Rage 128 Mobility/M4) and the Thinkpad T21 (Savage/IX) seem to suggest that the Savage was a little behind the M4 in most benchmarks, but still a decent performer at 800x600 for most games made before 2001. For me, the Savage was valued more for its decent compatibility with DOS VESA games.

Savage IX: Thanks for the link. I should have read it before doing my own research today. He is right about the 100/100 MHz core/mem clocks - that's what I identified years ago using low-level tests. Now I see that the Savage IX doesn't support multitexturing, so it seems to be closer to the Savage3D. However, it does support S3TC (contrary to what he said). I've checked my notes and here are all the relevant extensions:

S3 Savage/IX OpenGL

HP Omnibook XE3 / P3 Celeron 850MHz (8.5x100) / 256MB PC133 RAM
Windows 2000 SP4

S3 Savage/IX+MV (86c294) with 8MB RAM
AGP 2x @ 2x

100 MHz core/mem
ROP:TMU = 1:1

8MB 64bit SDR

GL_VENDOR: S3 Graphics, Incorporated
GL_RENDERER: SavageMX
GL_VERSION: 1.1 2.10.77

GL_EXTENSIONS:
GL_ARB_texture_compression GL_EXT_texture_compression_s3tc GL_EXT_abgr GL_EXT_bgra GL_EXT_clip_volume_hint GL_EXT_compiled_vertex_array GL_EXT_fog_coord GL_EXT_packed_pixels GL_EXT_point_parameters GL_EXT_paletted_texture GL_EXT_separate_specular_color GL_EXT_shared_texture_palette GL_EXT_texture_lod_bias GL_EXT_vertex_array GL_KTX_buffer_region GL_S3_s3tc GL_WIN_swap_hint
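
If anyone wants to reproduce this kind of check on their own hardware, here is a minimal sketch in C - not the low-level tool used for the measurements above, just an illustration. It assumes GLUT is available purely to create a current GL context, and has_ext is a hypothetical helper; the glGetString calls and the two extension names are taken straight from the listing above:

/* Minimal sketch, not the measurement tool used above. GLUT is only
   used to get a current GL context; has_ext is a hypothetical helper. */
#include <stdio.h>
#include <string.h>
#include <GL/glut.h>

/* Whole-token search so a short name never matches inside a longer one. */
static int has_ext(const char *exts, const char *name)
{
    const char *p = exts;
    size_t len = strlen(name);
    if (exts == NULL)
        return 0;
    while ((p = strstr(p, name)) != NULL) {
        if ((p == exts || p[-1] == ' ') && (p[len] == ' ' || p[len] == '\0'))
            return 1;
        p += len;
    }
    return 0;
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutCreateWindow("ext-check");  /* creates and binds an OpenGL context */

    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));
    printf("multitexturing (GL_ARB_multitexture):   %s\n",
           has_ext(exts, "GL_ARB_multitexture") ? "yes" : "no");
    printf("S3TC (GL_EXT_texture_compression_s3tc): %s\n",
           has_ext(exts, "GL_EXT_texture_compression_s3tc") ? "yes" : "no");
    return 0;
}

On the Savage/IX driver above this would report multitexturing "no" and S3TC "yes", matching the extension string.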

I assume that they just fixed easy bugs and reused the old 3D core to save as many transistors as possible - although, as the fastest mobile chip of its day, the Savage/IX was pretty efficient. It would be interesting to check whether its video features are on the same level as the Savage4 or Savage3D.

Btw, the Dell Latitude C600 has just the Mobility M3 (the low-cost version with only 8MB of embedded video RAM on a 64-bit bus). It's the same chip that was used in one version of the PowerBook G3. This version was heavily limited by the 64-bit data bus - hi-res multitexturing, or hi-res single texturing plus blending, slowed the chip down to Savage/IX level. I have a C600 and also a C800. The high-end C800 has the Mobility M4 (16MB, 128-bit) and offers much better performance in games. It's sad that the 128-bit versions were available only in workstation-class laptops; the 128-bit interface was necessary to fully utilize both pixel pipelines of the chip.
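
Just to illustrate why the bus width matters so much (the ~100 MHz SDR memory clock here is an assumption for the sake of the example, not a measured figure):

64-bit bus:  8 bytes/clock  x 100 MHz ≈ 0.8 GB/s peak
128-bit bus: 16 bytes/clock x 100 MHz ≈ 1.6 GB/s peak

At the same memory clock, the 128-bit M4 has exactly twice the peak bandwidth of the M3, which is what it takes to keep both pixel pipelines fed once you add multitexturing or blending at higher resolutions.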

RageLT and Mac: I don't think ATI was all that small during the mid- to late-90s. They ran big ads in magazines back then saying something like "every third computer graphics chip (sold) is made by ATI". I don't know what was behind the deal between ATI and Apple. On the other hand, Apple didn't have much to choose from - maybe the only other option was S3 (NVIDIA was too "young", CL didn't have a good 3D accelerator and was going downhill as a company, 3Dfx didn't have a 2D core...).

Cobalt: Although the main concept of the "Cobalt" GPU is similar to "Crime" (SGI O2), Cobalt is a different / better / faster design. Cobalt uses two pixel pipelines running at 100 MHz (instead of just one at 66 MHz). As a rasteriser, Cobalt has 3-5x better performance than Crime. Its fill rate was better than that of the Indigo2 Maximum Impact / Octane with MXI/MXE. Even the geometry performance was better than the Octane's, until the Octane2 was released with VPro V6/V8 (early 2000). Compared with VPro V6/V8, fill rate and geometry performance are about 40% lower on Cobalt. Still good for a system that was so much cheaper (the SGI 320 was cheaper than an O2 with the slow R5200).
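
A rough back-of-the-envelope look at the theoretical peak pixel rates behind that ratio (ignoring memory stalls and setup overhead - illustrative numbers only):

Crime (O2):   1 pipeline  x  66 MHz ≈  66 Mpixel/s
Cobalt (320): 2 pipelines x 100 MHz ≈ 200 Mpixel/s  (~3x Crime)

That matches the lower end of the measured 3-5x range; the upper end presumably shows up in cases where Crime falls further below its own theoretical peak.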

Sadly, upgrading an O2 from an R5K to an R10K/R12K didn't help much with 3D performance. It helped only in cases where the rendering was draw-call limited. Geometry performance increased by no more than 50%, and that was still just a third of Cobalt's. Unlike the O2, where geometry was calculated on the vector units of the R5K-class CPUs, the Cobalt ASIC handles all geometry processing and provides steady performance regardless of CPU speed (I tested this on multiple SGI 320 machines). Geometry acceleration is even mentioned in the official SGI 320 white paper.

I wouldn't say the SGI 320 was a niche-market machine. They marketed the wrong, niche features (video textures and similar stuff), and together with a sales force that didn't want to sell the machine, it was doomed from the beginning. For CAD/CAM, having four CPUs was not mandatory. In general, most CAD/CAE workstations were configured with just one or two CPUs, and that was true even for high-end software packages. SGI didn't try hard to sell the machine and didn't try hard to fix the bugs (like the one with OpenGL & VSync) or improve driver performance. Even with all its flaws, the SGI VW 320 was the fastest sub-$5,000 workstation in the world for anyone who needed to handle extremely complex 3D models in CAD/CAM/CAE software.


Reply 42 of 45, by Stiletto

User metadata
Rank l33t

Came across this interesting thread about the ALi Aladdin 7 integrated GPU over at Beyond3D.com:
https://forum.beyond3d.com/threads/the-fabled … n-7-igpu.61051/

"I see a little silhouette-o of a man, Scaramouche, Scaramouche, will you
do the Fandango!" - Queen

Stiletto

Reply 43 of 45, by yawetaG

User metadata
Rank Oldbie
dionb wrote on 2020-03-07, 21:38:

Why the focus on Intel? They were pretty late to the game. Integrated video for the PC started with the 1996-era SiS 5511+6202 chipset. Maybe "integrated" isn't quite the right term, as it was still a discrete chip, but it shared system memory, which is the defining feature of integrated VGA. A year later, SiS came out with the 5596, the first actually integrated solution, integrating the 6205 into the 5571 northbridge. This was two years before Intel's 810.

Of course, performance was awful, particularly with the 5511+6202 and 5596, as shared bandwidth in an EDO system left the CPU (and the VGA core, for that matter) completely starved. Intel's i810 had a significantly better core, but it suffered just as much from having to share bandwidth with the CPU, all the more so when the i810 was paired with a 133MHz FSB but only allowed 100MHz memory - and then halved that.

And by "awful", dionb actually means "truly, truly horrible".

The 5596 is horribly slow even for simple tasks, such as redrawing the screen during the Windows 9x shutdown sequence (and it doesn't handle the "dimming" of the screen well at all). The graphics memory can be set to various sizes (IIRC 1, 2 or 4 megabytes), with little difference in actual performance, because the shared memory implementation is very primitive and somewhat unstable on the graphics side. It is supposed to support early versions of DirectX, but given the horrid baseline performance I don't see that going right... ever.

Anyone who complains about early Intel integrated graphics being awful should try out one of those SiS chipsets.
Just don't get a laptop with it because you'll want to upgrade the graphics ASAP.

Reply 44 of 45, by 386SX

User metadata
Rank Oldbie

My experiences with iGPUs are mostly sad, even though I've always admired having such low-power integrated modules with sometimes high-end, or at least still modern, features.
I had:

NeoMagic 256AV
ATi Rage Mobility (I don't remember which model) on a Dell notebook
ATi Radeon Mobility 9700
Intel GMA950 on the N270 netbook
Intel GMA3150 on the N450 netbook
Intel GMA3600.... on the mini-itx Atom D2x00
AMD Radeon HD 6310 on AMD E350 notebook

With the Intel ones I always felt that performance, even in older games, wasn't the best, but at least I have good memories of their compatibility - except for the GMA3600, which I've tested a lot lately and which is a different GPU and a different, more complex story. The ATi ones were probably my favourites; having something like the Radeon Mobility 9700 back in those days made it feel like a powerful machine, though it probably also pushed the whole notebook's temperature up a lot, and the one I had commonly suffered from solder joint problems, much like the early Xbox 360 consoles did.
The Rage Mobility was also a good one; I remember playing Unreal and various old games on it quite well.
The NeoMagic was installed in one of those early subnotebooks, high-end for its time, but it was just a good 2D chip for the Windows 98 SE GUI and no more. I remember myths about some D3D acceleration that I never managed to enable, and a user here confirmed to me that it was a myth stemming from a driver problem.

Reply 45 of 45, by diagon_swarm

User metadata
Rank Newbie
yawetaG wrote on 2020-09-12, 18:26:

Just don't get a laptop with it because you'll want to upgrade the graphics ASAP.

I don't think these early IGP chipsets were ever used in laptops. IGPs only became a real thing in the mobile segment around 2001, with the SiS 630 / Trident CyberBlade / i830MG / VIA ProSavage.