VOGONS


My thought on integrated video chipsets


Reply 40 of 74, by ragefury32

Rank: Oldbie
diagon_swarm wrote on 2020-03-21, 19:45:

The VW 320 was a far more interesting product than the 540 (the 540 just had more PCI-X slots and double the number of CPU slots). You could get a VW 320 for under $4,000 and get 3D and geometry performance comparable to $10,000 workstations (the 3D core is exactly the same in the 320 and 540). That was not a bad deal. On top of that, the texture fill rate was superior to any PC workstation card available at the time. Btw, I've measured many professional cards from the 90s - http://swarm.cz/gpubench/_GPUbench-results.htm (it's not in an easily readable form but provides multiple useful details)... the hi-res texture performance was bad with Intergraph and 3Dlabs products.

Rage LT - That's what I think, but I have never found evidence to support it (other than that I haven't found any other laptop with this chip... but that could be like EGA-equipped laptops - I thought that only a few were made, and then I found a lot of them from both well-known brands and OEMs).

Savage4 - I'm not sure about that. The info I found was always very fuzzy. I just know that when I tested the chip myself, the per-clock performance was perfectly comparable with the desktop Savage4. If you have relevant sources, I would be happy to read about this.

From what I remember, the Cobalt architecture is similar to the O2 (also a UMA setup) - that is, the geometry is done on the CPUs, and then the onboard chips of the Cobalt chipset take care of texture mapping, mip-mapping and some codec decompression. That's why I brought up Pro/E versus, say, texture mapping onto video. If your task at hand is CAD/CAM-related work, you pretty much want as much CPU horsepower as possible (upgrading SGI O2s from R4Ks to an R12K would significantly boost the performance of the rendering pipeline)... and that's why the 320 (1-2 sockets instead of a 4-socket max) is such a niche product. If you want to do CAD/CAM you probably do not want it, since you want as many CPUs driving the 3D pipeline as possible. If you do broadcast graphics (where you are mapping video streams onto a simple 3D mesh), it'll probably be something you want.

I doubt that the Rage LT (the PCI version, not the Rage LT Pro, which is a different animal and really common) was found on many of the earlier laptops, at least not the mainstream ones. Neomagic and S3 (and to a certain extent C&T/Trident) dominated the field back then. My guess about the appearance of the Rage LT on the Mainstreet/Wallstreets is that ATi (back then just a small graphics ASIC provider north of Toronto, certainly not the "red team" juggernaut going up against nVidia in later years) was willing to work with Apple to ship PowerPC/MacOS-based drivers for their machines (the Rage II+ was found in the original iMacs as well), while Neomagic and S3 were not.

As for the Savage IX, here's a rather recent writeup regarding its capabilities -> https://b31f.wordpress.com/2019/10/24/s3-savage-ix-on-trial/. My own (not very scientific) comparisons between my Dell Latitude C600 (ATi Rage 128 Mobility/M4) and the ThinkPad T21 (Savage/IX) seem to suggest that the Savage was a little behind the M4 in most benchmarks, but still a decent performer at 800x600 for most games made before 2001. For me the Savage was valued more for its decent compatibility with DOS VESA games.

Reply 41 of 74, by diagon_swarm

Rank: Newbie
ragefury32 wrote on 2020-03-23, 16:57:

From what I remember, the Cobalt architecture is similar to the O2 (also a UMA setup) - that is, the geometry is done on the CPUs, and then the onboard chips of the Cobalt chipset take care of texture mapping, mip-mapping and some codec decompression. That's why I brought up Pro/E versus, say, texture mapping onto video. If your task at hand is CAD/CAM-related work, you pretty much want as much CPU horsepower as possible (upgrading SGI O2s from R4Ks to an R12K would significantly boost the performance of the rendering pipeline)... and that's why the 320 (1-2 sockets instead of a 4-socket max) is such a niche product. If you want to do CAD/CAM you probably do not want it, since you want as many CPUs driving the 3D pipeline as possible. If you do broadcast graphics (where you are mapping video streams onto a simple 3D mesh), it'll probably be something you want.

I doubt that the Rage LT (the PCI version, not the Rage LT Pro, which is a different animal and really common) was found on many of the earlier laptops, at least not the mainstream ones. Neomagic and S3 (and to a certain extent C&T/Trident) dominated the field back then. My guess about the appearance of the Rage LT on the Mainstreet/Wallstreets is that ATi (back then just a small graphics ASIC provider north of Toronto, certainly not the "red team" juggernaut going up against nVidia in later years) was willing to work with Apple to ship PowerPC/MacOS-based drivers for their machines (the Rage II+ was found in the original iMacs as well), while Neomagic and S3 were not.

As for the Savage IX, here's a rather recent writeup regarding its capabilities -> https://b31f.wordpress.com/2019/10/24/s3-savage-ix-on-trial/. My own (not very scientific) comparisons between my Dell Latitude C600 (ATi Rage 128 Mobility/M4) and the ThinkPad T21 (Savage/IX) seem to suggest that the Savage was a little behind the M4 in most benchmarks, but still a decent performer at 800x600 for most games made before 2001. For me the Savage was valued more for its decent compatibility with DOS VESA games.

Savage IX: Thanks for the link. I should have read it first before doing my own research today. He is right about the 100/100 MHz core/mem clock - that's what I identified years ago using low-level tests. Now I see that the Savage IX doesn't support multitexturing, so it seems to be more like the Savage3D. However, it does support S3TC (contrary to what he said). I've checked my notes and all the relevant extensions are there (a quick way to verify this on the hardware itself is sketched after the dump below):

S3 Savage/IX OpenGL

HP Omnibook XE3 / P3 Celeron 850MHz (8.5x100) / 256MB PC133 RAM
Windows 2000 SP4

S3 Savage/IX+MV (86c294) with 8MB RAM
AGP 2x @ 2x

100 MHz core/mem
ROP:TMU = 1:1

8MB 64bit SDR

GL_VENDOR: S3 Graphics, Incorporated
GL_RENDERER: SavageMX
GL_VERSION: 1.1 2.10.77

GL_EXTENSIONS:
GL_ARB_texture_compression GL_EXT_texture_compression_s3tc GL_EXT_abgr GL_EXT_bgra GL_EXT_clip_volume_hint GL_EXT_compiled_vertex_array GL_EXT_fog_coord GL_EXT_packed_pixels GL_EXT_point_parameters GL_EXT_paletted_texture GL_EXT_separate_specular_color GL_EXT_shared_texture_palette GL_EXT_texture_lod_bias GL_EXT_vertex_array GL_KTX_buffer_region GL_S3_s3tc GL_WIN_swap_hint
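
In case anyone wants to reproduce this kind of check, here's a minimal sketch in C. It is not the tool I actually used - it just assumes a current OpenGL context has already been created elsewhere (GLUT, SDL, wglCreateContext, whatever you prefer) and scans GL_EXTENSIONS for the S3TC-related tokens listed above:

/* Minimal sketch: query the OpenGL extension string and look for the
   S3TC-related extensions reported above. Assumes a current OpenGL
   context already exists; without one, glGetString() returns NULL and
   nothing useful is reported. */
#include <stdio.h>
#include <string.h>
#include <GL/gl.h>

/* Naive substring test; a stricter version would match whole
   space-delimited tokens to avoid prefix collisions. */
static int has_ext(const char *exts, const char *name)
{
    return exts != NULL && strstr(exts, name) != NULL;
}

void report_s3tc_support(void)
{
    const char *renderer = (const char *)glGetString(GL_RENDERER);
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);

    printf("GL_RENDERER: %s\n", renderer ? renderer : "(no GL context)");
    printf("GL_ARB_texture_compression:      %s\n", has_ext(exts, "GL_ARB_texture_compression") ? "yes" : "no");
    printf("GL_EXT_texture_compression_s3tc: %s\n", has_ext(exts, "GL_EXT_texture_compression_s3tc") ? "yes" : "no");
    printf("GL_S3_s3tc:                      %s\n", has_ext(exts, "GL_S3_s3tc") ? "yes" : "no");
}

On the Savage/IX dump above, all three should come back "yes".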

I assume that they just fixed the easy bugs and reused the old 3D core to save as many transistors as possible - although, being the fastest mobile chip of the time, the Savage/IX was pretty efficient. It would be interesting to check whether its video features are on the same level as the Savage4 or the Savage3D.

Btw, the Dell Latitude C600 has just the Mobility M3 (the low-cost version, with just 8MB of embedded video RAM on a 64-bit bus). This is the same part that was used in one version of the PowerBook G3. This version was heavily limited by the 64-bit data bus: hi-res multitexturing, or hi-res single texturing plus blending, slowed the chip down to Savage/IX levels. I have both a C600 and a C800. The high-end C800 has the Mobility M4 (16MB, 128-bit) and offers much better performance in games. It's sad that the 128-bit versions were available only in workstation-class laptops; the 128-bit interface was necessary to fully utilize both of the chip's pixel pipelines.

Rage LT and Mac: I think ATI was not all that small during the mid-to-late 90s. They had big ads in magazines back then saying something like "every third computer graphics chip (sold) is made by ATI". I don't know what was behind the deal between ATI and Apple. On the other hand, Apple didn't have much to choose from - maybe the only other choice was S3 (NVIDIA was too "young", CL didn't have a good 3D accelerator and was going downhill as a company, 3dfx didn't have a 2D core...).

Cobalt: Although the main concept of the "Cobalt" GPU is similar to "Crime" (SGI O2), Cobalt is a different/better/faster design. Cobalt uses two pixel pipelines running at 100 MHz (instead of just one at 66 MHz). As a rasteriser, Cobalt has 3-5x the performance of Crime. Fill rate was better than on the Indigo2 Maximum Impact / Octane with MXI/MXE. Even the geometry performance was better than the Octane's, until the Octane2 was released with VPro V6/V8 (early 2000). Compared with VPro V6/V8, fill rate and geometry performance are about 40% lower on Cobalt. Still good for a system that was so much cheaper (the SGI 320 was cheaper than an O2 with the slow R5200).
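
(Rough sanity check on that rasteriser figure, using just the clocks and pipeline counts above: Crime peaks at 1 pipe x 66 MHz = 66 Mpixels/s, while Cobalt peaks at 2 pipes x 100 MHz = 200 Mpixels/s - about 3x in raw pixel rate alone, with the rest of the 3-5x presumably coming from efficiency differences.)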

Sadly, upgrading an O2 from the R5K to an R10K/R12K didn't help much with 3D performance. It helped only in cases where the rendering was draw-call limited. Geometry performance increased by no more than 50%, and that was still just a third of Cobalt's. Unlike the O2, where geometry was calculated using the CPU's vector units, the Cobalt ASIC handles all geometry processing and provides steady performance regardless of CPU speed (I tested this on multiple SGI 320 machines). Geometry acceleration is even mentioned in the official SGI 320 white paper.

I would not say that the SGI 320 was for a niche market. They marketed the wrong/niche features (video textures and similar stuff), and together with a sales force that didn't want to sell the machine, it was doomed from the beginning. For CAD/CAM, having four CPUs was not mandatory. In general, most CAD/CAE workstations were configured with just one or two CPUs, and that was true even for high-end software packages. SGI didn't try hard to sell the machine, and didn't try hard to fix the bugs (like the OpenGL/VSync one) or improve driver performance. Even with all the flaws, the SGI VW 320 was the fastest sub-$5,000 workstation in the world for anyone who needed to handle extremely complex 3D models in CAD/CAM/CAE software.


Vintage computers / SGI / PC and UNIX workstation OpenGL performance comparison

Reply 42 of 74, by Stiletto

Rank: l33t++

Came across this interesting thread about the ALi Aladdin 7 integrated GPU over at Beyond3D.com:
https://forum.beyond3d.com/threads/the-fabled … n-7-igpu.61051/

"I see a little silhouette-o of a man, Scaramouche, Scaramouche, will you
do the Fandango!" - Queen

Stiletto

Reply 43 of 74, by yawetaG

Rank: Oldbie
dionb wrote on 2020-03-07, 21:38:

Why the focus on Intel? They were pretty late to the game. Integrated video for the PC started with the 1996-era SiS 5511+6202 chipset. Maybe "integrated" isn't quite the term, as it was still a discrete chip, but it shared system memory, which is the defining feature of integrated VGA. A year later, SiS came out with the 5596, which was the first actually integrated solution, integrating the 6205 into the 5571 northbridge. This was two years before Intel's 810.

Of course performance was awful, particularly on the 5511+6202 and 5596, as shared bandwidth in an EDO system left the CPU (and the VGA core, for that matter) completely starved. Intel's i810 had a significantly better core, but it suffered just as much from having to share bandwidth with the CPU, all the more so when the i810 was paired with a 133MHz FSB but only allowed 100MHz memory - and then halved that.

And by "awful", dionb actually means "truly, truly horrible".

The 5596 is horribly slow even for simple tasks such as redrawing the screen in the Windows 9x shutdown sequence (and it doesn't handle the "dimming" of the screen well at all). The graphics memory can be set to various sizes (IIRC 1, 2 or 4 megabytes), with little difference in actual performance, because the shared-memory implementation is very primitive and somewhat unstable on the graphics side. It is supposed to support early versions of DirectX, but given the horrid baseline performance I don't see that going right... ever.

Anyone who complains about early Intel integrated graphics being awful should try out one of those SiS chipsets.
Just don't get a laptop with it because you'll want to upgrade the graphics ASAP.

Reply 44 of 74, by 386SX

Rank: l33t

My experiences with iGPUs are mostly sad, even if I always admired having such low-power integrated modules with sometimes high-end, or at least modern, features.
I had:

NeoMagic 256AV
ATi Rage Mobility (I don't remember which model) on a Dell notebook
ATi Radeon Mobility 9700
Intel GMA950 on the N270 netbook
Intel GMA3150 on the N450 netbook
Intel GMA3600.... on the mini-itx Atom D2x00
AMD Radeon HD 6310 on AMD E350 notebook

With the Intel ones I always felt that performance, even in older games, wasn't the best, but at least I have good memories of their compatibility - except for the GMA3600, which I've tested a lot lately and which is a different GPU and a different, more complicated story. The ATi ones were probably my favourites; having something like the Radeon Mobility 9700 back in those days made it look like a powerful machine, though it probably pushed the whole notebook's temperature up a lot, and the one I had commonly suffered from solder joint problems, much like the early Xbox 360 consoles did.
But the Rage Mobility was a good one too; I remember playing Unreal and various other old games on it quite well.
The NeoMagic was installed in one of those early subnotebooks, high-end for its time, but it was just a decent 2D chip for driving the Windows 98 SE GUI and no more. I remember myths about some D3D acceleration that I never managed to enable, and a user here confirmed to me that it was a myth that came from a driver problem.

Reply 45 of 74, by diagon_swarm

Rank: Newbie
yawetaG wrote on 2020-09-12, 18:26:

Just don't get a laptop with it because you'll want to upgrade the graphics ASAP.

I don't think these early IGP chipsets were ever used in laptops. IGPs started to be a real thing in the mobile segment around 2001, with the SiS 630 / Trident CyberBlade / i830MG / VIA ProSavage.

Vintage computers / SGI / PC and UNIX workstation OpenGL performance comparison

Reply 46 of 74, by ragefury32

Rank: Oldbie
diagon_swarm wrote on 2020-03-23, 20:59:
Savage IX: Thanks for the link. I should have read it first before doing my own research today. He is right about the 100/100 MHz […]

Okay, so several updates to the points -

a) Yeah, the Savage/IX did not do multi-texturing, but S3TC was there. Oddly enough, its DOS/VESA performance was excellent - similar to the M3/M4 in stock config, but once you run fastvid on it, performance really improves - that, and the Savage seems to work well with existing S3 drivers. No such luck with the M3: using the Mach32/Mach64 VESA drivers in DOS games (Rowan Software flight sims or MSFS5) tends to crash or cause glitches. If it's Direct3D/OpenGL I want, I'll gun for the M3/M4. If it's DOS/VESA, it's the S3. The MeTaL support isn't bad either.

b) As for the M3, it depends on the VRAM. ATi seems to have marketed several M3 variants - the base M3 has 8MB embedded on-die (64-bit datapath); another M3 variant has 8MB embedded (64-bit) but with provisions for another 8MB (also 64-bit), which gives it 128 bits. There's one more variant with 16MB embedded on-die and a 128-bit datapath. Mine has the 16MB of VRAM, so it's not VRAM-bandwidth constrained.
The major difference between the M3 and the M4 is that the M4 can do AGP 4x, which is what is found in some machines with the i815EP chipset. From what I remember, the only M4 sold by Dell has 32MB of VRAM connected via a 128-bit datapath.

c) I don't think Apple ever dealt with S3 as a MacOS GPU supplier. I think one of the DOS/Windows compatibility cards they sold back in the clone-wars era had an S3 Trio, but that's about it. It might just be a lack of reputation with Cupertino.

d) Yeah, SGI by that point was pretty much in a state of implosion - having all that cheap 3D hardware kicking its MIPS-based machines all over the place probably did not make them too happy about further engineering the instruments of their upcoming demise. Oh well, nVidia did end up taking over a large chunk of their engineering talent...

Reply 47 of 74, by ragefury32

Rank: Oldbie
386SX wrote on 2020-09-13, 10:21:
My experiences with iGPUs are mostly sad, even if I always admired having such low-power integrated modules with […]

Well, GPU performance depends on two things - the strength of the GPU itself (how many gigaflops, how much it can offload to dedicated ASICs, and whether it is optimized for speed or power efficiency), and how efficiently it can use system RAM bandwidth. Integrated GPUs do not have VRAM of their own and must rely on system memory. Overall, the memory bandwidth available to an integrated GPU lags about 7 to 10 years behind the bandwidth available to a discrete GPU. For example, a modern Ryzen 5 2400GE (by no means a new APU) with dual-channel DDR4-2400 RAM can give you up to 38.4GB/sec of memory bandwidth for its 5th-gen-GCN-based RX Vega 11. That's around the same internal bandwidth as, say, a Radeon X1800 (remember that one?) or an nVidia GT 730... or below that of a GDDR5-equipped nVidia GT 1030 (48GB/sec). Of course, this assumes that the GPU is not contending with the rest of the system for bandwidth, and that the GPU is optimized for the task. The Intel UHD620s found in modern Intel "Lake" CPUs are also connected to dual-channel DDR4, and they don't perform too great in OpenGL/Direct3D against the Vega 11 (of course, when they are doing transcoding the Intel will thrash the Vega thanks to QuickSync). That's partly because Intel is Intel, and partly because they optimize the GPU for cool running/battery runtime rather than speed. Intel would embed a large eDRAM cache (64-128MB) in their higher-end integrated offerings to improve memory bandwidth and overall performance - it started with the Crystalwell Haswells as the Iris Pro, and it's found today in the UHD640/650 GPUs sold in the 2020 Intel Macs.
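
(For what it's worth, the 38.4GB/sec figure is just the usual DDR arithmetic, assuming the standard 64-bit channel width: 2 channels x 8 bytes per transfer x 2400 MT/s = 38.4 GB/s. The GT 1030 number works out the same way: a single 64-bit GDDR5 bus at roughly 6000 MT/s gives about 48 GB/s.)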

The Neomagic MagicMedia 256AV (NM2200), Radeon Mobility 9700 and the Rage Mobility were not integrated GPUs - they were soldered onto the motherboards, but they are discrete GPUs. The Neomagic had 2.5MB of VRAM, and it's a 2D-only GPU. Neomagic's 15-minute claim to fame was that the company founders figured out a way to economically put the graphics ASIC on the same die as the VRAM, hence the name Neomagic, or "NEO Memory And LoGIC". Technically the NM2200 was both a GPU and an AC97 codec, so it can do audio and video at the same time... which also explains the "AV" part - think of it as a laptop version of the NV1. The 2D acceleration was kinda meh on the 2200, and the sound? About the same. Some Neomagic customers assumed that since it worked with DirectX, it provided hardware acceleration. Well, the 2D ASIC accelerated the software rendering codepath, but there was no dedicated 3D hardware until the very end of the product line. As far as I remember, Neomagic only had a very rudimentary 3D accelerator (supposedly performing worse than the ViRGE MX) in its last product, the Neomagic 256XL+ (NM2380). That was not a popular chip at all... only some Sony Vaios used it. I would like to get my hands on one to test it, but they are not easy to come by. It's kind of amazing that at one point Neomagic owned 40% of the laptop GPU market - once nVidia got into the mobile market, Neomagic bailed out quickly and went into the mobile hardware space to sell tablet ASICs... until nVidia ate their lunch with the Tegras. They are now an e-commerce platform provider for Latin America, doing rather indifferently on the Nasdaq pink-sheets/OTC market.

The Radeon Mobility 9700 has 64 to 128MB of VRAM and was one of the stronger mobile GPUs of that era - there's one in my PowerBook PBG4/15/1.5GHz. Good GPU, let down by the PowerPC G4 and its MPX (edit: not 60x) front-side bus.

The original Rage Mobility had between 4 and 8 MB of VRAM embedded on-die, depending on what the laptop vendor picked. It was... okay. As one of the first mobile GPUs with dedicated (but weak) 3D hardware, it wasn't something you really wanted, but it was a freebie, so to speak. Performance is similar or slightly inferior to a Riva 128 for the most part.

Last edited by ragefury32 on 2021-01-11, 05:03. Edited 2 times in total.

Reply 48 of 74, by Standard Def Steve

Rank: Oldbie

Jeez, did the Powerbooks actually use 60x right up until the Intel switch? That's terribad if true. I used to own a 17" G4 and always thought that it felt a little sluggish in Leopard, despite having a 9700 Pro. The CPU being stuck on the 60x bus would definitely explain it.

IIRC on the desktop side, the Yikes/PCI G4 was the last machine to use 60x; all of the AGP machines used the much faster MPX bus.

94 MHz NEC VR4300 | SGI Reality CoPro | 8MB RDRAM | Each game gets its own SSD - nooice!

Reply 49 of 74, by ragefury32

Rank: Oldbie
Standard Def Steve wrote on 2021-01-02, 18:56:

Jeez, did the Powerbooks actually use 60x right up until the Intel switch? That's terribad if true. I used to own a 17" G4 and always thought that it felt a little sluggish in Leopard, despite having a 9700 Pro. The CPU being stuck on the 60x bus would definitely explain it.

IIRC on the desktop side, the Yikes/PCI G4 was the last machine to use 60x; all of the AGP machines used the much faster MPX bus.

Whoops - I confused the 60x with the MPx. The Aluminum PowerBooks all use the Intrepid/KeyLargo setup, so they are 133MHz FSB machines with DDR RAM. Even with the MPx they are not exactly keeping up with the 266MHz FSB on the Athlon Bartons, much less the 400/533/800 MHz on the Netburst/Pentium-M - at that stage it’s like trying to bring a Tualatin to 2005 - it’s got some good points but definitely showing its age.

Reply 50 of 74, by 386SX

Rank: l33t
ragefury32 wrote on 2021-01-02, 06:41:
Well, GPU performance depends on two things - the strength of the GPU itself (how many gigaflops, how much it can offload […]

Thanks for the explanation. I have decent memories of the Sony subnotebook that had the NeoMagic 256AV, and I remember trying anything and everything to enable that mythical Direct3D acceleration, but in the end it was a myth after all. At the time I didn't like software-rendered games, so I was hoping to see accelerated games running on that subnotebook. As for the various Radeon and Rage Mobility chips, I was quite happy with them; they gave good compatibility with most of the classic games I usually played, like Quake 1/2, Unreal, etc...
Lately I'm probably pushing the most out of the GMA3600 (PowerVR SGX545) in the Atom D2x00 CPUs. People nowadays would still run from this iGPU because it had a long history of driver problems, poor Linux support, 32-bit-only installation, few updates and issues even with the only OS it supported. But I still like how a "smartphone" GPU was running x86 desktop games, or at least trying to. IMHO most of the problem was always the drivers, but the 3-4 watt iGPU did its best. I'm playing Thief Gold on it and, patch after patch, found a way to make it compatible and faster than it's supposed to be, because this GPU and Win 8.x don't seem to like old DirectX 6 games. It also ran Doom 3, Far Cry, Thief 2 (but in windowed mode), and even GTA IV... And the iGPU core had all the high-end features of the moment: theoretical DX 10.x compatibility (in reality the drivers only went up to DX 9.0c), H.264 60fps decoding... I always had the impression that support for this iGPU was never taken seriously from the start; I suppose it could have been pushed further. The GMA3650 ran the core clock higher, but I suppose there were not many differences. Probably one of the more complicated, oddball GPUs in history.

Reply 51 of 74, by Warlord

Rank: l33t
Shagittarius wrote on 2020-03-06, 00:30:
rmay635703 wrote on 2020-03-06, 00:26:
Shagittarius wrote on 2020-03-06, 00:19:

Honda-powered cars have won Le Mans multiple times

=)

Honda Insight

https://m.youtube.com/watch?v=McJJeukIWSA

That's like what happens when most integrated chips try to run current games.

More like that's what happens when someone thinks front-wheel-drive cars are good for racing, or going fast.

Reply 52 of 74, by ragefury32

Rank: Oldbie
Warlord wrote on 2021-01-03, 01:31:
Shagittarius wrote on 2020-03-06, 00:30:
rmay635703 wrote on 2020-03-06, 00:26:

That's like what happens when most integrated chips try to run current games.

More like that's what happens when someone thinks front-wheel-drive cars are good for racing, or going fast.

So what was the point of that video? A talented tuner like Brian Gillespie hit a rough patch on a K20-swapped Honda Insight Mk. 1 (one of the most aerodynamic chassis ever made for a production car) and flipped it... and the roll cage allowed him to survive?

And he's back again trying to push 250 mph this time? Never mind that he did 200 mph on that same car the day before...

http://vtec.academy/insights-into-speed/

May I also remind you that all current game consoles are running on integrated graphics - it's just that they have tons of memory bandwidth compared to run-of-the-mill machines - and even then, you can run AAA titles on integrated graphics. Here's Cyberpunk 2077 on a Ryzen 5 3400G... once again, not the newest APU out there, and you will need to scale some settings back, but it's totally doable ->

https://www.youtube.com/watch?v=0DOqi8Bke5Q

Last edited by ragefury32 on 2021-01-03, 06:43. Edited 1 time in total.

Reply 53 of 74, by Warlord

Rank: l33t

That might be something in that class of car, though I don't know that much about subcompacts; it's really not an eco car anymore though. Under normal circumstances 200 mph is really not breaking any records. Pretty sure a Tesla, if you want to be eco-friendly, will do 200 mph out of the box without any mods or pollution 🤣. But anyway, aerodynamics are nice, but with no downforce to keep the tires on the ground, things like that can happen. At 200 mph or higher, a car like that will literally lift off the ground like a boat hydroplaning, so it's really no mystery how that can happen.

Reply 54 of 74, by ragefury32

Rank: Oldbie
Warlord wrote on 2021-01-03, 04:57:

That might be something in that class of car, though I don't know that much about subcompacts; it's really not an eco car anymore though. Under normal circumstances 200 mph is really not breaking any records. Pretty sure a Tesla, if you want to be eco-friendly, will do 200 mph out of the box without any mods or pollution 🤣. But anyway, aerodynamics are nice, but with no downforce to keep the tires on the ground, things like that can happen. At 200 mph or higher, a car like that will literally lift off the ground like a boat hydroplaning, so it's really no mystery how that can happen.

So what exactly was your point, then? That integrated graphics will act like a first-generation Honda Insight (which is basically a CRX with skinny tires) with a K20 engine inside, and, without any improvements to its road-holding, will flip over spectacularly during a 200 mph speed run?
Whatever the attempted analogy was, it breaks down badly here...

Reply 55 of 74, by Warlord

Rank: l33t

🤣 I was just commenting on the lawnmower that crashed trying to do 200. The integrated graphics in the current consoles are great. This thread reminds me of that perfect gaming laptop thread. The first time integrated graphics were good was when Microsoft put nVidia graphics into the first Xbox. ATI had some decent graphics in early laptops - think the 9600 in the ThinkPad, maybe even the 7500 - but from a retro gaming point of view nVidia is where it's at. Everything else is inferior unless it's 3dfx, which was integrated into the MSI MS-6168; that's probably the first time integrated graphics didn't suck, unless you count the early Tseng and S3 onboard solutions that were fine in their day for DOS machines.

Reply 56 of 74, by ragefury32

Rank: Oldbie
Warlord wrote on 2021-01-03, 21:45:

🤣 I was just commenting on the lawnmower that crashed trying to do 200. The integrated graphics in the current consoles are great. This thread reminds me of that perfect gaming laptop thread. The first time integrated graphics were good was when Microsoft put nVidia graphics into the first Xbox. ATI had some decent graphics in early laptops - think the 9600 in the ThinkPad, maybe even the 7500 - but from a retro gaming point of view nVidia is where it's at. Everything else is inferior unless it's 3dfx, which was integrated into the MSI MS-6168; that's probably the first time integrated graphics didn't suck, unless you count the early Tseng and S3 onboard solutions that were fine in their day for DOS machines.

Uh, yeah. That same lawnmower with a K20 also did 200 mph the previous day, so it's definitely a feat of automotive engineering. If Brian Gillespie actually improved the suspension, it'd probably survive those speed runs.

nVidia's involvement with integrated graphics didn't start with the original Xbox - it had already sold TNT2 silicon to ALi (Acer Labs) back in 1999, which became the M1631 Aladdin TNT2 northbridge, released before the Xbox - think of it as an nVidia TNT2 in integrated graphics form. It was Acer Labs' competitor to the KM133 ProSavage integrated video setup on Socket A / Socket 370 machines - it was sold either with a frame-buffer cache or as an entirely unified memory architecture (UMA) setup. It was actually fairly competitive against the ProSavage back then, flipping places with it in Q3A and UT, and it made integrated graphics gaming doable at the time -> https://www.anandtech.com/show/700/13

As for the MSI MS-6168 - IT'S NOT AN INTEGRATED GRAPHICS SETUP. The manual pointed out where the dedicated VRAM was located on the motherboard for the Voodoo3. If it has dedicated VRAM, it's not integrated. It's just a Voodoo3 embedded onto the board with none of the cost-cutting/bandwidth constraints of an integrated graphics setup.

Look - here's the difference between embedded graphics and integrated graphics. They are not the same thing.
Embedded means that the GPU and its own VRAM are embedded in the device where they are being used. It's also known as discrete graphics. If you have a Dell XPS 15 with an nVidia GTX 1050 Mobile with 4GB of VRAM, that's embedded/discrete graphics.
Integrated graphics means that the GPU has no dedicated VRAM and must "borrow" main memory for all operations. If you have an Intel GMA 4500HD or nVidia MCP79 graphics, that's integrated graphics.

Also, which perfect gaming laptop thread?

Last edited by ragefury32 on 2021-01-04, 06:08. Edited 4 times in total.

Reply 57 of 74, by Warlord

Rank: l33t
ragefury32 wrote:

so it's definitely a feat of automotive engineering.

Sure you don't mean a GTR? Or some other real car, like a Ford 🤣 Or, if you prefer, the ALi chipset or nForce2 approach. Savage graphics might have been OK in synthetic benchmarks, but from everything I have seen they have all kinds of bugs in real-world gaming.

This Ford is nVidia.
https://www.youtube.com/watch?v=fpL10WIYRBQ

Reply 58 of 74, by ragefury32

Rank: Oldbie
Warlord wrote on 2021-01-03, 23:59:
Sure you don't mean a GTR? Or some other real car, like a Ford 🤣 Or, if you prefer, the ALi chipset or nForce2 approach. Savage […]

Several things -

The ALi M1631 is not the nForce2. The nForce2 came out nearly two years later and was based on a completely different GeForce2 MX core. My point is that nVidia had been in integrated graphics since well before the nForce/NV2A in the Xbox. The original nVidia NV1 PCI board was a GPU, an APU and a game-controller interface all on the same card (from what I remember, the 4MB of VRAM is a frame buffer, not dedicated VRAM for housing shaders/command queues and textures), and if nVidia hadn't screwed up a major demo at Sega, they would have been inside the Dreamcast (Sega threw a crapload of money at nVidia early in its history to develop the NV1, since they wanted something that could do quads like the Saturn... but be less of a trainwreck to work with. What they got was an even bigger trainwreck.)

S3 Savage has been "just fine" when it's used for the things it was designed for, i.e. DirectX 6-era stuff. The "all kinds of bugs" in real-world gaming depends on which driver versions and which variant of the Savage family you had... it could be the original Savage3D/MX/IX with no multitexturing, it could be the Savage4 with multitexturing and some bug fixes, or it could be the Savage2000, which was just fine as long as you didn't turn on its glitchy T&L unit (which requires a machine with DirectX 7+ installed and registry hacks to enable that code path). I ran Savage4s and a SavageMX and I never ran into issues (at least none that aren't already known, like the missing fog on Savage cards in the first level of Shogo).
The same kinds of issues (tearing, artifacts, missing effects) are also seen on Intel, Matrox and ATi cards.

So the nVidia is now compared to, what, a heavily modified Fox-body Mustang, and is therefore overly expensive, overpowered, has few practical applications outside of the field it is targeted at, serves as the engineering equivalent of a big-dick measuring contest, and will be beaten by someone who is even more willing to throw money at the problem... like this guy with a modified Ford GT?

https://youtu.be/ZM3nH-IYUeU

That seems apt for anyone who has a GTX 1080 getting beaten down by an RTX 2070, which is now being beaten down by those who threw money at an RTX 3070 or whatever the heck nVidia is calling it these days. Contrary to popular belief, most people with discrete video are probably on GTX 1050/1060s, which is probably more akin to people who drive an M3 or a Focus RS. Not everyone thinks mounting the biggest engine you can find in a light chassis and blasting it down a straight line is a real motorsport. For one, there is this thing in front of the driver called a steering wheel, and it is used to steer the car around these things called curves and corners.

And no, getting that piece-of-shit econobox Honda Insight to 190+ takes engineering prowess, much like how nVidia, AMD and Intel had to come up with better drivers and silicon engineering to get the GeForce 9300s, Radeon R5s and the Iris Pros to reasonable performance levels despite being hamstrung by silicon and energy budgets. Of course, doing it right means not flipping the car and doing it consistently every time - that's more like this: tuning a bunch of French shopping carts to corner faster than the Miatas or the M3s out there on a track day. They are still getting passed, but a race is a race, and they are certainly having fun on a budget... which is what integrated graphics is all about.

https://youtu.be/tXQgA7ni3gs

Reply 59 of 74, by Standard Def Steve

Rank: Oldbie
ragefury32 wrote on 2021-01-02, 20:23:
Standard Def Steve wrote on 2021-01-02, 18:56:

Jeez, did the Powerbooks actually use 60x right up until the Intel switch? That's terribad if true. I used to own a 17" G4 and always thought that it felt a little sluggish in Leopard, despite having a 9700 Pro. The CPU being stuck on the 60x bus would definitely explain it.

IIRC on the desktop side, the Yikes/PCI G4 was the last machine to use 60x; all of the AGP machines used the much faster MPX bus.

Whoops - I confused the 60x with the MPx. The Aluminum PowerBooks all use the Intrepid/KeyLargo setup, so they are 133MHz FSB machines with DDR RAM. Even with the MPx they are not exactly keeping up with the 266MHz FSB on the Athlon Bartons, much less the 400/533/800 MHz on the Netburst/Pentium-M - at that stage it’s like trying to bring a Tualatin to 2005 - it’s got some good points but definitely showing its age.

Yeah, Pentium M was probably the chip that convinced Apple to finally ditch PPC. I mean, in 2005 you could pick up a cheap Dothan-equipped Inspiron that would run circles around a $2500 Powerbook G4 while using less power. It was definitely not a good look for Apple.

Adding some L3 cache to the Powerbooks would've helped alleviate some of the memory bandwidth deficit without sucking down too much extra juice. There were quite a few instances where the giant 2MB L3 in the PowerMacs really helped with CPU performance, so I'm still kinda surprised that Apple didn't go that route with the high end 15" and 17" Powerbooks. But hey - at least they didn't further starve the G4 of memory bandwidth with integrated video!

94 MHz NEC VR4300 | SGI Reality CoPro | 8MB RDRAM | Each game gets its own SSD - nooice!