VOGONS


Cheap but well performing PCI 3D video cards


Reply 120 of 147, by zyga64

User metadata
Rank Oldbie
Ozzuneoj wrote on 2023-01-03, 14:13:

Like, who runs Mario Shareware on their retro PC, really? 😀

Especially since there is Mario Freeware (aka Mario & Luigi) from the same author, which does not have this problem.
https://wieringsoftware.nl/mario/

1) VLSI SCAMP /286@20 /4M /CL-GD5422 /CMI8330
2) i420EX /486DX33 /16M /TGUI9440 /GUS+ALS100+MT32PI
3) i430FX /K6-2@400 /64M /Rage Pro PCI /ES1370+YMF718
4) i440BX /P!!!750 /256M /MX440 /SBLive!
5) iB75 /3470s /4G /HD7750 /HDA

Reply 121 of 147, by Keatah

User metadata
Rank Member

I'm looking to pair a Pentium M 1.7GHz (or faster), i915GV & ICH6 (ASUS PGTV-DM) with a PCI graphics board. It only supports onboard graphics and the one PCI slot. I'm gonna run XP on this. And the goal is to have a silent single low-rpm fan.

So far I'm looking at an Nvidia 6200/512MB board. Not sure if there is anything higher. I'm more interested in later OpenGL standards than silky smooth high-speed gaming. To that end I hear there's the Nvidia GT 610, but I haven't seen anything other than cheap Chinese-branded boards. I don't fully trust the specs. eBay has one, but Nvidia's site doesn't say anything about PCI compatibility.

So I'm a little undecided. Help!

edit: found this from Zotac, 1GB PCI .. Any negatives or no-go's here?


Reply 122 of 147, by Sphere478

User metadata
Rank l33t++

A GeForce 720 or 610 might work on a board in the GHz range.

Can try and find out 😀

Idk if that card has XP drivers though..?

Sphere's PCB projects.
-
Sphere’s socket 5/7 cpu collection.
-
SUCCESSFUL K6-2+ to K6-3+ Full Cache Enable Mod
-
Tyan S1564S to S1564D single to dual processor conversion (also s1563 and s1562)

Reply 124 of 147, by Sphere478

User metadata
Rank l33t++
Keatah wrote on 2023-01-10, 02:45:

It does and the .PDF lists it.

Nvidia seems to support up to the GTX 960 and GTX Titan with XP drivers.

Awesome! Let’s hope the processor has all the instructions that the driver needs, and that the PCI version is advanced enough to support the card. That will be your next hurdle.

Sphere's PCB projects.
-
Sphere’s socket 5/7 cpu collection.
-
SUCCESSFUL K6-2+ to K6-3+ Full Cache Enable Mod
-
Tyan S1564S to S1564D single to dual processor conversion (also s1563 and s1562)

Reply 125 of 147, by Ozzuneoj

User metadata
Rank l33t

I will say, I haven't seen too many builds here using a Pentium M in a desktop board, but I remember seeing those back in the day.

Should be interesting to have basically a Pentium III but with SSE2 instructions. It should run significantly more modern software than a Pentium III would have.

The GT 610 is sadly not Kepler based, but it is at least Fermi based, so it's likely to be the most "current" GPU you'll find that works in a PCI slot. Performance will probably be really bad because of the terrible bandwidth bottleneck, but I'll be curious to see what it's capable of. This is probably one of the oldest setups you could put together that would run at least some DX12 software if you install Windows 10. Pair it with an SSD and the absolute most RAM you can squeeze into that board and it should at least be somewhat usable. Might be fun to find any DX12 games that have extremely low requirements to see if they're playable.
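To put that bandwidth bottleneck in rough numbers, here is a back-of-the-envelope sketch in Python. The figures are assumptions (standard 32-bit/33 MHz PCI, and a 64-bit DDR3-1333 memory bus as found on typical GT 610 cards), not measured specs:

```python
# Rough peak-bandwidth comparison; all figures are assumptions, not benchmarks.
pci_bus_bw = 33.33e6 * 4   # 32-bit PCI at 33 MHz: ~133 MB/s, shared with everything
card_mem_bw = 1333e6 * 8   # 64-bit bus at 1333 MT/s effective: ~10.7 GB/s on-card
ratio = card_mem_bw / pci_bus_bw
print(f"bus {pci_bus_bw/1e6:.0f} MB/s, memory {card_mem_bw/1e9:.1f} GB/s, ~{ratio:.0f}x gap")
```

Every texture upload and command buffer has to cross that ~133 MB/s bus, so the GPU spends most of its time waiting regardless of how fast its own memory is.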

I wonder how the old Pentium M compares to something like a Celeron 900 (Penryn-based single core, 2.2 GHz).

Now for some blitting from the back buffer.

Reply 126 of 147, by TrashPanda

User metadata
Rank l33t
Keatah wrote on 2023-01-09, 23:46:

I'm looking to pair a Pentium M 1.7GHz (or faster), i915GV & ICH6 (ASUS PGTV-DM) with a PCI graphics board. It only supports onboard graphics and the one PCI slot. I'm gonna run XP on this. And the goal is to have a silent single low-rpm fan.

So far I'm looking at a Nvidia 6200/512MB board. Not sure if there is anything higher. I'm more interested in later OpenGL standards than silky smooth high-speed gaming. To that end I hear there's Nvidia 610GT, but I haven't seen anything other than cheap chinese-branded boards. I don't fully trust the specs. Ebay has one, but Nvidia's site doesn't say anything about PCI compatibility.

So I'm a little undecided. Help!

edit: found this from Zotac, 1GB PCI .. Any negatives or no-go's here?

Avoid the 610; get either a 1 GB GT 520 in PCI or grab a GT 430 PCI. The GT 430 PCI is listed as the single most powerful PCI GPU from nVidia, since it's an ungimped Fermi GPU with all its shaders and TMUs; the GT 520 is a close second.

I own a GT430 PCI and it's more than capable. If you need more than 512 MB then the 1 GB GT 520 is a good buy. The GT 610 is a pile of trash, much like its GT 710 sibling, and both shouldn't exist.

Reply 127 of 147, by Ozzuneoj

User metadata
Rank l33t
TrashPanda wrote on 2023-01-10, 04:28:

if you need more than 512mb then the 1gb GT520 is a good buy, the GT610 is a pile of trash much like its GT710 sibling and both shouldn't exist.

Can you explain why you feel this way?

The GT 520 and GT 610 are both Fermi GF119 chips with the same core configuration and clock speeds. It looks like the Zotac GT 610 PCI has 1333 MHz memory, but some GT 520s (EVGA for example) only have 1000 MHz memory. I'm sure some are higher though. The GT 705 is also exactly the same GPU, but apparently with slower memory... thankfully these seem pretty uncommon and I don't think they're available in PCI.

The GT 710 is usually (at least for PCI-E) Kepler based, and when they are, they're significantly faster than either of the above Fermi models. Memory bandwidth should be about as good as the best GT 520 and GT 610 models, while pixel/texel throughput is about twice as high. The 710 also runs much cooler. However, there are some GT 710 models that are, again, just rebadged GF119 cards with the same specs as the GT 520 and GT 610. I don't know how common these are, but I would bet that any PCI cards labeled as GT 520, 610 or 710 are all likely Fermi cards and are all going to perform about the same, unless one particular model has really slow memory.

The GT 730 is kind of in the same boat... the Kepler models are solid cards, but some geniuses felt it was a good idea to reuse some GT 430 Fermi GPUs, couple them with slow memory (usually on a 64-bit bus) and call them GT 730s, and they are terrible compared to the Kepler versions. Being gimped GT430s, these are actually much slower than a proper Kepler based GT 710.

... since we're talking PCI cards though, I'm not sure how much of a difference it really makes in the end, but an actual GT 430 with 128-bit memory would likely be the best you could get.

Now for some blitting from the back buffer.

Reply 128 of 147, by Warlord

User metadata
Rank l33t
Ozzuneoj wrote on 2023-01-10, 03:29:

I will say, I haven't seen too many builds here using a Pentium M in a desktop board, but I remember seeing those back in the day.

Should be interesting to have basically a Pentium III but with SSE2 instructions. It should run significantly more modern software than a Pentium III would have.

The GT 610 is sadly not Kepler based, but it is at least Fermi based so its likely to be the most "current" GPU you'll find that works in a PCI slot. Performance will probably be really bad because of the terrible bandwidth bottleneck, but I'll be curious to see what it's capable of. This is probably one of the oldest setups you could put together that would run at least some DX12 software if you install Windows 10. Pair it with an SSD and the absolute most RAM you can squeeze into that board and it should at least be somewhat usable. Might be fun to find any DX12 games that have extremely low requirements to see if they're playable.

I wonder how the old Pentium M compares to something like a Celeron 900 (Penryn-based single core, 2.2Ghz).

A P-M is roughly equal to a P4 clocked about 1200 MHz faster, depending on DDR clock and FSB of course, and it's more responsive due to its shorter pipeline. A P-M at 2000 MHz is roughly equal to a 3.2 GHz P4, and runs at only 27 W TDP vs. 84 W TDP.

Reply 129 of 147, by mkarcher

User metadata
Rank l33t
Ozzuneoj wrote on 2022-12-22, 22:34:
mkarcher wrote on 2022-12-22, 22:13:

Yeah, I fully understand that. I'm experimenting on a "too much of everything" themed 486 system together with a friend, and this is just about probing the limits of what's technically possible. The real thing to put into a 486 computer for gaming is a Voodoo 1 or something like that.

I figured it was something like that. I guess we'll find out if it's possible at least?

Well, I got it working, at least somehow, in a UMC 8881/8886 based system in Windows 2000. Needed to hack things both on the hardware (3.3V supply) and software (NTOSKRNL.EXE) level. More detailed report incoming after some sleep. 3DMark 99 (with the usual patches for 486-class systems and Windows 2000) is currently executing. dxdiag (Dx7) shows the spinning cube. OpenGL applications seem to lock up on initialization.

EDIT: Just found out that 3DMark 99 reports invalid figures on processors that do not support RDTSC (that mode is meant for Cyrix 6x86 processors), so my detailed post will be delayed until I find out how to fix that.

Last edited by mkarcher on 2023-02-05, 14:27. Edited 1 time in total.

Reply 130 of 147, by Disruptor

User metadata
Rank Oldbie
mkarcher wrote on 2023-02-05, 00:29:

Well, I got it working, at least somehow, in a UMC 8881/8886 based system in Windows 2000. Needed to hack things both on the hardware (3.3V supply) and software (NTOSKRNL.EXE) level. More detailed report incoming after some sleep. 3DMark 99 (with the usual patches for 486-class systems and Windows 2000) is currently executing. dxdiag (Dx7) shows the spinning cube. OpenGL applications seem to lock up on initialization.

How fascinating.
A GeForce 5200 FX on Windows 2000 with working DirectX 7.
And on a 486 !

Great work.

Reply 131 of 147, by mkarcher

User metadata
Rank l33t

Getting a ZOTAC GeForce FX 5200 PCI (with 256MB DDR RAM, 128-bit memory width) to work in a Shuttle HOT-433 board in Windows 2000 turned out to be challenging, but at least for DirectX it basically works now. 3DMark 99 runs fine on it. Let's start with the most obvious and easily solvable issue: no 3.3V supply on the PCI bus. This specific FX5200 needs a 3.3V supply. The 3.3V supply is mandatory since PCI 2.2, even at 5V signalling voltage. Typical 486 boards implement PCI 2.1, use 5V signalling and omit the 3.3V supply, which is optional in that configuration. There is a VOGONs thread on adding 3.3V using a "bodge PCB", but that approach requires soldering on the mainboard. For this one-off test project I decided to solder on the graphics card instead, and came up with this ugly contraption:

[Attachment: GF5200 3.3V.jpg, 390.09 KiB — Foreign 3.3V supply on GeForce FX graphics card]

I'm using DirectX 7 on Windows 2000. DirectX 8 requires a Pentium processor, so DirectX 7 is the last possible version. The DXDiag supplied with DX7 for Windows 2000 contains a bug that prevents it from starting on a 5x86 processor. It makes the common mistake of assuming that every processor that supports CPUID is a Pentium processor and has RDTSC available. I patched DXDiag to bypass this check and not use RDTSC. This is a straightforward patch. The offending piece of code is easily identifiable either by searching for the typical pushf/popf sequences for CPU type detection, or by looking into the Dr. Watson log that contains the ILLEGAL_INSTRUCTION exception and a pointer to the offending instruction. Let's not spend more time on this issue. Testing Direct3D worked perfectly for software rendering (not that surprising), at awful performance (again, not that surprising on a 486 processor), but for hardware rendering, DXDiag failed with "Step 18: CreateDevice failed with HRESULT 887602eb" (or a similar message).
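For reference, the check the buggy code skips: RDTSC availability is advertised by CPUID leaf 1, EDX bit 4 (the TSC flag), not by the mere existence of CPUID. A small Python sketch of the decision logic (the sample EDX values are illustrative, not read from real hardware):

```python
# RDTSC availability must be read from the CPUID feature flags, not assumed.
# CPUID leaf 1 returns feature bits in EDX; bit 4 (TSC) advertises RDTSC.
EDX_TSC = 1 << 4

def has_rdtsc(cpuid_leaf1_edx: int) -> bool:
    """Return True only if the TSC feature bit is set."""
    return bool(cpuid_leaf1_edx & EDX_TSC)

# 0x1BF is a representative early-Pentium EDX value (FPU/VME/DE/PSE/TSC/MSR/MCE/CX8).
# A 5x86 that supports CPUID but not RDTSC reports EDX without bit 4 set,
# so the "CPUID present => RDTSC present" shortcut faults with #UD there.
assert has_rdtsc(0x1BF)
assert not has_rdtsc(0x1BF & ~EDX_TSC)
```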

I took some time to analyze the root cause of this problem, which took a tour through all the layers of the Windows 2000 graphics stack. The error code is caused at the step where DXDiag already obtained a "primary surface for full-screen use, double-buffering and 3D rendering capability" and asks for a hardware accelerated 3D driver for it. It turns out that DXDiag didn't specify whether it wants that surface in video memory or system memory. In that case, DirectDraw tries to allocate the surface in video memory first, and failing that, it retries allocation in system memory. On that system, allocation in video memory failed. DirectDraw calls the GDI CanCreateSurface system call to ask the NVidia graphics driver whether it can create that surface in video memory. That call fails with the status code DDERR_OUTOFMEMORY. Yet, DXDiag reports 128MB of graphics memory (which is only half of the 256MB, but I don't care about that problem [yet], I wanted 128-bit memory access, the size of memory is secondary) which should be good enough for a double-buffered surface at 640x480 with 16bpp.
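Quick arithmetic confirming that the requested surface really is tiny compared to the reported memory, so DDERR_OUTOFMEMORY cannot literally be about surface size:

```python
# Double-buffered 640x480 at 16 bpp — the surface DXDiag failed to create.
width, height, bytes_per_pixel, buffers = 640, 480, 2, 2
surface_bytes = width * height * bytes_per_pixel * buffers
print(surface_bytes, surface_bytes / 2**20)  # 1228800 bytes, ~1.2 MiB vs 128 MB reported
```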

The driver errors out, because some call to the VDD using EngDeviceIoControl returns -1. The IO Control code used to call the VDD is a vendor-specific code, so there is no documentation about the stuff that call is supposed to do. It is called from code in the graphics driver where it tries to initialize "some stuff" that needs to be initialized before surfaces can be managed.

The VDD errors out because the kernel rejects a MmMapLockedPagesSpecifyCache call. The kernel driver tries to map a 6MB buffer into kernel address space after locking it to allow busmaster DMA to it. The root issue is that the kernel runs out of "system PTEs". The kernel has a limited amount of address space to map buffers into. The management structure that tells the processor which page is mapped to which virtual address is a "page table" consisting of "page table entries". The page table structures for that part of kernel address space are allocated at a fixed size during boot. The allocated size depends on the amount of system RAM, can be overridden by a registry entry, and is clamped to "sensible values": no matter how much RAM you have and what the registry says, you will never get more than 50,000 pages of address space to map buffers, and you will never get less than 7,000 pages. 7,000 pages is 28MB, 50,000 pages is 200MB. The GeForce FX driver maps the 16MB MMIO area of the graphics card into kernel space, as well as 128MB of the graphics RAM. This by itself uses 144MB of address space. Not only the NVidia driver uses buffer mappings; other drivers do so, too. As the computer has 256MB of RAM, Windows 2000 already decided to allocate a lot of page-table entries. Furthermore, the registry entry was set to an insanely high value (maybe by some graphics driver installer, maybe by SP4, or the rollup 1, or another update), so the kernel buffer mapping space already was maxed out at 200MB. I adapted the kernel to raise the upper clamp limit to 200,000 pages, and set up 70,000 pages in the registry in the value SystemPages in the key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management. The upper limit of pages is set using "MOV ECX, 50000" in the kernel, which is "B9 50 C3 00 00". You can patch this for example to 131072 pages by replacing it with "B9 00 00 02 00".
There is only one hit for this byte sequence in Windows 2000 NTOSKRNL.EXE. When you patch kernel mode modules (drivers or the kernel itself), don't forget to update the checksum as well, because loading a kernel module with a bad checksum will result in a blue screen. I used https://www.coderforlife.com/projects/utilities/#PEChecksum to adjust the checksum.
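The find-patch-fix-checksum procedure above can be sketched in Python. This is a hedged illustration, not a drop-in tool: `patch_systempages_clamp` is my own name, the offsets assume a PE32 NTOSKRNL.EXE, the test image below is synthetic, and you should of course keep a backup of the original kernel before writing anything:

```python
def pe_checksum(data: bytes) -> int:
    """PE header checksum (the word-sum-with-carry algorithm CheckSumMappedFile uses)."""
    orig_len = len(data)
    pe_off = int.from_bytes(data[0x3C:0x40], "little")   # e_lfanew -> PE header
    csum_off = pe_off + 0x58                             # CheckSum field in optional header
    if orig_len % 2:
        data = data + b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        if csum_off <= i < csum_off + 4:
            continue                                     # skip the stored checksum itself
        total += int.from_bytes(data[i:i + 2], "little")
        total = (total & 0xFFFF) + (total >> 16)         # fold the carry back in
    total = (total & 0xFFFF) + (total >> 16)
    return (total + orig_len) & 0xFFFFFFFF

def patch_systempages_clamp(image: bytes, new_pages: int = 131072) -> bytes:
    """Swap MOV ECX, 50000 for MOV ECX, new_pages and refresh the PE checksum."""
    old = b"\xB9" + (50000).to_bytes(4, "little")        # B9 50 C3 00 00
    new = b"\xB9" + new_pages.to_bytes(4, "little")      # 131072 pages = 512 MB of map space
    assert image.count(old) == 1, "expected exactly one hit for the byte sequence"
    patched = bytearray(image.replace(old, new))
    pe_off = int.from_bytes(patched[0x3C:0x40], "little")
    patched[pe_off + 0x58:pe_off + 0x5C] = pe_checksum(bytes(patched)).to_bytes(4, "little")
    return bytes(patched)
```

On the real kernel you would read NTOSKRNL.EXE, run this, and write the result back; the linked PEChecksum utility performs the same checksum step if you patch with a hex editor instead.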

After a reboot, there were enough free pages for DXDiag to start the Direct3D tests, which worked perfectly.

To see how much (or how little) of the card's performance can be used, I started with 3DMark 99 Max (which is available for free). It turned out that 3DMark 99 does not report any kind of sensible results on that machine, see Re: 3dmark99 MegaThread. Finally, after fixing 3DMark, I obtained around 285 3DMarks at 4*33 MHz. At 4*40 MHz (PCI clocked at 40 MHz), the system is unstable with the GeForce FX5200, but it works without problems with a Matrox G450. The stability issues might be thermal, so I will add a fan and try to get a 160 MHz benchmark value as well.

Reply 132 of 147, by mkarcher

User metadata
Rank l33t

I was contacted by a user on a different forum about the 5V compatibility of PCI graphics cards like this one. As I understand it, the card is indeed 5V compatible, but the way it was achieved is non-trivial. There are two 24-bit bus switches that connect the PCI data/control lines between the PCI slot and the graphics chip. These chips are Pericom PI5C16211. They are common on both nVidia and ATI PCI graphics cards, and they are not specified as level translators, just as switches. So possibly they might pass 5V signals to the graphics chip, potentially destroying it? As these are switches, it is easy to guess that they are used to switch the PCI interface off if there are dangerous voltages, so in case of a missing 3.3V or 5V I/O voltage, the switches might isolate the vulnerable graphics chip from dangerous voltages. This is neither the case on a cheap OEM Radeon 9250 PCI, nor on the ZOTAC GeForce FX 5200 PCI: on both cards, these chips are permanently enabled; the /OE inputs are directly connected to a ground plane.

It turns out that some vendors, like TI, sell two variants of the 16211 chips, and one of the variants is actually specified to do 3.3V-to-5V level translation, although "level translation" is a bit exaggerated. TI has the SN74CBT16211 and the SN74CBTD16211, with the D-variant chips being specified for 3.3V-to-5V interfacing. Their application manual for bus switches is quite instructive on page 71 of the PDF document (printed page number: 1-23). They show an input-to-output voltage diagram for both the CBT16211 and the CBTD16211, and they show the difference between these two chips: the CBTD has a dropper diode in the positive supply, limiting the output voltage. These chips do not "translate" anything, but they put an upper limit on the voltage that gets passed through them. That's good enough for the purpose at hand: clamping high to 3.3V is enough to make 3.3V CMOS logic compatible with 5V TTL logic.

On the graphics cards I have at hand, they use a CBT clone, not a CBTD clone, but added a voltage dropper diode as a discrete component on the graphics card between +5V and the supply voltage pin of the permanently enabled bus switch, essentially converting the PI5C16211, a 74CBT16211 clone, into a 74CBTD16211 clone. So no need to worry about damaging the card in a 5V PCI slot.
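The voltage arithmetic behind that trick, with assumed typical values (~0.7 V diode drop, ~1 V pass-FET threshold — neither figure is from a datasheet):

```python
# Why a series diode turns a CBT bus switch into a CBTD-style 5V-to-3.3V clamp.
v_supply = 5.0          # PCI +5V rail
v_diode = 0.7           # forward drop of the added dropper diode (assumption)
v_fet_threshold = 1.0   # pass-FET threshold voltage (assumption)
v_switch_vcc = v_supply - v_diode            # Vcc actually seen by the switch: ~4.3 V
v_pass_max = v_switch_vcc - v_fet_threshold  # highest level passed through: ~3.3 V
print(v_switch_vcc, v_pass_max)
```

The switch never drives the line; it simply cannot pass anything above Vcc minus the FET threshold, which is what keeps the 5V TTL levels on the bus safe for the 3.3V pads on the GPU side.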

The option to use an external diode with a CBT family chip (instead of using a CBTD chip) is explicitly specified by TI, see application note https://www.ti.com/lit/an/scea035a/scea035a.pdf figure 10 on page 13. The Geforce FX PCI card does contain the loading resistor called "R" with a value of 2K (SMD code 202).

Reply 133 of 147, by zyga64

User metadata
Rank Oldbie

You probably meant PI5C16211 or SN74CBT16211, not 221 (picture of a GF2 MX found on the internet).
Actual level converters (sometimes used) are e.g. the (Philips/NXP) GTL2000DGG. Not that I have knowledge on the subject, but I'm interested in it!
As always, excellent explanation, thank you!


1) VLSI SCAMP /286@20 /4M /CL-GD5422 /CMI8330
2) i420EX /486DX33 /16M /TGUI9440 /GUS+ALS100+MT32PI
3) i430FX /K6-2@400 /64M /Rage Pro PCI /ES1370+YMF718
4) i440BX /P!!!750 /256M /MX440 /SBLive!
5) iB75 /3470s /4G /HD7750 /HDA

Reply 135 of 147, by mkarcher

User metadata
Rank l33t
zyga64 wrote on 2023-02-11, 16:17:

You probably meant PI5C16211 or SN74CBT16211. Not 221. (picture of gf2mx found on internet).

Yes, I do. I noticed that mistake in my post and edited it before you posted your answer. Sorry for any confusion this might have caused.

Reply 136 of 147, by TrashPanda

User metadata
Rank l33t
Maxx79 wrote on 2023-02-11, 20:48:

Will the pentium 1 200mmx (overclocked to 233mhz) bottleneck the riva tnt2 m64 in late DOS and WIN95 games?

Or is Riva 128 a better choice?

Yup it will; the 128 is a closer match, but its image quality isn’t as good. The M64 would still be a great pick, if only because its image quality is superior.

Reply 137 of 147, by Maxx79

User metadata
Rank Newbie

Would a Creative Labs nVidia Vanta Riva TNT2 (32 MB) for 25 euro/$

https://www.electromyne.de/Graphics-Cards-PCI … age-CT6950.html

be too much for a Pentium 200 MMX (overclocked to 233 MHz)?

It is the only stronger PCI graphics card I can find... even on eBay there is no PCI Riva TNT (16 MB) or Riva 128. I currently have a Matrox Mystique 4MB; it gives a great picture and it's great for 2D and the (few) first 3D games, but I'm trying to find some stronger PCI graphics for slightly newer 3D games and a higher resolution. Voodoo 2 or 3 are out as a possibility, because it is impossible to find one without the prices being crazy.

Reply 138 of 147, by Sphere478

User metadata
Rank l33t++
Maxx79 wrote on 2023-02-28, 11:12:

Would Creative Labs nvidia Vanta Riva TNT2 (32mb ) for 25 euro/$

https://www.electromyne.de/Graphics-Cards-PCI … age-CT6950.html

be too much for a pentium 200 mmx (overclocked to 233mhz)?

It is the only stronger PCI graphics card I can find.....even on ebay there is no PCI Riva TNT (16 mb) or riva 128....I currently have a matrox mystique 4mb, it gives a great picture and it's great for 2D and (few) first 3D games....but I'm trying to find some stronger PCI graphics for slightly newer 3D Games and in a higher resolution.......voodoo 2 or 3 are wasted as a possibility, because it is impossible to find it without the prices being crazy

Radeon 9200/9250/7500/7000 work well on a PMMX 233.

Sphere's PCB projects.
-
Sphere’s socket 5/7 cpu collection.
-
SUCCESSFUL K6-2+ to K6-3+ Full Cache Enable Mod
-
Tyan S1564S to S1564D single to dual processor conversion (also s1563 and s1562)

Reply 139 of 147, by TrashPanda

User metadata
Rank l33t
Maxx79 wrote on 2023-02-28, 11:12:

Would Creative Labs nvidia Vanta Riva TNT2 (32mb ) for 25 euro/$

https://www.electromyne.de/Graphics-Cards-PCI … age-CT6950.html

be too much for a pentium 200 mmx (overclocked to 233mhz)?

It is the only stronger PCI graphics card I can find.....even on ebay there is no PCI Riva TNT (16 mb) or riva 128....I currently have a matrox mystique 4mb, it gives a great picture and it's great for 2D and (few) first 3D games....but I'm trying to find some stronger PCI graphics for slightly newer 3D Games and in a higher resolution.......voodoo 2 or 3 are wasted as a possibility, because it is impossible to find it without the prices being crazy

It'll be bottlenecked by the 200MMX, but that's not actually a problem here, as it's what you want; the reverse situation is much worse. It'll also work perfectly fine and play every game you can throw at it that the 200MMX can handle. Should you upgrade to a faster CPU later, you won't have to worry about the GPU.

Best part here is that it's cheap; I would grab it.