VOGONS


First post, by TwistedSoul21967

Rank: Newbie

Hi everyone,

I have two PCs running with 10 Mbps ISA network cards:

  • AST Advantage! 624, Windows 95, Intel P120, D-Link DE-220 ISA card.
  • Epson Endeavor 4DX2-66 L, Windows 3.11 WFW, Cyrix FasCache DX2-V, RTL8019-based card.

Both are using 16-bit ISA slots.

I have a PureFTPd server configured as my test source; it's able to saturate Gigabit Ethernet.

When downloading over FTP, my Windows 95 machine manages 3.6 Mbps (450 KB/s) using FileZilla, while my 3.11 machine can barely reach 1.12 Mbps (140 KB/s).

From various topics I've read, I've seen people claim in excess of 7.2 Mbps (900 KB/s) on 10 Mbps ISA cards, and I've read that ISA caps out at about 60 Mbps in a perfect world.

The switch they're connected to is "dumb", but it does show that it negotiated 10 Mbps Full-Duplex (single green LED; orange would mean Half-Duplex, and dual green means GbE).

When using RSET8019, the 8019 card shows that it's configured for FD and that the connection medium is set to auto, since it supports BNC too; if I try to force any specific medium (10B-2, 10B-5, 10B-T), it disables FD.

On the DE-220, there's nothing to configure, it's all "auto" and PnP.

So my question is: are these cards held back by the processors, since they don't use DMA and thus have to rely on pure PIO grunt?

AFAIK they're running at the normal bus speeds; they're not running any sort of OC. The Epson claims 8.3 MHz.
The AST has its jumpers set to decouple the ISA bus from the CPU FSB so as not to OC the cards.

Are there any tips for getting these cards to perform better on these machines or should I just be happy that they even run at those speeds?

Twisted.

My garage, 15 PCs from 1990 to 2020, 486 to 5900X - https://www.thecodecache.net

Reply 1 of 28, by st31276a

Rank: Newbie

I get around 4.5 Mbps on a 3c509 / 386DX33 on Linux, kernel 2.4.18, over HTTP using wget.
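(If anyone wants to repeat that kind of wget test without the disk getting in the way, something along these lines should do - the address and filename are just placeholders for whatever file the server exposes:)

wget -O /dev/null http://192.168.0.10/test.bin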

I can NAT close to wire speed through a pair of SMC Elite Ultra 8216T's on a 486DX4-100 on Linux, kernel 2.0.38.

Linux networking is quite good; it's a very efficient stack. Less optimal software may not fare quite as well, but this is at least a hint of what the hardware is capable of.

On the other hand, I get absolutely depressing speeds on a P3-550 with a 3c905 vortex card under windows 98. Go figure.

Reply 2 of 28, by Grzyb

Rank: Oldbie

In theory, 10 Mbps = 1250 KB/s

FTP should achieve at least 1100 KB/s - but only in Full Duplex!
The majority of 10 Mbps cards lack FD support.
Those that do support FD lack NWAY auto-negotiation, so FD must be manually configured on *BOTH* sides.

In Half Duplex, you can expect 900..1000 KB/s.

I'm not sure about 486DX2-66, but Pentium 120 is definitely fast enough to use 10 Mbps Ethernet to the max.

So, 450 KB/s is bad!

First, I wouldn't trust that the switch link is actually working in FD.
If the switch is unmanaged, there's no way to manually set it to FD, and it can't auto-negotiate FD with a non-NWAY NIC.
It really hurts performance when one side is FD and the other HD.

Second, how exactly do you measure the throughput?
If you download a file onto the HDD, the HDD may be the bottleneck.
My preferred procedure is to use the command-line FTP client and download to NUL, e.g.:

ftp 192.168.0.1
bin
get test.bin nul

When using mTCP, it's important to set "MTU 1500" for best performance on Ethernet.
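(For reference, a minimal mTCP setup along these lines should do it - the interrupt, addresses and path below are only example values, and mTCP's DHCP.EXE can fill in the IP settings for you:)

SET MTCPCFG=C:\MTCP\TCP.CFG

and in TCP.CFG:

PACKETINT 0x60
IPADDR 192.168.0.50
NETMASK 255.255.255.0
GATEWAY 192.168.0.1
NAMESERVER 192.168.0.1
MTU 1500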

The difficulty, as you see, lies not only in that you cannot climb my mountain, but also in that I cannot come down to you whole, for in descending I lose along the way what I was meant to bring.

Reply 3 of 28, by Grzyb

Rank: Oldbie
st31276a wrote on 2024-01-27, 18:49:

On the other hand, I get absolutely depressing speeds on a P3-550 with a 3c905 vortex card under windows 98. Go figure.

Yes, it's a well-known fact that Windows 9x can't use 100 Mbps Ethernet to the max.
Supposedly there's some utility to fix that, but I've never tried it.

On the other hand, Windows 9x works very well with 10 Mbps Ethernet - no problem hitting the hardware limit here.

The difficulty, as you see, lies not only in that you cannot climb my mountain, but also in that I cannot come down to you whole, for in descending I lose along the way what I was meant to bring.

Reply 5 of 28, by Grzyb

Rank: Oldbie
kingcake wrote on 2024-01-27, 21:51:

You're not going to get anywhere near 10 megabits with an ISA NIC. Lucky if you approach half that.

Tell it to my ISA NICs, as they obviously don't know about that, and work about as fast as 10 Mbps PCI ones 🤣

The difficulty, as you see, lies not only in that you cannot climb my mountain, but also in that I cannot come down to you whole, for in descending I lose along the way what I was meant to bring.

Reply 6 of 28, by kingcake

Rank: Oldbie
Grzyb wrote on 2024-01-27, 22:29:
kingcake wrote on 2024-01-27, 21:51:

You're not going to get anywhere near 10 megabits with an ISA NIC. Lucky if you approach half that.

Tell it to my ISA NICs, as they obviously don't know about that, and work about as fast as 10 Mbps PCI ones 🤣

Your numbers are bogus. You're quoting figures that are impossible to achieve due to protocol overhead.

Reply 7 of 28, by Grzyb

Rank: Oldbie
kingcake wrote on 2024-01-27, 22:47:

Your numbers are bogus. You're quoting figures that are impossible to achieve due to protocol overhead.

Re: Ethernet on VLB

The difficulty, as you see, lies not only in that you cannot climb my mountain, but also in that I cannot come down to you whole, for in descending I lose along the way what I was meant to bring.

Reply 8 of 28, by acl

Rank: Oldbie

You need to take the protocol overhead into account. Your FTP client only counts the actual file payload, not the total traffic on the wire.

FTP runs over TCP, and both add headers and control commands that are not actual data.
TCP also requires acknowledgement of each packet. Depending on how the network stack is implemented, the sender may stop sending data because it is still waiting for previous ACKs. There are also retransmissions if a packet is lost, not correctly decoded, or simply if the other system cannot acknowledge fast enough.

If you're using a hub (I know you probably aren't), you can even get Ethernet collisions, because every system connected to a hub shares the same collision domain. That makes the Ethernet layer stop sending data until no one else is using the link (CSMA/CD), reducing the overall speed.

There are a lot of factors to take into account. Transferring a file over FTP is not a method I would recommend if you need accurate results.

iperf/iperf3 are tools actually made for these tests (I've used them professionally to measure speeds up to 100 Gbps over thousands of kilometres 🌍).

Unfortunately they are probably not available for retro systems.
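(On the modern end of the link they're trivial to use, though - roughly: "iperf3 -s" on the server side and "iperf3 -c <server-ip>" on the client side, with <server-ip> being whatever address your server has.)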

Transferring data over a raw UDP socket through a crossover cable between the two systems (with a fast CPU on the far end) should get you close to the theoretical maximum network speed achievable with an ISA network interface controller.
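As a rough illustration of that idea (not a tool from this thread - just a sketch for the modern/Linux end, with an arbitrary port and a 1460-byte payload to mimic a full Ethernet frame):

/* udpblast.c - crude UDP throughput probe (sketch).
 * Build:    cc -O2 -o udpblast udpblast.c
 * Receiver: ./udpblast recv 5001
 * Sender:   ./udpblast send 192.168.0.50 5001
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <time.h>

#define PAYLOAD 1460   /* matches the TCP payload of a full-size Ethernet frame */

static double now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(int argc, char **argv)
{
    char buf[PAYLOAD];
    memset(buf, 0xA5, sizeof buf);

    if (argc == 3 && strcmp(argv[1], "recv") == 0) {
        /* receiver: bind to the port and report incoming KB/s about once per second */
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in a = {0};
        a.sin_family = AF_INET;
        a.sin_addr.s_addr = INADDR_ANY;
        a.sin_port = htons((unsigned short)atoi(argv[2]));
        if (bind(s, (struct sockaddr *)&a, sizeof a) < 0) { perror("bind"); return 1; }
        long bytes = 0;
        double t0 = now();
        for (;;) {
            ssize_t n = recv(s, buf, sizeof buf, 0);
            if (n > 0) bytes += n;
            double dt = now() - t0;
            if (dt >= 1.0) {
                printf("%.0f KB/s\n", bytes / dt / 1000.0);
                bytes = 0;
                t0 = now();
            }
        }
    } else if (argc == 4 && strcmp(argv[1], "send") == 0) {
        /* sender: blast datagrams at the given host/port as fast as possible */
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in a = {0};
        a.sin_family = AF_INET;
        a.sin_port = htons((unsigned short)atoi(argv[3]));
        if (inet_pton(AF_INET, argv[2], &a.sin_addr) != 1) { fprintf(stderr, "bad address\n"); return 1; }
        for (;;)
            sendto(s, buf, sizeof buf, 0, (struct sockaddr *)&a, sizeof a);
    } else {
        fprintf(stderr, "usage: %s recv <port> | send <host> <port>\n", argv[0]);
        return 1;
    }
}

The retro side would still need its own counterpart (e.g. something written against the packet driver or mTCP), so treat this as the reference half of the test only.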

"Hello, my friend. Stay awhile and listen..."
My collection (not up to date)

Reply 9 of 28, by mbbrutman

Rank: Member
kingcake wrote on 2024-01-27, 21:51:

You're not going to get anywhere near 10 megabits with an ISA NIC. Lucky if you approach half that.

Sorry, you are flat out wrong here.

See http://brutmanlabs.org/mTCP/mTCP_Performance.html for some data points.

I have a Compaq 4/33 that I've measured at 900 KB/sec in one direction and 1000 KB/sec in the other using a TCP/IP socket. The only thing it doesn't do that FTP does is the disk I/O, which is fine, because we're talking about TCP/IP and ISA Ethernet performance here, not disk throughput.

Even my 386-40 can throw down respectable numbers with an ISA Ethernet card.

Assuming an MTU of 1500 and no IP options in the IP header, you have 1460 bytes of payload in every TCP/IP packet. There are 14 bytes required for the Ethernet header, 4 bytes for the CRC at the end, some bytes for the Ethernet preamble, and some dead time on the wire but still you can get very high utilization even with the protocol overhead of TCP/IP. Assuming you have the MTU set correctly.
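To spell out the arithmetic with those numbers (rounded, full-size frames assumed):

preamble + SFD        8 bytes
Ethernet header      14 bytes
IP header            20 bytes
TCP header           20 bytes
TCP payload        1460 bytes
FCS (CRC)             4 bytes
inter-frame gap      12 bytes
total on the wire  1538 bytes

1460 / 1538 is roughly 95% efficiency, so 10 Mbps (1250 KB/s raw) works out to roughly 1185 KB/s of payload in one direction, before counting the ACKs flowing the other way and any host-side overhead.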

Reply 11 of 28, by acl

Rank: Oldbie
mbbrutman wrote on 2024-01-27, 23:36:
kingcake wrote on 2024-01-27, 21:51:

You're not going to get anywhere near 10 megabits with an ISA NIC. Lucky if you approach half that.

Sorry, you are flat out wrong here.

See http://brutmanlabs.org/mTCP/mTCP_Performance.html for some data points.

I have a Compaq 4/33 that I've measured at 900 KB/sec in one direction and 1000 KB/sec in the other using a TCP/IP socket. The only thing it doesn't do that FTP does is the disk I/O, which is fine, because we're talking about TCP/IP and ISA Ethernet performance here, not disk throughput.

Even my 386-40 can throw down respectable numbers with an ISA Ethernet card.

Assuming an MTU of 1500 and no IP options in the IP header, you have 1460 bytes of payload in every TCP/IP packet. There are 14 bytes required for the Ethernet header, 4 bytes for the CRC at the end, some bytes for the Ethernet preamble, and some dead time on the wire but still you can get very high utilization even with the protocol overhead of TCP/IP. Assuming you have the MTU set correctly.

Totally forgot about the disk I/O overhead.
OP could try to save the files to a RAMDisk to mitigate that.

Also, larger MTU values can be used on LANs... not sure these old NICs/OSes can support MTU 9000 "jumbo frames", but it could be a way to improve the results.

"Hello, my friend. Stay awhile and listen..."
My collection (not up to date)

Reply 12 of 28, by Grzyb

Rank: Oldbie
acl wrote on 2024-01-28, 00:32:

Also, larger MTU values can be used on LANs... not sure these old NICs/OSes can support MTU 9000 "jumbo frames", but it could be a way to improve the results.

Jumbo frames originally appeared in Gigabit Ethernet, and may also be supported by some later Fast Ethernet hardware.
But definitely not by ISA NICs.

The difficulty, as you see, lies not only in that you cannot climb my mountain, but also in that I cannot come down to you whole, for in descending I lose along the way what I was meant to bring.

Reply 13 of 28, by BitWrangler

Rank: l33t++

You want the MTU at 1500, I think. Later OSes would set that by default (maybe 98 and up), but older OSes and driver installers would default to something about a third of that; yay, three times the overhead.

Unicorn herding operations are proceeding, but all the totes of hens teeth and barrels of rocking horse poop give them plenty of hiding spots.

Reply 14 of 28, by st31276a

Rank: Newbie
Grzyb wrote on 2024-01-27, 22:29:
kingcake wrote on 2024-01-27, 21:51:

You're not going to get anywhere near 10 megabits with an ISA NIC. Lucky if you approach half that.

Tell it to my ISA NICs, as they obviously don't know about that, and work about as fast as 10 Mbps PCI ones 🤣

Agree. Mine also did not get the memo.

I have seen half-duplex 10 Mbps do 1000 KB/s many times. 10 Mbps would be 1250 KB/s; the difference is due to interframe gaps, MAC headers, IP headers, TCP headers, ACK packets, and the fact that it is half duplex.

Reply 15 of 28, by TwistedSoul21967

Rank: Newbie

Hi everyone, thanks for your input!

With regards to the storage subsystems:
The Windows 95 machine has an Ultra2 SCSI card with a Quantum Atlas V attached, which should be more than capable of sustaining 10 Mbps of writes.
Though as for ISA to CPU and then CPU to PCI, maybe there's too much contention?

The W3.11 machine was indeed using a RAM disk.

Interestingly, on the W3.11 machine I switched to WS_FTP and the rate went up to about 3.68 Mbps (460 KB/s), which is a really tidy boost; it now rivals the W95 machine, but it's still only about 1/3 of the max.
I'm aware of the overheads and such (I'm the lead software dev at a fintech company), so I never expect the full amount.

As some of you have mentioned, my dumb switch may be part of the cause. I will connect them both directly to my fully managed switch (Dell PowerConnect 5324) and force the ports to FD to see if that helps any.

I've seen a lot of mention of mTCP, so I'll see if I can test that on both machines to find out whether it's the networking stack that is having issues.

Thanks for your input so far. Let me run some more tests and I'll get back to you all.

My garage, 15 PCs from 1990 to 2020, 486 to 5900X - https://www.thecodecache.net

Reply 16 of 28, by BitWrangler

Rank: l33t++
st31276a wrote on 2024-01-28, 10:06:
Grzyb wrote on 2024-01-27, 22:29:
kingcake wrote on 2024-01-27, 21:51:

You're not going to get anywhere near 10 megabits with an ISA NIC. Lucky if you approach half that.

Tell it to my ISA NICs, as they obviously don't know about that, and work about as fast as 10 Mbps PCI ones 🤣

Agree. Mine also did not get the memo.

I have seen half-duplex 10 Mbps do 1000 KB/s many times. 10 Mbps would be 1250 KB/s; the difference is due to interframe gaps, MAC headers, IP headers, TCP headers, ACK packets, and the fact that it is half duplex.

You tend to notice the biggest difference between full and half duplex if there's something just a little wrong with the wiring, or it's picking up interference: the requests to resend packets keep stopping the transfer, and then the missed packets have to be resent as well. Full duplex then looks like 98% of the expected transfer rate while half duplex sits at 45%, though error-free they might be pretty close. That's on an otherwise "quiet" network segment.

Unicorn herding operations are proceeding, but all the totes of hens teeth and barrels of rocking horse poop give them plenty of hiding spots.

Reply 17 of 28, by Grzyb

Rank: Oldbie

Oh yeah, there can be great differences in the actual performance of 10 Mbps Ethernet depending on topology and the type of devices...

Bus topology (10Base2, 10Base5), and even star topology (10BaseT) using hubs, are naturally prone to collisions, which greatly reduce performance.
In a quiet network they can also achieve over 1000 KB/s, but they do *much* worse when the network is busy.

Using switches instead of hubs eliminates collisions.
In busy networks, however, there may still be the problem of broadcast packets.

The difficulty, as you see, lies not only in that you cannot climb my mountain, but also in that I cannot come down to you whole, for in descending I lose along the way what I was meant to bring.

Reply 18 of 28, by acl

Rank: Oldbie

Another thing that came to mind is that the modern FTP server's OS might be running with "not retro friendly" settings for TCP congestion and/or flow control.

If it's running on a 1 Gbps link, the server's network stack could be assuming it is operating with fast clients, so it will start by sending a big chunk of data. This data will overwhelm the 10 Mbps client and most of the packets will be dropped (and not acknowledged), so the server's network stack will know it sent way too much data.
This could, in turn, make the server throttle that connection aggressively via flow control or congestion control.

I've witnessed that at work when applying a small rate policy on a fast link (e.g. a 1 Gbps limit on a 25 Gbps-capable link). In that case, the real throughput ended up worse than 1 Gbps because the adaptive flow control kind of over-reacted.

Forcing the FTP **server**'s network interface to 10 Mbps instead of 1 Gbps can actually *improve* the speed. You can force that directly in the OS, or on the switch itself (if it's a manageable one).
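(If the server runs Linux, something along these lines should do it - "eth0" here is just a placeholder for the actual interface name, and not every NIC/driver will accept a forced 10 Mbps mode:)

ethtool -s eth0 speed 10 duplex full autoneg off

The switch-side equivalent is simply forcing that port to 10/full on the managed switch.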

"Hello, my friend. Stay awhile and listen..."
My collection (not up to date)

Reply 19 of 28, by TwistedSoul21967

Rank: Newbie

Quick update,

So my machines are all connected using 10Base-T and 100Base-TX.
Testing another of my machines, a P2 233 with a 3Com 3C905B-TX (100/FD), using mTCP FTP I get 73 Mbps (9.2 MB/s) from my FTP server.
The retro machines are all on the same LAN segment, same switch.

My FTP server is behind my main FM switch.

Museum PCs -> 10/100 Mbps -> UM Switch -> 1 Gbe -> FM Switch ->  2 x 1 Gbe LAG -> Server 

What you say about the FTP config could well be true.

With my AST P120 machine, I used the DE-220 SETUP program to disable Auto-Negotiate and force 10 Mbps and Full-Duplex on UTP.

Going direct via my fully managed switch (configured for 10/FD), I was able to raise the download speed with the MS FTP CLI to 6 Mbps (750 KB/s),
and under DOS using mTCP FTP I got 6.8 Mbps (850 KB/s), which is definitely better; nearly a 2x gain.
However, going back to the unmanaged switch yielded the previous result of 3.9 Mbps (490 KB/s), which is very odd considering auto-detect is off, so it should still be using 10/FD.

I get exactly the same numbers when I test between the two machines and my P2 233 running mTCP spdtest in listen/send modes.
And I know the P2 233 can push nearly 100 Mbps, so it should easily be able to saturate the other two.
So I don't think the FTP server is the issue, but rather either poor drivers, bad NICs, or just bad bus architecture in these Epson and AST machines.
Considering they're both on the "budget" end of the scale, this wouldn't surprise me.
I know that the AST is using a PCI-to-ISA bridge at least, so there could be some performance loss there.

Next I'm going to see if I can find a proper RTL8019 Packet Driver rather than the NE2000 Crynwr driver to see if that changes anything.

Back to testing I go!

Last edited by TwistedSoul21967 on 2024-01-28, 18:28. Edited 5 times in total.

My garage, 15 PCs from 1990 to 2020, 486 to 5900X - https://www.thecodecache.net