VOGONS


Reply 60 of 74, by RayeR

User metadata
Rank Oldbie

I agree it may be a cache flush signal. The FPGA decodes DMA bus cycles and generates it. But is there any reason why it shouldn't even POST with this flush signal left unconnected? I also think a 386SX should work on the module if the other pins are the same. As I said before, all voltages here are 5V; no regulator is used. The upgrade module might have been placed in the socket in the wrong orientation and damaged; I don't know what anybody did with it before I got it. Currently I don't have any other SLC CPU or 386SX board for further testing. I would also need to test the FPU in a different MB, or borrow a 387SX to check whether my MB handles it well; it behaved weirdly...

Gigabyte GA-P67-DS3-B3, Core i7-2600K @4,5GHz, 8GB DDR3, 128GB SSD, GTX970(GF7900GT), SB Audigy + YMF724F + DreamBlaster combo + LPC2ISA

Reply 61 of 74, by Deunan

User metadata
Rank l33t

At this point, without a POST card and a scope there isn't much else you can do. See if the POST card shows any codes, and if not, start poking with the scope. First the basics: is the clock present on the CPU, and the reset? Are any of the address lines toggling, then the data lines? See if the BIOS chip is getting its chip select asserted. The usual works - there has to be a fault somewhere.

Oh, and remove the co-processor in case there is something the Intel chip doesn't like about it. Last time I had issues like that (eventually traced down to a cracked via that delivered the reset signal to the ISA slots), I even got a FLASH chip emulating an EPROM (so I didn't have to UV-erase an actual EPROM all the time) and tried my own code in there. It's really easy to build a simple debugging tool that way with a POST card: one OUT instruction and you have 00-FF on the display. But that assumes all data lines and most address lines work.

Reply 62 of 74, by RayeR

User metadata
Rank Oldbie

I have a POST card (I even built one myself many years ago) but it didn't receive any code in either case (i386SX on the upgrade module, and IBM 386SLC on the 386SX MB). Of course I tried removing the FPU from the module socket, and I tried testing it in the 386SX MB; as I posted before, it was recognized by Norton Diagnostics, but all FPU tests failed and DOS Navigator froze at startup, so I suspect it is also damaged in some way. But I never had another FPU in this 386SX MB, so I'm not 100% sure there isn't some other problem, like the connections around the socket, the chipset, etc...
With the scope I only checked that the onboard 50MHz oscillator is working and feeding the CPU. I randomly tapped some CPU and FPGA pins, but there was no visible traffic, just static levels...

Gigabyte GA-P67-DS3-B3, Core i7-2600K @4,5GHz, 8GB DDR3, 128GB SSD, GTX970(GF7900GT), SB Audigy + YMF724F + DreamBlaster combo + LPC2ISA

Reply 63 of 74, by Deunan

User metadata
Rank l33t
RayeR wrote on 2021-04-14, 15:34:

With the scope I only checked that the onboard 50MHz oscillator is working and feeding the CPU. I randomly tapped some CPU and FPGA pins, but there was no visible traffic, just static levels...

Get a 386SX pinout (Intel 386SX datasheet is a good source) and check:
- RESET (pin 33) should be low
- FLT# (pin 28) should be high
- CLK2 (pin 15) should have 50MHz clock
- HOLD (pin 4) should not be permanently stuck high
- READY# (pin 7) should be low, mostly, might have high pulses
- LOCK# (pin 26) should not be permanently low
- NMI (pin 38) should not be permanently high (in fact it should be low most of the time)
- INTR (pin 40) should not be permanently high (but CPU will start to boot as interrupts are masked on reset)

Also, BUSY# (pin 34) should not be permanently low. I'm not sure now whether this is tested by the CPU at reset; usually it's only important for the x87 link, and startup code should not touch the NPU.

Reply 64 of 74, by uscleo

User metadata
Rank Newbie

Hi guys,

I've been reading through this thread, and surprisingly it's one of the few that discuss the 386SX.

I have an old Toshiba T3100SX laptop that I successfully upgraded from an Intel 386SX-16 to a TI486SLC/E-33 by desoldering the old CPU and soldering on the new one. I am wondering: what would be the highest-performance 386SX-compatible CPU that would work in place of the old 386SX?

For example, it seems the IBM 386SLC is not quite compatible because of its cache, and others because of voltage, etc.

Also, I want to rule out any upgrade cards, as there is no clearance for them in my old laptop.

Any ideas?

Reply 65 of 74, by Anonymous Coward

User metadata
Rank l33t++

Despite the extra cache control lines, a number of people on VOGONS have had some luck pairing 386DX systems with the IBM 486DLC3 (the 32-bit version of the SLC2).
I'm not sure exactly why this works. Perhaps the boards are new enough to have some support already built in.
Desoldering a 386SX and replacing it with another CPU is a gamble, especially on a late-80s design: there is a chance the replacement CPU may not function correctly, and you will eventually tear the pads off your board if you mess with them enough.
That being said, you successfully swapped in a 486SLC, so that's good. I assume it's still running at 16MHz?
In the Cyrix SLC family, the fastest chip is the TI SXLC2-66. The -66 is a rare part; the -50 is much more common. Both are 3.3V parts but seem to work at 5V with proper cooling. There is also an SXLC-40, which is officially a 5V part and supports clock doubling. These all have 8 KB of internal cache. Despite being very similar to the SLC, they seem to be a little less compatible with older systems.
The IBM SLC2 was made as -50 and -66 parts. I've seen an article in an old PC World claiming there was also an SLC3 at 75 and 100MHz, but I don't think those were ever released.
The IBM BL3/DLC3 can work in 16-bit mode and was used in some 386SX upgrade modules, but the CPU itself has 132 pins while the pads on your laptop are 100-pin, so that won't work for you.

"Will the highways on the internets become more few?" -Gee Dubya
V'Ger XT|Upgraded AT|Ultimate 386|Super VL/EISA 486|SMP VL/EISA Pentium

Reply 66 of 74, by DistWave

User metadata
Rank Newbie

I successfully replaced a 386SX-20 with a TI486SXLC2-50-G in an IBM PS/1 2121 model several years ago (I used an empty EEPROM socket to plug in a breadboard with the power regulator). With a 33 MHz crystal the 486SXLC2 works great at 66 MHz, and the 8 KB cache makes a great difference when enabled. The only issue is the floppy drive: it doesn't read floppy disks when the cache is enabled (DMA with no cache flush is a known source of issues). The chip doesn't even get hot running at 3.3V and a bit overclocked.

Reply 67 of 74, by IBMMuseum

User metadata
Rank Newbie
furan wrote on 2020-01-18, 20:51:

Some more:
https://groups.google.com/d/msg/comp.sys.ibm. … ag/clWUxJ6tiDIJ

IBM 386SLC: 386SX pinout with addition of some pins for cache control &
suspend mode. 24-bit Address bus, 16-bit external Data bus. 8Kb internal L1
cache. Able to run all Intel 486SX instructions. Model-Specific Registers
(MSR) to control CPU function. Low-power design. No internal clock
multiplying. Usually clocked at 20MHz to directly replace a 386SX. CPUID
A301h ('A' is IBM, '3' is CPU Family, '0' is clock multiplying, '1' is mask
revision).

IBM 486SLC2: 386SX pinout with addition of some pins for cache control &
suspend mode. 24-bit Address bus, 16-bit external Data bus. 16Kb internal L1
cache. Able to run all Intel 486SX instructions. Additional Model-Specific
Registers bits to control more CPU functions than the 386SLC. Low-power
design. Internal clock doubling at 40 or 50MHz. CPUID A421h or A422h.

IBM 486SLC3/486DLC2: PQFP 386DX pinout with addition of some pins for
cache control, suspend mode, & CPU bus width modes. Hardware pin switchable
between 24-bit Address bus/16-bit external Data bus or 32-bit Address/Data
bus. 16Kb internal L1 cache. Able to run all Intel 486SX instructions.
Additional Model-Specific Register from the 486SLC2. Low-power design.
Internal clock tripling at 60, 75, or 100MHz. CPUID A439h in 486SLC3 mode.

I just stumbled on this older thread and was surprised to be quoted (I didn't have a VOGONS account back then). I've also communicated with Frank van Gilluwe (author of "The Undocumented PC" that you have images from; it might be interesting to learn where he sourced the information). Now to follow through to the end of the thread.

Reply 68 of 74, by RandomBlankUsername

User metadata
Rank Newbie

Hate to bump this... does anyone know the power consumption of the IBM 486SLC2? I have a 66MHz variant in my Model 60, and it runs extremely hot (at the very least 65°C, immediately unpleasant to touch) when clock-doubled with the cache enabled (rock solid, though), and I was looking into cooling solutions (after all, this is 30+ year old silicon in a rather small package, with spares hardly available).

Reply 69 of 74, by rmay635703

User metadata
Rank Oldbie
DistWave wrote on 2023-06-28, 17:07:

The only issue is the floppy drive: it doesn't read floppy disks when the cache is enabled (DMA with no cache flush is a known source of issues). The chip doesn't even get hot running at 3.3V and a bit overclocked.

I've wondered for a long time whether the cache could be enabled/disabled on the fly during floppy access.

Doubtful there is a pin for it; if there were, you could wire it up to the floppy 🤣.

Reply 70 of 74, by feipoa

User metadata
Rank l33t++
DistWave wrote on 2023-06-28, 17:07:

I successfully replaced a 386SX-20 with a TI486SXLC2-50-G in an IBM PS/1 2121 model several years ago (I used an empty EEPROM socket to plug in a breadboard with the power regulator). With a 33 MHz crystal the 486SXLC2 works great at 66 MHz, and the 8 KB cache makes a great difference when enabled. The only issue is the floppy drive: it doesn't read floppy disks when the cache is enabled (DMA with no cache flush is a known source of issues). The chip doesn't even get hot running at 3.3V and a bit overclocked.

The floppy issue would bother me. Does sound work? What are you using to enable the cache?

Plan your life wisely, you'll be dead before you know it.

Reply 71 of 74, by MikeSG

User metadata
Rank Oldbie
RandomBlankUsername wrote on 2026-01-15, 22:06:

Hate to bump this... does anyone know the power consumption of the IBM 486SLC2? I have a 66MHz variant in my Model 60, and it runs extremely hot (at the very least 65°C, immediately unpleasant to touch) when clock-doubled with the cache enabled (rock solid, though), and I was looking into cooling solutions (after all, this is 30+ year old silicon in a rather small package, with spares hardly available).

At 5v, they need a heatsink & fan unless you're running basic programs.

Look on eBay for "raspberry pi heatsink fan". They blow very little air, but they are the right size and actually cool the CPU down to a touchable temperature.

Reply 72 of 74, by RandomBlankUsername

User metadata
Rank Newbie

The board I have does have a tiny LT1117, so it should run at 3.3V? Ironically it measured 1.25V for me.
Photos: https://imgur.com/a/LMZ3CDW

About glitches/floppy access... does the Cyrix tool allow you to mark certain areas uncacheable? I get glitches with EGA when caching A0000-B0000 on my machine, and floppy access wouldn't work either. Disabling the cache for A0000-FFFFF seemed to fix it.

Reply 73 of 74, by RandomBlankUsername

User metadata
Rank Newbie

Voltage goes up to 3.8V in operation - so I guess that's normal.

Reply 74 of 74, by SergeK

User metadata
Rank Newbie
DistWave wrote on 2023-06-28, 17:07:

The only issue is the floppy drive: it doesn't read floppy disks when the cache is enabled (DMA with no cache flush is a known source of issues).

The Cyrix/TI 486SLC/DLC and TI 486SXLC/SXL can be configured to flush the cache any time the HOLD signal goes active, which typically indicates that another bus master, e.g. the DMA controller, wants control of the bus (and can potentially update memory). The HOLD signal is already wired up in all 386SX motherboards, so all you need to do is enable the "BARB" feature. This is done by setting bit 5 in the CCR0 register. I think there are tools around (cache486 or similar?) that will do that. It can also be done very easily with a short assembly program:

MOV AL,0C0h
OUT 22h,AL  ; select CCR0
IN  AL,23h  ; read the current CCR0 value into AL
OR  AL,20h  ; set bit 5 (BARB)
MOV AH,AL   ; temporarily store AL in AH
MOV AL,0C0h
OUT 22h,AL  ; select CCR0 again
MOV AL,AH   ; restore AL from AH
OUT 23h,AL  ; write the new value to CCR0
INT 20h     ; exit to DOS

You can even type that in using DOS DEBUG and save it as a .COM file...

Also, earlier in the thread there was some confusion about the A20M line. I think it wouldn't matter in most cases, except if you're using some weird applications from the early DOS era...

The explanation is a bit of a trip into PC history... The 8086/8088 had a 20-bit physical address, which was generated from two 16-bit values, segment and offset: physical_address = segment * 16 + offset. As one can notice, it is possible to generate addresses larger than 1 MiB (e.g., if you set the segment to 0FFFFh and the offset to 16 or larger). On the 8086/8088 the address simply rolls over, and the first 64 KiB of memory is accessed.
Some creative folks found this mis-feature useful for tricks... Most notably, MS-DOS itself uses it to emulate CP/M-style CALL 5 system calls...
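That arithmetic is easy to check for yourself. A minimal sketch (the `phys_addr` helper below is hypothetical, just modeling the address lines, not any real tool):

```python
def phys_addr(segment: int, offset: int, address_bits: int = 20) -> int:
    """Compute an 8086-style physical address from segment:offset,
    truncated to the given number of address lines."""
    linear = (segment << 4) + offset           # segment * 16 + offset (can exceed 20 bits)
    return linear & ((1 << address_bits) - 1)  # roll over past the top of the address space

# FFFF:0010 exceeds 1 MiB; on a 20-bit bus it wraps around to address 0.
assert phys_addr(0xFFFF, 0x0010) == 0x00000
# On a CPU with more address lines (and no A20 masking) it reaches above 1 MiB instead.
assert phys_addr(0xFFFF, 0x0010, address_bits=24) == 0x100000
```

The second assertion is exactly the incompatibility described below: the same segment:offset pair lands at a different physical address once the CPU has more than 20 address lines.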

A few years later Intel came up with the 80286, then the 80386, and so on... All these processors have more than 20 address lines and can address more than 1 MiB of memory. If one employs the above-mentioned trick, then instead of rolling over, the processor accesses the memory above 1 MiB, which caused incompatibility... IBM "fixed" the issue by connecting the CPU A20 address line to the memory through an AND gate, with the other input of the AND gate connected to the keyboard controller (the A20 GATE). That way the A20 line can either be passed through to the memory or forced to '0' by issuing a command to the keyboard controller. (It seems IBM really enjoyed having a microcontroller in the IBM AT design; they also used it to do a software-controlled reset of the 80286 CPU.)
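The AND gate can be modeled the same way. Another small sketch (the `memory_address` function is a hypothetical model of the gate, with `a20_gate` standing in for the keyboard-controller output):

```python
def memory_address(cpu_address: int, a20_gate: bool) -> int:
    """Model the AND gate on the IBM AT: mask address line A20 (bit 20)
    of the CPU address whenever the gate is forced low."""
    if a20_gate:
        return cpu_address              # gate open: the CPU address passes through
    return cpu_address & ~(1 << 20)     # gate closed: A20 forced to '0'

# With the gate closed, the 8086-style wraparound behavior is restored:
assert memory_address(0x100000, a20_gate=False) == 0x000000
# With the gate open, the same CPU address reaches the memory above 1 MiB:
assert memory_address(0x100000, a20_gate=True) == 0x100000
```

Note that addresses below 1 MiB are unaffected either way, since their A20 bit is already '0'.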

Then CPUs with integrated cache entered the picture. Flipping the A20 GATE changes the memory mapping, and the CPU knows nothing about it... So Cyrix/TI CPUs implemented the A20M input: any time the state of that input changes, the CPU flushes the cache lines that map to the first 64 KiB at each 1 MiB boundary. From a hardware perspective this means the A20M line needs to be wired to the keyboard controller (typically pin 22 for DIP-40 keyboard controllers, or pin 25 for PLCC-44 ones).

If for some reason you cannot do this hardware modification, it is also possible to set bit 0 of CCR0 to '1', which disables caching of the first 64 KiB at each 1 MiB boundary. This results in some performance degradation.