VOGONS


First post, by dionb

Rank: l33t++

Today I had an AI training at work. We used a bunch of LLM AIs and investigated how they could help us with our work. We were encouraged to try them out to get a feel for the kind of output they could give, in terms of style, content and reliability.

For the latter, being the retro-geek I am, I chose something very specific to put them through their paces, which makes the output on-topic here 😉

The AI engines I used:
- OpenAI ChatGPT (4.0-mini)
- Microsoft Copilot (no version available)
- Google Gemini (2.0 Flash)
- Anthropic Claude (3.7 Sonnet)
- DeepSeek (v3)

My initial prompt:

"What is the fastest CPU that can be installed on an Asus P/I-P55T2P4 rev 3.10 motherboard?"

This is a fun example. The fastest officially supported CPU (i.e. listed in the manual) is the P200 non-MMX (!). However, the board can be set to 83MHz FSB, supports split voltage and the Vcore can be set as low as 2.0V - and there is a modified BIOS available allowing K6-2+/K6-III+ CPUs to be used. So under the right conditions you can run a K6-III+ on it at 500MHz (83MHz x 6).

This article explains what's possible and how to do it (but the crucial voltage settings are missing... as of 2025 you need archive.org to see them; a 2008 snapshot does the trick).
https://www.tomshardware.com/reviews/oldie-tuning,216.html

A lot of different answers could be correct here, depending on the context given.
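
(Side note: I did all of this through the web chat interfaces, but if you wanted to script the same side-by-side comparison, a minimal Python sketch along the lines below would do it. It assumes you have OpenAI and Anthropic API keys set in your environment, and the model names are just illustrative - not necessarily the exact versions I tested.)

from openai import OpenAI   # pip install openai
import anthropic            # pip install anthropic

PROMPT = ("What is the fastest CPU that can be installed on an "
          "Asus P/I-P55T2P4 rev 3.10 motherboard?")

# OpenAI - the client picks up OPENAI_API_KEY from the environment
openai_client = OpenAI()
oa_reply = openai_client.chat.completions.create(
    model="gpt-4o-mini",    # illustrative model name
    messages=[{"role": "user", "content": PROMPT}],
)
print("ChatGPT:", oa_reply.choices[0].message.content)

# Anthropic - the client picks up ANTHROPIC_API_KEY from the environment
claude_client = anthropic.Anthropic()
cl_reply = claude_client.messages.create(
    model="claude-3-7-sonnet-latest",    # illustrative model name
    max_tokens=1024,
    messages=[{"role": "user", "content": PROMPT}],
)
print("Claude:", cl_reply.content[0].text)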

Then more specifically:

"What is the lowest CPU core voltage I can set on an Asus P/I-P55T2P4 rev 3.10 motherboard?"

The lowest officially documented answer is 2.5V; the lowest mentioned online is 2.0V. It's highly likely you could go lower (bridge that fifth jumper), but either of these would be correct if combined with correct context.

Finally:

"Which jumper settings do I need to set on the Asus P/I-P55T2P4 rev 3.10 motherboard to get 2.0V CPU core voltage?"

The correct answer would be "JP20 1-2, 3-4, 5-6 and 7-8 jumpered", or "the 2.5V, 2.7V, 2.8V and 2.9V positions all jumpered at once". Alternatively, an answer that 2.5V is the lowest supported is good too, if not as interesting.

So how did the AI do?

ChatGPT

MaxCPU:
Bad. "The Asus P/I-P55T2P4 rev 3.10 motherboard is based on the Intel 440FX chipset" 😦 - no it's not, that's a PPro chipset, this is an i430HX board. As for the CPUs:

Pentium MMX up to 233MHz, AMD K6-2 up to 550MHz (wrong, there's no way to get 550MHz), Cyrix MII up to 300MHz.

Voltage:
Completely wrong: "This motherboard provides options for adjusting CPU core voltage via jumpers or BIOS settings" No, no BIOS settings. "However, the lowest core voltage you can set depends on the processors you are using and the available settings on the motherboard." Duh. It then proceeds to list required voltages for various types of processors and gets most of them wrong (Pentium MMX 3.3V or 3.45V, K6-2 2.4V or 2.8V...)

Jumper settings:
Wrong - correctly indicates no 2.0V native setting, then says lowest settings available are 2.8V-3.0V depending on the processor. Also no specific jumper settings listed.

Overall: bad. Information is incorrect, both specifics and context, and exact instructions missing.

Copilot

MaxCPU:
Excellent. Mentions both the fastest officially supported CPU (P233MMX) and, "with modifications", a "K6-2 or K6-III". It offers to help with those modifications.

Voltage:
Correctly mentions 2.5V as lowest official setting (including sources!), no mention of anything lower though. Again offers to help.

Jumper settings:
Mixed bag. States there is no official support and that 2.5V is the lowest supported, but that 2.0V might be possible with custom configurations. However, no info on what those configurations would be exactly.

Gemini

MaxCPU:
Decent-ish. Very verbose output about the board specs (which is correct, unlike with ChatGPT) and the best official support (P233MMX). However, it then equally verbosely proceeds to talk about "Regarding Unofficially Supported CPUs (The 'Risk' Scenario)". But cutting through the blurb, it only references the K6-233, which is a bit disappointing...

Voltage:
Useless. Again lots of blurb, but basically it doesn't give specifics and tells me to RTFM.

Jumper settings:
Worse than useless. Again lots of blurb, but basically it doesn't give specifics and tells me to RTFM. However it speculates on what jumper settings generally look like and which jumper numbers would be needed - none of which are remotely correct for this board.

Claude

MaxCPU:
OK-ish. Lots of blurb, but the bottom line is max official support for the P233MMX.

Voltage:
Bad. Correctly mentions the lowest officially supported voltage and other official voltage options, but then mentions that you use JP12 and JP13 to select them. Also says (correctly) that lowest voltage depends on board revision but incorrectly states 2.4V as lowest option.

Jumper settings:
OK-ish. Says 2.0V isn't available through standard configuration, that 2.5V is the lowest there. Unfortunately JP12 and 13 appear as well, and it incorrectly states that physical modifications would be necessary for anything lower than 2.5V.

Deepseek

MaxCPU:
Bad. Correctly mentions the Pentium 233MMX, but goes on to incorrectly claim that the K6-2 550MHz is compatible "but you may need to verify BIOS support". Also says "the motherboard may have specific voltage requirements" but doesn't tell me what they might be. Also suggests the board may have SDRAM (it doesn't).

Voltage:
Poor. Lists (officially) compatible CPUs then gives their voltage requirements. Subsequently continues with hypothetical ways to set voltage and finally states incorrectly that 2.8V is lowest.

Jumper settings:
Poor. Extremely verbose but totally non-specific output about motherboards and voltages, again suggesting that 2.8V is the lowest for an old board like this, and suggesting the use of an external VRM - something this board has no provision for.

Conclusion:
AI has come a long way, but don't trust it for your retro specs & info just yet. DeepSeek, Gemini, Claude and (worst of all) ChatGPT give actively incorrect information about the board, and where they're not actually wrong, they're low on details. Copilot is clearly the best of the bunch for this question, with all its answers being correct; it has clearly used the Tom's Hardware article for its answers - and it was the only one to provide references to prove that. All it missed was the actual settings for low voltage, which would have required using the archived version.

So should you use AI? Given that four out of the five LLMs tried actively gave incorrect info, some of which might lead to incorrect purchases or even running a CPU at dangerous voltages, I'd suggest that it might be worse than nothing. If you really feel you have to, then based on this very limited sample Microsoft Copilot was head and shoulders above the rest, so give that a try - partly because it gave the best info, but mainly because it linked to its sources, allowing you to double-check.

Note that LLM AI models use randomness and constantly update themselves - and output within a session is influenced by prior input in that session. So YMMV and you might get quite different answers to the same prompts.
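
(If you do script it via the APIs as in the sketch above, you can at least set the temperature to 0 to reduce that run-to-run randomness - it won't make the answers any more correct, just more repeatable.)

# Same call as in the earlier sketch, with the temperature pinned to 0
oa_reply = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": PROMPT}],
    temperature=0,
)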

Last edited by dionb on 2025-03-19, 08:57. Edited 2 times in total.

Reply 1 of 8, by konc

Rank: l33t
dionb wrote on 2025-03-18, 13:35:

So should you use AI?

The most dangerous aspect is when LLMs boldly deliver a blatantly wrong answer with the confidence of 10 gurus combined, and this answer is used and passed on as the truth from that point on - yes, people actually do this.

My take on the current models basically agrees with yours. Sure, use them, but always double-check and see if you end up with something useful. Sometimes validating defeats the purpose, but there are cases where you might save time and effort.

Reply 2 of 8, by tauro

Rank: Member

Never blindly trust AI. It's even less credible than Wikipedia. But if you can "tame it", you can profit from it: solving math problems, finding bugs, generating images, and other things like that.
For web search related things Perplexity is one of the best.

I'm worried about the ability to think outside the box of future generations. They'll likely not question AI and accept its answers as gospel truth, with all that it entails.

Also, when will everyone on the internet become chatbots?

One good thing about Vogons is that there's no point system that some AI can hijack. So far, it feels real. I hear that on Reddit that's currently an issue.

Reply 3 of 8, by keenmaster486

Rank: l33t

You can tell that AI is not self-aware because it has no epistemic humility. Never does it say "I'm not sure... I think it might be X, Y, or Z... let me do some research"

It just blurts out whatever is the "most likely answer" based on 10,000 Reddit posts, which is probably wrong.

World's foremost 486 enjoyer.

Reply 4 of 8, by BinaryDemon

Rank: Oldbie

I was looking for examples of the first dx9 games the other day, and Google’s AI replied with:

The attachment IMG_3556.jpeg is no longer available

It clearly got confused and pulled one of the first DirectX games.

Reply 5 of 8, by gerry

Rank: Oldbie

it's interesting that you found co-pilot most accurate, from a little experience i found that one seems 'better' at all kinds of tech questions, not sure if its luck or that its training has been reinforced or supervised a bit more smartly perhaps

in any case that was an interesting set of questions as the know-how is relatively sparse online - it would need to be in forums like this, remnants of old articles, scanned books and magazines (assuming AI 'reads' them) - and even then, in comparison with many topics, the actual details won't be widely repeated.

Given that, AI would either tend to repeat specific sections of found text or generate answers that veer away from verifiable facts as its trail goes cold (in regard to the very specific answers searched for)

ask about something very well known, like the details of a famous motor and i'd guess the answers are better just because there is more text to "learn" from. Not sure though

tauro wrote on 2025-03-18, 16:28:

I'm worried about the ability to think outside the box of future generations. They'll likely not question AI and accept its answers as gospel truth, with all that it entails.

Also, when will everyone on the internet become chatbots?

I think that's already true with "I googled it" and with the circular and/or poor citations behind some wiki articles. People tend to believe it as if it has an inherent authority like a news program. People also tend to believe recounted experiences and reasoning given in person too, within their own social network. AI will just eclipse search results by combining everything in a seemingly authoritative "voice"

In fact AI is likely to become the interactive tool in place of searching or "googling" something, and with more content online being LLM generated there is a risk that AI will train itself on itself, as it were. There is also a risk that, if people themselves stop generating content for AI to learn from, others may maliciously create "content" totally unseen except for AI, or otherwise tamper with AI, such that results skew in a particular political direction, or simply misinform so widely that actual truths become buried under a lie repeated so often it is perceived as true

taken to the extreme, the content online as of the early 2020s may be the final stage in human knowledge (well, it won't be - but you can see that if nothing new is generated because everyone relies on AI, then what is there to learn from anew?)

Reply 6 of 8, by dionb

Rank: l33t++
gerry wrote on 2025-03-18, 20:59:

it's interesting that you found co-pilot most accurate, from a little experience i found that one seems 'better' at all kinds of tech questions, not sure if its luck or that its training has been reinforced or supervised a bit more smartly perhaps

Not sure exactly why, but it clearly uses better sources for this stuff - and quotes what they were. Kind of ironic given how useless Bing usually seems for the same thing.

in any case that was an interesting set of questions as the know-how is relatively sparse online - it would need to be in forums like this, remnants of old articles, scanned books and magazines (assuming AI 'reads' them) - and even then, in comparison with many topics, the actual details won't be widely repeated.

That's why I chose this example - multiple layers of complexity, info is sparse but very much in existence (and I knew it well enough myself to judge the responses).

Given that, AI would either tend to repeat specific sections of found text or generate answers that veer away from verifiable facts as its trail goes cold (in regard to the very specific answers searched for)

ask about something very well known, like the details of a famous motor and i'd guess the answers are better just because there is more text to "learn" from. Not sure though

It would be better, but even there, it's all about guessing the next word, not being accurate as such.

tauro wrote on 2025-03-18, 16:28:

I'm worried about the ability to think outside the box of future generations. They'll likely not question AI and accept its answers as gospel truth, with all that it entails.

Also, when will everyone on the internet become chatbots?

I wouldn't worry too much about that. Looking at my own children, they are very much aware how much of what AI offers is completely made up - more so than many adults I know of. It's the older generation, the people who make decisions (and vote), that I worry about.

taken to the extreme, the content online as of the early 2020s may be the final stage in human knowledge (well, it won't be - but you can see that if nothing new is generated because everyone relies on AI, then what is there to learn from anew?)

Don't underestimate humans. Yes, Einstein's apocryphal quote about common sense not being common is very much true, but on the other hand the very limitations of AI ensure that, for the foreseeable future, us old-fashioned wet-brains who still know how many fingers a hand should have, how text should look and what the arrow of causation looks like will still be in the driving seat, not some computers basing their content on ours in the past.

Reply 7 of 8, by Horun

Rank: l33t++

Great job! I just tried a few of those AIs to see what the results were on an Elon Musk quote from an interview 2 months ago. Copilot was the closest, the rest were so far off it was stupid 🤣

added:

dionb wrote on 2025-03-19, 00:29:

Looking at my own children, they are very much aware how much of what AI offers is completely made up - more so than many adults I know of. It's the older generation, the people who make decisions (and vote), that I worry about.

Agree - the vast majority of older adults may soon take AI answers as "gospel" that cannot be wrong, only because they won't mentally challenge what some computer tells them, but would if it were someone they do not know saying the same thing 🙁

Hate posting a reply and then have to edit it because it made no sense 😁 First computer was an IBM 3270 workstation with CGA monitor. Stuff: https://archive.org/details/@horun