VOGONS


Reply 60 of 66, by eddman

Rank Oldbie
vvbee wrote on Yesterday, 15:54:

What do you mean they have no way of checking? How is not even one way available to AI? I'd assume you generally need three things: access to the content, knowledge of the topic, and the ability to reason fairly. I don't think AI's particularly missing any of these, least of all if the baseline is a person who hasn't much interest in verifying anything.

I posted an example earlier, of someone posting an LLM-generated list of "OpenGL games with MIDI". The LLM had listed games that were released BEFORE OpenGL was even available. Of the three things you mentioned, it lacks the second and third. An LLM isn't AI (certainly not the kind people expect it to be). It just knows which words, images, content, etc. are related and how they are ordered. It has no understanding of the content itself.

Reply 61 of 66, by StriderTR

Rank Oldbie
vvbee wrote on Yesterday, 10:09:

Do you mean that in the true sense.

More like a literal sense. They search or ask a question, get a response, and assume that response is correct. They trust it to be correct and rarely bother to verify or question it. It reminds me of the phrase "I seen it on the internet, so it must be true". With my co-workers and friends, the subject matter is usually non-technical: they're curious about something, they search it, and they trust what the bot says.

Honestly, it's kind of a weird mix. Many of the younger people seem to trust AI responses much more than the older ones do, while many of the older ones distrust the technology, though some still assume the info it spits out is correct. Still, more of the older generation "trust, but verify" the information than the younger ones do, often by running the answer by other people or digging a bit deeper online when time permits.

For more technical information, like what I use, I've used both ChatGPT and Grok, and both have been amazingly accurate and huge time savers; I've learned a lot. Recently I've been using Grok more, as it seems to give slightly better results for my requests when I'm working on a piece of code for a project.

DOS, Win9x, General "Retro" Enthusiast. Professional Tinkerer. Technology Hobbyist. Expert at Nothing! Build, Create, Repair, Repeat!
This Old Man's Builds, Projects, and Other Retro Goodness: https://theclassicgeek.blogspot.com/

Reply 62 of 66, by vvbee

Rank Oldbie
eddman wrote on Yesterday, 16:11:
vvbee wrote on Yesterday, 15:54:

What do you mean they have no way of checking? How is not even one way available to AI? I'd assume you generally need three things: access to the content, knowledge of the topic, and the ability to reason fairly. I don't think AI's particularly missing any of these, least of all if the baseline is a person who hasn't much interest in verifying anything.

I posted an example earlier, of someone posting an LLM-generated list of "OpenGL games with MIDI". The LLM had listed games that were released BEFORE OpenGL was even available. Of the three things you mentioned, it lacks the second and third. An LLM isn't AI (certainly not the kind people expect it to be). It just knows which words, images, content, etc. are related and how they are ordered. It has no understanding of the content itself.

I tested a couple models and they didn't list pre-GL games. What's a more general test that would convince you that AI has knowledge and can reason? Or is there a test that would convince you?

StriderTR wrote on Yesterday, 18:00:
vvbee wrote on Yesterday, 10:09:

Do you mean that in the true sense.

More like a literal sense. They search or ask a question, get a response, and assume that response is correct. They trust it to be correct and rarely bother to verify or question it. It reminds me of the phrase "I seen it on the internet, so it must be true". With my co-workers and friends, the subject matter is usually non-technical: they're curious about something, they search it, and they trust what the bot says.

Doesn't sound too bad, it's throwaway knowledge.

Reply 63 of 66, by eddman

Rank Oldbie
vvbee wrote on Yesterday, 19:30:

What's a more general test that would convince you that AI has knowledge and can reason? Or is there a test that would convince you?

There is no test that can prove it; a few correct results don't invalidate the incorrect ones. Just read up on LLMs. There is a reason the term AGI was invented: the term "AI" got corrupted by equating LLMs with our former understanding of the word.

An LLM is like a person who memorized an entire language's shape and sound: every single word, every possible combination of sentences, which word combinations are a response to which other combinations, and yet doesn't understand the meaning of any of them. It's basically advanced mimicry.

It's not a perfect example, but it's the best one I can come up with right now.

EDIT: Or perhaps this: I don't know German, but I memorize the German pronunciation of all the works of a German writer. I can recite the books from start to finish when asked, or a given line on a page, and yet I have no idea what any of it means.

Reply 64 of 66, by vvbee

Rank Oldbie

In this case I didn't limit it to LLMs specifically nor to the AI of today. If you don't have a test then it's just a belief. They're fine to have but you can't do anything with them.

Of course, in this case we were talking about practical uses, so the test of letting AI act and then evaluating the results is valid. Earlier in the thread it picked out cognitive biases from posts, and now it was able to detect fake OpenGL + MIDI games in a list. In that sense we can say it can verify the veracity of content. That means either it has the three things (or whatever is required) that I mentioned before, or that to verify content AI only needs one thing: access to the content. Which is what I was saying earlier: either the AI reasoned about the posts to find their biases, or the posts weren't actually expressing anything and were just statements of biases.

Reply 65 of 66, by eddman

Rank Oldbie
vvbee wrote on Yesterday, 23:30:

In this case I didn't limit it to LLMs specifically nor to the AI of today. If you don't have a test then it's just a belief. They're fine to have but you can't do anything with them.

AFAIK the public ones you interact with are all LLMs. The proper definition of AI (or at least the one that was used up until a few years ago) is having intelligence, and LLMs don't have it. It's not a matter of "belief", as we already know how LLMs work.

Again, if it had intelligence it wouldn't have made those mistakes in the first place. The reasoning you're talking about does not equate to intelligence, unless you're using a much simpler definition. These are algorithms that operate on the relations in the data. If it presents a wrong result and is then challenged by the user, the algorithm could take the challenge as new data and modify the result. If enough people challenge it in a different direction, it could modify the result again. As I've stated before, it has no understanding of the content itself. Being right or wrong is determined by outside factors, not by an inherent ability of an LLM to "think" (perhaps there's a better word for that).

I'm sure one day there will be proper AI, but what we have now is not it.

We are seemingly talking about different things, or maybe it's just my English getting in the way, as it isn't my first language. In any case, there isn't really anything else to add to our conversation.

EDIT: To clarify, I'm not saying they are useless, just that we should understand their nature and how to use them properly.

Last edited by eddman on 2025-12-17, 00:40. Edited 2 times in total.

Reply 66 of 66, by Big Pink

Rank Member
eddman wrote on Yesterday, 20:28:

An LLM is like a person who memorized an entire language's shape and sound: every single word, every possible combination of sentences, which word combinations are a response to which other combinations, and yet doesn't understand the meaning of any of them. It's basically advanced mimicry.

It's Markov chains. The only innovation this time is the unprecedented computing power being thrown at it and the vast resources consumed to power that number crunching. Fundamentally, it's a calculator that ate the Sun instead of running off a dinky solar cell. Someone on Slashdot put it very well:

Computing used to be measured in the number of computations per second, the storage available or the number of bits in and out of a data centre. In other words, its output. Now it seems to be measured in the power consumed, its inputs. Is this because the output is so pointless it's not worth measuring?
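The Markov-chain comparison can be made concrete. Here's a toy word-level Markov chain in Python — my own sketch, not anyone's actual model, and real LLMs use neural networks over tokens rather than lookup tables — but it illustrates the "mimicry without understanding" point: it records which word follows which and strings words together with no notion of what any of them mean.

```python
import random
from collections import defaultdict

def train(text):
    """Record, for every word, the words observed to follow it."""
    chain = defaultdict(list)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, length=8, seed=0):
    """Walk the chain: each step picks a random follower of the last word."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length:
        followers = chain.get(out[-1])
        if not followers:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
chain = train(corpus)
print(generate(chain, "the", length=6))
```

Every pair of adjacent words it emits did occur in the training text, so the output looks locally plausible while meaning nothing — which is the mimicry argument in miniature, minus the scale.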

I thought IBM was born with the world