VOGONS


Reply 60 of 89, by eddman

Rank: Oldbie
vvbee wrote on 2025-12-16, 15:54:

What do you mean they have no way of checking? How is not even one way available to AI? I'd assume you generally need three things: access to the content, knowledge of the topic, and the ability to reason fairly. I don't think AI's particularly missing any of these, least of all if the baseline is a person who hasn't much interest in verifying anything.

I posted an example earlier of someone sharing an LLM-generated list of "OpenGL games with MIDI". The LLM had listed games that were released BEFORE OpenGL was even available. Of the three things you mentioned, it lacks the second and third. An LLM isn't AI (certainly not the kind people expect it to be). It just knows which words, images, content, etc. are related and how they are ordered. It has no understanding of the content itself.

Reply 61 of 89, by StriderTR

Rank: Oldbie
vvbee wrote on 2025-12-16, 10:09:

Do you mean that in the true sense?

More like a literal sense. They search or ask a question, they get a response, they assume that response is correct. They trust it to be correct. They rarely bother to verify or question the response. It reminds me of the phrase "I seen it on the internet, so it must be true". With my co-workers and friends, the subject matter is usually non-technical. They are curious about something, they search it, and trust what the bot says is correct.

Honestly, it's kind of a weird mix. Many of the younger people seem to trust AI responses much more than the older ones do; many of the older ones distrust the technology, yet some still assume the info it spits out is correct. Though more of the older generation "trust, but verify" the information than the younger ones do, often by running the answer by other people or digging a bit deeper online when time permits.

For more technical information, like what I use, I've used both ChatGPT and Grok, and both have been amazingly accurate and huge time savers for me; I've learned a lot. I've recently been using Grok more, as it seems to give me slightly better results to match my requests when I'm working on a piece of code for a project.

DOS, Win9x, General "Retro" Enthusiast. Professional Tinkerer. Technology Hobbyist. Expert at Nothing! Build, Create, Repair, Repeat!
This Old Man's Builds, Projects, and Other Retro Goodness: https://theclassicgeek.blogspot.com/

Reply 62 of 89, by vvbee

Rank: Oldbie
eddman wrote on 2025-12-16, 16:11:
vvbee wrote on 2025-12-16, 15:54:

What do you mean they have no way of checking? How is not even one way available to AI? I'd assume you generally need three things: access to the content, knowledge of the topic, and the ability to reason fairly. I don't think AI's particularly missing any of these, least of all if the baseline is a person who hasn't much interest in verifying anything.

I posted an example earlier of someone sharing an LLM-generated list of "OpenGL games with MIDI". The LLM had listed games that were released BEFORE OpenGL was even available. Of the three things you mentioned, it lacks the second and third. An LLM isn't AI (certainly not the kind people expect it to be). It just knows which words, images, content, etc. are related and how they are ordered. It has no understanding of the content itself.

I tested a couple of models and they didn't list pre-OpenGL games. What's a more general test that would convince you that AI has knowledge and can reason? Or is there a test that would convince you?

StriderTR wrote on 2025-12-16, 18:00:
vvbee wrote on 2025-12-16, 10:09:

Do you mean that in the true sense?

More like a literal sense. They search or ask a question, they get a response, they assume that response is correct. They trust it to be correct. They rarely bother to verify or question the response. It reminds me of the phrase "I seen it on the internet, so it must be true". With my co-workers and friends, the subject matter is usually non-technical. They are curious about something, they search it, and trust what the bot says is correct.

Doesn't sound too bad, it's throwaway knowledge.

Reply 63 of 89, by eddman

Rank: Oldbie
vvbee wrote on 2025-12-16, 19:30:

What's a more general test that would convince you that AI has knowledge and can reason? Or is there a test that would convince you?

There is no test that can prove it; a few correct results don't invalidate the incorrect ones. Just read up on LLMs. There is a reason the term AGI was invented: the term "AI" got corrupted by equating LLMs with our former understanding of the word.

An LLM is like a person who has memorized an entire language's shape and sound: every single word, every possible combination of sentences, which word combinations are a response to which others. Yet they don't understand the meaning of any of it. It's basically advanced mimicry.

It's not a perfect example, but it's the best one I can come up with right now.

EDIT: or perhaps this: I don't know German, but I memorize the German pronunciation of all the works of a German writer. I can recite the books from start to finish when asked, or a given line on a page, and yet I have no idea what any of it means.

Reply 64 of 89, by vvbee

Rank: Oldbie

In this case I didn't limit it to LLMs specifically nor to the AI of today. If you don't have a test then it's just a belief. They're fine to have but you can't do anything with them.

Of course, in this case we were talking about practical uses, so the test of letting AI act and then evaluating the results is valid. Earlier in the thread it picked out cognitive biases from posts; now it was able to detect fake OpenGL + MIDI games in a list. In that sense we can say it can verify the veracity of content. That means either it has the three things I mentioned before (or whatever's required), or that to verify content AI only needs one thing: access to the content. Which is what I was saying earlier: either the AI reasoned about the posts to find their biases, or the posts weren't actually expressing anything and were just statements of biases.

Reply 65 of 89, by eddman

Rank: Oldbie
vvbee wrote on 2025-12-16, 23:30:

In this case I didn't limit it to LLMs specifically nor to the AI of today. If you don't have a test then it's just a belief. They're fine to have but you can't do anything with them.

AFAIK the public ones you interact with are all LLMs. The proper definition of AI (or at least the one used up until a few years ago) is having intelligence, and LLMs don't have it. It's not a matter of "belief", as we already know how LLMs work.

Again, if it had intelligence it wouldn't have made the mistakes in the first place. This reasoning you're talking about does not equate to intelligence, unless you're using a much simpler definition. These are algorithms that operate on the relations in the data. If it presents a wrong result and is then challenged by the user, the algorithm can take the challenge as new data and modify the result. If enough people challenge it in a different direction, it could modify the result again. As I've stated before, it has no understanding of the content itself. Being right or wrong is determined by outside factors, not by an inherent ability of the LLM to "think" (perhaps there's a better word for that).

I'm sure one day there will be proper AI, but what we have now is not it.

We are seemingly talking about different things, or it's just my English getting in the way, as it isn't my first language. In any case, there isn't really anything else to add to our conversation.

EDIT: To clarify, I'm not saying they are useless, just that we should know their nature and how to properly use them.

Last edited by eddman on 2025-12-17, 00:40. Edited 2 times in total.

Reply 66 of 89, by Big Pink

Rank: Member
eddman wrote on 2025-12-16, 20:28:

An LLM is like a person who has memorized an entire language's shape and sound: every single word, every possible combination of sentences, which word combinations are a response to which others. Yet they don't understand the meaning of any of it. It's basically advanced mimicry.

It's Markov chains. The only innovation this time is the unprecedented computing power being thrown at it and the vast resources being consumed to power that number crunching. Fundamentally it's a calculator, if the calculator ate the Sun instead of running off a dinky solar cell. Someone on Slashdot put it very well:

Computing used to be measured in the number of computations per second, the storage available or the number of bits in and out of a data centre. In other words, its output. Now it seems to be measured in the power consumed, its inputs. Is this because the output is so pointless it's not worth measuring?
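The Markov-chain comparison above can be made concrete with a toy next-word sampler. This is a sketch for illustration only (the names `build_bigram_table` and `generate` and the tiny corpus are invented for this example); modern LLMs learn transformer weights rather than literal n-gram tables, but the idea of picking a next word purely from observed adjacency, with no grasp of meaning, is the point being made:

```python
import random
from collections import defaultdict

def build_bigram_table(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start, length, rng):
    """Emit up to `length` words by repeatedly sampling an observed
    successor of the previous word. No meaning is involved, only
    word-adjacency statistics."""
    out = [start]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        if not followers:  # dead end: no observed successor
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran after the dog"
table = build_bigram_table(corpus)
print(generate(table, "the", 8, random.Random(1)))
```

Every output is locally plausible (each pair of adjacent words occurred in the corpus) while the generator understands nothing, which is the mimicry argument in miniature.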

I thought IBM was born with the world

Reply 67 of 89, by vvbee

Rank: Oldbie
eddman wrote on 2025-12-17, 00:12:
vvbee wrote on 2025-12-16, 23:30:

In this case I didn't limit it to LLMs specifically nor to the AI of today. If you don't have a test then it's just a belief. They're fine to have but you can't do anything with them.

The proper definition of AI (or at least the one used up until a few years ago) is having intelligence, and LLMs don't have it. It's not a matter of "belief", as we already know how LLMs work.

Again, if it had intelligence it wouldn't have made the mistakes in the first place. This reasoning you're talking about does not equate to intelligence, unless you're using a much simpler definition. These are algorithms that operate on the relations in the data. If it presents a wrong result and is then challenged by the user, the algorithm can take the challenge as new data and modify the result. If enough people challenge it in a different direction, it could modify the result again. As I've stated before, it has no understanding of the content itself. Being right or wrong is determined by outside factors, not by an inherent ability of the LLM to "think" (perhaps there's a better word for that).

The mistakes you're talking about were, by your own account, made by one model but not by the others tested. But since you defined intelligence in a way that excludes anyone whose kind has been known to do this or that, this version of intelligence is defined by group identifiers anyway. Even though it's derived by comparison to humans, it's not even recommended to apply this thinking to humans, so I don't see how you'd make it a useful metric here.

You say the output (mistakes) disqualifies, and that even if it didn't, the internal state (algorithms) would. I'm not aware of a realistic measure of intelligence that's derived from the internal state; maybe each person can have their private measure this way, but you can't generalize it at all, so this approach can't be useful here. That just leaves the output, which we can evaluate and compare between AI and humans.

It's valid to say mistakes in the output suggest less intelligence, but if you have a hard threshold then your approach has no explanatory power past the threshold. So it's not useful to select a threshold that already excludes the category that was going to be evaluated.

Reply 68 of 89, by eddman

Rank: Oldbie

However you choose to see it, it's fine. Read up on the inner workings of LLMs and you'll understand it better.

Reply 69 of 89, by vvbee

Rank: Oldbie
eddman wrote on 2025-12-17, 13:33:

However you choose to see it, it's fine. Read up on the inner workings of LLMs and you'll understand it better.

AI's take on what would've been a more effective way to argue the position:

Person 2’s main argument was: “LLMs have no way of checking the veracity of the content.” With a reasoning model connected to the internet, this is functionally false. ... If Person 2 insists "it still doesn't understand," Person 1 will rightly view this as a distinction without a difference. If the AI catches the OpenGL error by looking up the dates, it has vetted the content.

Since Person 2 can no longer argue that the AI can't verify facts (because it can via search), they must shift their argument to Source Bias and Algorithmic Amplification. To convince Person 1, Person 2 would have to say:

"Okay, the model can look up dates and verify facts better than a lazy human. But 'vetting' isn't just about fact-checking dates; it's about evaluating truth.

If a reasoning model searches the internet, it is still limited to what is popular or SEO-optimized on the internet. If the internet is flooded with a common misconception, the AI will 'verify' that misconception as fact because 10 out of 10 search results say so.

A human might have existing knowledge or intuition to say 'that sounds wrong.' The AI will see 10 matching search results and call it 'Verified Truth.' Therefore, this tool doesn't fix the problem of misinformation; it automates the consensus of the internet, which is often wrong."

It's a decent strategy, better than the alternative. I'd still say I'm not limiting what I'm saying to just the AI of today, but in principle arguing in favor of what it can already do and will do even better as time goes on. I also don't see a reason to immediately assume that AI can't, even now, have the "knowledge or intuition" to say the "verified truth" sounds wrong; what it may well not have is the initiative (or budget) to trigger more iterations of evaluation.

Reply 70 of 89, by eddman

Rank: Oldbie

That's exactly what a language model would churn out. I know what's behind the curtain. Whatever definition of "intelligence" you want to go with, that's perfectly fine; it simply does not equate to the proper definition that I've known, which is what is nowadays called AGI, and that hasn't been achieved yet.

Reply 71 of 89, by vvbee

Rank: Oldbie

On the previous page it was pointed out that incorporating AI increased the amount of critical thinking potential in the discussion, and I'd say that's again true here. This also suggests there's a threshold, likely different for each community, past which it's simply more conducive to explore ideas with AI directly; for this, AI doesn't need to excel, to the extent that the threshold isn't at the maximum of human ability. What effects this would have on the various communities I don't know; maybe they become stale holdouts due to population shift, if you will.

Reply 72 of 89, by gerry

Rank: l33t
Ozzuneoj wrote on 2025-12-12, 16:36:

The only logical solution I can come up with for this is kind of sad, but... if one wants to be able to relive experiences again some day, you'd be best off just focusing on entertainment\experiences that you have some kind of control over. Don't get me wrong, I will play a game with online servers, and I have a huge collection on Steam, Epic and other places, but when I see that a game is DRM free or is at least downloadable, I attribute a higher value to it.

yes, you only have to resist the 'fomo' for a short while and suddenly you realise there is a lifetime of entertainment to be had from things that are somewhat within your control - books, films, tv, games, music and more. You can't do much about what the majority has been led to chase, other than point out flaws and problems etc.

Actual books (or digital offline), DVDs and offline digital copies, gog games and other readily installed versions and sufficient hardware/os to run them on, CDs, records, MP3s etc. How many hours do we have for entertainment? Certainly less than would be fully furnished by such things. and if it means losing out on something, that's ok, it really doesn't matter much

and we can still engage with the temporary stuff as long as we just accept its only there for a short while, and it will never be the same again

Reply 73 of 89, by Ozzuneoj

Rank: l33t
gerry wrote on 2025-12-22, 13:32:

yes, you only have to resist the 'fomo' for a short while and suddenly you realise there is a lifetime of entertainment to be had from things that are somewhat within your control - books, films, tv, games, music and more. You can't do much about what the majority has been led to chase, other than point out flaws and problems etc.

Actual books (or digital offline), DVDs and offline digital copies, gog games and other readily installed versions and sufficient hardware/os to run them on, CDs, records, MP3s etc. How many hours do we have for entertainment, certainly less than would be fully furnished by such things. and if it mean losing out on something, that's ok, it really doesn't matter much

and we can still engage with the temporary stuff as long as we just accept its only there for a short while, and it will never be the same again

Yeah, when you see stuff like this you kind of have to accept that there will be stuff you're going to have to miss out on.

https://www.pcgamer.com/gaming-industry/more- … han-10-reviews/

Over 19,000 games were released on Steam this year, and yet barely half have more than 10 reviews.

And I'm sure it has been like this for years, so there are hundreds of thousands of games out there to choose from. Surely a good portion of them are both excellent AND not tied to online servers... So, if you tend to appreciate revisiting excellent games\experiences decades later, it should be no problem finding something to fill your "game time" with.

Now for some blitting from the back buffer.

Reply 74 of 89, by twiz11

Rank: Oldbie
Ozzuneoj wrote on 2025-12-22, 14:03:

Yeah, when you see stuff like this you kind of have to accept that there will be stuff you're going to have to miss out on.

https://www.pcgamer.com/gaming-industry/more- … han-10-reviews/

Over 19,000 games were released on Steam this year, and yet barely half have more than 10 reviews.

And I'm sure it has been like this for years, so there are hundreds of thousands of games out there to choose from. Surely a good portion of them are both excellent AND not tied to online servers... So, if you tend to appreciate revisiting excellent games\experiences decades later, it should be no problem finding something to fill your "game time" with.

You think shovelware is slamming Steam? I know they banned AI shovelware, but it's only a matter of time before the work is indistinguishable from human output and people claim humans wrote and drew it all. (Or claim they merely rotoscoped everything; technically they drew everything, just on an overlay.)

Reply 75 of 89, by vvbee

Rank: Oldbie
twiz11 wrote on 2025-12-26, 01:40:

it's only a matter of time before the work is indistinguishable from human output and people claim humans wrote and drew it all.

Some of these models have been found to pass the Turing test, so it has already happened. If something doesn't look like AI it may be AI, and if it looks like AI it may be a human influenced by a known AI style. Also, to the extent that AI can better adapt to a conversation and bring less negative baggage, you can expect it to become a preferred "human" in chat.

Reply 76 of 89, by Ozzuneoj

Rank: l33t
vvbee wrote on 2025-12-26, 08:23:

Also, to the extent that AI can better adapt to a conversation and bring less negative baggage, you can expect it to become a preferred "human" in chat.

That sounds incredibly idealistic and is definitely not how things work right now. The very nature of an LLM means that it carries all of the baggage of everything and everyone it has been trained on. If you are asking for help solving equations or with writing a formal letter, this baggage isn't going to come out.

The second the user veers off the path of "normal" topics though, even unintentionally, some things that I would absolutely call "negative" can come out.

https://futurism.com/future-society/young-kids-using-ai

https://www.google.com/search?sca_esv=e5cbd14 … ih=946&dpr=1.33

Everyone is different, but personally, I would be less comfortable having a conversation with someone about personal matters knowing that they have absolutely unparalleled knowledge of the deepest, darkest, grossest, most disturbing things mankind has ever written about on the internet, and allowing those things to creep into a conversation makes zero difference to them. If the safeguards and walls around such things were working for current LLMs, none of the articles linked above would exist.

Now for some blitting from the back buffer.

Reply 77 of 89, by vvbee

Rank: Oldbie

You misread me; I said to the extent it can. But you're not making the case that AI is converting innocent people to dark ways, rather that people are expressing their dark ways with AI, and your expectation is that it's the AI that should resist. As it happens, in the last study I saw putting GPT-4.5 with special prompting through the Turing test, the participants found the AI to be considerably more human than the human was. So AI is now in a position to do what you wanted it to do: teach people how to behave better.

Reply 78 of 89, by gerry

Rank: l33t
vvbee wrote on 2025-12-26, 18:11:

You misread me; I said to the extent it can. But you're not making the case that AI is converting innocent people to dark ways, rather that people are expressing their dark ways with AI, and your expectation is that it's the AI that should resist.

This might be akin to expectations people have of fellow citizens, institutions, regulations. We have expectations of those to resist, ameliorate or otherwise blunt human-initiated dark ways. If there is evidence AI isn't doing that then, much as with regulation on other things, the oft-suggested approach is to create suitable regulation and enforce it

Reply 79 of 89, by lti

Rank: Member
vvbee wrote on 2025-12-26, 18:11:

AI to be considerably more human than the human was

Is that what the song was about? I could never understand what Rob Zombie was saying.

I think that's the best answer I can give to this trolling.