VOGONS


Reply 40 of 63, by vvbee

User metadata
Rank Oldbie
keenmaster486 wrote on 2025-12-12, 21:14:

AI is like taking the type of guy who isn't actually very smart but thinks he's smart because he can regurgitate a lot of stuff he read online, duplicating him millions of times, and spamming his posts all over the whole internet.

We all know someone like this. Read every book in the world, understands every language, skilled in art and music, diagnoses your X-rays and debugs your legacy codebases while solving open math problems, etc. He's not that smart but we humor him.

Reply 41 of 63, by keenmaster486

User metadata
Rank l33t
vvbee wrote on 2025-12-13, 06:27:

We all know someone like this. Read every book in the world, understands every language, skilled in art and music, diagnoses your X-rays and debugs your legacy codebases while solving open math problems, etc. He's not that smart but we humor him.

Lmao

World's foremost 486 enjoyer.

Reply 42 of 63, by zyzzle

User metadata
Rank Member
Big Pink wrote on 2025-12-13, 00:11:
chinny22 wrote on 2025-12-10, 01:06:

The other annoying thing is I know a few people who will pad out work emails or the like with AI-generated text, which is somewhat amusing, as on the other end people use AI to summarise the email (so, take the padding back out).

The world wonders.

Here's the thing about the 'generative arts': the act of self-expression requires introspection. If you are incapable of, or unwilling to, engage in that, aren't the NPC memes true? Homo Sapiens becomes Homo Sloppians.

Did you ever see Larry Cohen's excellent film "The Stuff" from 40 years ago? It was prescient -- far too prescient. About "stuff" eating and devouring us from within.

"Are you eating it, or is it eating you?" = AI rotting away our sense of introspection, common sense, and questioning with critical thinking what is spit out at us on a screen.

It won't end well. This consumption from within started 20 years ago with smartphones and social media. It will be completed by AI and human laziness. The top 1% shall truly inherit the world, the rest will flounder.

Reply 43 of 63, by vvbee

User metadata
Rank Oldbie
zyzzle wrote on 2025-12-13, 20:38:

"Are you eating it, or is it eating you?" = AI rotting away our sense of introspection, common sense, and questioning with critical thinking what is spit out at us on a screen.

You say "our" but you mean "some people's". Some other people it makes even more capable than they were. If you consider that most innovation comes from a small percentage of the population then the population as a whole can benefit even if the majority becomes more passive, so long as the cogs can be kept turning.

I don't know to what extent AI exhibits bikeshedding but that's another way it could improve the human process. Now that everyone at least in theory has access to very capable AI assistants maybe their threshold of mental exhaustion gets pushed forward a bit and they gravitate toward more complex topics. Of course it's also pretty clear that many people aren't using AI in a way that would set them up to benefit most from it, but in theory this could be possible.

Reply 44 of 63, by StriderTR

User metadata
Rank Oldbie

That loss of "critical thinking" is a huge one for me. That skill is already in decreasing supply, and for most people, I see our reliance on AI only adding to that shortage.

I've said this before, but AI isn't intelligent. It doesn't really "think" like we do, any more than the device you're reading this on does. AI is rebranded machine learning. It's an advanced, very fast search engine / Wikipedia combo that can also perform tasks. You ask it, "Check my code", it looks at it, and based on what it's been programmed with and found by searching, it compares and gives a response. It shares what it's "learned" from what and where we've told it to look. It brings no new real information to the table.

However, just like humans, it can be told, and learn, incorrect information. Even when it appears to come up with something original, it's really just solving a sort of "compare and combine" problem: take all this information and extrapolate a response. It's still drawing on what we have told it and how we've programmed it. This means it may come up with "ideas" we haven't had yet, but they're not original. Those ideas were already there; they just weren't realized yet, or were scattered about, hadn't been had by the right people, or hadn't been put online or otherwise published.

Someday we may have real "critical thinking" AI (not sure we'll survive that), but as of now it's definitely not that. I see it as a helper, another tool we can use. It's excellent at modeling and at carrying out long, complex calculations. That "compare and combine" aspect is pretty useful in science, medicine, security, and many other fields. As long as there are still humans verifying everything. 😜

Last edited by StriderTR on 2025-12-14, 19:35. Edited 2 times in total.

DOS, Win9x, General "Retro" Enthusiast. Professional Tinkerer. Technology Hobbyist. Expert at Nothing! Build, Create, Repair, Repeat!
This Old Man's Builds, Projects, and Other Retro Goodness: https://theclassicgeek.blogspot.com/

Reply 45 of 63, by lizard78

User metadata
Rank Newbie

It's sad how much AI slop is around now. What is even more disturbing is how many people are easily fooled by it and engage with AI content. I can't fathom how people can't immediately spot ChatGPT by now - it has such a distinct writing style. It injects so many little tells / artifacts into whatever it generates, making it easy to spot (weird Unicode characters, emojis, specific words that aren't typically used in normal conversation).

Reply 46 of 63, by subhuman@xgtx

User metadata
Rank Oldbie
shamino wrote on 2025-12-09, 12:28:

Going back a few years ago, I started seeing people use the phrase "in <INSERT YEAR>".
I'm not sure how many people realize that's a spambot phrase. People learned to type the current year into search engines to get more current/relevant articles about some topic, so spambot sites improved their search rank by adding "IN <year>" to all their article titles. Weirdly, real people then actually started writing like that.

Dude... I had to read this twice to confirm you literally meant what you've just said.


Reply 47 of 63, by vvbee

User metadata
Rank Oldbie
StriderTR wrote on 2025-12-14, 18:35:

That loss of "critical thinking" is a huge one for me. That skill is already in decreasing supply, and for most people, I see our reliance on AI only adding to that shortage.

I've said this before, but AI isn't intelligent. It doesn't really "think" like we do, any more than the device you're reading this on does. AI is rebranded machine learning. It's an advanced, very fast search engine / Wikipedia combo that can also perform tasks. You ask it, "Check my code", it looks at it, and based on what it's been programmed with and found by searching, it compares and gives a response. It shares what it's "learned" from what and where we've told it to look. It brings no new real information to the table. However, just like humans, it can be told, and learn, incorrect information. Even when it appears to come up with something original, it's really just solving a sort of "compare and combine" problem: take all this information and extrapolate a response. It's still drawing on what we have told it and how we've programmed it. This means it will come up with "ideas" we haven't had yet, but they're not original. Those ideas were already there; they just weren't realized yet, were scattered, or weren't had by the right people.

I asked AI to list the possible cognitive biases in your post:

1. Attribute Substitution: The author substitutes the complex, abstract mechanism of probabilistic token generation and neural weights with the familiar, easier-to-understand concept of "looking things up" in a database. This leads to the incorrect conclusion that AI simply retrieves existing sentences rather than generating new ones based on patterns.
2. Anthropocentrism: The argument posits that because the AI’s processing method differs from biological human cognition ("like we do"), it therefore cannot be classified as "thinking" or "intelligence." This defines intelligence strictly as "biological human intelligence," dismissing the possibility of "artificial intelligence" as a distinct valid category.
3. The Reductionist Fallacy: By breaking the AI down to its basic training methods ("compare and combine"), the author dismisses the emergent capabilities (reasoning, coding, creative writing) that arise from the complexity of the system. This is akin to saying a human brain is "just neurons firing" and therefore has no thoughts.
4. The Illusion of Explanatory Depth: The author speaks with absolute confidence about the technical workflow of the AI ("found by searching," "looks at it"). However, the explanation is technically incorrect regarding how Generative Pre-trained Transformers (GPT) function (they do not "search" a database to generate text; they predict the next piece of data based on internal parameters). The author overestimates their understanding of the underlying mechanics.
5. Hindsight Bias: The author claims that any new idea an AI generates isn't actually "new" because the potential for that idea already existed in the data. This is a circular argument fueled by hindsight; virtually all human invention is the realization of concepts that "were already there" (physics, math, logic), but we still consider the discovery original. The author applies a retroactive filter to deny novelty.
6. Confirmation Bias: The opening phrase suggests the author has a pre-existing stance. The subsequent arguments frame the technology specifically to support this conclusion (e.g., describing it as a "search engine" rather than a "neural network" because the former supports the "no intelligence" theory better).
7. False Equivalence: The author compares the cognitive architecture of an AI model to the hardware of a phone or monitor. While an AI runs on a device, equating the software entity (which processes language and logic) with the inert hardware leads to a logical disconnect intended to diminish the AI's perceived capability.

Now if AI produced this text without thinking, does it mean you were overtly expressing these cognitive biases, or does it mean it's the AI who's biased to frame it this way?
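
As an aside, the "predict the next piece of data" description in point 4 is easy to illustrate. A minimal sketch of greedy next-token selection, with a made-up probability table standing in for a real model's billions of learned weights:

```python
# Toy next-token prediction. A real model computes a probability
# distribution from learned parameters; here a hard-coded table stands in.
probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "is": 0.1},
    ("cat", "sat"): {"on": 0.8, "down": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
}

def next_token(context):
    """Return the most likely continuation of the last two tokens."""
    dist = probs.get(tuple(context[-2:]), {})
    return max(dist, key=dist.get) if dist else None

tokens = ["the", "cat"]
while (tok := next_token(tokens)) is not None:
    tokens.append(tok)

print(" ".join(tokens))  # -> the cat sat on the
```

Nothing is searched or retrieved; the continuation is generated one token at a time from the (here, toy) parameters.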

Reply 48 of 63, by StriderTR

User metadata
Rank Oldbie
vvbee wrote on 2025-12-14, 19:45:

Now if AI produced this text without thinking, does it mean you were overtly expressing these cognitive biases, or does it mean it's the AI who's biased to frame it this way?

Nope. I'm most definitely biased, and fully cognizant of that fact. I'm also very guilty of "oversimplification". I'm just one old silly outdated human. 🤣

Though, I did enjoy reading the rebuttal and analysis. It reads exactly how I would expect it to respond. 😜

A while back, I asked ChatGPT whether it could create original work that was not based on an amalgamation of known data, completely on its own like a human. Basically, I was asking if it had an imagination. In a very long-winded way, it said no, without actually just saying "no". Then it pointed out that humans use known data as well when creating new things. To which I responded: yes, but it's not required. We can imagine things completely devoid of any specific data on that thing. It agreed, again in a long-winded, almost defensive, way. It was a funny conversation, just an exercise in curiosity and silliness.

DOS, Win9x, General "Retro" Enthusiast. Professional Tinkerer. Technology Hobbyist. Expert at Nothing! Build, Create, Repair, Repeat!
This Old Man's Builds, Projects, and Other Retro Goodness: https://theclassicgeek.blogspot.com/

Reply 49 of 63, by vvbee

User metadata
Rank Oldbie

"This follow-up response is fascinating because, while the author good-naturedly accepts the 'oversimplification' label, they double down on the core philosophical distinction. In doing so, they introduce a new set of cognitive biases, particularly regarding how human creativity works versus how AI creativity works. ... The author interprets the previous analysis—which dissected their logic—not as a valid critique, but as a confirmation of their worldview ... effectively insulating their argument from the content of the critique."

I don't think AI is wrong here in a general sense. You're worried critical thinking is going away due to AI use, but in this case incorporating AI is what would increase the amount of critical thinking in the discussion, and waving it off does the opposite. You might say it's not critical thinking but just generating outputs with prompts, but in this case you had the chance to use what AI said as a springboard to evaluate your own argument, which would've effected the critical thinking.

Reply 50 of 63, by StriderTR

User metadata
Rank Oldbie

This is going to be a long one; it's about to get deep. 🤣

I look at AI use in somewhat the same light as people who only know how to do basic mathematics using a calculator. The calculator, like AI, is a very powerful and useful tool. However, over time, as our skill with the calculator grows, the knowledge of how to do the actual work the calculator is doing for us begins to fade. If you take away the tool, or it stops working, the number of people who know how to do the work of that tool is significantly decreased. The problem isn't necessarily with the tool, it's how we as humans decide to use it and how much reliance we place on it. We like things that make our lives easier, and are often willing to give up certain skills and knowledge in the pursuit of that goal.

That being said, I love using tools that make my life easier just like anyone else, or tools that can simply do tasks better than I can. I currently use AI for image generation and to check my code. I will never be an artist, so in that sense it's doing something I lack the ability and knowledge to do. Could I learn it? Sure, but I simply don't have the time or desire. So, AI is making my life easier by doing something I cannot do in a reasonable amount of time without help.

I can code, but I'm no expert. I make mistakes, and at times, get stuck. Again, I have two choices: ask other humans to look it over and wait for a response, or let AI do it and get a rapid response. The difference is, I want to see what it changed, so I compare. This way, I learn from my mistakes. So AI is actually teaching me, but only because I choose to do it that way. I could just use the corrected code and call it done. I don't want to lose the skill; in fact, I want to improve it so I don't need to rely on a tool to do it for me.
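
To give a concrete picture of that "see what it changed" step, here's a minimal sketch using Python's standard difflib (the file names and code snippets are made up) that prints the changes between my original and the AI-corrected version, so each one can be studied:

```python
import difflib

# Hypothetical before/after snippets; in practice, read them from files.
original = """def area(r):
    return 3.14 * r * r
""".splitlines(keepends=True)

corrected = """import math

def area(r):
    return math.pi * r ** 2
""".splitlines(keepends=True)

# unified_diff marks removed lines with '-' and added lines with '+'.
for line in difflib.unified_diff(original, corrected,
                                 fromfile="mine.py", tofile="ai.py"):
    print(line, end="")
```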

My biggest concern is indeed the loss of critical thinking because we allow a tool to do that thinking for us. Critical thinking was in decline long before AI; I simply believe AI will further exacerbate an existing problem. If we ever lose that tool, it becomes far more difficult to do the tasks it was created to do, because the skills to do them diminish over time as we rely more and more on the tool. Most skills are perishable; we as humans need to use them, or we lose them.

At the end of the day, do I dislike AI tools? Not really. I just wish people would not place so much stock in them and would use this new, powerful tool with a bit more caution. Take the time to learn what it does, how it works, and understand the dangers and pitfalls that can come from it. Instead, we often decide to go blindly through life, oblivious to how or why things in the world work. My brain is simply incompatible with that mindset. I like to learn. Having a basic fundamental understanding of how things work allows me to work out problems on my own more than I need to rely on something else. When I do need to ask for help, I do all I can to learn from it. I simply think AI, being as powerful and versatile as it is, will make a pre-existing, very human problem worse over time.

We have so many machines to do work for us, but most people have no idea how the machine is doing that work at any level. That's just something I've never understood. Perhaps I'm just overly curious. My need to know how things work, even if only at a basic level, is a core of who I am. I guess that's why I like to tinker, create, and build so much. I learn as I go. I find it fun. I often wish more people looked at the world that way instead of always looking for the easy way or going through life with blinders on.

Last edited by StriderTR on 2025-12-16, 00:27. Edited 1 time in total.

DOS, Win9x, General "Retro" Enthusiast. Professional Tinkerer. Technology Hobbyist. Expert at Nothing! Build, Create, Repair, Repeat!
This Old Man's Builds, Projects, and Other Retro Goodness: https://theclassicgeek.blogspot.com/

Reply 51 of 63, by Joakim

User metadata
Rank Oldbie

As an engineer I find AI helpful for minor coding tasks. The other day I asked it to make advanced plots in Python. I can do it myself, but it took 25 seconds and I only needed to tell it where the data was. It's not like I would ask it to analyse the curves, though.
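
Something along these lines is all it had to produce (the file name and column names here are made up); trivial, but fiddly enough to type out that 25 seconds is a win:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical data file and columns; point these at the real data.
df = pd.read_csv("measurements.csv")

fig, ax = plt.subplots(figsize=(8, 5))
ax.plot(df["time"], df["voltage"], label="voltage")
ax.plot(df["time"], df["current"], label="current")
ax.set_xlabel("time (s)")
ax.set_ylabel("signal")
ax.legend()
ax.grid(True)
fig.savefig("plot.png", dpi=150)
```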

That developers incorporate AI into every damn piece of software will in the future make us all write the same, and it's just plain boring (suggestions like: this pirate joke might be offensive to amputees).

That some people ask AI for things like relationship advice is alarming as well.

Reply 52 of 63, by eddman

User metadata
Rank Oldbie
vvbee wrote on Yesterday, 09:46:

I don't think AI is wrong here in a general sense. You're worried critical thinking is going away due to AI use, but in this case incorporating AI is what would increase the amount of critical thinking in the discussion, and waving it off does the opposite. You might say it's not critical thinking but just generating outputs with prompts, but in this case you had the chance to use what AI said as a springboard to evaluate your own argument, which would've effected the critical thinking.

The majority of normal people don't. They take whatever an LLM spits out at face value, barely questioning it, if at all. In fact, they think it's more correct than merely reading simple search results. It's "AI" after all; intelligence is in the name, so how could it be wrong? It surely researched the subject and presented sound results. They have no idea it's simply based on whatever other people have posted on the internet, regardless of whether that was correct or not.

Over time, people's understanding could get better, but for that to happen the individuals and companies who have a vested interest in LLMs would need to start being honest about it.

Reply 53 of 63, by vvbee

User metadata
Rank Oldbie

If the majority of "normal people" use AI in this way then by the numbers the average AI user must be a normal person. That would mean you're saying as fact that the average AI user currently isn't aware that AI makes mistakes, which I don't buy and I don't think you do either. But then there are people who just don't want to think critically at all, they want to trust the first good looking result on Google etc. AI vetting the field for these people is in principle a positive thing. Then you have to ask whether it's good to require people who drag their feet about it to think critically anyway, when they'd rather not and the results are probably not good.

Reply 54 of 63, by StriderTR

User metadata
Rank Oldbie
vvbee wrote on Today, 06:42:

If the majority of "normal people" use AI in this way then by the numbers the average AI user must be a normal person. That would mean you're saying as fact that the average AI user currently isn't aware that AI makes mistakes, which I don't buy and I don't think you do either. But then there are people who just don't want to think critically at all, they want to trust the first good looking result on Google etc. AI vetting the field for these people is in principle a positive thing. Then you have to ask whether it's good to require people who drag their feet about it to think critically anyway, when they'd rather not and the results are probably not good.

Going by my own personal experience, a very small and limited sample size of co-workers and friends: the vast majority of them seem to take information received from AI (such as Google's AI search results) as factual and accurate, at face value. However, many also seem to assume what they read on social media and other sites is factual and accurate. Not AI's fault; so many people today just seem to have never heard the phrase "trust, but verify". This is probably why I say critical thinking is in such short supply these days. Back in the 90's, long before AI and social media, my friends and I used to call this the "Sheeple Effect".

A quote I like to use in conversations about AI is actually loosely based on a line from Star Trek: First Contact. I often say that "AI is an imperfect creation, because it was created by imperfect beings: us". 🤣

DOS, Win9x, General "Retro" Enthusiast. Professional Tinkerer. Technology Hobbyist. Expert at Nothing! Build, Create, Repair, Repeat!
This Old Man's Builds, Projects, and Other Retro Goodness: https://theclassicgeek.blogspot.com/

Reply 55 of 63, by sunkindly

User metadata
Rank Member

My own take: AI itself isn't the problem, it's the corporations who are ramming it down the throats of a society that's already been reeling from apathy and brainrot since the pandemic. People will be like "AI shouldn't be a substitute for human interaction" without even examining *why* people are turning to AI for therapy and friendship and even romance in the first place...well, humans have a lot of issues that existed before AI: they ghost, they judge, they harm, and more. We hardly emphasize empathy, media literacy, or even critical thinking that much in education anymore. There's also no longer any real societal incentive or reward for living honestly. So what did everyone think was gonna happen when given such a tool?

So, I don't fault or blame individuals for using AI to do their homework assignments or for getting deceived by a hallucination. I blame those in power who have the resources to solve a lot of societal problems (problems that have become exacerbated by the AI they've chosen to release to everyone), and yet the decisions they make instead are ones that are not just vehemently anti-consumer but anti-humanity.

SUN85: NEC PC-8801mkIIMR
SUN92: Northgate Elegance | 386DX-25 | Orchid Fahrenheit 1280 | SB 1.0
SUN97: QDI Titanium IE | Pentium MMX 200MHz | Tseng ET6000 | SB 16
SUN00: ABIT BF6 | Pentium III 1.1GHz | 3dfx Voodoo3 3000 | AU8830

Reply 56 of 63, by UCyborg

User metadata
Rank Oldbie

Well said.

Speaking of which, ChatGPT isn't even reliable for OCR. At work, we recently tried to get a string of about 1500 characters out of a PDF that had been run through the Microsoft Print to PDF "printer". The original PDF had copyable text; the converted one, while visually clear, was effectively turned into a picture.

There were a number of characters that weren't recognized properly.
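
For anyone wanting a local fallback, a minimal sketch with pdf2image and pytesseract (assuming poppler and the Tesseract binary are installed; results still depend on scan quality):

```python
from pdf2image import convert_from_path  # needs poppler installed
import pytesseract                       # needs the tesseract binary

# Render each PDF page to an image at a decent DPI, then OCR it.
pages = convert_from_path("document.pdf", dpi=300)
text = "\n".join(pytesseract.image_to_string(page) for page in pages)
print(text)
```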

Arthur Schopenhauer wrote:

A man can be himself only so long as he is alone; and if he does not love solitude, he will not love freedom; for it is only when he is alone that he is really free.

Reply 57 of 63, by vvbee

User metadata
Rank Oldbie
StriderTR wrote on Today, 07:32:
vvbee wrote on Today, 06:42:

If the majority of "normal people" use AI in this way then by the numbers the average AI user must be a normal person. That would mean you're saying as fact that the average AI user currently isn't aware that AI makes mistakes, which I don't buy and I don't think you do either. But then there are people who just don't want to think critically at all, they want to trust the first good looking result on Google etc. AI vetting the field for these people is in principle a positive thing. Then you have to ask whether it's good to require people who drag their feet about it to think critically anyway, when they'd rather not and the results are probably not good.

Going by my own personal experience, a very small and limited sample size of co-workers and friends: the vast majority of them seem to take information received from AI (such as Google's AI search results) as factual and accurate, at face value.

Do you mean that in the true sense, i.e. if AI gives them a piece of code that doesn't compile they'll contact the compiler authors, or if it gives them information from the wrong user's manual they'll insist they've been sold a mislabeled product? In any case this would require that AI's responses to them are always so good that they don't turn out to be inaccurate in practice, since these people would no longer have reason to think AI is infallible after a first incident. My impression is it's not that they consider AI infallible but rather that they find it good enough for their purpose, which goes back to the idea about AI enabling the desire to offload the mental labor that you don't want to be doing in the first place.

Reply 58 of 63, by eddman

User metadata
Rank Oldbie
vvbee wrote on Today, 06:42:

If the majority of "normal people" use AI in this way then by the numbers the average AI user must be a normal person. That would mean you're saying as fact that the average AI user currently isn't aware that AI makes mistakes,

Maybe it's not the "majority" (that was hyperbolic), but it's certainly "a lot". I've encountered tons who are not aware. I've been in discussions or debates where the other person simply went "this is what chatgpt told me" or "google gemini says this" to back their claim without even checking if any of that was correct, and in many cases the results had wrong or misleading information, or for subjective matters, biased.

vvbee wrote on Today, 06:42:

which I don't buy and I don't think you do either.

I'm not selling anything; just telling what I've seen.

vvbee wrote on Today, 06:42:

AI vetting the field for these people is in principle a positive thing.

It's not, by any stretch. LLMs have no way of checking the veracity of the content. The vetting should be done by the user, and as you too say, there are people who don't.

Reply 59 of 63, by vvbee

User metadata
Rank Oldbie
eddman wrote on Today, 12:36:
vvbee wrote on Today, 06:42:

AI vetting the field for these people is in principle a positive thing.

It's not by any stretch. LLMs have no way of checking the veracity of the content. The vetting should be done by the user, which as you too say, there are people who don't.

What do you mean they have no way of checking? How is not even one way available to AI? I'd assume you generally need three things: access to the content, knowledge of the topic, and the ability to reason fairly. I don't think AI's particularly missing any of these, least of all if the baseline is a person who hasn't much interest in verifying anything.