VOGONS


Reply 80 of 89, by twiz11

Rank: Oldbie
lti wrote on 2025-12-29, 19:33:
vvbee wrote on 2025-12-26, 18:11:

AI to be considerably more human than the human was

Is that what the song was about? I could never understand what Rob Zombie was saying.

I think that's the best answer I can give to this trolling.

Yeah, Commander Data is more human than Picard. I figured he was made for the series to bridge the gap between man and machine, for a day when we wouldn't discriminate against AI.

Reply 81 of 89, by vvbee

Rank: Oldbie
gerry wrote on 2025-12-29, 10:55:
vvbee wrote on 2025-12-26, 18:11:

You misread me; I said 'to the extent it can'. But you're not making the case that AI is converting innocent people to dark ways, rather that people are expressing their dark ways with AI, and your expectation is that it's the AI that should resist.

This might be akin to expectations people have of fellow citizens, institutions, regulations. We have expectations of those to resist, ameliorate or otherwise blunt human-initiated dark ways. If there is evidence AI isn't doing that then, much as with regulation on other things, the oft-suggested approach is to create suitable regulation and enforce it.

You're responding to only one half of the idea as if that was the whole idea, and now we're not talking about the same thing. If you were to quote only the other half, then it would look like this:

gerry wrote on 2025-12-29, 10:55:
vvbee wrote on 2025-12-26, 18:11:

As it happens, in the last study I saw, which put GPT-4.5 with special prompting through the Turing test, the participants found the AI to be considerably more human than the human was. So AI is now in a position to do what you wanted it to do: teach people how to behave better.

This might be akin to expectations people have of fellow citizens, institutions, regulations. We have expectations of those to resist, ameliorate or otherwise blunt human-initiated dark ways.

In this half of the quoting, we're saying that AI not only could teach humans better human interaction but that humans have a duty to facilitate this. Makes sense, of course. But first you'd have to admit yourself as AI's subject, and there's the bottleneck.

It comes back to what was talked about on previous pages. Some were worried about a loss of critical thinking, but then it was shown that AI was just as capable of increasing critical thinking in those willing to engage. The willingness to engage was the first limiting factor.

Reply 82 of 89, by gerry

Rank: l33t
vvbee wrote on 2025-12-30, 00:42:

You're responding to only one half of the idea as if that was the whole idea, and now we're not talking about the same thing. If you were to quote only the other half, then it would look like this:

When someone says “You’re responding to only one half of the idea as if that was the whole idea”, they’re not describing an objective fact. They’re interpreting the other person’s behaviour and assigning a motive or assumption to it.

Let's move on from the meta conversation.

It comes back to what was talked about on previous pages. Some were worried about a loss of critical thinking, but then it was shown that AI was just as capable of increasing critical thinking in those willing to engage. The willingness to engage was the first limiting factor.

Some are indeed worried about a possible loss of critical thinking:
https://www.mdpi.com/2075-4698/15/1/6
https://news.harvard.edu/gazette/story/2025/1 … ling-our-minds/
https://www.bbc.co.uk/news/articles/cd6xz12j6pzo

And with confidence we can state that AI is a tool that may help the development of critical thinking in those willing to engage.

Thus willingness to engage is a limiting factor.

From there, we can look at what 'willingness to engage' actually means in practice, and what kinds of outcomes are plausible once we factor in biology, historical patterns, cultural context and other factors. There's no clear optimistic or pessimistic conclusion; whatever actually happens will almost certainly be more nuanced and complex than any simple summary can capture.

Reply 83 of 89, by vvbee

Rank: Oldbie
gerry wrote on 2025-12-31, 09:37:
vvbee wrote on 2025-12-30, 00:42:

You're responding to only one half of the idea as if that was the whole idea, and now we're not talking about the same thing. If you were to quote only the other half, then it would look like this:

When someone says “You’re responding to only one half of the idea as if that was the whole idea”, they’re not describing an objective fact. They’re interpreting the other person’s behaviour and assigning a motive or assumption to it. Let's move on from the meta conversation.

In this case it's an implication of arguing a straw man. If you were looking to formulate a genuine counterargument, then this would be a showstopper rather than a "moving on" moment.

You didn't offer your views on the links you posted, so they may or may not really agree with you. They're controversial (https://julkaisufoorumi.fi/en/news/grey-area- … ournals-level-0). If you're not making an argument, then it doesn't matter, of course.

I eyeballed the first study. The results show people with lower attainment levels relying on AI more, and higher AI use corresponding with less personal effort to understand topics deeply. The suggestion is that less effort spent trying to understand deeply may hamper the development of critical thinking abilities, hence AI use may lead to a reduction in critical thinking ability. But this link isn't convincing; you could equally say that lower critical thinking ability inhibits deeper understanding, hence the results. I'd point out that since, according to this study, the "less attained" are adopting AI at rates proportional to their lack of ability, AI is in a position to potentially uplift the whole population, and poor regulation could squander this. Calling for regulation based on one-sided takes wouldn't be the way to go, then.

Reply 84 of 89, by keenmaster486

Rank: l33t

If you’re too dumb to think, then you shouldn’t be thought for by a computer.

World's foremost 486 enjoyer.

Reply 85 of 89, by Shreddoc

Rank: Oldbie

I don't mean this to excuse any possible AI shortcoming. But certainly, humans have such a propensity to demonise new technology (and the people who use it). That pattern repeats over and over; we just can't help ourselves. Reach a certain age or maturity and the inevitable Scary New Thing Is Bad attitude creeps in. It's a perfectly understandable reaction.

Supporter of PicoGUS, PicoMEM, mt32-pi, WavetablePi, Throttle Blaster, Voltage Blaster, GBS-Control, GP2040-CE, RetroNAS.

Reply 86 of 89, by TheMobRules

Rank: Oldbie
Shreddoc wrote on 2026-01-01, 21:24:

I don't mean this to excuse any possible AI shortcoming. But certainly, humans have such a propensity to demonise new technology (and the people who use it). That pattern repeats over and over; we just can't help ourselves. Reach a certain age or maturity and the inevitable Scary New Thing Is Bad attitude creeps in. It's a perfectly understandable reaction.

Personally, I don't find it scary at all; I even used RNNs for the final project of my degree (this was years before the now-famous "attention" mechanism of LLMs was introduced, but still). I just think it's stupid to keep burning trillions of dollars (as well as the planet's natural resources) on something that seems to benefit only a few billionaires and the "founders" (a.k.a. grifters/cultists) who chase every dumb trend like crypto, NFTs and so on.

None of these AI companies are actually making a profit (in fact, they're losing billions upon billions with no path to profitability in sight), their goals are a constantly moving target (a while back they claimed, without any scientific evidence, that AGI would magically "emerge"; now they don't mention AGI anymore but are scaremongering that it will replace all white-collar work in the next ${random} months), and you have waves of LinkedIn idiots claiming their productivity increased by ridiculous amounts (10x, 100x, 1000x!) but never providing any tangible proof.

So, this stuff is not scary; it's a dumb way of burning money that could be invested in actually useful stuff (try to get any investors interested in something that doesn't mention "AI" and see how it goes).

Reply 87 of 89, by sunkindly

Rank: Member

I don't find it *scary*, but I don't think it's just another new technology or fad that people are overreacting to. I don't recall anything else being forced down our throats in such a rapid manner, not to mention the backend costs to maintain it. It's one thing for there to be some new smart device you can choose to use or not, but here, even if you choose not to interact with AI in any way, you're still going to run into its economic and societal impacts on your personal life (like the rising costs of things related to AI: electricity, RAM, water, etc.).

SUN85: NEC PC-8801mkIIMR
SUN92: Northgate Elegance | 386DX-25 | Orchid Fahrenheit 1280 | SB 1.0
SUN97: QDI Titanium IE | Pentium MMX 200MHz | Tseng ET6000 | SB 16
SUN00: ABIT BF6 | Pentium III 1.1GHz | 3dfx Voodoo3 3000 | AU8830

Reply 88 of 89, by Ozzuneoj

Rank: l33t
Shreddoc wrote on 2026-01-01, 21:24:

I don't mean this to excuse any possible AI shortcoming. But certainly, humans have such a propensity to demonise new technology (and the people who use it). That pattern repeats over and over; we just can't help ourselves. Reach a certain age or maturity and the inevitable Scary New Thing Is Bad attitude creeps in. It's a perfectly understandable reaction.

Very true. Still, oftentimes the fears people have about such technologies not only end up becoming a reality, they become so normal that we look back on those fears as ridiculous, no matter how many new problems we deal with or how many studies find younger generations struggling with massive problems that did not exist for society before said technology. People want so badly all the addicting distractions they are used to and all the things that make life "easier" that they will ignore all the harm and risks that often come with them (for themselves or for other people).

We joke about how "stupid" people were for thinking the automobile was a bad thing, but no one can deny the tens of millions of deaths and likely billions of injuries that have occurred in vehicle-related accidents... and that doesn't even take into account whatever anyone feels the environmental impact has been, all the death and destruction brought about in wars over oil, etc.

People probably hated on television when it came out, and now we've gotten to see what society turns into after multiple generations in various countries and cultures growing up with propaganda, advertising and other forms of manipulation pumped directly into their eye sockets from the moment they are old enough to hold their head up to the time they die in a nursing home with the TV on.

People demonized the internet, and now we deal with even more of the carefully designed manipulative content (sorry, advertising...) that television introduced. Along with it came widespread identity theft sucking away the savings of innocent people (many of whom were probably old enough to say that the internet was/wasn't a bad thing in its infancy, 🤣), a horrific amount of predatory behavior directed at children and other vulnerable people, and a level of surveillance and loss of privacy that just a couple of decades ago would have been unfathomable (and is often self-inflicted, by people putting everything out there for others to see, whether through social media or through questionable "smart" devices/cameras they install in their homes). It is literally used every day by militaries to kill people with the push of a button, as if someone were playing a video game, and social media has completely altered the minds and development of generations of people starting at a very young age... and let's not forget the countless hours people waste staring at their various internet-black-hole-access-points in a given day. I'm sure all of that time could have been used for bad or for good elsewhere in the world... but I guess we'll never know.

But hey, at least we can drive our comfy cars to our therapy appointments and have something to stare at while we're in the waiting room so we don't have to talk to other humans in person.

Just because we have to take the bad with the good doesn't mean the people who called out the bad ahead of time were wrong.

(Just to be clear, I'm not trying to take a stand against all of these inventions... and I have no intention of arguing whether they are good or bad... I like my car and I enjoy computers as much as anyone here. This was just some food for thought. 😁 )

Now for some blitting from the back buffer.

Reply 89 of 89, by gerry

Rank: l33t
vvbee wrote on 2025-12-31, 15:28:
gerry wrote on 2025-12-31, 09:37:
vvbee wrote on 2025-12-30, 00:42:

You're responding to only one half of the idea as if that was the whole idea, and now we're not talking about the same thing. If you were to quote only the other half, then it would look like this:

When someone says “You’re responding to only one half of the idea as if that was the whole idea”, they’re not describing an objective fact. They’re interpreting the other person’s behaviour and assigning a motive or assumption to it. Let's move on from the meta conversation.

In this case it's an implication of arguing a straw man. If you were looking to formulate a genuine counterargument, then this would be a showstopper rather than a "moving on" moment.

You didn't offer your views on the links you posted, so they may or may not really agree with you. They're controversial (https://julkaisufoorumi.fi/en/news/grey-area- … ournals-level-0). If you're not making an argument, then it doesn't matter, of course.

I eyeballed the first study. The results show people with lower attainment levels relying on AI more, and higher AI use corresponding with less personal effort to understand topics deeply. The suggestion is that less effort spent trying to understand deeply may hamper the development of critical thinking abilities, hence AI use may lead to a reduction in critical thinking ability. But this link isn't convincing; you could equally say that lower critical thinking ability inhibits deeper understanding, hence the results. I'd point out that since, according to this study, the "less attained" are adopting AI at rates proportional to their lack of ability, AI is in a position to potentially uplift the whole population, and poor regulation could squander this. Calling for regulation based on one-sided takes wouldn't be the way to go, then.

The earlier reply wasn't a counterargument, hence not requiring (nor benefiting from) a straw man; your full point remains. I posted links in support of an earlier assertion that 'some were worried', and indeed some are. My own opinion is that I am unconvinced by the links. I think a more complex outcome is likely, in which both positive and negative scenarios emerge in different contexts. I have ideas on what those contexts are, but without independent evidence those ideas are conjecture. Still, I don't think there is yet a conclusive position anyone can hold on whether AI is positive or negative for overall human capacity for critical thinking, assuming such a thing can even be defined.

Shreddoc wrote on 2026-01-01, 21:24:

I don't mean this to excuse any possible AI shortcoming. But certainly, humans have such a propensity to demonise new technology (and the people who use it). That pattern repeats over and over; we just can't help ourselves. Reach a certain age or maturity and the inevitable Scary New Thing Is Bad attitude creeps in. It's a perfectly understandable reaction.

This is something I agree with. Similar concerns existed during the Industrial Revolution and, on a smaller scale, even with electronic calculators. There is truth in the notion that some negative consequences will occur; there is also truth in the notion that some positive consequences will occur. In each case, evaluating the consequences of a new technology involves subtleties of context that make the application of some historically similar scenario difficult at best.

Into the unknown we go, again!

TheMobRules wrote on 2026-01-02, 01:52:

Personally, I don't find it scary at all; I even used RNNs for the final project of my degree (this was years before the now-famous "attention" mechanism of LLMs was introduced, but still). I just think it's stupid to keep burning trillions of dollars (as well as the planet's natural resources) on something that seems to benefit only a few billionaires and the "founders" (a.k.a. grifters/cultists) who chase every dumb trend like crypto, NFTs and so on.

None of these AI companies are actually making a profit (in fact, they're losing billions upon billions with no path to profitability in sight), their goals are a constantly moving target (a while back they claimed, without any scientific evidence, that AGI would magically "emerge"; now they don't mention AGI anymore but are scaremongering that it will replace all white-collar work in the next ${random} months), and you have waves of LinkedIn idiots claiming their productivity increased by ridiculous amounts (10x, 100x, 1000x!) but never providing any tangible proof.

So, this stuff is not scary; it's a dumb way of burning money that could be invested in actually useful stuff (try to get any investors interested in something that doesn't mention "AI" and see how it goes).

I wonder if the goals of the AI companies are, partially, 1) to pump up stock prices and 2) to become dominant/pervasive, and then 2a) start charging more once we're all dependent and 2b) erect monopolistic barriers to entry.

Amazon didn't make money for years, and then it did.

It's also interesting in that AI is useful because of its ability to synthesise the combined work of millions (books, online material, etc. as training data), which exposes that what you pay for is the transformation of that data, not the data itself (you can still go and look at the 'raw data' yourself). This sort of invalidates the idea that 'they' are making money on the backs of others; it's the transformation and presentation you pay for. Somehow that isn't satisfying as an answer, though. There's a sense that AI will cause shifts such that we can't realistically access the 'raw data' without being at a huge disadvantage. It's like saying we don't need to buy a car or a train/bus/taxi service because you're free to walk: we all know that the world is not set up to make that a realistic choice most of the time. Maybe that's not the right analogy, but hopefully you get the point 😀