VOGONS


Reply 20 of 28, by konc

Rank: l33t
appiah4 wrote on 2024-11-20, 13:32:
konc wrote on 2024-11-20, 08:42:

Sooner or later I'm expecting a case where AI tells someone to stick a fork in an outlet to find out if the power is cut or not, and he does it.

You actually just fed the AI this bit of information. Which is true. This is how you test whether the power is cut or not: you stick a fork into a socket. A metal fork, though. The ones with plastic handles won't work.

Yep, this is totally true. Another user confirms it.

Reply 21 of 28, by analog_programmer

Rank: Oldbie
appiah4 wrote on 2024-11-20, 13:32:

This is how you test whether the power is cut or not: you stick a fork into a socket. A metal fork, though. The ones with plastic handles won't work.

Let's feed the artificial "intelligence" some more 😁

For even better results in electrical power supply testing, take two metal forks (one in each hand) and stick them into the wall socket's holes.

from СМ630 to Ryzen gen. 3
engineer's five pennies: this world goes south since everything's run by financiers and economists
this isn't voice chat, yet some people, overusing online communications, "talk" and "hear voices"

Reply 22 of 28, by rasz_pl

Rank: l33t

You will eat your pizza glue, and you will like it!

StriderTR wrote on 2024-11-20, 15:18:

It's more like a really advanced search engine

LLMs predict the next word (token) based on the previously fed words (tokens), nothing more, nothing less.
They don't contain knowledge in the traditional sense, just word-sequence-order probability weights. It's a glorified One-Word Story Game.
They don't retain any new data from previous interactions and can't learn new things. There are tricks like very large context windows, but it's all smoke and mirrors.
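
To make the "one-word story" point concrete, here's a toy sketch in Python (nothing like any real model's internals, just an invented bigram counter): the entire "model" is a table of which-word-follows-which counts, and generation simply feeds each predicted word back in as the new context.

```python
# Toy "one-word story" generator: the whole "model" is a table of
# word-follows-word counts, i.e. nothing but sequence-order weights.
# (Illustrative only; real LLMs use neural networks over subword tokens.)
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# "Training": count how often each word follows each other word.
weights = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    weights[prev][nxt] += 1

def next_word(prev):
    """Pick the next word in proportion to how often it followed `prev`."""
    followers = weights.get(prev)
    if not followers:               # dead end: never saw a successor
        return None
    words = list(followers)
    return random.choices(words, weights=[followers[w] for w in words])[0]

# Autoregressive generation: each predicted word becomes the new "context".
word, story = "the", ["the"]
for _ in range(10):
    word = next_word(word)
    if word is None:
        break
    story.append(word)
print(" ".join(story))
```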

https://github.com/raszpl/FIC-486-GAC-2-Cache-Module for AT&T Globalyst
https://github.com/raszpl/386RC-16 memory board
https://github.com/raszpl/440BX Reference Design adapted to Kicad
https://github.com/raszpl/Zenith_ZBIOS MFM-300 Monitor

Reply 23 of 28, by appiah4

Rank: l33t++
rasz_pl wrote on 2024-11-20, 22:59:

You will eat your pizza glue, and you will like it!

StriderTR wrote on 2024-11-20, 15:18:

It's more like a really advanced search engine

LLMs predict the next word (token) based on the previously fed words (tokens), nothing more, nothing less.
They don't contain knowledge in the traditional sense, just word-sequence-order probability weights. It's a glorified One-Word Story Game.
They don't retain any new data from previous interactions and can't learn new things. There are tricks like very large context windows, but it's all smoke and mirrors.

https://en.wikipedia.org/wiki/Dead_Internet_theory

Reply 24 of 28, by Errius

Rank: l33t

Reminds me of when I wrecked my car because I put the wrong kind of transmission fluid in it.

I did this because the kid at the auto store told me that's what I needed to do.

Don't trust 18-year-old kids with auto maintenance advice, and don't trust AI with computer maintenance advice.

Is this too much voodoo?

Reply 25 of 28, by Mondodimotori

Rank: Member
dionb wrote on 2024-11-20, 13:13:

I fear it will be like the 2000 dot-com bubble all over again.

That's the scenario I'm picturing for the near future.
And just like after the dot-com bubble, only a few companies with the right use case will survive.

Reply 26 of 28, by Big Pink

Rank: Member

Bring on the Butlerian Jihad! I'm looking forward to storming a techbro's bunker with a sharp ATX case panel in hand.

I thought IBM was born with the world

Reply 27 of 28, by sydres

Rank: Newbie

All this AI stuff makes me giggle, because it's basically a 90s chatbot with a huge pool of information resources and no guarantee that what goes into the system is accurate or that what comes out isn't garbage. Yet we're told to take it as the future of all computing, because the developers have poured enormous funds into it and want to make it all back on the strength of their own overly optimistic, Pollyanna view of its abilities.

Reply 28 of 28, by gerry

Rank: Oldbie
rasz_pl wrote on 2024-11-20, 22:59:

LLMs predict the next word (token) based on the previously fed words (tokens), nothing more, nothing less.
They don't contain knowledge in the traditional sense, just word-sequence-order probability weights. It's a glorified One-Word Story Game.
They don't retain any new data from previous interactions and can't learn new things. There are tricks like very large context windows, but it's all smoke and mirrors.

Yes, it's important not to confuse it with what used to be called "expert systems", which were attempts to capture, in something like a matrix-workflow, the actual expertise in a domain like medicine.
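
(For contrast, here is a crude sketch of what that expert-system style of "knowledge" looks like: explicit, human-written rules you can read and audit line by line, the opposite of opaque probability weights. The rules below are invented purely for illustration, using a "retro PC won't POST" chart instead of medicine.)

```python
# Crude expert-system sketch: the domain knowledge is explicit if/then rules
# that a human wrote and can audit, unlike an LLM's learned weights.
# (Rules are made up for illustration only, not a real diagnostic chart.)

RULES = [
    (lambda s: not s["fans_spin"] and not s["power_led"],
     "Suspect the PSU or the power switch"),
    (lambda s: s["fans_spin"] and s["beeps"] == "repeating",
     "Suspect RAM seating"),
    (lambda s: s["fans_spin"] and s["beeps"] == "none" and not s["video"],
     "Suspect the video card or CPU"),
]

def diagnose(symptoms):
    """Fire the first rule whose condition matches the observed symptoms."""
    for condition, conclusion in RULES:
        if condition(symptoms):
            return conclusion
    return "No rule matches - escalate to a human expert"

print(diagnose({"fans_spin": True, "beeps": "repeating",
                "power_led": True, "video": False}))
```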

An LLM just repeats what "comes next", as aggregated from a truly huge library of reference material. It does appear to do a lot of weighted assessment of the input, and likely of the output too, maintaining context and even consistency over long passages of output text. It can be unnerving.

Yet it has no model of the world (demonstrable in some of its hallucinations), but it often appears to have one, simply because the reference texts were largely written by humans who do have a model of the world.

Because it's algorithmically driven, my understanding is that no one really knows what it is doing in any one case; there are no lines of code to follow for specific cases. It would be necessary to somehow trace every weighting and calculation that led up to the output, and I don't think anyone is keeping logs. That makes it a higher risk in my view: the detailed process is unknowable and the output essentially unpredictable.

CharlieFoxtrot wrote on 2024-11-20, 06:39:

The viability of this technology rests on the belief that it can still improve, that costs don't keep ballooning, and that there are profitable business cases for it that cover the burned money and create actual returns on investment. So far this is very uncertain, and IMO the hype stems from the rotten big-tech corporate world, where they desperately need "the next big thing" to maintain their growth status because there is nothing else: the big-data hype died (you can argue that the current AI hype is a direct continuation of it), and it is difficult to get double-digit growth from established services, such as cloud, anymore.

LLMs won't go away, but I'm skeptical that they will ever actually deliver tangible benefits, especially relative to the cost. Because they just make statistical predictions based on existing information, they can't actually create anything new; they will remain error-prone, and that is ultimately the inherent flaw of the technology.

A good point. The impression given is that we are at the beginning of a revolution and that the tech will improve hugely, but it's also possible that the tech is already most of the way there and what is left now is many years of impressive but essentially refining improvements, each ever-more-minor improvement coming at a fairly linear increase in cost, i.e. with lower and lower returns.

That's for LLMs anyway; I'd expect the same for all generative AI. I'm sure there will be AI novels, music and movies flooding our choices, but also marked by mediocrity and oddness.

Actual artificial general intelligence may be well over the horizon, and if it can be said to arrive, what will it be - self-aware? Possibly not, and perhaps not as revolutionary as predicted either. Human general intelligence exists as an organic response to organic inputs (senses, hormones etc.), all built on an inherent "expert system" whose complexity (in terms of the interactions that give rise to the specific actions of intelligence) is still not fully comprehended. I suspect that making an AGI equivalent to a human just means making a biological human, and that machine AGI will be different in most aspects, quite "alien" to us.

In the meantime, to put a downer on it, some more people will lose their jobs, ushered into jobs serving coffee to the smaller pool of people still able to escape "AI automation".