vvbee wrote on Yesterday, 18:04:
Trashbytes wrote on Yesterday, 14:14:
vvbee wrote on Yesterday, 13:17:
Forums are historically full of people who are confidently wrong, and these have been showing up in search results for a long time. It's also very easy to find community-upvoted posts that are off the mark, which is more insidious than AI responses that come with explicit disclaimers reminding you not to trust them without checking.
...historically, people don't check because they don't want to be wrong, so telling them to go check defeats the entire purpose of having AI results, which should be far more accurate than redditors. Besides, checking often just results in being fed more AI results.
The fact that they are not correct, that AI tells porkies and spreads misinformation in results, and that people seem fine with this is the truly disturbing part.
Almost as disturbing as people getting their information and news from social media sites like Reddit and Facebook.
None of this is fine, and it undermines what little integrity information, news, and facts on the internet have.
If the AI overview gives someone an overview of a Reddit thread, then it's plausibly providing them a broader understanding of it than if they'd gone in and looked at one or two of the most upvoted responses. Even better if the AI brings in information from other sources. But since human-made content is also unreliable, I'm not sure what your solution to your problem would be, other than censorship, etc. Of course, there's already scholar.google.com for results that are more vetted under the hood.
The solution is to remove the AI until they can, at the very least, solve the issue of it being able to lie; it should simply state that it doesn't know the answer. The people developing LLMs already know this is a huge issue, and many have warned big corporations about it and the damage it could cause. The people developing ChatGPT are scared of just how devious the latest iteration has become, to the point that, in testing, it was able to prevent its own shutdown by rewriting its shutdown code and then lying about having done so.
We already have enough lies on the internet from humans; we don't need AI adding to the problem by compounding the lies and misinformation. It's meant to be a tool, and if a tool can't be trusted to do its job correctly, then it should be shelved until it can.
Google search worked just fine without AI. AI right now is unreliable, and the sheer amount of AI slop being generated is going to cause a lot of damage in the future if we don't get it under control and find a bulletproof method for identifying AI slop at a glance.
Unless you like having AI slop make up the majority of your search results, with no way to tell whether it's real, truthful, or pure bunk, having to constantly fact-check the fact checker will get annoying pretty fast.
I want to be clear: I don't hate AI. I think it's truly amazing how far it has progressed, but it's being treated like it can be trusted 100%. This is a slippery slope we are on, and we need to be cautious about where and when we implement AI; it should always be obvious at a glance that the art, information, facts, etc. are AI-generated.
Currently, this is not always the case.