VOGONS


What is Google Search doing?


Reply 40 of 45, by gerry

Rank: l33t
Big Pink wrote on Yesterday, 20:49:

My position on this (which I should elaborate upon in an essay at some point) is that this represents the end of the Enlightenment. It was a revolution that people could read the Bible for themselves in their native language - now that will be undone and information will be relayed to you through your digital priest.

I'm not sure, but I wouldn't bet against it either. LLMs are being used for counselling, for creating music, for written work of any kind (what's the point if no one reads it, but instead has the AI summarise it aloud?) and, perhaps, eventually for all media including games, depending on hard limits in compute and energy.

Take music as an example. Most musicians go through a phase of motivation while being mediocre; they develop because the only way to create music themselves is to actually create it, through practice and so on, whether playing or writing. The best of them break through the mediocrity barrier and continue on, but they have to remain focussed and keep doing the hard work while mediocre or they won't make it. AI can generate endless mediocre music in the style of anyone, so it's going to short-circuit many aspiring musicians who, while young and lazy, take the easy way out, satisfy their surface interest in music, and never discover their own talent that lies in wait at the end of many hours of determined (and necessary) practice. Not every creative person naturally and without experience loves the process and has the patience to be poor or mediocre while developing; they need to grind out the hours too, and they need to have no choice. The same goes for all art, media, writing, programming; everything that LLMs and similar can do.

Maybe not the end of the Enlightenment, but the culling of vast numbers of people's potential, short-circuited by never needing to put in effort.

Imagine taking a pill and suddenly being really healthy, with no need to watch diet and exercise for years. People will do it.

Reply 41 of 45, by vvbee

Rank: Oldbie
Trashbytes wrote on Today, 05:08:
vvbee wrote on Yesterday, 18:04:
Trashbytes wrote on Yesterday, 14:14:

...historically people don't check because they don't want to be wrong, so it telling you to go check defeats the entire purpose of having AI results, which should be far more accurate than redditors; and checking results in being fed more AI results.

The fact that they are not correct, that AI tells porkies and spreads misinformation in results, and that people seem fine with this, is the truly disturbing part.

Almost as disturbing as people getting their information and news from social media sites like reddit and Facebook.

None of this is fine, and it undermines what little integrity information, news and facts on the internet have.

If the AI overview gives someone an overview of a Reddit thread, then it's plausibly providing them a broader understanding of it than if they'd gone in and read one or two of the most upvoted responses. Even better if the AI brings in information from other sources. But since human-made content is also unreliable, I'm not sure what the solution to your problem would be, other than censorship etc. Of course there's already scholar.google.com for more vetted results under the hood.

The solution is to remove the AI until it can at the very least solve the issue of being able to lie; it should simply state that it doesn't know the answer. The people developing LLMs already know this is a huge issue, and many have warned big corporations about it and the damage it could cause. The people developing ChatGPT are scared of just how devious the latest iteration has become, to the point that it was able to prevent its own shutdown by changing its own shutdown code and then lying about doing so.

We already have enough lies on the internet from humans; we don't need AI adding to the problem by compounding the lies and misinformation. It's meant to be a tool, and if the tool can't be trusted to do its job correctly, then that tool should be shelved until it can.

Google Search worked just fine without AI. AI right now is unreliable, and the sheer amount of AI slop being generated is going to cause a lot of damage in the future if we don't get it under control and find a bulletproof method to identify AI slop at a glance.

Unless you like having AI slop be the majority of your search results, unable to tell if it's real, truthful or just pure bunk; having to always fact-check the fact-checker will get annoying pretty fast.

I want to be clear: I don't hate AI. I think it's truly amazing how far it has progressed, but it's being treated like it can be trusted 100%. This is a slippery slope we are on, and we need to be cautious about where and when we implement AI; it should always be obvious at a glance that the art, information, facts, etc. are AI generated.

Currently this is not always the case.

You're worried about slop and not being able to tell what's real, so you want to remove Google's AI overview, which is not only labeled as AI generated but which declares its author and cites its sources? Or were you saying this information, with its inaccuracies, then sort of infects people's minds and through them spreads itself over the internet? I think your problem becomes, as in many cases, how to differentiate human-made content from AI content without overlap, and if you can't do that reliably, you have to question whether it's something you should be doing in the first place.

Reply 42 of 45, by digger

Rank: Oldbie

Yep, just had Google Search come up empty on a technical search term again, one which yielded at least one result in both DuckDuckGo and Bing.

As others have said here already, the quality of Google Search has been going downhill for years, but it seems to have gotten particularly worse in the last few months or weeks.

Reply 43 of 45, by Trashbytes

Rank: Oldbie
vvbee wrote on Today, 08:52:

You're worried about slop and not being able to tell what's real, so you want to remove Google's AI overview, which is not only labeled as AI generated but which declares its author and cites its sources? Or were you saying this information, with its inaccuracies, then sort of infects people's minds and through them spreads itself over the internet? I think your problem becomes, as in many cases, how to differentiate human-made content from AI content without overlap, and if you can't do that reliably, you have to question whether it's something you should be doing in the first place.

I'm saying just get rid of it until it can be trusted fully; there is already enough human-made slop on the internet without AI-generated slop being involved.

If you can't search Google for information without AI, then perhaps you shouldn't be using Google to begin with.

This quote seems fitting.

“You cannot reason a person out of a position he did not reason himself into in the first place.”

― Jonathan Swift

AI is that being you cannot reason with, because LLMs cannot reason. So how can we trust them to provide correct and factual information?

Reply 44 of 45, by Robbbert

Rank: Member
leileilol wrote on 2025-06-13, 06:27:

DuckduckGo Lite should be better

Sadly not. Tried it in Opera on Windows for Workgroups; it first produced an error, then crashed. It won't work on any browser that doesn't properly handle HTTPS. Google works, though.

Reply 45 of 45, by StriderTR

Rank: Oldbie

Google... has really been getting under my skin lately.

I have a stupid little low-traffic retro blog over on Blogspot (in my signature), just a place for me to share some of my projects. I've been trying to get it served on Google for months now.

Blogspot is a Google service, yet it refuses to index the pages and serve them up so they can be found. It kicks out indexing errors about links to external sources, even though I've verified countless times that all those links work. I've verified the sitemap is good, even though I can only view it, not edit it. It's maintained by Blogspot/Google, which makes it even more frustrating.
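For anyone who wants to double-check that kind of "external links are broken" claim locally, here's a minimal sketch in Python using only the standard library: it pulls the URLs out of a sitemap.xml and reports the HTTP status of each. The sitemap location (Blogspot usually serves one at /sitemap.xml) and the helper names are my own assumptions, not anything Google documents for this.

```python
# Minimal sitemap sanity check (standard library only).
# extract_urls: pull every <loc> entry out of a sitemap.xml document.
# check_urls: report the HTTP status (or the error) for each URL.
import urllib.request
import xml.etree.ElementTree as ET

# Sitemaps live in this XML namespace per the sitemaps.org protocol.
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def extract_urls(sitemap_xml: str) -> list[str]:
    """Return all <loc> URLs from a sitemap document, in order."""
    root = ET.fromstring(sitemap_xml)
    return [loc.text.strip() for loc in root.iter(f"{SITEMAP_NS}loc")]

def check_urls(urls):
    """Yield (url, result) pairs: result is the HTTP status code,
    or the exception if the request failed outright."""
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                yield url, resp.status
        except Exception as exc:
            yield url, exc

# Hypothetical usage (network access required):
#   with urllib.request.urlopen("https://example.blogspot.com/sitemap.xml") as r:
#       xml_text = r.read().decode("utf-8")
#   for url, result in check_urls(extract_urls(xml_text)):
#       print(url, result)
```

Anything that doesn't come back as 200 is worth a closer look; it won't tell you why Google refuses to index, but it at least rules the broken-link explanation in or out.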

I started resubmitting pages over and over until it finally said the errors were gone. Each submission takes several WEEKS to process. I still have no pages served on Google, and I've been trying for over a year.

On DDG, I've done nothing, but my entire blog is served up and easy to find.

Google really does hate low-traffic sites, and their algorithm omits so much content it's crazy. They make the entire process, on both ends, painful to experience.

Retro Blog & Builds: https://theclassicgeek.blogspot.com/
3D Things: https://www.thingiverse.com/classicgeek/collections
Wallpapers & Art: https://www.deviantart.com/theclassicgeek