Reply 100 of 107, by TechieDude
I despise "AI" with all its "features". Like, I'm thoroughly sick of it.
Computer hardware becoming unaffordable AGAIN due to their ridiculous demand (as if the previous shortage from the pandemic wasn't enough, fuck you Micron); energy, money and water being wasted on AI datacenters (don't underestimate this, all of that could be put to much better use); the UI of EVERYTHING littered with AI slop; Windows becoming a total shitshow (sure, it hasn't been great for a while since Win8, but c'mon); fools trusting whatever crap it spews out as gospel without checking; eyesore slop propped up as 'art'; even more invasive surveillance (anyone remember Snowden?); ultra-conservative propaganda shoved down our throats; obvious slop videos (accompanied by some fool saying 'not AI'... yeah, sure thing).
No-effort crap stories about cats firing rockets at houses, working construction to support orphans, jumping from airplanes to save babies???
And I didn't even address this:
TheMobRules wrote on 2026-02-11, 03:42:
I can think of a few things:
- The tech bros are trying to push this technology everywhere as a solution for every problem, regardless of its usefulness in each application, even in cases where it does more harm than good
- They are burning ungodly piles of money and planet resources to provide... what exactly? How has society benefited from this so far? I mean, besides the "AI artists" producing slop and the former Crypto & NFT grifters who have now become vibecoders
- Pretty much all investment money (in the US at least) has been diverted to AI, even when there's no clear path to profitability (in fact, all major AI companies lose billions per month, OpenAI even loses money on the $200 subscriptions!!)
- These companies are hoarding a large part of the semiconductors being produced, which will massively impact prices of not just computers but all other kinds of appliances, cars and devices... for ever smaller improvements in the capabilities of the models
- Regardless of the usefulness or potential applications of the technology, if its actual value has been grossly overrated (which seems very likely right now), then once major investors start pulling out it will nuke the economy
- On the other hand, if (and that's a really big IF) this technology actually delivers and fulfills its promise, huge numbers of people are going to lose their jobs and the economy will crash anyway
- I could keep adding things, such as the negative impact it has on the actual learning process, training data obtained via questionable (or outright illegal) means, and so on...
Now, the technology itself is really interesting, but it's not magic. It brute-forces natural language production by processing unfathomably large data sets, but once you start scratching the surface the cracks appear. You'll probably notice that the more you know about a subject, the less precise the LLM is on that very subject. This "averaging out" of the knowledge is by design, and that's where the technology hits a wall. IMHO, rather than approaching human intelligence, the success of LLMs has proved that our language is not nearly as complex as one might think, and that the actual processes by which we probe the world to acquire "new" knowledge run way deeper.
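You can see that "averaging out" even in a toy model. Here's a minimal sketch of my own (the corpus and the disagreeing "facts" in it are made up for illustration; real LLMs are vastly more sophisticated, but the statistical principle is the same): if you predict the next token as the most frequent continuation seen in training data, the majority phrasing wins even when a minority source is the more precise one.

```python
from collections import Counter

# Toy "training corpus": sources disagree on a niche fact;
# the common-but-wrong phrasing outnumbers the precise one.
# (Invented example data, purely for illustration.)
corpus = [
    "the chip launched in 1989",  # precise niche source
    "the chip launched in 1990",  # popular but imprecise
    "the chip launched in 1990",  # popular but imprecise
]

def next_token(prompt, corpus):
    """Predict the next token as the most frequent continuation
    of the prompt across the corpus (a crude LLM stand-in)."""
    counts = Counter()
    for line in corpus:
        if line.startswith(prompt):
            continuation = line[len(prompt):].split()
            if continuation:
                counts[continuation[0]] += 1
    return counts.most_common(1)[0][0] if counts else None

print(next_token("the chip launched in ", corpus))  # → 1990
```

The model doesn't "know" anything; it emits the statistically average continuation, which is exactly why it gets less reliable the deeper your own expertise goes.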
I also agree with you that having all this information at our fingertips is amazing, but we kinda had that before; it was called the Internet. The main blockers that prevented anyone from accessing all of it were either legal/rights issues (which the AI companies have conveniently ignored when gathering their training data) or the limited capabilities of search engines (Google Search was just fine until they started enshittifying it to push more ads).
What I have little patience or interest in is the "agentic" stuff being pushed alongside LLMs; I consider it pure corporate Silicon Valley bullshit/grifting and refuse to interact with any of it.
Pretty much. Could it be used for something better? Probably; ultimately, it's a tool.