VOGONS


People underestimate hardware they have and it's annoying


Reply 81 of 97, by TrashPanda

User metadata
Rank l33t
Kahenraz wrote on 2022-03-29, 04:17:

Core 2 still feels plenty fast to me.

It is for basic netbook purposes and light 720p gaming. The Core 2 Quads that support 8GB of DDR3 memory fare better, but neither is modern.

Not having the modern SSE instructions or AVX puts them at a huge disadvantage for modern software and HD gaming.

Having the memory controller on the northbridge is another performance hit. Lack of IPC and branch prediction enhancements also lets them down with modern software. I would also include lack of hyper threading here but it’s not an essential thing.

Reply 82 of 97, by brian105

User metadata
Rank Member
songo wrote on 2022-03-29, 04:07:

Guys, plz explain why my old Core 2 Duo / 3 GB RAM PC shouldn't be considered a modern rig? It does everything fine except for post-2013 gaming. Hell, you can even install Win11 on it!

Is there a SINGLE aspect of it that isn't MODERN?

Because it's not even modern? Open Chrome with that 3GB of RAM and open more than 3 tabs. I dare you. Better hope your drive is fast, because you're gonna be ass-deep in the page file.

Presario 5284: K6-2+ 550 ACZ @ 600 2v, 256MB PC133, GeForce4 MX 440SE 64MB, MVP3, Maxtor SATA/150 PCI card, 16GB Sandisk U100 SATA SSD
2007 Desktop: Athlon 64 X2 6000+, Asus M2v-MX SE, Foxconn 7950GT 512mb, 4GB DDR2 800, Audigy 2 ZS, WinME/XP

Reply 83 of 97, by darry

User metadata
Rank l33t++

IMHO, putting aside playing newer CPU-intensive games, the current "reasonable" usability threshold is around a Core 2 Quad with 8GB of RAM (SSD also recommended) for mostly browser-centric use with some productivity stuff thrown in (word processing, taxes/personal accounting, maybe light image editing, etc.).

4GB of RAM might cut it for some use cases, but as most if not all Core 2 Quad based systems (except maybe some laptops, though I'm not even sure quads from that gen ever made it into laptops) can easily and cheaply be brought up to 8GB of RAM, that doesn't even need to be a consideration, IMHO.

As for using a Core 2 Duo, that is getting close to running Windows 95 on a lower-end 386 in terms of relative experience, IMHO. The highest-clocked Duos have some life left in them, but a 1.8GHz E4300 would be painful, IMHO. An E4300 is in the same per-core performance ballpark as an Atom x5-Z8350, except with half the cores. Having endured a Windows 10 tablet based on that Atom, I would not recommend anything slower.

Oh, and unless the video card used in a Core 2 Quad system allows a high degree of decode offloading on a given streaming service, the experience will not be pleasant. Case in point: I have friends with either a Q8200 or Q8300* (2.33GHz or 2.5GHz) and a Radeon 6450 or 6570* (the case limits options to half-height cards), and their favorite local streaming service (Tou.tv) loads all cores to 75% or more. Disabling AVG real-time protection allows 1080p playback to work; otherwise only 540p (the lowest setting) does not skip.

* I don't remember the exact models, as I was selling and giving away some similar hardware at the time I put that one together, but I could check if anyone wants precise info.

Also, these folks are going to be getting an i5 3470 + 8GB RAM setup soon, which should make things more pleasant.

Reply 84 of 97, by chrismeyer6

User metadata
Rank l33t

The system I retired that my wife was using was a Core 2 Duo E8600 with 8 gigs of RAM. While it served her well, some of her software, namely the Cricut design studio software, was slow and complained about missing CPU instruction sets. So I built her the X58-based Xeon X5675 system I talked about on the previous page. It was such a massive uplift in performance for her it wasn't even funny. She even helped me build it and she had a blast. I'm most likely going to repurpose the Core 2 system for my 7-year-old so he can have something more modern than his Socket A system.

Reply 85 of 97, by TrashPanda

User metadata
Rank l33t
chrismeyer6 wrote on 2022-03-29, 12:24:

The system I retired that my wife was using was a Core 2 Duo E8600 with 8 gigs of RAM. While it served her well, some of her software, namely the Cricut design studio software, was slow and complained about missing CPU instruction sets. So I built her the X58-based Xeon X5675 system I talked about on the previous page. It was such a massive uplift in performance for her it wasn't even funny. She even helped me build it and she had a blast. I'm most likely going to repurpose the Core 2 system for my 7-year-old so he can have something more modern than his Socket A system.

Indeed, even if it still feels rather fast, the missing SSE instructions and AVX quickly make using a Core 2 Duo or Quad a rather unpleasant experience. However, if you can get away with software that doesn't need those instructions, then perhaps you could use it for a few more years. But I can't imagine Chrome being all that great even with 8GB of RAM; any more than 5 rich-content tabs and you'll be hitting page file city hard, so an SSD is a great idea to buy a little more time.

That said, on my current rig I have 150 tabs open across three Chrome windows and it's eating 12GB of RAM, so perhaps you could get away with more than 5 tabs. But I have noticed that Chrome uses a fair bit of GPU power to render rich-content pages when they are active, which would also be problematic for a Core 2 Duo system that is already heavily loaded by Chrome.

Chrome isn't the greatest browser even on modern systems. Is there a browser that would work better for legacy systems, one that doesn't need special instructions and still offers the flexibility that Chrome has?

Reply 86 of 97, by chrismeyer6

User metadata
Rank l33t

At this point I've more than got my money's worth out of the two Core 2 systems I retired, and we have less than 400 dollars invested in the two Xeon systems that replaced them. As for the browser question, I have no idea; between the lack of optimized code in the browsers themselves and the rolling dumpster fire that is the modern web, it's honestly a crapshoot which browser would be ideal to use.

Reply 87 of 97, by Kahenraz

User metadata
Rank l33t

A lot of the examples I see talk about web browsers and the internet, which have always kept up with technology. But if you were to set this use case aside, most older computers are still perfectly fine and usable.

Sure, Netscape Communicator can run on a Mac Classic. But it is a terrible experience and is not representative of what else the machine can do. Sure, running a bunch of tabs in Chrome would probably kill your Core 2, and it will struggle to render streaming video, but it's capable of being more than just an internet machine.

To be honest, I actually browse the internet more often now on my phone than sitting down at a computer, because it's more comfortable to lounge on a couch and browse and reply in short bursts when I have some downtime. If anything, I have more incentive to use the internet somewhere other than a computer, unless I'm working at one.

Last edited by Kahenraz on 2022-03-29, 14:37. Edited 1 time in total.

Reply 88 of 97, by TrashPanda

User metadata
Rank l33t
chrismeyer6 wrote on 2022-03-29, 14:17:

At this point I've more than got my money's worth out of the two Core 2 systems I retired, and we have less than 400 dollars invested in the two Xeon systems that replaced them. As for the browser question, I have no idea; between the lack of optimized code in the browsers themselves and the rolling dumpster fire that is the modern web, it's honestly a crapshoot which browser would be ideal to use.

Thankfully, modern browsers do let you filter out a lot of the garbage. I never see ads since I went the PiHole route ages ago, which also sanitised all the mobile games in the home, but I still run addons to nab other nasties the PiHole can't stop.

Reply 89 of 97, by Kahenraz

User metadata
Rank l33t

That's a pretty big deal. I always install Adblock Plus, but it's most noticeable in older machines' ability to do anything at all in a browser. It's horrible how much garbage loads in the background. Some pages probably have Bitcoin miners embedded in JavaScript, if the site is particularly shady.

Last edited by Kahenraz on 2022-03-29, 14:40. Edited 1 time in total.

Reply 90 of 97, by TrashPanda

User metadata
Rank l33t
Kahenraz wrote on 2022-03-29, 14:36:

A lot of the examples I see talk about web browsers and the internet, which have always kept up with technology. But if you were to set this use case aside, most older computers are still perfectly fine and usable.

Sure, Netscape Communicator can run on a Mac Classic. But it is a terrible experience and is not representative of what else the machine can do. Sure, running a bunch of tabs in Chrome would probably kill your Core 2, and it will struggle to render streaming video, but it's capable of being more than just an internet machine.

To be honest, I actually browse the internet more often now on my phone than sitting down at a computer, because it's more comfortable to lounge on a couch and browse and reply in short bursts when I have some downtime. If anything, I have more incentive to use the internet somewhere other than a computer, unless I'm working at one.

Thing is, we here might use it for more than that, but your average Joe out there... yup, it's an internet box for email and YouTube, and, well, even Office these days, and you can bet Joe's missus will be using it for gacha games and shit. Sometimes you have to realise that outside of this bubble, the average user is, well... a sheep.

Reply 91 of 97, by Kahenraz

User metadata
Rank l33t

My sister knows that her kids will use any computer that I pick out for her. So she always asks me if it can run Minecraft. This actually rules out a lot of computers that would otherwise be fine, even for web browsing. That game is a huge CPU and graphics hog.

Reply 92 of 97, by TrashPanda

User metadata
Rank l33t
Kahenraz wrote on 2022-03-29, 14:39:

That's a pretty big deal. I always install Adblock Plus, but it's most noticeable in older machines' ability to do anything at all in a browser. It's horrible how much garbage loads in the background. Some pages probably have Bitcoin miners embedded in JavaScript, if the site is particularly shady.

A lot of less shady sites do this too... you don't even have to place a bet on it; just press F12 in Chrome and check out the source code for the page and yeah... so much shit. Thankfully there are awesome addons that kill 90% of it, and if you can get Chrome running on a legacy PC along with some addons, then the web becomes a much tidier and faster experience, especially after turning off those damn auto-playing movies and sounds... fuck, I hate them. PiHole the IP they are served from and boom, the bastards use an ever-changing IP so their shit ads are always being served.

Well, they can get sodded. I just ban that entire IP range on the PiHole... yeah, let's see you serve them now. (Note: this does tend to break other stuff, but it's fun to kill their ads.)

Reply 93 of 97, by TrashPanda

User metadata
Rank l33t
Kahenraz wrote on 2022-03-29, 14:42:

My sister knows that her kids will use any computer that I pick out for her. So she always asks me if it can run Minecraft. This actually rules out a lot of computers that would otherwise be fine, even for web browsing. That game is a huge CPU and graphics hog.

Bedrock edition is actually really light on resources but has higher OS requirements. MC Java still uses a seriously outdated OpenGL library (it must be 10 years old at this point), and that's the main cause of the terrible game performance. You can use something like OptiFine and it'll make the game run buttery smooth on even a potato. (It uses a modern OpenGL library, which is so much faster.)

Reply 94 of 97, by Error 0x7CF

User metadata
Rank Member

SSE and AVX are ridiculous ISA extensions. Just look at the name of this instruction: PUNPCKHQDQ. If you add together all the MMX, SSE1-4, and AVX1/2/512 instructions, they comprise a bit more than half of the total distinct instructions in an x86 chip (AVX1/2/512 alone are about a third). They also get miserable adoption rates for everything but scientific software (and web browsers, apparently) because of their lack of forwards compatibility.

ARM and RISC-V took a completely different approach with their vector ISAs, SVE and the RISC-V Vector extension (RVV). Software can take advantage of more capable hardware (and still run on less capable hardware, just not as fast) without doing the Intel thing where people recompile and raise hardware requirements every time the CPU vendor decides to double the SIMD vector length again to produce a shiny new incompatible ISA. With the RISC-V Vector extension, a single instruction set supports vector lengths from 32 bits to 65536 bits per register with no changes.
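
To make that concrete, here is a minimal C sketch (GCC/Clang specific; the function names are made up for illustration, not taken from any real codebase) of the fixed-width dispatch dance described above. Every new x86 vector width needs its own compiled path plus a runtime CPU check, whereas with SVE or RVV the loop is written once against a vector length queried at run time, so the same binary keeps scaling as registers grow.

    /* Per-width x86 SIMD dispatch (GCC/Clang extensions); names are illustrative. */
    #include <stddef.h>

    __attribute__((target("sse2")))
    static void add_f32_sse2(float *dst, const float *a, const float *b, size_t n)
    {
        for (size_t i = 0; i < n; i++)   /* compiler may auto-vectorize this to 128-bit SSE2 */
            dst[i] = a[i] + b[i];
    }

    __attribute__((target("avx2")))
    static void add_f32_avx2(float *dst, const float *a, const float *b, size_t n)
    {
        for (size_t i = 0; i < n; i++)   /* same source, recompiled for 256-bit AVX2 */
            dst[i] = a[i] + b[i];
    }

    void add_f32(float *dst, const float *a, const float *b, size_t n)
    {
        if (__builtin_cpu_supports("avx2"))
            add_f32_avx2(dst, a, b, n);  /* newer CPUs take the wide path */
        else
            add_f32_sse2(dst, a, b, n);  /* a Core 2 falls back to the 128-bit path */
    }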

I guarantee none of the speed difference you feel between a C2Q and a quad-core i5 is related to the SIMD ISAs. The on-CPU memory controller is probably more relevant, but still minor. Bigger caches, faster RAM, and beefier and beefier out-of-order superscalar cores and FPUs make most of the difference you see.

Intel has been using shiny new SIMD extensions as a trick to make people think their CPU can do fancy new things for over 20 years now. The reality is that they get no use in normal software until 10 years later, provide no (or extremely minor) speed increases for normal software, and create new and pointless incompatibilities that make planned obsolescence happen faster. It's getting to the point where the ridiculous joke that these ISAs are is so bad that Intel couldn't make their new 12th-gen Core CPUs fully AVX-512 compatible: the "big" cores have AVX-512 but the "small" cores don't, because Intel couldn't afford the obscene silicon/ISA/logic bloat of AVX-512 for them. The end result? They dropped AVX-512 compatibility from the chip series entirely, which they can do since it's an almost entirely irrelevant instruction set extension with practically no benefit for the normal user, or even the vast majority of power users.

(If this post sounds mad, I'm not mad at anyone here, just a little bit at Intel and the march of pointless CPU incompatibility. There have been no useful ISA extensions since x86_64 besides maybe the SHA, AES-NI, and virtualization extensions, and despite that, required extensions have continued to make perfectly good hardware "obsolete" based on its MMX/SSE/AVX compatibility level. I'd argue x86_64 and the aforementioned other very minor extensions were the only good ISA extensions since the Pentium Pro. SIMD instruction sets started as a marketing trick with MMX, and they still are to this day.)

Old precedes antique.

Reply 95 of 97, by javispedro1

User metadata
Rank Member
Error 0x7CF wrote on 2022-03-29, 18:34:

SSE and AVX are ridiculous ISA extensions. [....] ARM and RISC-V took a completely different approach with their vector ISAs, SVE and the RISC-V Vector extension (RVV).

I think this is a bit harsh. ARM has plenty of ridiculous ISA extensions. I played with the old Jazelle DBX (the one that runs Java opcodes natively). I still maintain a system using the VFPv3-D16 ABI, something that is practically forgotten due to its ugliness. Same with the newer one, NEON. There are even more extravagant experiments in their past, and ARM doesn't even care about backwards compatibility that much, making it a million times worse than x86. Sometimes they introduced an extension in one generation only to remove it in the next.
Besides, making it "scalable" can lead to even more pronounced artificial market segmentation, not less. Now your low-end CPUs can have worse vector performance than the high-end ones even within the same generation.

Error 0x7CF wrote on 2022-03-29, 18:34:

I guarantee none of the speed difference you feel between a C2Q and a quad-core i5 is related to the SIMD ISAs. The on-CPU memory controller is probably more relevant, but still minor. Bigger caches, faster RAM, and beefier and beefier out-of-order superscalar cores and FPUs make most of the difference you see.

100% agreed. You forgot to explicitly mention (albeit I see it implied) that having beefier cores has a more pronounced effect than having a large number of them (for perceived speed, that is).

Anyway, the actual effect of new ISAs on x86 market segmentation is pretty much negligible. I don't think any software even asks for anything more than basic SSE2 these days (since, to be honest, 80x87 floating point would be a pain). I doubt you can require AVX, since there is probably an Atom somewhere that doesn't have it, or the x86_64 emulation in some ARM tablet doesn't support it.

Reply 96 of 97, by Error 0x7CF

User metadata
Rank Member
javispedro1 wrote on 2022-03-29, 22:11:

I think this is a bit harsh. ARM has plenty of ridiculous ISA extensions. I played with the old Jazelle DBX (the one that runs Java opcodes natively). I still maintain a system using the VFPv3-D16 ABI, something that is practically forgotten due to its ugliness. Same with the newer one, NEON. There are even more extravagant experiments in their past, and ARM doesn't even care about backwards compatibility that much, making it a million times worse than x86. Sometimes they introduced an extension in one generation only to remove it in the next.

I won't defend ARM, no. Ew, Jazelle. Ew, weird Thumb complications. Etc., etc.
My primary argument here is RISC-V, which has been taking an approach that, while it may result in some increased fragmentation because of the freedom to implement whichever extensions you want, seems overall better thought through.
That said, the ARM SVE extension seems to be basically in the same vein as the RISC-V Vector extension; they both seem nice. ARM also took the opportunity with ARM64 to shake out a lot of the cruft, something that x86 probably should have done a bit more thoroughly when x86_64 was created. NEON feels a bit like a joke, like SSE, though.
For RISC-V, there are a handful of nasty extensions (the J extension is basically Jazelle), but nobody wants them, and the specifications for those are crawling along unfinished (because nobody wants them, so nobody is working to specify them).
The instruction count for the RISC-V vector ISA seems like it might end up being the majority of the CPU's instruction set when implemented, but the total number I see listed probably includes all the optional parts practically nobody wants, like quad-precision floats.

javispedro1 wrote on 2022-03-29, 22:11:

Besides, making it "scalable" can lead to even more pronounced artificial market segmentation, not less. Now your low-end CPUs can have worse vector performance than the high-end ones even within the same generation.

I wouldn't mind this. Soft segmentation would be better than the hard segmentation we have now. GPUs kind of do this already: consumer cards will do FP64 operations more slowly than the enterprise ones, but they'll both do them. This usually isn't relevant because games don't care much about FP64. CPUs also kind of already do this: Zen and Zen+ did AVX2 (256-bit) at half speed since they only had 128-bit units to do the operations with. Longer/shorter vector lengths or a greater/lesser number of vector operations per cycle influence software speed, but not capability.

javispedro1 wrote on 2022-03-29, 22:11:

100% agreed. You forgot to explicitly mention (albeit I see it implied) that having beefier cores has a more pronounced effect than having a large number of them (for perceived speed, that is).

Yeah, oops. On the Intel side (oof, Bulldozer), single-core IPC consistently soared, even at the same frequency, from the P4 up until about Skylake, where it took a nap for a bit until the "big" cores in the ix-12xxx series.
In particular, I remember Core 2 -> first-gen i-series -> 2nd-gen i-series being pretty beefy IPC bumps, probably because Intel cared to do the upgrades since they were still up against K10 instead of Bulldozer 🤣
Needless to say, P4 -> C2 was an astronomical jump, but that's beside the point. (watches Phil's YouTube video of the slowest C2D outperforming a P4 EE again)

javispedro1 wrote on 2022-03-29, 22:11:

Anyway, the actual effect of new ISAs on x86 market segmentation is pretty much negligible. I don't think any software even asks for anything more than basic SSE2 these days (since, to be honest, 80x87 floating point would be a pain).

Chrome is due to require SSE3 quite soon, which won't lock out a lot of modern CPUs, but it will lock out plenty that could run it "okay", for not much of a good reason. Some Athlon 64s, and every Pentium 4 that isn't Prescott or later, are going to lose Chrome support quite soon. Chrome on a Prescott isn't much of a pleasant experience as-is, but overall it's a bit rough since Steam uses Chrome for so much. Also, Windows 8 required SSE2 when it launched, and through updates Windows 7 requires SSE2 now, while it didn't initially. There are reports on this forum of some XP updates that didn't work without SSE2.
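
For anyone curious what such a baseline check looks like, here is a small sketch using GCC/Clang's <cpuid.h> (the message text and exit codes are purely illustrative) of the sort of SSE2/SSE3 gate that Chrome and the Windows updates mentioned above effectively apply:

    /* SSE2/SSE3 baseline check via CPUID leaf 1 (GCC/Clang's <cpuid.h>). */
    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned eax, ebx, ecx, edx;

        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
            fprintf(stderr, "CPUID leaf 1 unavailable.\n");
            return 1;
        }
        if (!(edx & bit_SSE2) || !(ecx & bit_SSE3)) {
            /* Athlon XPs fail the SSE2 test; pre-Prescott P4s and the
               earliest Athlon 64s fail the SSE3 test. */
            fprintf(stderr, "CPU lacks SSE2/SSE3 - refusing to start.\n");
            return 1;
        }
        puts("SSE2 and SSE3 present.");
        return 0;
    }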

x87 is pretty nasty. I can forgive software a bit for using SSE2 since there's no real modernized x86 FPU ISA, which Intel probably should have done around the time they introduced the PPro or PII, or maybe even earlier when they made the FPU standard with the Pentium. I think a redone IEEE-compliant FPU design would have been perfectly in line with the design goals of the PPro. For the PII they could have even pitched it as a multimedia feature like they did MMX for the PMMX. Cut the float length down to IEEE double from 80-bit and maybe add another pipeline with the saved logic, call the architectural change a snazzy name like "MegaFloat Technology" off the back of the increased MFLOPS, I dunno. x87 isn't helped by its registers being aliased by MMX.
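
As a quick illustration of that 80-bit-versus-double point, here is a tiny sketch; the exact figures printed depend on the compiler and ABI, but on typical x86 Linux toolchains long double is still the 80-bit x87 format while double is the 64-bit IEEE type that SSE2 handles natively.

    /* Compare x87 extended precision with IEEE double (results are ABI-dependent). */
    #include <float.h>
    #include <stdio.h>

    int main(void)
    {
        printf("double:      %2d mantissa bits, %zu bytes\n",
               DBL_MANT_DIG, sizeof(double));
        printf("long double: %2d mantissa bits, %zu bytes\n",
               LDBL_MANT_DIG, sizeof(long double));
        /* Commonly prints 53 bits / 8 bytes vs 64 bits / 16 bytes
           (the 80-bit x87 value is stored padded out for alignment). */
        return 0;
    }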

Overall, I primarily dislike that, had the extensions been more thought through, a lot of compatibility bellyaches would have been saved. If Intel had taken a break to consider the sheer mess they were getting themselves into at any point between the time they introduced the Pentium MMX and now, I think the whole complicated mess could have been avoided from both a software and a hardware perspective. Particularly at the time they introduced any one of the SSE versions, it would have been an excellent time to change tack for the next one and choose a more flexible approach. Imagine how nice it would be if "SSE1"-capable software were able to take advantage of the quadrupled register widths (or greater widths, in that universe?) we have now.

Old precedes antique.

Reply 97 of 97, by TrashPanda

User metadata
Rank l33t

A lot of software does require AVX, some requires AVX2, and almost none requires AVX-512, since 512 is an abomination that should have been aborted; it's mostly games that need basic AVX and the newer SSE3/4 instructions. It's one of the main reasons AMD picked up AVX and AVX2 compatibility. If you want some weirdness, then SSE4a is the odd duck, since it's an AMD-only variant of SSE and no Intel CPUs actually support it. Why does AMD keep it around? Who knows, since I doubt any software actually uses those instructions.