VOGONS


Is Vista now Retro


Reply 161 of 249, by Falcosoft

User metadata
Rank Oldbie

I'm not sure if this matters in practice? I mean, if you throttle memory bandwidth, you throttle memory bandwidth. I don't think it matters whether it's done in the CPU itself, or in the chipset.

Please, check it on your notebook. I have never seen a Core/Core2 reduce its FSB (and thus memory clock) in any power saving mode. It is constant all the time. By contrast, the AMD K8 cannot do it any other way: it reduces memory speed even on the desktop when PowerNow! is active. (CPU-Z -> Memory tab -> DRAM Frequency can help here)

The additional latency wouldn't explain why you don't get anywhere near the actual theoretical bandwidth when it is throttled to 320 MHz.

The gap between theoretical and practical bandwidth seems to stay roughly proportional in every power saving state, and it does not differ significantly from the period-correct Intel platform. At least according to the numbers.

I think it's more a shortcoming of the memory controller and HT interface itself than of the GPU, seeing as high latency should not be an issue for a GPU.

I have not said latency is the main problem. The point of my argument was that in Intel's and AMD's former designs the GPU and the CPU were 'equal' clients of the memory controller. The speed of the path defined by the double/quad etc. pumped FSB was theoretically the same for the GPU as for the CPU. In the K8 design, by contrast, the CPU is the dominant client and the GPU has a much narrower path. I fully agree with you that the bottleneck here is the 800 MHz HyperTransport link (as I have written above).

Website, Facebook, Youtube
Falcosoft Soundfont Midi Player + Munt VSTi + BassMidi VSTi
VST Midi Driver Midi Mapper

Reply 163 of 249, by Scali

User metadata
Rank l33t
Falcosoft wrote:

Please, check it on your notebook. I have never seen a Core/Core2 reduce its FSB (and thus memory clock) in any power saving mode.

Pretty sure it does that though.
Why wouldn't it?
Also, the FSB is not directly linked to the memory clock. Memory can run async to the FSB. E.g. if you have a 1066 FSB, you can still use DDR at 800 MHz and such.
See also the datasheet for my chipset: https://www.intel.com/Assets/PDF/datasheet/316273.pdf
On page 105 it says something about dynamic FSB frequency switching, which can reduce the FSB to half its normal speed.

Falcosoft wrote:

(CPU-Z -> Memory tab -> DRAM Frequency can help here)

Yea, I'll see what it does on my laptop. Pretty sure it rarely ran at full speed. My laptop has a special 'silent' button on it, which I think locks it into a lower power state so that the fan doesn't need to speed up.
I can test what it does when I enable that.

Falcosoft wrote:

I have not said latency is the main problem. The point of my argument was that in Intel's and AMD's former designs the GPU and the CPU were 'equal' clients of the memory controller. The speed of the path defined by the double/quad etc. pumped FSB was theoretically the same for the GPU as for the CPU. In the K8 design, by contrast, the CPU is the dominant client and the GPU has a much narrower path. I fully agree with you that the bottleneck here is the 800 MHz HyperTransport link (as I have written above).

Is that true though? I mean, the HT bus was originally designed for multi-socket NUMA systems.
I mean, if I look here: https://en.wikipedia.org/wiki/HyperTransport
Even the oldest HyperTransport can do 3.2 GB/s max with 16-bit transfers. Which is more than what my GM965 is capable of. As you see, I only got 2 GB/s. So I don't see why HT should be a bandwidth bottleneck. Latency, perhaps, because it's not on the same chip as the GPU. But HT has a lot of bandwidth. That was the whole point of HT: a solution for the bottleneck that is the FSB on multi-socket systems.
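
For reference, that 3.2 GB/s figure falls straight out of the link width and clock. A rough sketch of my own (assuming the plain 16-bit, double-pumped HT 1.x link, per direction):

# Back-of-the-envelope HyperTransport bandwidth, per direction.
def ht_bandwidth_gbs(clock_mhz, link_bits=16):
    transfers_per_sec = clock_mhz * 1e6 * 2   # double data rate: two transfers per clock
    return transfers_per_sec * (link_bits / 8) / 1e9

print(ht_bandwidth_gbs(800))   # 3.2 GB/s at 800 MHz
print(ht_bandwidth_gbs(858))   # ~3.43 GB/s at 858 MHz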

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 164 of 249, by Falcosoft

User metadata
Rank Oldbie

Hi,

Pretty sure it does that though.
Why wouldn't it?

Just test it, please. I have just checked a Dell notebook with an Intel Core 2 Duo T5670 @ 1.80 GHz, and it definitely does not use this Super LFM mode under Windows 7 Aero.
The core is downclocked to 1200 MHz simply by a multiplier change, but the FSB remains unchanged.

Also, the FSB is not directly linked to the memory clock. Memory can run async to the FSB. E.g. if you have a 1066 FSB, you can still use DDR at 800 MHz and such.

Thanks, but I know that 😀 I simply mentioned the FSB speed as a possible method for Intel to reduce memory speed, since I have also never seen the memclock/FSB ratio change at runtime, e.g. from 5/4 to 1/1.

Is that true though? I mean, the HT bus was originally designed for multi-socket NUMA systems.

It's true if you do some tests/experiments in practice. Otherwise I have no other explanation.
I can use clockgen.exe to increase the base clock a little, from 200 to 214. The base clock in itself does not influence performance in any way on K8, but this way I can get an HT clock of 858 MHz instead of 800 MHz. Also, if I fix the CPU multiplier to 9x I can get a 1930 MHz core clock and thus a 322 MHz memory clock (DDR2-644, CPU/6). If I compare this to an 800 MHz HT, 2000 MHz CPU, 333 MHz (DDR2-666, CPU/6) configuration, I get better results in purely video-oriented tests/benchmarks, despite the fact that both the CPU clock and the memory clock are lower; only the HT clock is higher.
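
To make the relation between the clocks clearer, here is roughly how they derive from the base clock. This is just a sketch of my own setup: the 4x HT multiplier and the /6 memory divider are what my board uses, and the values round slightly differently from what CPU-Z reports.

# Rough sketch of the derived clocks on this K8 board (my setup, not universal).
def derived_clocks(base_mhz, cpu_mult):
    cpu_mhz = base_mhz * cpu_mult
    ht_mhz  = base_mhz * 4        # HT multiplier fixed at 4x here
    mem_mhz = cpu_mhz / 6         # memory clock = CPU clock / 6 (DDR2 divider)
    return cpu_mhz, ht_mhz, mem_mhz

print(derived_clocks(200, 10))    # (2000, 800, ~333) -> the stock configuration
print(derived_clocks(214, 9))     # (1926, 856, ~321) -> raised base clock, fixed 9x multiplier
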
Benchmarks used: Winsat -dwm, fillratetest1.13.exe, FillrateBenchmark(tm) 2004 (they also have video memory bandwidth tests).
Here is the result of Winsat -dwm (this one shows the least difference though):

Attachments: 2000_333_800.jpg, 1930_322_858.jpg (Winsat -dwm screenshots, fair use/fair dealing exception)

Website, Facebook, Youtube
Falcosoft Soundfont Midi Player + Munt VSTi + BassMidi VSTi
VST Midi Driver Midi Mapper

Reply 165 of 249, by Scali

User metadata
Rank l33t
Falcosoft wrote:

It's true if you do some tests/experiments in practice. Otherwise I have no other explanation.
I can use clockgen.exe to increase the base clock a little, from 200 to 214. The base clock in itself does not influence performance in any way on K8, but this way I can get an HT clock of 858 MHz instead of 800 MHz. Also, if I fix the CPU multiplier to 9x I can get a 1930 MHz core clock and thus a 322 MHz memory clock (DDR2-644, CPU/6). If I compare this to an 800 MHz HT, 2000 MHz CPU, 333 MHz (DDR2-666, CPU/6) configuration, I get better results in purely video-oriented tests/benchmarks, despite the fact that both the CPU clock and the memory clock are lower; only the HT clock is higher.
Benchmarks used: Winsat -dwm, fillratetest1.13.exe, FillrateBenchmark(tm) 2004 (they also have video memory bandwidth tests).
Here is the result of Winsat -dwm (this one shows the least difference though):

But again, bandwidth is not the explanation, is it? HT at 800 MHz should do 3.2 GB/s. Your fillrate tests get much lower scores.
So the problem is some kind of inefficiency in the HT implementation.
Yes, higher HT clock will give you more bandwidth. But it will also speed up everything else in the HT link logic. And I think that's where the bottleneck is, not in the fact that higher HT clock gives more memory bandwidth.
At the higher HT clock, you're only further from the theoretical bandwidth (858 MHz should give you 3.4 GB/s, so 7% extra, but your scores only show 2% extra bandwidth).
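
Put differently: if raw HT bandwidth were the limit, the score should scale roughly with the HT clock. A quick sketch using only the numbers quoted above, no new measurements:

theoretical_gain = 858 / 800 - 1    # ~7.3% more HT bandwidth
measured_gain = 0.02                # ~2% better score reported
print(f"{theoretical_gain:.1%} theoretical vs. {measured_gain:.1%} measured")
# Only ~2% of a ~7% bandwidth increase shows up, so something other than raw link
# bandwidth (link logic, latency, the slightly lower CPU/memory clock) absorbs the rest.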

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 166 of 249, by Falcosoft

User metadata
Rank Oldbie

At the higher HT clock, you're only further from the theoretical bandwidth (858 MHz should give you 3.4 GB/s, so 7% extra, but your scores only show 2% extra bandwidth).

It really is not linear, but if you consider that the DWM test also uses the CPU and memory, and that both the memory and CPU speed are actually 3% lower in the case where you get the 2% extra bandwidth, then it's not bad at all.

Edit: Does your notebook use Super LFM under Aero?

HT at 800 MHz should do 3.2 GB/s.

Just as dual channel DDR2 at 800 MHz should do 12.8 GB/s of bandwidth. We were nowhere near this theoretical result on either the AMD or the Intel platform at the time. 😀
My AMD K8 can do 5600 MB/s, which is less than half of that. Intel platforms could not do better either (at least not the Core/Core2 line). Your notebook can do 3 GB/s at 666 MHz DDR. I do not know if it uses dual channel or not, but in practice dual channel just widens the gap between theoretical and practical bandwidth.
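
Just to put that gap into numbers (a quick sketch based on the figures above, assuming 8 bytes per channel per transfer for DDR2):

# Theoretical peak vs. what we actually measure (figures quoted in this thread).
def ddr2_peak_gbs(data_rate_mts, channels):
    return data_rate_mts * 1e6 * 8 * channels / 1e9   # 8 bytes per channel per transfer

peak = ddr2_peak_gbs(800, 2)        # dual channel DDR2-800
print(peak)                         # 12.8 GB/s theoretical
print(5.6 / peak)                   # my K8 reads ~5.6 GB/s -> ~44% of theoretical
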
Of course there is inefficiency everywhere in the implementation. Our question is: where is the 'practical' bottleneck?

Website, Facebook, Youtube
Falcosoft Soundfont Midi Player + Munt VSTi + BassMidi VSTi
VST Midi Driver Midi Mapper

Reply 167 of 249, by Scali

User metadata
Rank l33t
Falcosoft wrote:

It really is not linear, but if you consider that the DWM test also uses the CPU and memory, and that both the memory bandwidth and CPU speed are actually 3% lower in the case where you get the 2% extra bandwidth, then it's not bad at all.

CPU usage by itself shouldn't affect performance. CPU-tasks that access memory would, but I doubt they'd run something bandwidth-hungry on the CPU at the same time.

Falcosoft wrote:

Edit: Does your notebook use Super LFM under Aero?

I don't know yet, I don't have it here.

Falcosoft wrote:

Just as dual channel DDR2 at 800 MHz should do 12.8 GB/s of bandwidth. We were nowhere near this theoretical result on either the AMD or the Intel platform at the time. 😀

No, but HT is clearly the bottleneck here: your CPU still scores 5.5 GB/s with the same memory. HT can only provide 3.2 GB/s, so the memory controller should easily be able to saturate that.

Falcosoft wrote:

Your notebook can do 3 GB/s at 666 MHz DDR. I do not know if it uses dual channel or not

Yes, it's dual channel.
Thing is, your AMD laptop is clearly far more high-end than mine is:
1) Your CPU is 2 GHz instead of 1.5 GHz
2) You have an integrated memory controller in the CPU, I have it in the chipset
3) You have 800 MHz HT, I have 533 MHz FSB
4) You get 5.5 GB/s, I only get 3 GB/s
5) Your IGP is also considerably faster

In raw power, your laptop completely blows mine away. Mine, however, is apparently far more efficient. It actually does more with way less.
I get 2 GB/s from the IGP from a total of 3 GB/s, while you get 1.7 GB/s from a total of 5.5 GB/s.

This is my CPU btw: http://ark.intel.com/products/30786/Intel-Cor … GHz-667-MHz-FSB

And I think these are good sites to compare IGP specs?
I think this is yours: https://www.techpowerup.com/gpudb/2146/radeon … xpress-1150-igp
And this is mine: https://www.techpowerup.com/gpudb/1233/i965gm

You get:
Pixel Rate: 800 MPixel/s
Vertex Rate: 200.0 MVertices/s

I get:
Pixel Rate: 500 MPixel/s
Vertex Rate: 125.0 MVertices/s
(the texture rate they give seems to be a completely bogus theoretical number of 4 GTex/s, probably based on the fact that it has '8 TMUs'... Try getting that many texels, at 4 bytes each, over a memory bus that maxes out at 2 GB/s... Maybe if all texels are the same, and they all come from the cache...)
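
To put a number on that, a worst-case sketch (every texel a unique 32-bit fetch, zero cache hits):

# Bandwidth needed to sustain the quoted 4 GTex/s in the worst case.
texel_rate = 4e9          # 4 GTex/s from the spec sheet
bytes_per_texel = 4       # 32-bit texels
print(texel_rate * bytes_per_texel / 1e9)   # 16 GB/s needed vs. the ~2 GB/s the bus delivers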

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 168 of 249, by Falcosoft

User metadata
Rank Oldbie

CPU usage by itself shouldn't affect performance. CPU tasks that access memory would, but I doubt they'd run something bandwidth-hungry on the CPU at the same time.

I also mentioned memory speed... so let me rephrase it this way: when the memory speed is actually 3% lower and you still get the 2% extra bandwidth, then it's not bad at all.


Yes, it's dual channel.
Thing is, your AMD laptop is clearly far more high-end than mine is:
1) Your CPU is 2 GHz instead of 1.5 GHz
2) You have an integrated controller in the CPU, I have it in the chipset
3) You have 800 MHz HT, I have 533 MHz FSB
4) You get 5.5 GB/s, I only get 3 GB/s

Yes, we all know that. And I have tried to explain why point 2 in your list is actually a drawback of K8 when used together with integrated video. And I have tried to find real 'practical' bottlenecks that can explain this anomaly. You seem to disagree because the theoretical numbers do not match... What is your conclusion, then?

5) Your IGP is also considerably faster

Why? According to the raw numbers it should not be:
The GMA X3100 has a higher clock speed (500 vs. 400 MHz), 4 times as many pixel shaders (8 vs. 2), and 4 times as many TMUs (8 vs. 2). The only obvious advantage of the ATI GPU is a slightly higher pixel fillrate (0.8 GPixel/s vs. 0.5 GPixel/s).
https://www.techpowerup.com/gpudb/1233/i965gm
https://www.techpowerup.com/gpudb/681/radeon-xpress-1150-igp

Edit: Yeah, I have found the same reference 😀
But it is useful to have a look at the 'period correct' Intel equivalent, the GMA 950:
https://www.techpowerup.com/gpudb/1232/i945gm
Theoretically it has a definite edge over the ATI 1150 in every parameter. Yet it scores lower under Aero in the WEI. Explanation?

Website, Facebook, Youtube
Falcosoft Soundfont Midi Player + Munt VSTi + BassMidi VSTi
VST Midi Driver Midi Mapper

Reply 169 of 249, by Scali

User metadata
Rank l33t
Falcosoft wrote:

Why? According to pure numbers it should not be:

Actually it should, see the links and info I added above.
You need to understand how to interpret the data:
2 ROPs: 2 pixels per cycle
Pixel Rate: 800 MPixel/s -> at 400 MHz, 2 ROPs

1 ROP: 1 pixel per cycle
Pixel Rate: 500 MPixel/s -> at 500 MHz, 1 ROP
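
Or, as a trivial sketch of that same arithmetic:

# Fillrate is just ROP count x core clock.
def pixel_rate_mpix(rops, clock_mhz):
    return rops * clock_mhz

print(pixel_rate_mpix(2, 400))   # Radeon Xpress 1150: 800 MPixel/s
print(pixel_rate_mpix(1, 500))   # GMA X3100: 500 MPixel/s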

Falcosoft wrote:

has 4 times as many pixel shaders (8 vs. 2)

Doesn't work that way because:
1) The X3100 is a unified shader architecture. It has 8 total shader units, but they are shared between vertex and pixel operations. So you don't always have 8 pixel pipelines. The Radeon always has 2 pixel pipelines AND two vertex pipelines.

2) Shaders aren't shaders.
The X3100 is a scalar architecture, where the Radeon is VLIW.
In short, what this means is that a single instruction on X3100 can only operate on a single number. The Radeon can operate on vectors of 4 (or was it 5?) elements.
In other words, if you want to add two ARGB pixels:
pixc = pixa + pixb

Then the Radeon executes that as a single instruction:
vadd pixc, pixa, pixb

The X3100 can only do scalar operations, so it decomposes the vectors into scalars:
pixc.A = pixa.A + pixb.A
pixc.R = pixa.R + pixb.R
pixc.G = pixa.G + pixb.G
pixc.B = pixa.B + pixb.B

So it actually becomes 4 instructions.
So yes, you may have more shader units in parallel, but you don't have implicit parallelism inside the shaders, where the Radeon has 4-way (or 5-way?) implicit parallelism inside the shaders.

Which explains why the Radeon actually outperforms the X3100 in most games, with only 1/4th of the pixel pipelines 'on paper'.
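
A rough way to see how close they really are per clock, using only the pipeline counts above (my own crude accounting: it ignores vertex work, TMUs, co-issue restrictions and everything else):

# Effective scalar shader components per clock, from the counts quoted above.
x3100_per_clock  = 8 * 1        # 8 scalar (unified) units, 1 component each
radeon_per_clock = 2 * 4        # 2 pixel pipes, 4-way (or 5-way) VLIW each
print(x3100_per_clock, radeon_per_clock)   # 8 vs. 8 -- much closer than "8 shaders vs. 2" suggests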

Falcosoft wrote:

4 times as many TMUs (8 vs. 2).

This is debatable... Some architectures have a single TMU that they can share between pipelines. So you get a usage of '1 TMU per cycle', but over 8 cycles you can still do 8 TMU fetches with that single unit. Since most of the time not all pipelines are fetching in the same cycle, this TMU sharing is very effective.
Nevertheless, you have 2 pixel pipelines, so both architectures have a 1:1 mapping between TMUs and shaders.

And that concludes GPU technology 101 for today.

Last edited by Scali on 2017-06-02, 13:18. Edited 3 times in total.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 170 of 249, by dr_st

User metadata
Rank l33t
Scali wrote:

We're not hijacking the thread, Vista is still the topic. You however started talking about linux, and the only remotely Vista-related stuff is that you try to blame Microsoft for all the world's problems, apparently.

Last time I checked, "Is Vista Retro" was the topic, not "The intricacies of the Aero UI and implications on contemporary video card drivers from a variety of manufacturers, with benchmarks". 😀

You brought Linux up, I just responded because I felt (and still feel) you were making wrong statements about it. And as to me blaming Microsoft for all the problems of the world, well, it was already pointed out to you that you apparently like strawman arguments.

Scali wrote:

Oh really now? That's just sad....
EAX was a proprietary Creative solution. OpenAL was originally developed by Loki, but was later acquired by Creative as well.
Microsoft has nothing to do with either.

Whatever the terminology is - the audio driver model changed between XP and Vista, AFAIR, and old ones were not compatible. I find it interesting that you ignored my other two examples. I guess they were valid? And you found the only one where I used wrong terminology (even though I think the point is still correct), just so you can focus on that and make me look ignorant. I do love the way you debate.

Scali wrote:

You see... stopping support does not change the interface. Everyone expects the interface not to be supported in new products.

And if you want your existing hardware product to continue working on the new OS?

Scali wrote:

1) Interfaces regularly change, without any prior warning or notification.
2) These changes generally break existing code, requiring patches and recompilation.

That's just how the Linux world works. It's by design, whether you like it or not.

If someone thinks that the interface needs to be changed, he goes and changes it, and then it is also his responsibility to adapt all the existing drivers for the kernel to compile and work. The patch will not be accepted otherwise, obviously, since it would break everything.

If you have a third party out-of-tree driver, then it has to be updated separately - it is obviously impossible to hunt all the existing out-of-tree drivers everywhere. Generally, the community encourages drivers to be submitted into the upstream kernel, to avoid such issues, but it cannot always be done. So, either the driver owner will have to update it, or if the owner cannot be bothered, it falls onto the user. But it can be done, if the source is available.

The only exception is if you have closed-source binary drivers. Well, the warning is there, and it's taught to everyone in every advanced Linux class: Binary driver releases are bad in the Linux world. They go against the system's philosophy, and present problems that cannot be solved. If you as a manufacturer choose this approach, you accept the downsides, as do your users. There are good reasons why Linux is not anywhere close to matching Windows as a platform for gaming, and this is one such reason.

Certainly this approach presents certain problems. Did you ever hear me say that I think the Linux approach is perfect? I don't think so. I merely claim that it's legitimate, and has its advantages. Open source, community work, agility in certain places, the ability to fix trivial bugs and introduce small new features yourself, etc. You, on the other hand, seem to think that it's terrible, has nothing but bad things, and has absolutely no right to exist in the business world. At least that's what I deduce from your words. Never mind that the business world disagrees with you. Oh, are they all just "script kiddies out of college who never had a real job" too?

Scali wrote:

In Windows the only thing that may change from time to time is the driver model, but that's only once every so many years, not a few times a month, like with linux.

You are exaggerating by several orders of magnitude. The fact that there are many commits a month to the Linux source tree does not mean that the entire OS/driver interface changes several times a month. In most areas (especially old and stable ones) changes are very slow, and interface changes are very much frowned upon. Individual driver / subsystem maintainers routinely reject patches to old drivers that do not have a very good justification (like fixing a serious bug). I have participated in such discussions, as a submitter, as a maintainer, and as a casual observer as well. I have some idea how these things work. If you think that you can just come up with a patch that breaks an interface, just because you think the new way is better, and get it accepted, you are in for a big surprise if you actually attempt it.

BTW, there are Linux flavors (typically Enterprise-oriented) that specifically and purposely adopt a different approach - one a lot like Microsoft's - slow, controlled changes, infrequent updates, favoring interface stability. They maintain their own trees, are very selective in patches they accept, have their own developer fleet and release process. You can use one of these if you don't like the thrash that's going on with the upstream. Surely, as an experienced developer, you know all this. You just choose to ignore it for the sake of this argument.

Scali wrote:

And when this is done in Windows, there are good reasons for it, and the new interface will be well-documented.

There are good reasons for everything done in Linux as well. You will just not convince me that the top Linux developers/maintainers are somehow stupider, or do a worse job, than the top Microsoft engineers/architects. Especially since the same Microsoft engineers frequently contribute to Linux. Heh.

As for documentation - well, we talked about this, but I will just repeat: Microsoft's documentation is generally excellent. But, funnily, it is not always easy to locate. There were a few times when the question I wanted answered required hunting through a seemingly unconnected set of MSDN pages via random Google search words, until I found what I needed, sort-of. And God forbid if you are actually dealing with new stuff fresh out of development, which may not even be properly indexed yet. Good luck ploughing through the plethora of docs on their insider channels, and the DDK documentation. And Microsoft is so big, that even their own people likely have no clue, and will not be able to help you, unless you happen to have direct connections to the specific engineering team working on that specific thing. Then suddenly things really can be resolved in days or hours, as you claim. But it's not really easy to get these direct connections.

Scali wrote:

In fact, even though the *interface* for the drivers changes, the actual development environment, not so much.
Microsoft has some 'boilerplate' in the DDK, which allows you to very easily compile basically the same driver for various versions of Windows, so abstracting between XP and Vista+ driver models isn't all that difficult really.

Yes, Microsoft did a lot of good (and hard) work to allow you to target multiple SKUs easily. I am not diminishing the quality of their tools or development kits in any way. But don't forget that a lot of this awesome work Microsoft did is only because their OS is closed-source. It simply could not be done any other way. If they didn't bother providing developers with good tools, nobody would be able to properly develop for Windows at all.

Linux solves this problem differently - you just get the source, and can do whatever you want. You can compile your driver as part of the kernel, or directly against the kernel sources and then it will naturally work with any kernel you compiled it with. It is still not as nice as working with a well-set-up Windows DDK environment, but I feel that the tradeoffs are, overall, reasonable.

Scali wrote:
dr_st wrote:

Is that why you see plenty of Windows drivers for OEM hardware written by third-party hobbyists, WHQLed, offered through Windows Update?

Ah, goalposts moved again? Now it is not enough that you can write a driver, it has to be WHQLed and offered through Windows Update as well?

You know, let's go back to your original example, because I think it is important. The WHQL and Windows Update are irrelevant here, so I take them back.

You started telling about your home server that you built around a custom-compiled Linux kernel. Why didn't you just use Windows? Is it because Linux in this case gave you something that Windows, in principle, could not give you? Hmm...

Then you mentioned that you needed to upgrade and, whoops, your kernel did not include a driver supporting your new NIC. Stuff happens. And it looks like they didn't even have a driver written for that older version of the kernel. You know, the kind of stuff that happens with Windows all the time, as in the few examples I brought up in 30 seconds because they are fresh in my mind, and you brushed off.

And your answer is... "You know that you can also write Windows drivers yourself, right?" Yes, I know! Why didn't I think about it? But you can also just write Linux drivers! In fact you already had the source of both Linux drivers in your example, and the source of both kernels with everything that changed between them. Surely by looking at them, and comparing, you would be able to adapt driver A to support NIC B, or port driver B to the same kernel that works with driver A. But you claim that it's an easier task to write a Windows driver from scratch, by basing it on some open-source Linux driver that would probably be 10 times less similar to what a Windows driver should look like, than the two Linux drivers in your example. Is that really your claim?

Or is it that you started with a task for which Linux is inherently more suitable than Windows, encountered a problem, that is an order of magnitude easier to solve in Linux than in Windows, and you use it as an example to why Linux sucks and Windows is awesome?

And I'm ranting like a moron? 😐

Scali wrote:

I'm not just talking about the linux maintainers obviously. They are only a very small subset of the total linux community.
But even then, Linus himself is a fine example of someone who was merely a student at the time, and still I don't think he ever had a regular job at a regular company.

And this matters why? Because we all know that actual development of productive tools can only be done by suit-and-tie-donning, clean-shaven men at "real jobs"? Because no core technology, or programming language, or usable operating system, or software was ever created anywhere else, like a university, or someone's basement? Get over yourself. You are arguing a point here that, even if true, has absolutely no weight.

Even if Torvalds never contributed anything to the software world besides his initial work on the Linux kernel and on git, he would still deserve his place in the pantheon of elite programmers, which he rightly has. Never mind that he may be a jerk, and that he may be ignorant about many things that he's not dealing with. He still proved that he can be more productive than the average team of 10 developers.

P.S. Where exactly was Bill Gates (a person whom I personally appreciate deeply on many levels) when he wrote MS-DOS? What was Microsoft at the time?

Scali wrote:
dr_st wrote:

You really should just accept that there can be more than one opinion, and people may prefer different ways to do things.

Not really. Some things are just wrong, period.

Right. Some things are just wrong, period. This is not one of them. You remind me of another argument (thank God I only witnessed it, never participated in it) where a "senior software developer" was arguing with foam at his mouth that "GOTO is always wrong, period, and there is not a single case where it is okay to use it".

Scali wrote:

If you don't see what is fundamentally wrong about my example of grabbing some text-data from a filesystem and then putting it through a string-parser in order to get a handle to a kernel-object, then I can only pity your lack of understanding of software engineering in general, and even more so the character flaws that make you brush this off as a 'difference in opinion'.

No. It's far better to get a handle, and pass it to an object manager, who will then transfer it to another handle, to a different subsystem manager, and from there it will provide a handle to a reference to another object, which you can then pass to the reference mapper, that will eventually get you the class object, and from there you will take it to the class manager, to get the actual instance of the kernel object of your choice (which in the end is being retrieved for you by exactly the same type of algorithm that runs on an array and compares strings). And if there is a bug there, then your system is just as vulnerable to string/buffer overflow exploits. But, hey, at least it's hidden from you under 10 nested levels of obfuscation. Now that's good software engineering! 🤣

Luckily for me, my job does not depend on your personal assessment of my understanding and my character. Luckily for you, neither does yours depend on mine. 😀

https://cloakedthargoid.wordpress.com/ - Random content on hardware, software, games and toys

Reply 171 of 249, by Scali

User metadata
Rank l33t

Let's make this short and sweet.
Most of it is so rampantly tangential that it is an excellent illustration of my earlier assessment, but does not warrant any response.

dr_st wrote:

Even if Torvalds never contributed anything to the software world besides his initial work on the Linux kernel and on git, he would still deserve his place in the pantheon of elite programmers, which he rightly has.

He does?
Both linux and git are horrible botch-job excuses for software.
The (lack of) design in linux was already debated at length by people such as Andrew Tanenbaum, and rightly so.
Similar arguments could be made about git for example (where exactly does one get the idea that implementing a version control system in shell-scripts is a good idea? And worse: who thought it was ready for the rest of the world and started making it into a standard, long before it was anywhere near production-ready?).

What exactly has he ever done that would make him 'elite'?
Linux is a clone of UNIX, which was already widely documented in numerous books, including full source code, long before he started. In fact, he started out with MINIX as the basis, one of the many UNIX clones with full source available, as well as extra documentation.
Git is also basically a clone of BitKeeper. Not exactly something new, just a reimplementation of existing ideas. And not even a good reimplementation.

dr_st wrote:

P.S. Where exactly was Bill Gates (a person whom I personally appreciate deeply on many levels) when he wrote MS-DOS? What was Microsoft at the time?

Firstly, Bill Gates did not write MS-DOS. Tim Paterson did.
Secondly, DOS was not Bill Gates' first product. Microsoft started with a BASIC interpreter aimed at microcomputers.
By the time MS-DOS came around, Microsoft was already a well-established corporation, which supplied the BASIC interpreter for a great number of machines, including IBM, Apple, Commodore and Texas Instruments (and even Atari's BASIC was based on code they bought from Microsoft).
In fact, the reason why IBM approached Microsoft as a supplier for their upcoming PC was because Microsoft was already a supplier for their BASIC.
Before Windows and MS-DOS, Microsoft more or less already had a 'near-monopoly' on BASIC, which was pretty much used as the makeshift 'OS' on early home/personal computers.
It seems that you think Microsoft/Bill Gates started with DOS, but that was actually their 'second generation' product line. Windows being the third.
Which gets me back to why it's important to have prior experience, so you already know what it's like to develop a new product, put it to market, handle support requests from clients and such.
By the time Microsoft started with MS-DOS, they already had years of experience developing, maintaining and supporting BASIC for a large number of different computer systems.

And unlike the linux kernel, Microsoft BASIC was actually a 'hack' in the good sense: They managed to stuff an incredibly powerful and efficient programming language into the very limited resources of 1970s 8-bit CPUs and memory expressed in single-digits of kB.
They initially developed their BASIC with a custom form of emulation/cross-assembly, so they could write the BASIC interpreter for the Altair without having access to the hardware, using a PDP-10 instead.
The fact that it actually worked the first time they ran it on the Altair (with Bill Gates having to enter the loader manually on the front panel of the Altair) says enough about just how good these guys were at what they did.
That's more or less in the category of The story of Mel.
Microsoft even included an easter egg in the BASIC code, so they could check whether other vendors ripped off Microsoft's code in their BASIC implementations: http://www.pagetable.com/?p=43

This concludes our history lesson...
Also, technically DracoNihil brought linux up in this thread. You however were the one to make it into a linux-vs-Microsoft war, with your first contribution picking out the 'linux' word, and more than 50% of that post being entirely off-topic linux-rhetoric.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 172 of 249, by dr_st

User metadata
Rank l33t

Short and sweet it is then. 😀

I merely picked out a single statement you made about how drivers are developed in the Linux world and challenged it. In parallel I disagreed with how you seemingly let Microsoft completely off the hook for every problem that comes up in the Windows ecosystem, and are ready to blame everybody else but them.

Apparently the combination of the two got you to assume that I am some sort of Linux fanboy, Windows hater, and thus I needed to be "educated".

As much as your initial assumption was off the mark, I do not regret the debate it spurred, because I think that a lot of useful information was shared, and different points of view, which can be interesting to other readers (which IMO is the main value of forums).

Specifically, your views about Linux and git, and the reasoning you give, suggest to me that in your view what defines good software is how it is architected and designed, not how useful it is. Seemingly, to you it is more important to follow "good practices" (according to your guru of choice) than to make a product that actually solves somebody's problem.

I don't know if this is how you actually think, but this is what it looks like from what I saw in this discussion. And this is just not a view I subscribe to, either as a user or a developer.

https://cloakedthargoid.wordpress.com/ - Random content on hardware, software, games and toys

Reply 173 of 249, by Scali

User metadata
Rank l33t
dr_st wrote:

and are ready to blame everybody else but them.

No, I will blame Microsoft if there is any cause, but so far we have not been able to establish any.

dr_st wrote:

Specifically, your views about Linux and git, and the reasoning you give, suggest to me that in your view what defines good software is how it is architected and designed, not how useful it is. Seemingly, to you it is more important to follow "good practices" (according to your guru of choice) than to make a product that actually solves somebody's problem.

Again, this is the umpteenth example of you extrapolating things for no apparent reason, just as the blame-thing above.
This says a whole lot more about you than it does about me.
Firstly, "good practices" certainly aren't more important than a useful product or solving a problem. Rather, they are a necessary, but not sufficient condition for developing useful products and solving problems. Do you understand the difference?
Secondly, "guru of choice", again, no idea where that came from. Your mind seems to be playing tricks on you, you're projecting an awful lot of things on me.
I certainly do not subscribe to the views of a single person, or any kind of 'guru' or whatever.
Rather, I try to pick up and learn things anywhere and everywhere I can, and I am well experienced enough to understand that even very good software developers ('gurus' if you like) may not always have the best answers to everything. Even they are human, and as such, not infallible.
The key word here is 'critical thinking': never just take things for granted because someone says this, or that company does that, etc. That leads to cargo-cult software development. You should always understand what you are doing, and make informed decisions, which you can defend because they are based on solid logical reasoning.

The above interchange of how to translate GPU specs to performance should be an excellent example of how I understand what I am talking about, where to most people it's more of "this number is larger than that number". That's the result of critical thinking right there.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 174 of 249, by gdjacobs

User metadata
Rank l33t++
Scali wrote:
dr_st wrote:

and are ready to blame everybody else but them.

No, I will blame Microsoft if there is any cause, but so far we have not been able to establish any.

Well, I'm blaming all three. It's a problem with Microsoft's application on AMD's chipset built into HP's hardware. It's up to them to collectively put their heads together and work out a solution. We don't know what process was followed between the three organizations, nor what may have happened within Microsoft, but the only way they would escape blame would be if they were on the "We Should Fix This" side of the argument and were somehow prevented from doing anything (such as releasing an application note) on their own.

All hail the Great Capacitor Brand Finder

Reply 175 of 249, by dr_st

User metadata
Rank l33t
Scali wrote:

Again, this is the umpteenth example of you extrapolating things for no apparent reason, just as the blame-thing above.

Kind of like you extrapolated me to be a "Linux nerd"? 😖

I did say very specifically: "I don't know if this is how you actually think, but this is what it looks like from what I saw in this discussion." Maybe you can review your statements and think why this is how you came across.

Scali wrote:

Secondly, "guru of choice", again, no idea where that came from.

Because the definition of "good practices" often varies between people, even experts. So at some point one has to choose whose ideas "feel more at home" with him, and whose advice he wants to follow (whether a single person, or a group of people, or a "camp"). It is by no means a specific statement about you. We all are like this.

Scali wrote:

Rather, I try to pick up and learn things anywhere and everywhere I can, and I am well experienced enough to understand that even very good software developers ('gurus' if you like) may not always have the best answers to everything. Even they are human, and as such, not infallible.

I could not agree more. That is why, no matter how strongly I may feel about something, I try to refrain from making blanket statements.

Scali wrote:

The above interchange of how to translate GPU specs to performance should be an excellent example of how I understand what I am talking about, where to most people it's more of "this number is larger than that number". That's the result of critical thinking right there.

As long as we are patting ourselves on the back, allow me to do the same: Not you, nor anyone else here, and in fact, very few people in the world have any business teaching me about critical thinking. 😀

https://cloakedthargoid.wordpress.com/ - Random content on hardware, software, games and toys

Reply 176 of 249, by Scali

User metadata
Rank l33t
dr_st wrote:

Kind of like you extrapolated me to be a "Linux nerd"? 😖

No, that was right on the money.

dr_st wrote:

Maybe you can review your statements and think why this is how you came across.

As I already said, you're the one who keeps throwing straw men into my face. The problem is with you, not me, as I have already said. Might have something to do with the above.
So, perhaps this is an excellent time for you to review your own statements, and re-read what you were responding to, and how a lot of things that you are 'responding' to, aren't actually in anything I said.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 177 of 249, by dr_st

User metadata
Rank l33t
Scali wrote:

The problem is with you, not me, as I have already said.

OK, we shall leave it at that. 😀

https://cloakedthargoid.wordpress.com/ - Random content on hardware, software, games and toys

Reply 178 of 249, by Scali

User metadata
Rank l33t
Falcosoft wrote:

Edit: Does your notebook use Super LFM under Aero?

Doesn't look like it... I only see 667 MHz FSB, no matter what I try.
Not sure what my BIOS supports exactly, there are no options to enable or disable, so I didn't even know whether it enabled EIST or not.
I found a nice utility called ThrottleStop, which allows you to see the different options: http://forum.notebookreview.com/threads/the-t … p-guide.531329/

It seems my BIOS didn't enable EIST, and apparently not SLFM either.
One thing is for sure, it allows me to throttle the machine much further than the default options do: If I turn the chipset clock modulation all the way down, the system becomes very slow, and Aero becomes quite unresponsive.

They give an example of SLFM on an X9100 though, where the FSB is indeed halved. So I guess the functionality is there, but not enabled on the simpler Core 2 Duos, like mine. Mind you, since the X9100 is a Core 2 Extreme, I doubt that halving the FSB would destroy Aero performance. Who would accept that on a super-expensive Extreme system? 😀

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 179 of 249, by Falcosoft

User metadata
Rank Oldbie

Hi,
Thanks for the info! This confirms what I have experienced so far. In the case of Intel platforms with shared memory, notebook manufacturers had a choice and I think they chose the right one. Most manufacturers (all I have met) did not enable overly aggressive power saving features. Halved (video) memory bandwidth would have been detrimental to Aero's performance, while full-speed RAM in itself was not a big sacrifice in power consumption. Although it's still not clear whether the earlier Core Duo/GMA950 combo's poor Aero performance was due to power saving features or not. On the AMD side the situation was worse because of architectural peculiarities, so it was impossible to preserve both good performance and good power saving at the same time under Aero (at the efficiency level that was possible on XP). I think manufacturers chose the worse alternative.
On Vista with Aero, disabled 'PowerPlay' and a 1200 MHz core speed (DDR2-480) seems to be the usability threshold. I have never met a mobile K8 platform where a power profile similar to this was the default.
Now I finish thread hijacking...
Bye

Website, Facebook, Youtube
Falcosoft Soundfont Midi Player + Munt VSTi + BassMidi VSTi
VST Midi Driver Midi Mapper