rasz_pl wrote on 2022-02-13, 11:08:
Adrian himself has trouble fully understanding sampling theorem https://youtu.be/8ts5J09Y7Gc?t=1380 or at least articulating it. "top and bottom, terrible sounding" audiophoolery full steam ahead.
Adrian's limited understanding of the sampling theorem is also supported by his opinion on the sin(x)/x reconstruction method. On the other hand: if you don't have a proper reconstruction low-pass (i.e. a filter that removes the alias frequencies introduced in the DAC), you will actually get those dreaded, terrible-sounding staircase-like signals. The sampling theorem doesn't say: "If you reproduce a sampled signal, you get a bandwidth-limited signal that perfectly reconstructs the original bandwidth-limited signal" (even if the original signal was perfectly bandwidth-limited before sampling). Instead, it just says: "There is only one bandwidth-limited signal that matches these samples. If you somehow manage to create a bandwidth-limited signal that hits all the samples, you get back the original bandwidth-limited signal." The sampling theorem doesn't help at all with how to reconstruct a bandwidth-limited signal.
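That uniqueness statement can be made concrete with Whittaker-Shannon (sinc) interpolation. This is a toy numerical sketch, not a claim about what any real DAC does: summing one sinc pulse per sample yields a band-limited signal that passes through every sample exactly.

```python
import numpy as np

def sinc_reconstruct(samples, fs, t):
    """Whittaker-Shannon interpolation: evaluate the unique signal
    band-limited to fs/2 that passes through all the given samples."""
    n = np.arange(len(samples))
    # Sum over samples of x[n] * sinc(fs*t - n); np.sinc is the
    # normalized sinc, which is 1 at 0 and 0 at all other integers.
    return np.sum(samples * np.sinc(fs * np.asarray(t)[:, None] - n), axis=1)

fs = 8.0                                  # toy sample rate (Hz)
n = np.arange(16)
x = np.sin(2 * np.pi * 1.0 * n / fs)      # 1 Hz sine, well below Nyquist
# Evaluating at the sample instants returns the samples themselves:
x_rec = sinc_reconstruct(x, fs, n / fs)
print(np.max(np.abs(x_rec - x)))          # ~0: the curve hits every sample
```

The point is that an ideal reconstruction needs every sample to contribute to every output instant; a staircase (zero-order hold) output only uses the nearest past sample, which is exactly why it needs an analog low-pass afterwards.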
And guess what: a lot of sound cards with variable-rate playback do not have variable roll-off low-pass filters on the output. As a teen, I was very surprised when a friend demonstrated to me that "22kHz sounds better than 11kHz": he started with an 11kHz wave file of an electronic phone ring tone, which sounded very harsh when played back through his AD1848-based sound card. He then used the sample rate converter built into Windows 95 to upsample the file to 22kHz and played it back again, which produced a considerably cleaner sound. Obviously, Windows did a better job of removing aliases when upsampling than the analog filtering on the sound card did.
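A plausible back-of-the-envelope illustration of that effect (the exact rates, 11025/22050 Hz, and the tone frequency are my assumptions, not from the anecdote): without output filtering, a tone at f played back at rate fs produces spectral images at k*fs ± f. Upsampling pushes the first image out of the most sensitive part of the hearing range, so a fixed (or absent) analog filter suddenly "works" much better. This ignores the zero-order-hold sinc envelope, which only attenuates the images somewhat.

```python
# Image frequencies of a tone in an unfiltered staircase (zero-order hold)
# DAC output: copies of the tone appear at k*fs - f and k*fs + f.
def zoh_images(f_tone, fs, k_max=2):
    images = []
    for k in range(1, k_max + 1):
        images += [k * fs - f_tone, k * fs + f_tone]
    return images

print(zoh_images(3000, 11025, 1))  # [8025, 14025] -> 8 kHz image, harsh
print(zoh_images(3000, 22050, 1))  # [19050, 25050] -> near/above audibility
```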
rasz_pl wrote on 2022-02-13, 11:08:
mkarcher wrote on 2022-02-13, 09:21:
The sample rate of 48MHz makes it completely useless for anything exceeding 24MHz (because those signals will be aliased to fake signals below 24MHz), and if the analog bandwidth of 20MHz is true (no reason to doubt that, building a 20MHz analog frontend is very cheap and easy nowadays), it most likely doesn't do a good job of removing all components exceeding 24MHz (passing everything up to 20MHz, but sufficiently blocking 24MHz is a quite complicated and expensive thing).
For scopes listed BW is not a cutoff but a 3 DB drop, etched in stone as an ieee norm.
Indeed. But AFAIK, the norm doesn't require a specific steepness of the input low-pass (whether it's an intentional low-pass or just inherent in the scope design). 6dB/octave behaviour (i.e. a first-order low-pass) is common, but steeper roll-offs are possible, and they are sorely needed if the -3dB point is that close to the Nyquist frequency. I have one of those 1GS/s scopes with the cheap 60MHz frontend bandwidth limit, and I am indeed able to measure the frequency of a 125MHz signal, but I wouldn't trust the indicated amplitude or curve shape in any way. This shows that slow roll-off is quite common in scope frontends (and with Nyquist at 500MHz, that's not a problem at all).
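To put a number on that: assuming the 60MHz frontend really is a simple first-order low-pass, a 125MHz signal is only attenuated by about 7dB, i.e. it still arrives at roughly 43% of its true amplitude. That is plenty for a frequency counter to lock onto, but useless for reading the amplitude off the screen.

```python
import math

def first_order_mag_db(f, f3db):
    """Magnitude response of a first-order low-pass, in dB:
    |H(f)| = 1/sqrt(1 + (f/f3db)^2)."""
    return -10 * math.log10(1 + (f / f3db) ** 2)

# 60 MHz first-order frontend, 125 MHz input signal (assumed values):
print(round(first_order_mag_db(125e6, 60e6), 1))  # -7.3 dB, ~43% amplitude
```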
rasz_pl wrote on 2022-02-13, 11:08:
mkarcher wrote on 2022-02-13, 09:21:
Another rule of thumb is that depending on your requirements on fidelity, the sample rate needs to be 3 times ("well, I just want to get an estimate of the amplitude and see whether there are hickups in the clock") to 10 times ("I want to see whether the rise time of the suspected square wave matches the specs") the maximum signal rate, so you get down to 4.8 to 16MHz maximum usable frequency.
Due to slow filter roll-off and potential for aliasing, if you have good anti alias filter and flat frequency response close to nyquist is fine. This listed 20MHz analog BW figure is most likely just a measured property of poorly "designed" analog front end consisting of a trimmer cap, two poly fuses and CD4051 switching in attenuation. Without proper anti-aliasing filter everything on the screen is garbage, especially if you are forced to start using sample rates below >2x natural filtering of the front end.
To be fair, "proper anti-aliasing" at lower sample rates is something even the big-brand scope manufacturers get wrong. Adrian alluded to an old Tektronix series of digital scopes, most likely the TDS 210 / TDS 220 series. If you slow down the time base, the screen starts to show the textbook example of aliasing without any attenuation or filtering: if you look at a 1Vpp 1.001MHz signal at a time base slow enough that the scope samples at 50kHz, you will see a 1kHz sine wave at the full 1Vpp (I observed that first-hand in a physics lab). Newer digital scopes are better in this regard, for example by offering a min/max mode that doesn't just show a single sample taken at a very short point in time, but the actual minimum and maximum during the interval of a pixel, or even intensity-graded displays (with marketing names like "MegaZoom" or "Digital Phosphor Oscilloscope"). I fully agree with you on how the frontend is likely to look. The cap (why a trimmer at all? Is anyone going to complain if the -3dB point is at 22MHz instead of 19.5MHz?) and the 1MOhm input impedance form a first-order low-pass, and that's it. You just don't get a better frontend at that price point.
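The 1kHz figure in that lab observation follows directly from frequency folding: a sketch of the arithmetic, with the tone folded into the first Nyquist zone.

```python
def alias_freq(f, fs):
    """Apparent frequency of a tone f after sampling at rate fs with no
    anti-alias filter: the tone folds into the range 0..fs/2."""
    f_mod = f % fs
    return min(f_mod, fs - f_mod)

# 1.001 MHz signal sampled at 50 kS/s, as in the TDS example above:
print(alias_freq(1.001e6, 50e3))  # 1000.0 -> a clean-looking 1 kHz "sine"
```

Since the sampling itself does not attenuate anything, the alias appears at the full 1Vpp, exactly as observed.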
rasz_pl wrote on 2022-02-13, 11:08:
mkarcher wrote on 2022-02-13, 09:21:
Thus, an experienced electronics engineer can tell directly from the banner specs that a "cheap 20 MHz bandwidth, 48MHz sample rate" digital oscilloscope is generally useful up to 5 MHz, might be good enough up to 10MHz, but is very likely completely useless above 15MHz (due to unavoidable aliasing of not suffiently suppressed harmonics).
Hey, its like you read my YT comment from 6 days ago under Adrians video 😀
In fact, I didn't. It's just an indicator that we either joined the same school of thought or there is some general truth to it.
rasz_pl wrote on 2022-02-13, 11:08:
mkarcher wrote on 2022-02-13, 09:21:
My impression of the presentation of that device is: The hardware delivers what an expert could expect
would you expect glitching at rated sample rate that only goes away after going down to <4x lower frequency?
17 years ago!! there was this thing called USBee AX pretty similar BOM and schematic, same Cypress FX2LP bridge, 8bit DAC, genuine opamps in the front end. Sample rate of 16Msps and realistic BW limited to 3MHz in order to be actually useful. [... ] Considering over 15 year old product did better something is wrong with hardware design of Hantek. My guess is using overclocked ~5MHz ADC as is typical in Chinese products.
Hmm, talking about op-amps: I guess Adrian's device will have some moderately high-frequency op-amp in the frontend, too. Otherwise, you won't get below something like the 0.1V/div scale. I can't imagine the ADC having more than 8 bits, so at a typical reference voltage of 2.5V, you get roughly 10mV/step (give or take a factor of 2 for bipolarity, for a 1.25V reference). That would be 10 steps per division at 0.1V/div, making 0.05V/div with just 5 steps/div borderline and 0.02V/div completely useless.
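A quick check of that vertical-resolution argument (8-bit ADC and a 2.5V full-scale reference are my assumptions about the device, as above):

```python
# Steps of an N-bit ADC per screen division at a given vertical scale.
def steps_per_division(v_per_div, v_ref=2.5, bits=8):
    lsb = v_ref / 2 ** bits        # ~9.8 mV per ADC step for 8 bit / 2.5 V
    return v_per_div / lsb

for scale in (0.1, 0.05, 0.02):
    print(f"{scale} V/div -> {steps_per_division(scale):.1f} steps/div")
# 0.1 V/div -> ~10 steps, 0.05 -> ~5 (borderline), 0.02 -> ~2 (useless)
```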
No, I don't expect glitching at the rated sample rate. I hadn't watched the second video at that time and attributed the glitches just to the software not being up to the job. The OpenHantek software, on the other hand, clearly shows that at 48MS/s the USB streaming to Adrian's computer is unreliable. Getting stable software triggering when there are missing chunks in the stream is impossible, of course. I don't think the glitch issue is caused by the ADC being overclocked (which is still very much possible), but by something on the digital side (e.g. the firmware relying on specific USB host chip timing, or bad USB signal integrity causing lost frames).