My dCS Varese Review Now on Positive Feedback

Well, even if perceptible musical signals went all the way up to 30kHz, they could be perfectly captured/reproduced with standard sample rates of 88.2k/96k or DSD128.

No DSD256 necessary :laughing:

I wouldn’t consider my sentiments about DSD256 as ā€œnegativeā€. I don’t think anyone on this forum is negative about DSD256, but the notion that we need DSD256 because it sounds better than DSD128 is not based on facts, not to mention the absolute dearth of native DSD256 content.

If dCS announced a hardware upgrade for DSD256 support, I’d honestly think twice unless it came with other more significant sonic advancements, like a new mapper, filters, etc.

1 Like

Excellent explanation

I didn’t say it ā€œdoesn’t matterā€, what I did say was

I am not hating on higher rate DSD (i.e. higher than DSD128), but when folks pursue higher rates (some with religious zeal) on false premises, I feel like they are being taken for a ride. The idea that higher resolution files contain more musical information is a widely held misconception in our hobby, and one that some companies prey on, seemingly quite cynically.

2 Likes

And, if I understand things correctly, higher rate PCM files at least allow dCS more flexible filter options to optimize response for both the frequency and time domains. Filters 5 and 6, I think?

2 Likes

That’s right. IIUC the trade-off between frequency response and impulse response is somewhat unavoidable, but being able to place the filter well away from the audio band allows one to use gentler slopes, which helps.
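To put a rough number on why higher sample rates permit gentler slopes, here is a back-of-the-envelope sketch using the standard Kaiser-window FIR length estimate. This is a textbook approximation, not anything to do with dCS’s actual filter designs, and the frequencies chosen are just illustrative:

```python
import math

def kaiser_fir_taps(stopband_atten_db, f_pass, f_stop, f_sample):
    """Estimate FIR length via the Kaiser formula:
    N ~ (A - 8) / (2.285 * delta_omega), where delta_omega is the
    transition bandwidth in radians/sample."""
    delta_omega = 2 * math.pi * (f_stop - f_pass) / f_sample
    return math.ceil((stopband_atten_db - 8) / (2.285 * delta_omega))

# At 44.1kHz the filter must fall from 20kHz to 22.05kHz (Nyquist):
taps_44k = kaiser_fir_taps(100, 20_000, 22_050, 44_100)

# At 96kHz the transition band can stretch all the way to 48kHz:
taps_96k = kaiser_fir_taps(100, 20_000, 48_000, 96_000)

print(taps_44k, taps_96k)  # the 44.1kHz filter needs far more taps
```

The wide transition band at 96kHz lets a much shorter (and thus better-behaved in the time domain) filter achieve the same stopband attenuation.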

1 Like

Again, Andrew, I believe you are factually wrong about this.

While this may be true up to 20kHz, human hearing for many people extends beyond that frequency. In fact, per the previously referenced study, for 10% of the sample the upper limit extends an astounding 8kHz (40%) above this figure. There are also numerous listening studies extending the ā€œnormalā€ lower bound of audible frequencies.

I return again to the video with Andreas that @Anupc helpfully brought into the thread. One of the top audio engineers in our space, and a co-founder of a top-tier DAC manufacturer that supports both formats, states that DSD256 is the best format for A-to-D.

If there is a reputable source showing I am wrong or have misunderstood the above, I’m happy to read it and learn more.

I further recognize that DSD256 remains a niche format today, and I am not advocating for the even higher (and even more niche) DSD512/DSD1024 formats, which are nearly 100% upsampled post-recording.

I will not pretend to be knowledgeable enough on this topic, nor to have the experience to provide a definitive answer. But from what I do know, DSD has very specific traits as a format that make editing and other processing steps more difficult and much more sensitive to what happens there. This could mean that recording via ADC in DSD256 is the sweet spot for best results in the editing chain. Those considerations don’t have the same impact on the playback chain.
You concur that at some point we cross a line where it’s only marketing and specmanship, and realistically nothing more is gained. We are only stuck on whether that line sits at 128fs or 256fs.

I agree with this completely August. Thank you for bringing us back to earth : ) and re-grounding the debate.

The other point of contention is whether any additional audible information from an analog source is captured at sample rates higher than 44.1 kHz.

Andrew previously asserted that this answer was No. Anu seemed to assert that at DSD128 all audible information is captured:

I don’t know technically/audibly where this limit is and would like to know…

Well, I’m not sure if the analogy is fitting, but in digital photography there is a limit to the number of megapixels that can offer additional resolution, given physical limitations such as sensor size and the resolving power and quality of the optics used. Similarly with TV and film: as I understand it, 8K has not really taken off because, for all intents and purposes, the benefits it offers are nowhere near as great a step forward as the move from standard definition to HD, or from full HD to 4K. You also tend not to see much of a difference anymore on a regular-size TV. Not to mention a lack of content.

Be it eyes or ears, there are limits to what we can perceive. However, the human brain is a powerful processor with sophisticated neural pathways that allow us to process that information and extract very specific micro-elements, for instance the timing or location of a sound, because this has been essential to us during our evolution. Probably something to do with hunting or being hunted. Which in turn is a great foundation for this crazy hobby of ours.

I think we’re going to have to agree to disagree here RG.

Audio is about what we can hear. That means there needs to be musical information, it needs to be high enough in level (i.e. above the noise floor) to be perceptible, the recording and replay chains need to be capable of reproducing it and our ears need to be able to detect it. All of these conditions need to be satisfied, proving one doesn’t prove all.

IMO arguing about whether some people can hear the beating of butterfly wings while ignoring the elephant in the room of aliases of ultrasonic information appearing in the audible range is missing the point. I still maintain it is all about the filtering.
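To make the aliasing point concrete, here is a tiny idealized sketch (plain Python, ideal sampling, no claim about any particular converter): a 50kHz ultrasonic tone sampled at 44.1kHz produces exactly the same samples as an audible 5.9kHz tone, which is why band-limiting and the reconstruction filter matter so much.

```python
import math

FS = 44_100              # sample rate in Hz
F_ULTRASONIC = 50_000    # tone above Nyquist (22.05kHz)
F_ALIAS = F_ULTRASONIC - FS  # 5,900 Hz: where the tone folds down to

# Sample both tones at FS; without band-limiting before the ADC,
# the two sets of samples are indistinguishable.
n = range(200)
ultrasonic = [math.sin(2 * math.pi * F_ULTRASONIC * i / FS) for i in n]
audible = [math.sin(2 * math.pi * F_ALIAS * i / FS) for i in n]

worst = max(abs(a - b) for a, b in zip(ultrasonic, audible))
print(worst)  # ~0 (floating-point noise): 50kHz aliases to 5.9kHz
```

The difference between the two sample sets is zero down to floating-point rounding, so unfiltered ultrasonic content lands squarely in the audible band.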

But I accept that I’m not going to be able to convince you, and that’s fine.

(By the way, without the ā€œpā€ in my first name, you alter the gender :laughing:).

There’s actually no point of contention; it is scientific fact that any band-limited analog signal can be perfectly reproduced from samples captured at more than twice its highest frequency component.

Even if someone happens to have a dog’s hearing ability, there are no studio microphones that have bandwidths above what DSD128 can fully reproduce. Not to mention your Amplifier, and more importantly, your Speakers, very likely have bandwidths well under what DSD128 can reproduce! DSD256 will do nothing for you :grin:

3 Likes

I’m sorry about that Anup! I was never sure if the ā€œPCā€ was added as a computer handle, e.g. Richard-PC. Noted and thank you for clarifying!

Since we are digging into the math, or the maths, as you say on the other side of the pond, my question is this:

What is the minimum digital sample rate/bit depth needed to fully capture an analog signal that oscillates between, let’s say 15Hz and 30kHz?

Is it:
DSD128 = 1 bit @ 5.6MHz
PCM, 24 bit @ 96kHz
Or other

Thank you,
R

96/24 can perfectly reproduce music that contains frequency components as high as 48kHz, with a dynamic range of 144dB (significantly wider than humans can hear; typically thought to be about 120dB).

Whereas DSD, being a 1-bit signal, the Nyquist rate doesn’t quite apply in the same way; but DSD128 can perfectly reproduce music containing frequency components as high as about 50kHz before the noise-shaped signal-to-noise ratio starts to become objectionable, with a dynamic range of about 123dB.
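As a toy illustration of how a 1-bit stream can encode a high-resolution level at all, here is a minimal first-order delta-sigma modulator. Real DSD uses much higher-order modulators with aggressive noise shaping; this sketch and its names are my own simplification:

```python
def delta_sigma_1bit(x, n_samples):
    """First-order delta-sigma modulation of a constant input x in (-1, 1).

    The integrator accumulates the error between the input and the
    previous 1-bit output; averaging (low-pass filtering) the
    bitstream recovers x."""
    state, y = 0.0, 0.0
    bits = []
    for _ in range(n_samples):
        state += x - y              # accumulate input-minus-feedback error
        y = 1.0 if state >= 0 else -1.0
        bits.append(y)
    return bits

bits = delta_sigma_1bit(0.3, 100_000)
avg = sum(bits) / len(bits)
print(avg)  # ≈ 0.3: the average of the 1-bit stream encodes the input
```

The quantization error doesn’t vanish; the feedback loop pushes it up in frequency, which is exactly why the noise floor of DSD rises steeply in the ultrasonic region.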

3 Likes

This ā€œmayā€ be my final question on this, but no promises ; )

Since you and I agree we need to get above 44.1kHz to capture the audible range, and if, as you write, DSD128 can already capture everything perfectly, why do you believe Andreas states unequivocally, at 25:30 in the video, that DSD256 has real advantages in A-to-D?

I can’t be sure, but I suspect it’s mainly because DSD256 has significantly more headroom for post-processing/mastering as compared to DSD128. I’m guessing that’s the same reason why Merging pushes for DSD256 as the primary recording format on their Pyramix DAW.

2 Likes

I had a similar train of thought some posts before. Playback wouldn’t involve the kind of processing that needs that headroom.
Very simply put, a 1-bit DSD bitstream is tricky to edit in the sense that each bit only says whether the signal is higher or lower than at the previous bit, several million times a second. And if you jump into the middle of that bitstream, where should 0 Volts be? Almost as in Leibniz’s monadology, the bits know nothing except what they are relative to their direct neighbour in the flow of things - sorry for the philosophical reference, although Leibniz was a formidable mathematician as well.

Editing is lossy for DSD, so you need headroom to offset that. There are probably white papers or some such I must have read many years ago. Hope I remembered somewhat correctly.

2 Likes

Hi August, it is not exactly clear, at least to me, if and how much the editing process is lossy.

I have read the Pyramix manual (the recording platform for DSD256) and this need to edit is precisely why Merging invented DXD (I don’t think most people know this).

In any event, a good way to think of this is as follows:

Let’s say you have a 5-minute DSD256 recording. If you need to edit, you don’t necessarily need to convert and edit the entire 5-minute file. You can convert, let’s say, 15s of the file into DXD, do what you need to do, and then reconvert to DSD256. There is some ā€œlossā€ in this process, but one has still maintained 95% of the original file as ā€œPure DSDā€ …

The Wikipedia article has some clues under the mixing and mastering heading.

I will try to find a more in-depth article or explanation. We could open a new topic for it.

3 Likes

Do you know how to split the thread August? (I do not…)

Maybe we create one in Audio Systems entitled: DSD deep dive (or similar…)