dCS Ring DAC - A Technical Explanation

The following post is an explanation of what dCS Apex is… so allow a few days to read it :laughing:

Part 5 – Asynchronous Sources – USB & Network Audio

Audio sent over an asynchronous format (such as streaming to a smartphone via Spotify, playing content from a NAS via Roon, or playing music from a computer via USB) is, to an extent, the exception to the rules stated in the previous posts, in so much as jitter is not a factor for the audio data, at least until it reaches the endpoint and is converted back to the relevant format (such as PCM or DSD).

With network audio, the protocol used to send audio data over a network is called TCP (Transmission Control Protocol). The data which is to be transmitted from one place to another –
in this case a piece of music – is split up into multiple ‘packets’. These packets contain not only the data itself (the ‘payload’), but tags on where it has come from, where it is going, how many packets it is part of and how these packets should be reassembled to get the original data back unchanged.
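The packet-and-reassembly idea above can be sketched in a few lines. This is purely illustrative (real TCP lives in the operating system's network stack, and the function names here are invented for the example), but it shows why sequence tags let the receiver rebuild the original data bit-perfectly even when packets arrive out of order:

```python
import random

def split_into_packets(payload: bytes, size: int):
    """Split a payload into (sequence_number, chunk) packets."""
    return [(seq, payload[i:i + size])
            for seq, i in enumerate(range(0, len(payload), size))]

def reassemble(packets):
    """Reorder packets by sequence number and concatenate the chunks."""
    return b"".join(chunk for _, chunk in sorted(packets))

audio_data = b"PCM samples of a music track..."
packets = split_into_packets(audio_data, 8)

random.shuffle(packets)                   # the network may deliver out of order

assert reassemble(packets) == audio_data  # bit-perfect after reassembly
```

The sequence numbers, not the arrival order or arrival timing, determine the reconstructed stream.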

Take, for example, a track from Qobuz being streamed to a dCS Bartók DAC. If a packet of data is lost or compromised, then under the TCP protocol the Bartók can simply request that packet again. When all the correct packets have been received properly by the Bartók, they are unpacked back to the correct data format (PCM, for example) and buffered before being fed to the DAC. This stage, the unpacking and buffering, effectively removes any timing link between the TCP packets and the resultant audio signal. (Read that sentence again, as it’s very important.)

Once the data has been buffered in the Bartók, the factors discussed above become relevant again. The data is now being directly dictated by the Bartók’s clock and as such, jitter becomes a factor. The accuracy of the Bartók’s clock then controls when the DAC converts the samples back to analogue voltages, so has a direct impact on audio quality. Until it reaches that point, however, jitter is simply not a factor from an audio perspective.

Asynchronous USB audio works in a similar way. There is no timing link whatsoever between the source, such as a computer, and the endpoint such as a Bartók. It does not matter if, while the USB data is being transferred, the bits are not perfectly spaced as a clean square wave. Provided the bits are received by the Bartók correctly (a 1 isn’t misread as a 0, for example) the timing is largely irrelevant. This is because, as with network audio, the data is buffered before being fed to the DAC. It is not until this point that timing becomes a factor, as at this point, it has been converted back from USB format to digital audio (eg PCM or DSD).
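The decoupling role of the buffer, for both network and asynchronous USB audio, can be shown with a minimal sketch (the arrival times below are made-up numbers): samples arrive with irregular spacing, but playback is paced solely by the local clock reading from the buffer in order.

```python
from collections import deque

buffer = deque()

# Samples arriving over USB or network at arbitrary, uneven times (seconds).
arrivals = [(0.000, 10), (0.013, 20), (0.014, 30), (0.041, 40)]
for _arrival_time, sample in arrivals:
    buffer.append(sample)             # any arrival-time jitter ends here

# The DAC's own clock now pulls samples at perfectly even intervals.
played = [buffer.popleft() for _ in range(len(buffer))]

assert played == [10, 20, 30, 40]     # values and order come from the buffer,
                                      # not from when the packets arrived
```

From this point on, only the quality of the DAC's local clock matters, which is exactly the claim in the posts above.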

9 Likes

Indeed. Thanks for all this James. Superb information.

It means there is no need for any clocking system before the data are processed by the Upsampler, am I right?

1 Like

Sounds so to me.

Correct. At least in terms of how clocking is discussed here. A digital device such as a network switch still has a clock but for very different purposes that are nothing to do with audio - the idea of externally ‘clocking’ something that is spitting out asynchronous data like a network switch is a bit of a contradiction.

4 Likes

Does it also mean that my expensive AudioQuest Vodka RJ45 cable isn’t more useful than any Cat 6 cable? I am already sweating while waiting for your answer :laughing:

But while you are still awake, I will take the opportunity to tell you that it is a real privilege to sometimes have a chat with the designers of the great gear I use every day to listen to music.
I share what Greg wrote in another thread, saying that he trusts the dCS engineers; me too, and I am impatient to hear the new Apex :+1:

From a few of the more objective folks I have spoken to, I believe the Ethernet cable and switch are not in the critical signal path and won’t impact sound. They make fine audio jewelry. I am not an anti-cable guy; I have a full high-end loom of interconnects.

2 Likes

I think the detail about buffering is really significant.

While I’ve heard some noticeable differences using high-end switches with other audio equipment, my Bartok seems pretty immune to any noise from the network. Every part of the design seems immaculately thought through and executed.

PS I agree, it is great to have the developers contribute on here :slight_smile:

1 Like

In theory, everything sounds logical…
But still, there are other kinds of problems, since different network equipment affects the final sound quality. To deny this is to deny the experience of a huge number of enthusiasts and the market for audiophile network devices (even if some of them are snake oil).

Hmm, that sounds tautological to me. There may be reasons why poorly constructed networks, network components, non-compliant cables, or noise sources produce audible results in systems, and some of those reasons may be psychoacoustic, and some may even be real, but the presence of a market to sell to those perceptions does not in this hobby prove the reality. And with respect to James’ point, those upstream devices/components/cables/actions do not affect the timing of the (asynchronous) musical information. If one has a problem switch, that is causing an obnoxious ground loop for example, or some other noise, then by all means, one should fix that! But that is quite different from claiming that clocking a network switch makes a difference in SQ.

[Full disclosure: I had an EtherREGEN in my Vivaldi speaker system, which I had purchased for its isolation capabilities, and to compare with my GigaFoil 4. And just for shits & giggles, I clocked it with both my Perf10 and Kronos1 reference clocks. Audibly, it produced nothing different from the GigaFoil. But it was fun. Eventually, I removed the ER, because it always ran hot, even on its own little metal stand that I made for it, and it routinely locked up on me, requiring a hard reboot. Those two problems were consistent in both my speaker system and headphone rack.]

2 Likes

An obnoxious ground loop… Greg, I missed you these last months, but now I know you are back :laughing:

I learn new English words with almost every post you write… that’s fun :wink:

2 Likes

For this reason, even a full reclock, buffer, and the rest described above do not solve all the problems. This means that the Ethernet interface is not absolutely stable and immune.
I have a basic quality switch, a good linear power supply, and a recommended certified cable, but one Melco changes the game. I have tried others with varying degrees of success.
I know that some manufacturers have gone a little further and added optical USB and Ethernet. I think I already wrote about the experiences of enthusiasts… :thinking: :slightly_smiling_face:

1 Like

It means that the Ethernet interface is immune to clocking issues.
I’d like to learn more about noise, where it comes from, and how it propagates to the analog signal path.

A.

As would I. But at this stage of the explanation that James has been methodically publishing, the discussion was about timing. Getting that clarity is important. If there are ways that noise can ride along the Ethernet delivery system, get through the dCS interface, and then into the analog chain, I would love to know that.

I agree that we should not assume that Ethernet is “perfect”; to do so would be to deny the possibility of future improvement. But the fact that there are “unsolved problems” does not prove anything about the Ethernet interface. It’s a version of the old truism: “absence of evidence is not evidence of absence.” Proving you’ve got unsolved problems in your system doesn’t prove there is anything wrong with Ethernet.

And what does this even mean: “does not solve all the problems”? What problems? What problems have been identified that require solving? When one speaks in such generalities, it conveys no information to someone who hasn’t sat in the room with you, heard what you heard, and then heard your explanation of what problems were identified and then which ones were solved and which remain unsolved.

It might mean we have to keep exploring how it is that changes in a network switch, for example, can purportedly improve some aspect of the musical information. But this is not voodoo. It’s science. The question should be: “if we think we hear a change, what is it that changed that produces the outcome?” If there is to be an actual explanation, one on which logical improvement can be based, we have to understand how. Otherwise, each box/cable-builder is just throwing darts blindfolded. And asking customers to accept a lot on faith. That’s no way to run a railroad. Whether it’s tighter tolerance resistors, a cable weaving pattern, better copper, faster relays, different PSUs, improved isolation, etc., even if the method is accidental, at some point it has to be explicable, and reproducible, in order to be anything more than confirmation bias. Importantly, I am not saying it has to be measurable. Eventually, an actual change will be measurable, but I am comfortable with the mantra that “not everything we hear is measurable, and not every measurement matters.” But that’s always an “interim state.” The goal should be to understand how and why. Because it’s the how and why which is the foundation of future improvement. Otherwise, blindfolded darts.

For the very same reasons that, to my ears, network playback generally sounded better than USB (and for good reason), I believe there must be better ways to encode, transmit, and decode musical information. I have great faith in the advancement of technology, and I can’t even imagine what quality of musical reproduction will be available to my grandchildren. But for now with what we know, the evidence that a network switch or other upstream device can improve musical information is really thin if not non-existent. Such devices might improve a specific system by solving a problem, e.g., noise, and thereby improving the presentation in that system. But not by changing the quality of the musical information.

4 Likes

Greg,
I remember the story with Apple, jailbreaking and Cydia. Many options that were not in iOS were implemented by enthusiasts, and then they appeared in later iOS releases.
I just want to draw attention to the fact that there is an influence, and it is better for the manufacturer to understand this issue.
And yes, I will always first try the basic things recommended by the manufacturer, and only if I am convinced by some statistics of reviews on the Internet will I try something myself. To try everything in a row – life is not long enough)))

2 Likes

Fair point. I was an early jailbreaker on some of my early iPhones. And I do think that some of that enthusiasm probably helped accelerate the path for Apple to get real apps on the phone rather than mere web widgets. I might quibble that jailbreaking was a direct response to Apple’s very constrictive early customer models, rather than a shortcoming in the science of telecomputing, but the deficiencies of some of their early antennae are also on point.

Please don’t misunderstand me. I am an all-in supporter of enthusiasts in any hobby/activity. Customer-driven innovation is a good thing. Let’s keep hammering on the conventional wisdom. But with rationality. There was much to despise about early USB audio for example. And much to hope for if not embrace immediately about early network audio. But the improvements that have been made over the last thirty years are based on technology. It’s one thing for the customer to say “I don’t know how it works, but I know it is better” (Rossini, meet Vivaldi), but it’s another thing altogether for the maker to be unable to explain the engineering and for someone else to be unable to replicate the result. This speaks directly to your point:

Spot on. There isn’t time to try everything. Doesn’t matter how “golden-eared” someone is. For me, there needs to be some rational framework to explain the possibility before I will convert enjoyment time to non-enjoyment assessment time.

To push your memory metaphor a bit further, we could each jailbreak our phones using the same method, install the same software, and then report in a forum that it worked for one of us but not the other. There was always an explanation, some mistake or misstep along the way that could explain the differential outcome. It wasn’t always rectifiable—ah, the fun of jailbreaking—but it was understandable. Similarly, the same phone could get good reception in one area and lousy in another. Cell network problems? Maybe. But what about the antenna? That turned out to be absolutely the case in some phone models. If someone can find the equivalent in Ethernet, I am all ears. :grinning:

3 Likes

I believe manufacturers need to do a combination of both. You improve as much as you can through understanding the science, but you still need to look (listen) at the result - with, at times, contradictory results as we don’t understand the science of everything yet.

It’s much the same way with other areas of engineering such as the development of lithium batteries and superconductors. Many of the advances have been based on ‘what happens if we replace element a with element b?’. The results couldn’t have been predicted with our current understanding, but some worked (and many more failed).

1 Like

That’s not at all what I was referring to, and it’s not what “blindfolded darts” means. Trying new things, without knowing what the results might be, is a perfectly valid method of experimentation. Though it is rarely “blind.” It is usually an educated guess. One does not swap copper for vulcanized rubber and expect to improve the conductivity of one’s medium. One selects a new alternative based on what one knows about the alternative, though one might not be able to predict the outcome of the new combination. If it’s better, keep it, record it, and move the R&D forward. If it’s worse or the same, record it, and also keep trying. The scientist-engineer controls for and records the inputs and outputs. And the results are verifiable. Not blindfolded darts. Experimentation is not a blindfolded process, and even accidental discovery rarely is. Simply claiming that something changes/improves/worsens something without an explanation for how, though, leaves the customer with zero reference point. That’s blindfolded. And it’s how way too many reviewers and customers and dealers seem to feel comfortable operating in this hobby. Over in digital photography, lens sharpness and color fidelity are verifiable, as are pixel light gathering and shutter recording speeds. One can still prefer the color and contrast of a Rodenstock lens over a Schneider, but one will have definitive reference points for explaining what one’s eyes see. With cars, “feels faster” and “is faster” are two decidedly different things, but both are explicable with objective data. My e-tron GT is decidedly not faster than either my R8 or RS7, but it feels as though it is, and I know exactly why. There is no reason we could not enjoy that framework in audio if it weren’t that our industry and much of the customer base depend on it being otherwise.

Granted, patent rights tend to muck up this abstract discussion a bit, because inventors are incentivized to protect their secret sauce until it enjoys legal protection (those that refuse to explain and also fail to patent are telling us something, whether we want to admit it or not). We may not know exactly why one “thing” (whatever it is) should be better than another. But the thing’s characteristics, including its measured properties, can give us a clue. Regrettably few manufacturers tell us much about their products—which actually does tell us much about their products—but those that do, allow us to make educated guesses about which might actually produce an improvement in our particular system. Again, that’s not blindfolded darts.

3 Likes

[edited for brevity]

I honestly doubt any manufacturer would be mad enough to have no goal to work towards and I didn’t suggest otherwise.

We’re agreed that an educated guess is better than a complete guess, but educated guesses don’t always work out (again, because we don’t understand everything there is to know about science), so I stand by my opinion that at times people ‘get lucky’, even if with hindsight people might understand why that approach worked. I did not say this is how it always or even usually happens, just that it happens. I’m not sure we actually disagree on this.

Your point that ‘One does not swap copper for vulcanized rubber and expect to improve the conductivity of one’s medium’ appears disingenuous. As the opposite has already been proven, it would be downright silly to try this.

Always needing an explanation of why something works assumes that an explanation is readily to hand and, as you point out, protection of intellectual property often gets in the way. The reality is we can’t explain everything we measure, otherwise we would be able to explain how Quantum Entanglement works (as opposed to merely demonstrating that it exists).

1 Like