dCS Ring DAC - A Technical Explanation

Excellent series. One aspect of clocking that I hope you will address has to do with the provision for an externally attached “master clock” for the Vivaldi Clock. What type of clock should this be if used? And what would be its overall value with other components, which rely on clocking, within a consumer audio system? I’m thinking here of network switches, or maybe CD playback or streaming devices.

Part 3 – Jitter (Interface)

If a product is locking to the clock signal of an external source, such as a CD transport connected to a DAC, interference picked up by digital audio cables between the products can smear the transition times of the clock data within the signal – essentially, changing the point in time where a 0 changes to a 1, or vice versa.

Balanced lines help reduce interference induced in the cables. This is why the AES/EBU format uses 110Ω twisted-pair, shielded cable. The shield protects the conductors from most electromagnetic interference (EMI) and shunts whatever it picks up to ground, keeping it out of the signal. Any EMI that does reach the conductors is cancelled: the two conductors carry the signal in anti-phase (180 degrees apart), and because the pair is run together, any noise is induced in both conductors equally. When the receiver takes the difference between the two conductors, the signal adds while the common-mode noise cancels.
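As a rough illustration of that cancellation, here is a minimal numerical sketch (not dCS code; the waveform, frequency and noise level are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1e-6, 1000)                  # 1 µs window
signal = np.sign(np.sin(2 * np.pi * 3e6 * t))   # idealised 3 MHz square wave

# Balanced transmission: the two conductors carry the signal in anti-phase.
hot, cold = +0.5 * signal, -0.5 * signal

# EMI couples into both conductors almost identically (common mode).
noise = 0.2 * rng.standard_normal(t.size)

# The receiver takes the difference, so the common-mode noise cancels
# while the wanted signal adds back to full amplitude.
received = (hot + noise) - (cold + noise)

print("worst-case deviation from the original:", np.max(np.abs(received - signal)))
```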

It’s important to ensure that a cable has the bandwidth needed for digital signals. The square waves carrying the signal have a very fast rise time between the low and high states (the 0s and 1s), and a fast rise time translates to very high frequency content – up in the megahertz range. For this reason, it’s advisable to use good-quality cable that is specifically designed to carry digital audio data: 110Ω for AES transmission and 75Ω for S/PDIF.
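To put a rough number on this, a common rule of thumb for first-order systems relates the 10%–90% rise time to the bandwidth needed to preserve it. The 5 ns figure below is purely an illustrative assumption, not a measured AES3 value:

```python
# Rule of thumb for a first-order system: bandwidth ≈ 0.35 / rise time.
t_rise = 5e-9                      # assumed rise time: 5 nanoseconds
bandwidth_hz = 0.35 / t_rise
print(f"Bandwidth needed ≈ {bandwidth_hz / 1e6:.0f} MHz")   # ≈ 70 MHz
```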

When a digital signal is passed through a cable, the cable will, to a degree, act as a filter. A poorly designed cable, or one unsuited to the interface it is being used with (AES3, for example), can filter high frequencies out of the signal before it reaches the DAC from the source device.

This causes an interaction between any two consecutive data bits within the signal, called intersymbol interference. Depending on the relationship between the first and second of any two bits, the transition between the two can be temporally smeared. The ideal clean vertical line of the square wave becomes more sloped, meaning the exact moment a 0 changes to a 1 or vice versa can be blurred. In short, jitter can be introduced purely from the interactions within the data itself.
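A toy simulation makes this concrete. Below, the same 0-to-1 transition is passed through a simple first-order low-pass standing in for a band-limited cable (the time constant and bit pattern are arbitrary assumptions), once preceded by alternating data and once preceded by a run of 0s. The 50% threshold is crossed at slightly different times – exactly the data-dependent timing shift described above:

```python
import numpy as np

def crossing_time(bits, samples_per_bit=100, tau_bits=0.4):
    """Low-pass-filter a bit pattern and return when the final 0->1 edge
    crosses the 50% threshold, measured in bit periods from the edge."""
    x = np.repeat(bits, samples_per_bit).astype(float)
    dt = 1.0 / samples_per_bit              # time step, in bit periods
    alpha = dt / (tau_bits + dt)            # first-order RC filter coefficient
    y = np.zeros_like(x)
    for i in range(1, x.size):
        y[i] = y[i - 1] + alpha * (x[i] - y[i - 1])
    start = (len(bits) - 1) * samples_per_bit
    return np.argmax(y[start:] >= 0.5) * dt

# The final transition is identical; only the preceding data differs.
print(crossing_time([0, 1, 0, 1, 0, 1]))    # edge after alternating data
print(crossing_time([0, 0, 0, 0, 0, 1]))    # edge after a long run of 0s
```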

If the timing data in the audio signal is being used to lock the DAC’s clock to the source’s clock, this intersymbol interference will have a negative impact on sound quality, as it can introduce jitter to the DAC’s clock. However, if the audio system is making use of a Master Clock, and the timing information embedded in, for example, the AES3 signal is no longer being used, the effects of intersymbol interference are negated. While the same filtering effect in the cable and interactions within the data occur, the intersymbol interference does not cause jitter. This is because the Word Clock signal being sent from a Master Clock is regular and does not change like an AES signal does.

It is worth noting that as the PLL used in a dCS product is slow-acting, and the clock recovery circuits used are very capable, the effects of intersymbol interference are minimised in cases where the DAC needs to lock to the clock information embedded in the audio signal (such as instances where a Master Clock is not available).

The next post will discuss clock synchronisation, including how a Phase Locked Loop can be used to synchronise two different clock domains, such as those in a DAC and a connected transport.

Part 4 - Clock Synchronisation


Part 4 – Clock Synchronisation

There is a problem posed when multiple digital audio devices, each with their own internal clocks, need to work together. Take the example of feeding a CD transport into a DAC. The DAC has a buffer – a section of temporary memory which stores the audio samples it receives from the CD transport. The transport’s clock dictates when a sample is sent out to the DAC, and the DAC’s clock dictates when the sample is used and converted to an analogue voltage.

In an ideal world, the clocks in the DAC and transport would run at exactly the same rate with no time variations. In reality, however, the two clocks will, on average, run at slightly different rates (potentially because of the intrinsic jitter factors discussed earlier). This poses a problem somewhat different to jitter.

If the clocks run at different average rates over a long period, and are left to their own devices with no way of synchronising the two, one of two things will eventually happen: either the DAC’s buffer runs out of samples, because the transport is sending them too slowly or the DAC is using them too quickly, or the buffer overflows, because the transport is sending samples too quickly or the DAC is using them too slowly. Either way, the audio will drop out temporarily, as the DAC must drop everything and re-lock to the audio signal to get samples flowing properly again.
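A toy model shows how small a mismatch is enough. All figures below are illustrative assumptions, not measurements of any real transport or DAC:

```python
# Toy model: a transport and a DAC each run from their own clock.
# Even a tiny rate mismatch eventually over- or under-runs the buffer.
source_rate = 44100.5      # samples/s produced by the transport (illustrative)
dac_rate = 44100.0         # samples/s consumed by the DAC (illustrative)
buffer_fill = 500.0        # samples currently in the buffer
buffer_size = 1000.0       # total buffer capacity in samples

t, dt = 0.0, 0.1           # simulated time and step size, in seconds
while 0.0 < buffer_fill < buffer_size:
    buffer_fill += (source_rate - dac_rate) * dt
    t += dt

state = "overflowed" if buffer_fill >= buffer_size else "ran dry"
print(f"Buffer {state} after about {t:.0f} seconds")   # ≈ 1000 s here
```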

There are two main ways to address this issue. Firstly, there are pieces of timing information embedded within the digital audio signal that the transport gives out in S/PDIF or AES format. The DAC can look at this timing information and adjust the speed of its own clock to match. This means the clocks of the source device and DAC will now be running at the same rate, so dropouts will no longer occur.

The second method that can be employed is to lock both the source and DAC to a Master Clock. A Master Clock is a unit which sits external to all other units in a system and provides a clock signal, referred to as Word Clock, to the rest of the system. The internal clocks of all other units within the system can then be locked to this signal, meaning that on average, they are running at the same rate as the Master Clock. This means that at no point should the DAC suffer from dropouts or re-locks due to the buffer under or overflowing, as on average, the samples are being sent from the source device at the same rate as they are being consumed by the DAC.

The common factor between these two methods is that they both require a way of synchronising an incoming signal with the product’s internal clock, by way of a PLL. There are a number of DACs in the high-end market which do not have the ability to match their clock domain to that of an incoming source, as the oscillator(s) run at a fixed frequency. This means that the unit will drop or repeat samples every now and again (definitely not desirable behaviour) and will have variable latency, so it cannot be used for video because of the resulting lip-sync drift.

As an aside, it is worth noting that the use of a Master Clock in a dCS system does not replace the internal clock inside the DAC. It simply acts as a stable reference for the DAC to lock itself to, and allows the DAC and source to be properly synchronised without issues such as intersymbol interference causing jitter within the audio data. The DAC’s internal clock still dictates when samples are converted; it simply adjusts its frequency over time to match that of the Master Clock. This means the DAC still benefits from having a high-quality clock close to the DAC circuitry. The clock directly controlling the audio remains part of a tightly controlled environment, while also being in sync with the rest of the system.

Phase Locked Loops (PLL)

A Phase Locked Loop, or PLL, is a circuit that works to match the frequency of an incoming signal with that of an outgoing signal. PLLs are often used to synchronise a DAC’s internal clock to that of an incoming signal, such as S/PDIF from a CD transport. A ‘phase detector’ in the PLL compares the phase of the incoming S/PDIF signal with that of the DAC’s internal clock. Its aim is to get the phase error as low as possible, ensuring that over time the two clocks run at the same average rate, and the DAC’s buffer never under- or overflows.
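In software terms, the core mechanism looks something like the sketch below: a deliberately simplified proportional-plus-integral loop with made-up gains and update rate. It is not the dCS design, just an illustration of how a local clock can be steered to track a reference:

```python
# Minimal software PLL sketch (illustrative values, not the dCS design):
# a local oscillator steers its frequency so its phase tracks a reference.
ref_freq = 44100.09        # incoming clock rate in Hz, slightly off-nominal
nominal = 44100.0          # DAC's nominal clock rate in Hz
kp, ki = 1.0, 0.1          # proportional/integral loop gains (assumed)

dt = 0.01                  # control-loop update interval, seconds
ref_phase = local_phase = integrator = 0.0
local_freq = nominal

for _ in range(10000):                       # simulate 100 seconds
    ref_phase += ref_freq * dt               # cycles accumulated by reference
    local_phase += local_freq * dt           # cycles accumulated locally
    error = ref_phase - local_phase          # phase detector output (cycles)
    integrator += ki * error * dt            # loop filter: integral term
    local_freq = nominal + kp * error + integrator

print(f"local clock settled at {local_freq:.4f} Hz (reference {ref_freq:.4f} Hz)")
```

The integral term is what lets the loop drive the long-term phase error to zero even when the reference frequency is offset from nominal.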

The most common place to see a PLL in an audio product is within an ‘off-the-shelf’ S/PDIF receiver chip. Such a chip is used on the S/PDIF input of a product and typically combines an S/PDIF-to-I2S block with a PLL. Using a third-party solution like this can give rise to some issues. With such a chip, it can be very difficult to separate out the functions of signal conversion and clock domain matching, which becomes problematic when attempting to use a Word Clock signal as the clock master for the DAC. What’s more, if the performance of the chip isn’t up to scratch, it is impossible to change it. AES clock extraction is a good example. This is actually quite difficult to do well: because of the structure of the deliberately illegal codes within the signal, it is easy to induce jitter from the channel block marker that occurs every 192 samples. (The structure of S/PDIF/AES is beyond the scope of this post, but in essence the signal deliberately breaks the coding ‘rules’ by having runs of three 0s or 1s in a row, for various reasons, including giving the PLL something to lock to.)

At dCS, we’ve taken a different approach. dCS DACs still use a PLL, but it is a hybrid design, developed entirely in-house. Part of the PLL is digital, implemented as DSP inside the product’s FPGA, and part of it is analogue. This lends an enormous amount of flexibility and a much higher level of performance, and it is completely independent of the input source. We are also able to do things like dramatically alter the bandwidth of the PLL: a wide bandwidth allows the DAC to lock to a source very quickly, and the bandwidth can then be tightened over time to reduce jitter.
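Extending the earlier toy PLL, the “wide then narrow” behaviour might be sketched as below. The gain schedule, jitter level and timings are invented for illustration; the real FPGA implementation is of course very different:

```python
import numpy as np

# Toy "wide then narrow" PLL (assumed gains, not the real dCS DSP): a wide
# loop bandwidth pulls in quickly, then the bandwidth is reduced so that
# jitter on the incoming reference is filtered more aggressively.
rng = np.random.default_rng(1)
nominal, ref_freq = 44100.0, 44100.09
dt = 0.01
ref_phase = local_phase = integrator = 0.0
local_freq = nominal
late_wander = []

for step in range(20000):                             # 200 simulated seconds
    t = step * dt
    kp, ki = (4.0, 1.6) if t < 5.0 else (0.1, 0.002)  # wide, then narrow
    ref_phase += ref_freq * dt
    local_phase += local_freq * dt
    jitter = 1e-4 * rng.standard_normal()             # jitter on reference edges
    error = (ref_phase + jitter) - local_phase        # the detector sees the jitter
    integrator += ki * error * dt
    local_freq = nominal + kp * error + integrator
    if t > 150.0:
        late_wander.append(local_freq)

# With the gains held at their wide settings, this spread would be far larger.
print("frequency spread once the loop is narrowed:", np.std(late_wander))
```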

This approach ensures that, within a dCS product, the clock and data paths remain independent. One part of the product’s FPGA works solely to extract the clock embedded in, for example, the incoming AES signal (again, using a bespoke design rather than an off-the-shelf chip); another part retrieves the audio data, another routes it, another processes it, and so on.

This gives us a tremendous amount of flexibility in how we handle, for example, Dual AES: we can run the signal, have a separate Master Clock input, have the DAC act as the Master Clock for the whole audio system, tolerate different cable lengths in Dual AES, and deal with phase offset between clock and audio. All of this can be done without adding latency to the audio, meaning it can still properly integrate with video. We are also able to embed commands in the non-audio bits of AES, which allows us to have, say, the Vivaldi DAC (a non-network-equipped product) controlled by the dCS Mosaic Control app.

This diagram shows a simplified example of how a digital source (a Rossini Transport), a DAC (the Bartók Headphone DAC) and a master clock (the Rossini Clock) work together. The overall performance of the system is reliant on each of these stages performing correctly – each oscillator, PLL and output stage needs to operate at a high level to achieve optimum performance.

Clock Dither

The setting for Dither can be found on the dCS Rossini and Vivaldi Clocks. Dither is commonly used in digital audio, where it is applied in the amplitude domain to retain resolution below the level of the least significant bit. In the aforementioned Clocks, however, the dither is applied in the time domain instead of the amplitude domain.

PLLs exhibit what is known as a ‘dead band’ in their phase detectors. When the input and output frequencies are close to being synchronised, they lose sensitivity. The PLL then drifts until the difference in frequency is large enough to cause the phase detector to once again become active and drive the PLL back towards being synchronised.

This is where the dither comes in. Perhaps counter-intuitively, applying very small, random variations to the timing of the clock signal edges when the phase error is very low gives the PLL something to latch on to and correct, as it pushes the phase error back into the region where the phase detector works well. The dither is then filtered out in the PLL before it outputs the final clock signal. In practical listening this is a good trade-off and actually improves system performance. In essence, the dither setting on the Rossini Clock keeps the Bartók DAC’s clock very accurate even when the PLL is working in a low-phase-error region where its detector is less sensitive.
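A toy model of a dead-banded phase detector illustrates the effect. The dead-band width, drift rate, gain and dither level are all invented for illustration and bear no relation to the actual Rossini or Vivaldi Clock circuitry:

```python
import numpy as np

# Without dither the loop only reacts once the phase error has drifted right
# across the dead band; with dither the detector stays active, so the average
# phase error is much smaller (the dither itself is later filtered by the loop).
rng = np.random.default_rng(2)
dead_band = 0.010      # detector is blind below this phase error (arbitrary units)
drift = 2e-5           # uncorrected phase drift per step
gain = 0.02            # loop correction gain

def mean_phase_error(dither_amplitude, steps=100000):
    error, correction, total = 0.0, 0.0, 0.0
    for _ in range(steps):
        error += drift - correction
        seen = error + dither_amplitude * rng.standard_normal()
        correction = gain * seen if abs(seen) > dead_band else 0.0
        total += abs(error)
    return total / steps

print("mean phase error, no dither:  ", mean_phase_error(0.0))
print("mean phase error, with dither:", mean_phase_error(0.010))
```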

Part 5 – Asynchronous Sources – USB & Network Audio


Great stuff. This post is missing the link to the subsequent post at the end
Thanks
Rudi

The subsequent post is an explanation of what dCS Apex is… so only a few days until we can read it :laughing:

Part 5 – Asynchronous Sources – USB & Network Audio

Audio sent over an asynchronous format (such as streaming to a smartphone via Spotify, playing content from a NAS via Roon, or playing music from a computer via USB) is, to an extent, the exception to the rules stated in the previous posts, in that jitter is not a factor for the audio data, at least until it reaches the endpoint and is converted back to the relevant format (such as PCM or DSD).

With network audio, the protocol used to send audio data over a network is TCP (Transmission Control Protocol). The data to be transmitted from one place to another – in this case a piece of music – is split up into multiple ‘packets’. These packets contain not only the data itself (the ‘payload’), but also information on where it has come from, where it is going, how many packets it is part of, and how those packets should be reassembled to get the original data back unchanged.
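As a loose illustration of that idea (a simplified structure made up for this post, not the real TCP header layout):

```python
from dataclasses import dataclass

@dataclass
class Packet:
    source: str          # where the data came from
    destination: str     # where it is going
    sequence: int        # this packet's position within the stream
    total: int           # how many packets make up the whole track
    payload: bytes       # the slice of audio data itself

def reassemble(packets):
    """Rebuild the original byte stream from packets received in any order."""
    ordered = sorted(packets, key=lambda p: p.sequence)
    return b"".join(p.payload for p in ordered)
```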

Take, for example, a track from Qobuz being streamed to a dCS Bartók DAC. If a packet of data is lost or corrupted, TCP allows the Bartók to simply request that packet again. When all the correct packets have been received by the Bartók, they are unpacked back into the correct data format (PCM, for example) and buffered before being fed to the DAC. This stage, the unpacking and buffering, effectively removes any timing link between the TCP packets and the resulting audio signal. (Read that sentence again, as it’s very important.)
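That decoupling can be sketched in a few lines (packet sizes, arrival times and sample rate below are arbitrary assumptions): however irregularly the packets arrive, the instant each sample is converted depends only on the DAC’s own clock.

```python
import random

random.seed(0)
sample_rate = 44100.0
buffer = []

# Packets arrive whenever the network delivers them: bursty and irregular.
arrival_time = 0.0
for seq in range(100):
    arrival_time += random.uniform(0.001, 0.050)    # network-dependent gaps
    buffer.extend(range(seq * 64, (seq + 1) * 64))  # 64 samples per packet

# Playback, however, is paced purely by the DAC's own clock.
conversion_times = [n / sample_rate for n in range(len(buffer))]
print(f"{len(buffer)} samples buffered; last one converts at "
      f"{conversion_times[-1]:.3f} s of DAC-clock time, regardless of when "
      f"the packets arrived (last arrival: {arrival_time:.3f} s)")
```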

Once the data has been buffered in the Bartók, the factors discussed above become relevant again. The data is now directly dictated by the Bartók’s clock, and as such, jitter becomes a factor. The accuracy of the Bartók’s clock controls when the DAC converts the samples back to analogue voltages, so it has a direct impact on audio quality. Until the data reaches that point, however, jitter is simply not a factor from an audio perspective.

Asynchronous USB audio works in a similar way. There is no timing link whatsoever between the source, such as a computer, and the endpoint, such as a Bartók. It does not matter if, while the USB data is being transferred, the bits are not perfectly spaced as a clean square wave. Provided the bits are received by the Bartók correctly (a 1 isn’t misread as a 0, for example), the timing is largely irrelevant. This is because, as with network audio, the data is buffered before being fed to the DAC. It is not until this point that timing becomes a factor, as this is where it is converted back from USB format to digital audio (e.g. PCM or DSD).


Indeed. Thanks for all this James. Superb information.

It means there is no need for any clocking system before the data are processed by the Upsampler, am I right?


Sounds so to me.

Correct – at least in terms of how clocking is discussed here. A digital device such as a network switch still has a clock, but for very different purposes that have nothing to do with audio. The idea of externally ‘clocking’ something that is spitting out asynchronous data, like a network switch, is a bit of a contradiction.


Does it also mean that my expensive AudioQuest Vodka RJ45 cable isn’t any more useful than a standard Cat 6 cable? I am already sweating while waiting for your answer :laughing:

But while you’re still awake, I’ll take the opportunity to say that it is a real privilege to sometimes have a chat with the designers of the great gear I use every day to listen to music.
I share what Greg wrote in another thread, saying that he trusts the dCS engineers. Me too, and I’m impatient to hear the new Apex :+1:

From a few of the more objective folks I have spoken to, I believe the Ethernet cable and switch are not in the critical signal path and won’t impact sound. They make fine audio jewelry. I am not an anti-cable guy – I have a full high-end loom of interconnects.


I think the detail about buffering is really significant.

While I’ve heard some noticeable differences using high-end switches with other audio equipment, my Bartok seems pretty immune to any noise from the network. Every part of the design seems immaculately thought through and executed.

PS I agree, it is great to have the developers contribute on here :slight_smile:


In theory, everything sounds logical…
But still, there are other kinds of problems, since different network equipment affects the final sound quality. To deny this is to deny the experience of a huge number of enthusiasts and the market for audiophile network devices (even if some of them are snake oil).

Hmm, that sounds tautological to me. There may be reasons why poorly constructed networks, network components, non-compliant cables, or noise sources produce audible results in systems, and some of those reasons may be psychoacoustic, and some may even be real, but the presence of a market to sell to those perceptions does not in this hobby prove the reality. And with respect to James’ point, those upstream devices/components/cables/actions do not affect the timing of the (asynchronous) musical information. If one has a problem switch, that is causing an obnoxious ground loop for example, or some other noise, then by all means, one should fix that! But that is quite different from claiming that clocking a network switch makes a difference in SQ. [Full disclosure: I had an EtherREGEN in my Vivaldi speaker system, which I had purchased for its isolation capabilities, and to compare with my GigaFoil 4. And just for shits & giggles, I clocked it with both my Perf10 and Kronos1 reference clocks. Audibly, it produced nothing different from the GigaFoil. But it was fun. Eventually, I removed the ER, because it always ran hot, even on its own little metal stand that I made for it, and it routinely locked up on me, requiring a hard reboot. Those two problems were consistent in both my speaker system and headphone rack.]


An obnoxious ground loop… Greg, I missed you these last months, but now I know you are back :laughing:

I learn new English words with almost every post you write… that’s fun :wink:


For this reason, even a full reclock, buffer, and the rest described above does not solve all the problems. This means that the Ethernet interface is not absolutely stable and immune.
I have a basic quality switch, a good linear power supply and a recommended certified cable, but one Melco changes the game. I have tried others with varying degrees of success.
I know that some manufacturers have gone a little further and added optical USB and Ethernet. I think I already wrote about the experiences of enthusiasts… :thinking: :slightly_smiling_face:


It means that the Ethernet interface is immune to clocking issues.
I’d like to learn more about noise, where it comes from, and how it propagates to the analog signal path.

A.

As would I. But at this stage of the explanation that James has been methodically publishing, the discussion was about timing. Getting that clarity is important. If there are ways that noise can ride along the Ethernet delivery system, get through the dCS interface, and then into the analog chain, I would love to know that.

I agree that we should not assume that Ethernet is “perfect”; to do so would be to deny the possibility of future improvement. But the fact that there are “unsolved problems” does not prove anything about the Ethernet interface. It’s a version of the old truism: “absence of evidence is not evidence of absence.” Proving you’ve got unsolved problems in your system doesn’t prove there is anything wrong with Ethernet.

And what does this even mean: “does not solve all the problems”? What problems? What problems have been identified that require solving? When one speaks in such generalities, it conveys no information to someone who hasn’t sat in the room with you, heard what you heard, and then heard your explanation of what problems were identified and then which ones were solved and which remain unsolved.

It might mean we have to keep exploring how it is that changes in a network switch for example can purportedly improve some aspect of the musical information. But this is not voodoo. It’s science. The question should be: “if we think we hear a change, what is it that changed that produces the outcome?” If there is to be an actual explanation, one on which logical improvement can be based, we have to understand how. Otherwise, each box/cable-builder is just throwing darts blindfolded. And asking customers to accept a lot on faith. That’s no way to run a railroad. Whether it’s tighter tolerance resistors, a cable weaving pattern, better copper, faster relays, different PSUs, improved isolation, etc., even if the method is accidental, at some point it has to be explicable, and reproducible in order to be anything more than confirmation bias. Importantly, I am not saying it has to be measurable. Eventually, an actual change will be measurable, but I am comfortable with the mantra that “not everything we hear is measurable, and not every measurement matters.” But that’s always an “interim state.” The goal should be to understand how and why. Because it’s the how and why which is the foundation of future improvement. Otherwise, blindfolded darts.

For the very same reasons that, to my ears, network playback generally sounded better than USB (and for good reason), I believe there must be better ways to encode, transmit, and decode musical information. I have great faith in the advancement of technology, and I can’t even imagine what quality of musical reproduction will be available to my grandchildren. But for now with what we know, the evidence that a network switch or other upstream device can improve musical information is really thin if not non-existent. Such devices might improve a specific system by solving a problem, e.g., noise, and thereby improving the presentation in that system. But not by changing the quality of the musical information.


Greg,
I remember the story with Apple, jailbreaking and Cydia. Many options that were not in iOS were implemented by enthusiasts, and then they appeared in later versions of iOS.
I just want to draw attention to the fact that there is an influence, and it is better for the manufacturer to understand this issue.
And yes, I will always first try the basic things recommended by the manufacturer, and only if I am convinced by some consensus of reviews on the Internet will I try something myself. Trying everything out there – life isn’t long enough for that )))
