Rossini Clock, dither switch on or off, what sounds the best, is there a difference?

I have the Rossini DAC and Clock. On my Clock there are switches for Dither 1 and 2. Is there a difference in sound quality when they are on or off?

The purpose of the switch is to let you make up your own mind. Only you will know which sounds the best to you or if you can’t hear any difference. There is no “right” answer.


The idea with this function, in short, is that a small amount of dither is inserted to exercise the clock's crystal oscillator, making it perform fast self-corrections to the locked frequency so it does not slide slightly out of sync.

I feel the best performance comes from using dither for a short while and then turning it off when you want the best possible timing focus. If you feel that the timing and focus fade away one day, just turn it on again for a while.
From a technical point of view, dither stresses the clock by making it correct repeatedly, so it can wear slightly faster, and to my ears you get a cleaner clock signal with it set to off, so use it only from time to time.

That is my best advice from experience.


I don’t believe that’s the case at all.

The Dither in dCS’ clocks does not affect the clock accuracy or the stability of the (reconstructed) clock signal within the DAC, since the Dither is random in nature.

If I’m not mistaken, the Dither is designed to improve the Phase-Lock-Loop’s (PLLs) recovery of the incoming clock signals by forcing the clock signal’s edges out of any “dead-zones” within the PLL’s operating range, thus allowing the PLL to better converge on the clock signal. So, best to leave Dither ON all the time in my opinion.

It has nothing to do with short or longer term clock stability or accuracy, which is what you’re suggesting.

I’m sure the dCS folks can confirm either way.

I believe that is correct. I remember that explanation from dCS when their first home audio system clock (the Verona) was released. In those early days I was loaned one for a short period but could not hear a significant difference when it was installed in my (then) Verdi/Elgar Plus/Purcell Plus rig. Later I changed my mind and purchased one, and have retained a clock in the dCS stacks that I have owned since.

The idea of dithering the clock was (from memory) suggested to dCS by a PhD student who was gaining experience during a summer vacation placement at dCS. This was a new concept and had not existed in the preceding professional system clocks. Hence a switch was provided so that users may make up their own minds whether to use it or not. Obviously some users may object to adding a small amount of noise (dither).

My own experience is that initially I could not even hear the addition of the clock per se! That was a long, long time ago and I could now not happily listen without one (as I found out back when my old Paganini Clock went back to the factory for upgrading to Clock 2 and I was without one for a couple of weeks).

To me the effect of adding dither is extremely subtle. Of course it does not extend the bass by an octave nor let you know what colour socks the guitar player is wearing :slightly_smiling_face: It is not an “effect”. That switch still provides a useful option.

This isn’t an easy one to answer without the use of some diagrams I am afraid, but I’ll do my best!

When synchronising two clocks, for example a Rossini Clock and that in a Rossini DAC, a Phase-Locked-Loop (PLL) is used in the DAC. This employs a ‘phase detector’ to essentially match the phase of the incoming master clock signal with the DAC’s internal clock. It tries to get the phase error as low as possible.

Phase detectors work very well when the phase error is quite high (where the two clock signals are a fair bit out of phase), but ironically they lose sensitivity as they get very close to the target phase. This is where the dither comes in: perhaps counter-intuitively, if you apply very small, random variations in the timing of the clock signal edge when your phase error is very low, it gives the PLL something to latch on to and correct (as it pushes the phase error slightly back into the area where the phase detector can correct well). This dither is then filtered out in the PLL before it outputs the final clock signal. In practical listening this is a good trade-off and actually improves system performance.

In essence, the dither setting on your Rossini Clock keeps your Rossini DAC’s clock very accurate even when it is working in a less sensitive area.
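To make the dead-zone idea concrete, here is a toy simulation. It is entirely my own sketch: the detector model, the gain, the dead-zone width and the drift rate are invented for illustration and have nothing to do with dCS's actual loop design. It compares the steady-state phase error of a dead-zoned loop with and without edge dither:

```python
import random

# Toy model (my own illustration, NOT dCS's actual design): a phase detector
# whose correction collapses to zero inside a small "dead zone" around zero
# phase error, so tiny errors go uncorrected and drift accumulates.
DEAD_ZONE = 0.010   # errors smaller than this are invisible to the detector
GAIN = 0.2          # loop correction gain per step (arbitrary units)

def detector(seen_error):
    """Correction the loop applies for the error it can actually see."""
    return -GAIN * seen_error if abs(seen_error) > DEAD_ZONE else 0.0

def mean_error(dither_amp, steps=6000, drift=0.0005, seed=1):
    """Average |phase error| in steady state, under a constant slow drift."""
    rng = random.Random(seed)
    error, tail = 0.05, []
    for i in range(steps):
        error += drift                             # incoming clock drifts away
        jitter = rng.uniform(-dither_amp, dither_amp)
        error += detector(error + jitter)          # detector sees dithered edge
        if i >= steps // 2:                        # skip the initial transient
            tail.append(abs(error))
    return sum(tail) / len(tail)

print(f"mean |phase error| without dither: {mean_error(0.0):.4f}")
print(f"mean |phase error| with dither:    {mean_error(0.015):.4f}")
```

Without dither the error parks at the edge of the dead zone; with dither the dithered edges keep landing where the detector still responds, so the loop settles to a smaller average error.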


I’ve been reading up on external clocks recently and found that many people like to use a high-accuracy external 10MHz reference with their Vivaldi clocks. According to manufacturers of these reference clocks, a key attribute for audio reproduction is phase noise.

Based on what @James said above, it sounds like an extremely low phase error may actually not be advantageous, since it will keep the PLL in the less sensitive zone.

Anyone have thoughts on this? Am I misinterpreting something?

A related question is then: does turning on dither make the use of an external 10MHz reference with ultra-low phase noise pointless?

Not trying to ruffle any feathers, I just find this topic very interesting and I’m curious how these concepts interact!

Here are my thoughts on your question:

The Dither introduced is random in nature, so it would average to zero over time. Having an external 10MHz clock source which is more precise, more stable, and with less jitter than dCS’ own internal clock could still improve that “zero point” frequency for the PLLs to lock onto. So, introducing a better 10MHz external clock is not mutually exclusive with dither, IMHO.
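As a quick sanity check of the "averages to zero" point (purely illustrative, not a model of any dCS circuit): the running mean of zero-mean random dither shrinks as more clock edges go by, so it contributes no long-term frequency offset.

```python
import random
import statistics

# Zero-mean uniform dither, one sample per clock edge (illustrative only).
rng = random.Random(42)
dither = [rng.uniform(-1.0, 1.0) for _ in range(100_000)]

# The running mean tends toward zero as the window grows (~1/sqrt(N)).
for n in (100, 10_000, 100_000):
    print(f"mean of first {n:>7} dither samples: {statistics.fmean(dither[:n]):+.5f}")
```

The same logic is why random dither cannot bias the average frequency the PLL locks to, even when left on permanently.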

That said, since dCS only publishes the accuracy specs, not stability or jitter for their clocks, it’s a bit of a trial and error process on getting hold of a “better” external 10MHz master that might positively impact sound quality :wink:

Jeff, my own experience is that dither “on” on both base rates sounds best to me, and the addition of an external clock added additional depth of image and focus. Not as much as the Vivaldi Clock itself, but an audible improvement to an already magnificent illusion. I acknowledge I was probably influenced/encouraged in my thinking by some of the posts at WBF, where there is an ample oversupply of collegial lunacy. :grin: However, I also note that my non-audiophile wife noticed the improvements as well without prompt [I’m not claiming that proves anything; she may have simply picked up on my enthusiasm, but she’s not known for authenticating BS].

Phase noise is a rather complex one and difficult to explain here, but in essence how much of an impact it actually has on the Master Clock output is dependent on the type of PLL used in the Master Clock. If it uses a fast acting, tight loop, the quality of the Master Clock output is very dependent on the characteristics of the reference clock input. In this case, factors like phase noise become very relevant, as they will have a direct impact on the Master Clock output.

On the other hand, you may have a slower acting loop where factors like jitter are much more dependent on the local oscillator inside the Master Clock, but as a result the loop is less susceptible to inheriting jitter from the reference clock signal. dCS products use this type, with a slower acting PLL. What this means is that if you are feeding a reference clock into the Vivaldi Clock, phase noise isn’t really an issue, as short term issues like this are averaged out over time.
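The fast-versus-slow loop distinction can be sketched numerically. This is an assumed first-order loop with invented numbers, not dCS's actual filter: model each PLL as a first-order low-pass tracking the reference's edge-timing error, and compare the output jitter for a high and a low loop bandwidth.

```python
import random
import statistics

def output_jitter(loop_alpha, ref_jitter_rms=1.0, steps=20_000, seed=7):
    """RMS timing error at the loop output when fed a jittery reference.

    loop_alpha is the fraction of the observed error corrected per step:
    large alpha = fast/tight loop, small alpha = slow loop.
    """
    rng = random.Random(seed)
    out, samples = 0.0, []
    for _ in range(steps):
        ref_error = rng.gauss(0.0, ref_jitter_rms)  # reference edge jitter
        out += loop_alpha * (ref_error - out)       # nudge output toward ref
        samples.append(out)
    return statistics.pstdev(samples)

print(f"fast loop (alpha=0.5):  output jitter ~ {output_jitter(0.5):.3f}")
print(f"slow loop (alpha=0.01): output jitter ~ {output_jitter(0.01):.3f}")
```

The fast loop passes much of the reference's jitter straight through, while the slow loop averages it away; in the slow case the output quality is instead dominated by the local oscillator (not modelled here).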

Not quite, although I can see where you are coming from. Low phase error is always better than high phase error, but low phase error with dither is better than low phase error without dither.
It is also worth noting that in testing the dCS PLLs do not exhibit this dead-zone, yet listening tests with clock dither show that it does provide an audible improvement.

Correct, and we also filter it out before the final output of the clock signal so there is no trace of it on the actual clock output.

Technically we do publish these. The accuracy spec is jitter over time: it shows by how many parts per million, or ppm, the clock output can deviate. As far as I am aware this is an industry standard measurement, so it should give a good indication of one clock’s jitter and stability against another. The guaranteed accuracy of better than ±1ppm for our current Master Clocks holds over a temperature range of 10°C to 30°C, so you have that accuracy against both time and temperature factors.
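For anyone who wants the ±1ppm figure in absolute terms, the arithmetic is straightforward (the frequencies are the audio clock rates discussed in this thread; the rest is plain unit conversion):

```python
# What +/-1 ppm accuracy means in absolute terms for audio-rate clocks.

def ppm_offset_hz(nominal_hz, ppm):
    """Worst-case frequency offset in Hz for a given ppm accuracy."""
    return nominal_hz * ppm / 1_000_000

for f_hz in (22_579_200, 24_576_000):   # 512 x 44.1 kHz, 512 x 48 kHz
    print(f"{f_hz} Hz at +/-1 ppm -> +/-{ppm_offset_hz(f_hz, 1):.4f} Hz")

# 1 ppm also bounds long-term drift: 86400 s/day x 1e-6 = 86.4 ms per day.
print(f"worst-case drift per day at 1 ppm: {24 * 60 * 60 * 1e-6 * 1000:.1f} ms")
```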

On the 10MHz clock topic… One thing I would like to point out (which I am sure many on this thread are aware of, just to add to the collective knowledge here) is that a 10MHz clock signal is not inherently better than a clock signal generated by a dCS Master Clock. On the contrary, if you are generating a clock signal specifically for use in audio products, it makes far more sense to use a direct multiple of the signal you are clocking, which in audio means direct multiples of 44.1kHz and 48kHz. We use clocks centered at 22.5792MHz and 24.576MHz. These rates are 2^9 (512×) the base frequencies, meaning they can easily be divided down to the audio rates in powers of 2. 10MHz master clocks use rate multipliers to convert to digital audio rates, which generate more jitter and tend to have a dirtier spectrum. As far as I have ever been able to find out, the reason 10MHz is prevalent in clocking is likely historical and due to it simply being a round number. There is no real rationale for it in audio, unlike the rates we use.
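The divide-down arithmetic is easy to verify; nothing is assumed here beyond the frequencies quoted above:

```python
from fractions import Fraction

# 22.5792 MHz and 24.576 MHz are exactly 2^9 (512x) the two audio base
# rates, so any standard audio rate is reached by clean divide-by-two stages.
for master_hz, base_hz in ((22_579_200, 44_100), (24_576_000, 48_000)):
    ratio = master_hz // base_hz
    print(f"{master_hz} Hz / {base_hz} Hz = {ratio} = 2^{ratio.bit_length() - 1}")

# A 10 MHz reference has no power-of-two relationship to 44.1 kHz,
# so a fractional multiplier/divider stage is unavoidable:
print("10 MHz : 44.1 kHz =", Fraction(10_000_000, 44_100))
```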


Thanks for the response @James!

This is a very interesting piece of information. Does this imply that with the dCS master clock there’s no real advantage to using a 10 MHz reference clock with phase noise of say -115 dBc/Hz vs -110?

How does this point relate to using a 10 MHz reference clock with the dCS master clock?

Much less so than with a master clock using a faster acting PLL. It is my understanding that phase noise will skew the transient of the clock signal (the nice clean edge of the square wave) in the time domain, essentially meaning that phase noise causes jitter. As the phase noise, by definition, is random, it means that it will over time average out. Thus, the jitter introduced by the phase noise will average out.

Given that a dCS Master Clock has a slow acting PLL, the immediate short term variations in the incoming reference clock signal do not get passed on down the chain to affect the DAC as they would with a fast acting PLL. Furthermore, as the jitter from the reference clock is averaged out over time, it doesn’t cause any longer term issues for the Master Clock (such as the DAC’s buffer overflowing or emptying, causing a dropout in the audio).

If you had a fast acting PLL in the Master Clock, a reference clock with a phase noise of -115dB/Hz would cause less jitter in the Master Clock’s output than a reference clock with a phase noise of -110. In the case of feeding a reference clock into a dCS Master Clock however, the more important factors to look at will be absolute accuracy (jitter performance) of the reference clock.

It stands to reason that for use in an audio system, a 10MHz clock will face challenges in terms of accuracy that a clock running at 22.5792MHz / 24.576MHz will not. It would seem counterintuitive to feed a Master Clock with a reference which is less accurate than the one in the Master Clock itself, as it will not be bringing anything to the table.

My suggestion is to look at the absolute accuracy of a clock to judge its suitability. Inserting a 10MHz reference clock into a Vivaldi Clock simply because the input is available will not necessarily yield any benefits if the reference clock is not more accurate than that of the Vivaldi Clock (for example, as has been mentioned elsewhere on the forum, the much lower price point 10MHz clocks that tend to get a lot of traction on audiophile forums). Clocks with the necessary level of accuracy do exist, but usually in satellite systems, telecoms and the like, typically in far more controlled environments than our listening rooms.


As I understand it, it arises from the broadcast / studio industry deciding to use the 10MHz clock output from suitably high-quality GPS receivers to synchronize digital paths across geographically separate sites.


Learn something new every day, cheers Mike! :slight_smile:

@James I got my clock today from John Quick and set it up… been playing around with it and definitely like the sound better with the Dither on, but someone above mentioned that it can cause additional wear on the clock crystals… is this the case? Should I use it sparingly?

Also, what benefit (if any) does one gain from connecting the RS232 9pin cable between the clock and the Rossini, just the sleep switch?

Sam, that’s a myth; it’s not the case at all.

The random Dither function comes after the VCXO generates a stable clock; it doesn’t alter the frequency of the crystal oscillator itself. You can happily leave Dither on permanently (it’s been on in my Vivaldi Clock for nearly 8 years now :grin:).


So you just run it 24/7? Interesting, ok… gonna try it and see how it goes. Always exciting to press more buttons and see more lights! :joy:

The main Power switch on my Clock has always been On, but the unit is in Sleep mode when I’m not listening. When taken out of Sleep, it’s usually instantly at the right operating temperature. :grinning:


I have been running my wordclock(s) with dither on permanently since around the turn of the century, or, more precisely, from two years after the Verona was released, when I purchased one. OK, that covers three wordclocks sequentially (Verona, Paganini and Vivaldi), but dithered all of the time.

As Anup correctly said earlier:

It has no impact on the clock crystals. See also James’ and Gibraltar’s discussion above in this thread.


Sleep mode seems only to switch off the fascia display panel. That is why the clock is always at the correct temperature when you bring it out of sleep mode.
