Hello folks. I have heard people on the dCS forums saying that 6V RMS has a better S/N ratio than 2V.
If anyone knows why this is the case, can someone try to explain it?
Theoretically, the base signal will always be at the lowest voltage and in its purest form.
Any amplification or attenuation should have an adverse effect on signal purity, which should theoretically degrade the signal rather than enhance it.
And I also feel that in the digital domain, dCS should have much more control over this process and the factors that manage the S/N ratio.
To add to this, I sometimes wonder why this needless 6V standard for XLR even came up in the first place, when 2V was plenty sufficient.
Because at 6V with DAC-direct you get bit loss, or a reduction from the volume control.
Alternatively, if you run with a preamp, we have to rely on the preamp's gain offset or volume attenuator to bring the final output down to listenable levels.
Wouldn't there be an attenuation loss if you take a 6V signal and try to bring the voltage down, even if it's done in the analogue domain?
So why start with such a high voltage level from the DAC in the first place?
Why can't we have the same purity and S/N ratio at a lower voltage of 0.2V or 2V?
This is like answering a question about cars. Is a car with a top speed of 210 mph better than one with a top speed of 190 mph? The answer is that it is irrelevant for getting from A to B.
Getting from A to B for us is enjoying music. The S/N numbers for dCS products show that the residual noise is inaudible.
Why 6V? For direct connection (no pre-amp) to some older tube power amps, a high input signal may be required. Some users prefer using 6V with a preamp that could be properly driven with a much lower input voltage. If they prefer it then fine. But it is only a preference.
dCS happen to quote S/N at 6V. For the Vivaldi DAC that is -113dB0. A great achievement, but not all that significant for day-to-day listening, when the ambient noise in your listening room even at its quietest (35-40 dBSPL if you are lucky) is way louder than the dCS' residual noise.
Not sure if the speed analogy is applicable or not.
My main question was: whatever that S/N number is, if it can be achieved at the higher voltage, I was curious what the inhibiting factor or limitation is at the lower voltage of 2V, where the S/N value is lower.
But that’s why Pete’s metaphor is apt. Whether 2V or .2V might have a better S/N ratio is largely irrelevant.
Greg, the point is not about the relevancy of the spec; there is no end to the "relevancy of any spec" argument.
I am curious, from an implementation point of view, what is so special about 6V, or what it is from an engineering standpoint that makes the S/N at 2V lower than at 6V.
Ergo, Pete’s answer. dCS lets the user choose. I don’t think there’s anything special about 6V, but it’s an option. From my point of view, I use 6V because it behaves better with the volume range that I happen to find useful with my passive pre. Maybe I am misunderstanding the purpose of your original post Hari. I understand curiosity about the performance differences between .2/2/6, and perhaps dCS will chime in as to why, but I observe that differences in S/N don’t necessarily mean that one signal is more “pure” than another, precisely because of Pete’s metaphor.
Leaving the purpose of my post, the relevancy of S/N, and audible and perceptible listening impressions aside:
I am just asking, if S/N is some value X at 6V, why does it have to be reduced below X at 2V?
Errmmm… SNR (or S/N)… SIGNAL-to-Noise ratio. Since the system noise is relatively flat w.r.t. the output voltage, I'm sure you don't need help making the logical leap as to why 6V has a higher SNR than 2V.
As to its relevance, far far more important to focus on properly matching the setting to your Pre-amp/Amp’s input sensitivity and impedance characteristics.
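To see the arithmetic behind that "logical leap", here's a quick sketch. The 10 µV noise floor below is a made-up illustrative figure, not a dCS spec; the point is only that if the noise stays fixed while the full-scale output voltage rises, the SNR in dB rises by 20·log10 of the voltage ratio.

```python
import math

def snr_db(signal_v_rms: float, noise_v_rms: float) -> float:
    """Signal-to-noise ratio in dB for RMS voltages."""
    return 20 * math.log10(signal_v_rms / noise_v_rms)

# Hypothetical flat noise floor of 10 microvolts RMS (illustration only).
noise = 10e-6

print(round(snr_db(2.0, noise), 1))   # 106.0 dB at 2V
print(round(snr_db(6.0, noise), 1))   # 115.6 dB at 6V
print(round(snr_db(6.0, noise) - snr_db(2.0, noise), 2))  # 9.54 dB difference
```

Whatever the actual noise floor is, the 2V-to-6V gap is always the same 20·log10(6/2) ≈ 9.54 dB, because the noise term cancels out of the difference.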
Thank you @Anupc. I learnt something new, that the noise is relatively flat. Makes sense.
Good to be schooled and reminded, no problem.
Say the input sensitivity of a power amp is 2V and the dCS is connected directly to the power amp; does that mean the output voltage of the dCS should also be 2V?
When a power amp is rated at 2V sensitivity, that's the input voltage that will drive the amplifier to full rated power. So yes, the dCS should be configured for 2V output (keeping in mind that this will generate full rated output when the dCS volume control is at 0.0dB).
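A small sketch of that matching logic, in case it helps. The function below just computes by how many dB a source's full-scale output exceeds an amp's input sensitivity; a positive number is the attenuation you'd need somewhere (DAC volume or preamp) to avoid exceeding full rated power at 0.0dB.

```python
import math

def excess_gain_db(source_v_rms: float, sensitivity_v_rms: float) -> float:
    """dB by which the source's full-scale output exceeds the amp's
    input sensitivity. Positive values mean attenuation is required
    before the amp reaches full rated power."""
    return 20 * math.log10(source_v_rms / sensitivity_v_rms)

print(round(excess_gain_db(2.0, 2.0), 2))  # 0.0  -> matched, no headroom wasted
print(round(excess_gain_db(6.0, 2.0), 2))  # 9.54 -> ~9.5 dB of attenuation needed
```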
@Anupc thanks for explaining.
But then what is the purpose and point of having 6V when most power amps have an input sensitivity around 2V?
- The preamp is going to bring the voltage down again by attenuation, which seems unnecessary.
- And in DAC-direct use, we would not be using enough bits of the volume control at nominal listening levels; the voltage/dB going out of the DAC will still be lower at most normal listening levels anyway.
I don’t really know come to think of it
dCS has always supported at least two voltage levels for their DAC analogue outputs as far as I can remember, 2V (peak) being the low, traditional line-level voltage. I'm not sure how 6V was chosen; maybe something from dCS' professional side in the past.
How’s this relevant to your setup though? Are you exploring a new amp/pre-amp?
Yes, it's not just dCS; all the DAC companies are outputting 6V from XLR these days, and some of the others don't even have a selectable output voltage.
Yes, I am exploring a few amp/pre-amps.
Yes, 2V or 6V has always been selectable with dCS DACs. Thinking about the professional aspect, the standard line-level output definition for professional equipment is not the same as for domestic equipment. Unfortunately the difference is not exactly the same as that between 2V and 6V. Perhaps the result was an engineering compromise for practical reasons (the two have different references anyway, pro being measured in dBu, domestic in dBV)?
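For anyone curious about those two references: dBu is referenced to 0.7746V RMS (the voltage that dissipates 1mW into 600Ω), while dBV is referenced to 1V RMS. A quick sketch of the conversions, using the commonly quoted +4 dBu pro and -10 dBV consumer nominal levels as examples:

```python
import math

V_REF_DBU = math.sqrt(0.6)  # 0.7746 V RMS: 1 mW into 600 ohms
V_REF_DBV = 1.0             # 1 V RMS

def dbu_to_vrms(dbu: float) -> float:
    return V_REF_DBU * 10 ** (dbu / 20)

def dbv_to_vrms(dbv: float) -> float:
    return V_REF_DBV * 10 ** (dbv / 20)

print(round(dbu_to_vrms(4), 3))    # 1.228 -> pro nominal level +4 dBu
print(round(dbv_to_vrms(-10), 3))  # 0.316 -> consumer nominal level -10 dBV
```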
I think that what you are seeing is nothing to do with a standard (and it certainly is not applicable to all). It is just a natural outcome of differential balanced outputs v. single-ended, as the balanced option sums the voltage output of the two halves of the differential circuit. So you will see RCA output 1.9V, XLR 3.8V (Audio Research DAC 9), or RCA 2.3V, XLR 4.6V (Denafrips Terminator), or RCA 2.9V, XLR 5.8V (HoloAudio May). These are all recent products selected at random. I would expect the majority of DACs you find with circa 6V XLR output to have around 3V RCA output.
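Running those figures from the post through a quick check shows the pattern: each XLR output is exactly twice the RCA output, which in dB terms is a +6.02 dB lift from the balanced summing.

```python
import math

# RCA and XLR output voltages quoted in the post above.
rca_xlr = {
    "Audio Research DAC 9": (1.9, 3.8),
    "Denafrips Terminator": (2.3, 4.6),
    "HoloAudio May": (2.9, 5.8),
}

for name, (rca, xlr) in rca_xlr.items():
    ratio = xlr / rca
    lift_db = 20 * math.log10(ratio)
    print(name, round(ratio, 2), round(lift_db, 2))  # ratio 2.0, lift 6.02 dB each
```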
I’m not an electrical engineer, so take my comments with a grain of NaCl. I’m running a Vivaldi DAC into an Audio Research 40th Anniversary pre-amp, then out to D’Agostino mono-blocks. I tried all three output settings on the DAC (0.2, 2.0 and 6.0V, XLR cables) and found the sound to be the most dynamic and musical on the 6V setting. I also found that on the 2V setting I had to run the volume on my pre-amp at much higher levels than with my previous DAC. So my output setting remains at 6V. As a side note, I also tried going direct from the DAC to the amps but found the sound to be not as real or as musical as when the pre-amp was in the circuit. Seems counterintuitive for sure. I realize that this does not directly address your question, but thought that I would share my experience.
I don’t use an active preamp, but my experience has been similar. My system sounds better with the Vivaldi DAC running at 6V with a Townshend Allegri Reference Pre attenuating the volume, than it did with the DAC at either 2V or 6V and controlling the volume with the Vivaldi.
Gah. I posted a while back about how I preferred the Bartók directly into my power amps (Pass XA60.8s), rather than through my Pass XP-30 preamp. I still do. What I was comparing:
A) Bartók at 0.6V, direct, doing the attenuation.
B) Bartók at 2V, via XP-30, with the pre doing the attenuation.
To me, A is clearly better in the huge majority of cases, across many, many genres of tunes.
But, I’m a few hours into a comparison between that and a new configuration:
C) Bartók at 6V, via XP-30, with the pre doing the work again.
And — bugger it — it’s not nearly as clear cut. It might even be that C is better. Hmm. Ok, I’m going back in. I’ll report back.
You need to be careful when subjectively comparing the sound you get when switching from 6V to 2V (or vice versa). The ear always prefers the louder version, even if it is only a single decibel higher.
The difference between 2V and 6V is 9.5dB (actually 9.54dB). So if you keep your Bartok volume control at 0.0dB when you set your preamp volume control for comfort using 2V, the Bartok volume control should be set to -9.5dB, whilst leaving the preamp control in the same position, when you switch to 6V. Of course you could instead leave the Bartok control at 0.0dB and reduce the preamp control by 9.5dB if it is calibrated correctly. The latter is preferable from a digital resolution viewpoint, but I don't think that a setting of -9.5dB would take you into a zone where resolution becomes an issue.
I hope that will help you to clarify your decision.
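The offset Pete quotes falls straight out of the voltage ratio; a minimal sketch of how to derive the matching DAC volume setting when switching output levels:

```python
import math

def level_offset_db(v_high: float, v_low: float) -> float:
    """dB difference between two output voltage settings."""
    return 20 * math.log10(v_high / v_low)

offset = level_offset_db(6.0, 2.0)
print(round(offset, 2))  # 9.54 dB between the 2V and 6V settings

# With the preamp left untouched, the DAC volume that level-matches
# "2V at 0.0dB" after switching the output to 6V:
dac_volume_at_6v = 0.0 - offset
print(round(dac_volume_at_6v, 1))  # -9.5 dB
```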
Excellent heads-up — thanks, Pete.
I think (hope) I’ve been fair.
Each time I’ve done these comparisons I used the mic on my iPhone and an SPL-reading app to check the in-room level with a constant 1kHz tone.
All three configurations (A, B and C) gave the same reading. And not once did I blast 6V at full volume through the mono-blocks. Phew.