What makes 6V better than 2V

Sorry Pete, but you are mixing up quite a few parameters: SNR, THD, amp sensitivity, the masking effect of the amp and environment, and concluding that any noise and distortion below 40dB is OK because it goes unheard. If so, why bother to buy dCS? Get a blaster from your neighbourhood store :stuck_out_tongue_winking_eye:

Not quite what I mean and you are conflating several posts concerning different aspects.

Kindly show me where I say that? But are you telling me that you are able to hear, say, -113dB noise (dCS spec irrespective of line voltage for the Bartók) vs. -123dB (Stereophile measurements) in a room with 40dB ambient noise, via amplification with, say, a S/N ratio of 80-90dB?

I have always said that specs only need to be adequate, the actual listening experience depends upon much more.

Yes, if relying exclusively upon commonly quoted specs you can apparently just buy a blaster from your neighbourhood store (or something along those lines). Read Audio Science Review. They tell you that all of the time :grin:.


Can anyone tell me whether, with a Bartók connected via XLR to a Gryphon Diablo 300, it is better to use 2V or 6V?
I can’t figure out which of the two is better. I wouldn’t want to saturate the input of the Diablo 300. The 6V setting seems to have more emphasis, but I don’t know if it’s just suggestion. I am tempted to ask Gryphon directly which one is best to use.

If you can’t find the input sensitivity for the amp in any of Gryphon’s material, drop them a line. Most of these people are good to deal with.

(Generally I can’t find the info, then feel embarrassed when someone from this forum finds it in 17 seconds flat. Gah!)

I think I already answered your question last November:


Hello, thank you, I remember. I was looking for some other feedback. In the meantime I have tried the exact same cable in RCA and XLR; it changes very little, but the XLR is slightly more accurate.
As for the XLR output of the Bartók: if the input sensitivity of the Diablo 300 is 0.617V, why not set the output of the Bartók to 0.6V?
Right now I’m listening at 2V, but the impression is that 6V maybe feels more dynamic, though maybe it’s just suggestion.

Thank you

Hello Lele,

If it helps, I prefer the 6V output (via XLR) from my Rossini → Diablo 300.
I also found the dynamics less impressive with the 2V output.

Of course, it’s system dependent (as always), so I suggest that you go with the setting that sounds best to you.


I am with you, @Bauer Jonathan and @Lele80. Only, you need to take into account how loud you are playing and what peaks the music contains. Just be careful not to drive your Diablo into distortion/clipping, or even fry its input/pre-amp section.


Thanks Erno. I don’t play music especially loud, but I take your point.
I’ve decided to revert to the 2v setting, live with it for a while, and then reevaluate my preference…

It seems to me this setting just modifies the ratio of gain between the dCS unit and the pre-amp, assuming you keep your final volume level the same.
It’s nice to have the option, so you can choose to emphasize the voicing of whichever component you prefer, but it’s interesting that most people seem to prefer the higher voltage settings. Maybe the dCS components just sound better than most preamps? :slight_smile:
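If it helps to see that as numbers, here is a tiny back-of-envelope sketch (assuming an ideal, level-matched comparison; the figures are just the standard voltage-to-dB conversion, not anything measured on dCS kit):

```python
import math

def db(voltage_ratio):
    """Express a voltage ratio in decibels."""
    return 20 * math.log10(voltage_ratio)

# Switching the dCS output stage from 2V to 6V raises its level by:
print(f"DAC gain change: {db(6 / 2):+.1f} dB")   # +9.5 dB

# To keep the same listening level, the pre-amp volume has to come
# down by the same ~9.5 dB, so only the gain distribution between
# the two components changes, not the overall gain.
```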

Hi!
I have:

Magico A5 as speakers
Dan D’Agostino Progression integrated amplifier
And the dCS Bartók

With the Bartók set to 6V, especially when I listen to rock or generally at high volume, my ears suffer a lot. The sound is piercing and really noisy. Hard to endure.

Then I recalled the voltage trick and set the Bartók to 2V. Under the same conditions it seems much better. The sound is softer and not so noisy.

In general 6V seems better at low volume and 2V at high volume, with some exceptions depending on the kind of music.

Does this make sense?
Or is it just my impression?

And can someone suggest the softest Bartók filter setup? I ask because I would really like to listen to the Bartók at 6V (which I prefer) without having my ears wounded.

The subject title is “What makes 6V better than 2V”. The answer is that it isn’t; not viewed in isolation, and not in any way that can be subjectively heard. Noise is quoted as better than -113dB, and total noise and THD measured by Stereophile (Vivaldi DAC) was a mere 0.00026% at 6V. Although that figure may well increase slightly at the 2V line voltage, the amount is so small that it is inaudible. The -113dB noise specification I read as applicable overall. Again, inaudible in any normal circumstances.
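For anyone who wants to sanity-check how those two figures relate, this quick sketch converts Stereophile’s percentage figure into dB (back-of-envelope only; it ignores measurement bandwidth and weighting):

```python
import math

thd_plus_noise_pct = 0.00026            # Stereophile's figure at 6V, in percent
ratio = thd_plus_noise_pct / 100        # percent -> plain voltage ratio
print(f"{20 * math.log10(ratio):.1f} dB")   # about -111.7 dB re. full output
# i.e. in the same ballpark as the quoted "better than -113dB" noise spec,
# and far below anything audible over 40dB of room noise.
```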

So 2V v. 6V is not a question of one being better in any real sense, but of one being preferable to you. So why? It is necessary not to look in isolation but to consider the interface between the DAC and amplifier, i.e. input sensitivity. The input sensitivity quoted for your amplifier is not a figure out of the air but an input voltage that will provide a certain level of THD. That level is probably chosen by the manufacturer to reflect best circumstances. However, if it is exceeded, the level of THD increases. This may be preferred for some reason. Indeed, I can imagine some genres of music where additional distortion, if not too great, may be subjectively preferable, especially if distortion was used in the first place, e.g. overdrive in rock guitar amplification.
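To put rough numbers on that interface point, here is a sketch (assuming the 0.617V input-sensitivity figure quoted earlier in the thread, and full-scale output with no digital attenuation):

```python
import math

input_sensitivity = 0.617                # V, Diablo 300 figure quoted above
for v_out in (2.0, 6.0):
    excess_db = 20 * math.log10(v_out / input_sensitivity)
    print(f"{v_out:.0f}V output sits {excess_db:.1f} dB above the sensitivity figure")
# 2V -> ~10.2 dB above; 6V -> ~19.8 dB above. Either setting can drive the
# input past its quoted point on peaks, which is where the extra THD comes
# from; the 6V setting simply gets there ~9.5 dB sooner.
```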

In some cases additional distortion may become too much of a “good thing”, and that may be why:

Finally, I must warn against too-loud listening on headphones. You risk permanent damage to your hearing as well as discomfort.

https://www.who.int/news-room/questions-and-answers/item/deafness-and-hearing-loss-safe-listening


Thanks Pete.
So in the end they are the same.
And it is only suggestion.
So I have understood.

I don’t use headphones

Sorry, I misread.

@Urbanluthier

(Pulled this off the “Show system” thread into where it should probably be)

Not being able to tell the difference was, I’m sure, dCS’ goal. So, it’s a good thing. That said, I can understand the urge to know which one measures better objectively in your system.

I agree with Pete’s point (in the other thread and here) that the signal-to-noise ratio differences are unlikely to be the main cause of any sonic difference, as the noise floor in all cases is so low it’s mostly beyond normal hearing.

That said, the reason I took a brief look at the difference in dynamics was precisely because that’s what I felt I was hearing as the difference, not so much any lowered resolution (from increased digital volume attenuation or anything).

I think it’s a very interesting question that warrants a proper measurement, but it’s mostly an academic one; case in point, I still occasionally change the voltage depending on what I’m listening to, to suit my subjective preference at the moment :grin:

Based on information provided by Pete in another thread, Hi-Fi News recorded S/N measurements of 109.1dB at 2V and 117.0dB at 6V. These results indicate that a higher output voltage correlates with an improved signal-to-noise ratio. My understanding is that these measurements were taken with the volume control set at 0dB; altering the volume setting would likely yield different outcomes.
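A quick cross-check of those numbers (a sketch, assuming the analogue noise floor in volts is roughly the same at both settings):

```python
import math

snr_2v, snr_6v = 109.1, 117.0            # Hi-Fi News S/N figures, in dB
ideal_delta = 20 * math.log10(6 / 2)     # +9.5 dB if the noise were identical
print(f"measured gain: {snr_6v - snr_2v:.1f} dB, ideal: {ideal_delta:.1f} dB")
# 7.9 dB measured vs 9.5 dB ideal: tripling the output voltage raises the
# signal almost, but not quite, three times faster than it raises the noise.
```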

My hypothesis is that both the digital volume level and the output voltage significantly influence the results: (1) digital volume control is more effective at preserving the resolution and dynamic range of a digital signal at higher volume levels, and (2) an increased output voltage might improve the interface with the amplifier, thereby improving both the signal-to-noise ratio and, potentially, the amplifier’s overall performance.

The degradation of SQ with upsampled ALAC at 6V is much more noticeable than at 2V. 6V lifts the Lina with the Utopia and bit-perfect ALAC to a whole new level.

Precisely, an academic issue (but it would be nice to know!). I don’t lose any sleep over this and, like you, I generally adjust my Bartók’s voltage output based on programme material, so I’m between -10dB and 0dB. Subjectively I feel 2V is more dynamic, but then again the recordings I listen to at the 2V output have greater dynamic swings (perhaps as much as 60dB), while chamber music with 2V out at 0dB may only swing 15dB.

So I agree: in virtually all use cases SINAD is irrelevant. Which leads me to wonder why on earth so much emphasis is placed on a measurement where most DACs perform better than our hearing?


Donald, if you weren’t aware, there’s actually no such thing as “upsampled ALAC”.

Regardless of the codec, all streaming gets decoded right on the S800 board, buffered, and then synchronously clocked-out as I²S PCM. This is exactly the same on all dCS platforms, including Lina.

It is this I²S signal which is then fed to the FPGAs for digital signal processing such as PCM Upsampling, DSD/DoP decoding/filtering, digital volume control etc. etc., and then to the D-to-A conversion stage.

In other words, as far as dCS’ digital signal processing is concerned, there’s absolutely no difference between ALAC and FLAC and WAV.
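A minimal sketch of that signal flow as I read the post (the stage names below are placeholders of mine, not dCS internals):

```python
def decode_on_s800(stream_bytes, codec):
    """Any supported codec (ALAC, FLAC, WAV, ...) is decoded to raw PCM here."""
    return stream_bytes  # placeholder: real decoding happens on the S800 board

def clock_out_as_i2s(pcm):
    """Buffered, then synchronously clocked out as I2S PCM."""
    return pcm

def fpga_dsp(pcm):
    """PCM upsampling, DSD/DoP handling, filters, digital volume, etc."""
    return pcm

def playback_path(stream_bytes, codec):
    pcm = decode_on_s800(stream_bytes, codec)  # codec identity ends here
    return fpga_dsp(clock_out_as_i2s(pcm))     # identical for ALAC/FLAC/WAV
```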