Just to clarify, what I mean is that we cannot rely only on measurements.
If two things measure the same, that doesn’t mean they will sound the same.
I agree, but I would like to know why so I can better determine what is an improvement. Today it’s a bit of a trial and error situation. I would still listen of course, but knowing why something sounds different would be a plus in my book.
Same here. Exploring “the why” also helps to create a common understanding. Component X may sound great in my system, or rather improve my system’s sound, but given the lack of commonality my system might have with 90+% of others out there, understanding how it achieves that improvement might help others better understand and consider whether it could improve theirs as well. For example, at a simplistic level, a balanced power supply conditioner might do wonders in my system because of known grid and house wiring challenges I have. Knowing the problems I had, and how the conditioner improves/addresses those things could help others add or delete it from their list of candidates.
Trial & error is an incredibly inefficient way to seek improvement in complex systems. Whether someone cares to know the why is a personal choice, but knowing the why, or at least the hypothesis, can help those who wish to improve the efficiency of their upgrade path.
That’s fine; to me the problem comes in when people feel the difference must be proven/measured before they will listen for one.
Never felt that way. Just like to know what path might be most likely to produce an improvement based on an understanding of how various competitive choices might work. Life’s too short to try everything that people recommend, even in our relatively small community here. So, starting first with things that might be applicable—i.e., more or less similar systems—and might make sense—“I think it worked for me because of X or the designer says it improves Y”—are just plain helpful. It’s simply cost-benefit from my point of view, where the primary cost is time, and the hopeful benefit is musical enjoyment.
When I was younger, I would spend huge amounts of time tinkering and building, everything from amps to tone arms to speakers, and it was fun. If I had endless amounts of time, it would still be fun. And just recently, I spent weeks evaluating some clock cables. It was engaging, and I am glad I did it. I did it because folks with similar systems and experiences recommended it. I would say it was worthwhile. But I would not want to do it repeatedly.
In the case of the Ring DAC APEX board, dCS’s approach was: develop better measuring tools, do more accurate measurements with them, find areas that could be improved, experiment and adapt, and let a panel listen to the results.
The result of these various adaptations is a new, enhanced [Ring DAC APEX] board that is even quieter than previous iterations, and over 12dB more linear.
The feedback from listening sessions with our Ring DAC APEX prototypes was hugely positive, with listeners noting enhanced resolution, dynamics, rhythm and timing, an even greater sense of ease and naturalness, more precise and tonally resolved voices, and more realistic timbral quality of strings, among other benefits.
This process of measuring, guided trial & error, and listening is beautifully laid out in the article linked below, which I just re-read. Highly recommended.
For the listener, it is not necessary to know why something sounds the way it does, but it greatly helps in understanding why, and in deciding whether it is an improvement.
Precisely. Over time I have discovered some general guidelines. For example, I have found silver from good manufacturers like Kondo to be a positive, and insulation with a low dielectric constant is also important (Kondo uses silk, for example). I have ideas as to why some of these are better, but would appreciate numbers/measurements.
I would regard this as different though. It’s not like trying a completely different interconnect; it’s more like moving in an incremental direction from a well-defined base, so I don’t really need to do comparisons to decide to get the Apex upgrade. I did listen to the DACs at Axpona, but frankly, if I had really done my due diligence it would have had to be at home.
Which brings up the point: When the feck am I going to get to the front of the line?!
You are right, cables are a different thing.
That is my experience too. I just ordered a solid silver interconnect between my Rossini DAC and Lina headamp.
It’s nice they do things that way, though I prefer companies that start with a hypothesis, hear an audible improvement, then figure out if it can be measured.
As long as listening is a major part of the process, the order isn’t as important.
MSB’s Vince Galbo said that MSB is also basically measurement-based first and listening-based after, similar to what Stephen said of dCS.
dCS did have a theory, then measured, then adapted.
After a period of investigation, and an intensive few months experimenting with circuit boards during national lockdowns, he developed some prototype boards to test out his theories.
That’s a little sad; both companies will miss sonic improvements that come from things they never thought of and that are not easily measured, if at all.
Still, I can’t fault them as they did develop a spectacular upgrade that improves on sonics in a way I wasn’t expecting could be done.
Thankfully, quite often in our hobby, we don’t realize there’s anything that needs to be improved in our systems… until we hear something better.
I agree. Over the last decade, I’ve focused on discovering areas of improvement that seemed less interesting to me in the past: the room and system noise. Having done so, I’ve opened up the depth and breadth of the system to eager ears. The APEX upgrade demonstrated that even a great company with superb products could reassess things, as Chris and the folks have, and discover ways to enhance the analog performance of the DAC. So, sometimes it’s the customer’s own system/environment that must become the focus, while the hardware specialists are constantly doing the same.
Well, I think this is more “statement” than reality. Obviously they listen. There are a bunch of people in these teams and they are all chipping in I am sure.
I have the impression that they accumulated many “nice to haves” over many years - e.g., “This new regulator will work better in the Ring DAC than the current one” or “These traces here are not ideal, maybe if we routed them this way it would be better”… But as it goes with production, you can’t just keep making these changes as you go. So at some point you’re like, “Where’s that laundry list? Let’s see what we can put together,” and that’s how it goes. Surely this is my telenovela of the process, but probably not far off.
At least in part.
dCS saved them up and did them as an upgrade, some manufacturers (Boulder is one) do them as running changes.
That’s (to me) more frustrating as you have to then call Boulder to see whether they’ve made any meaningful updates to their design since they built yours.
Back to the main topic… I paid for the APEX upgrade for a Rossini about 5 weeks ago. How long have people waited as of late? I can understand that if you purchased it in, say, March, things were not yet in motion and it took a while. But if you purchased it in, say, June, how long did you wait?
I am wondering what is going on in Boston. My Rossini has been gone for nearly 6 weeks. Maybe a little update from dCS corporate?
Hi,
Your dealer should always be your first port of call for this. This is not any sort of “passing responsibility” or anything like that: your dealer will know when your unit was actually sent back for the update and, more importantly, what reference numbers it was returned under. Without those, there is no way for us to know which of the units that are in for update (or have been updated, or are on their way back to be updated) is yours.
Your dealer will be able to check up on the status of your particular unit and should be able to get an answer for you on it by the next day at the outside.
Best Regards
Phil Harris