
State Of The Flagships



Let’s define what a flagship headphone should achieve.

  1. Bass linearity of ±5 dB from 20 Hz to 100 Hz.

    1. Bass linearity is difficult to achieve, but it should be done for a flagship.

  2. Distortion at 100 dB should not exceed 0.8% above the sub-bass frequencies.

    1. No flagship should have any distortion that comes close to audibility.

  3. Distortion at 100 dB should not exceed 1% at 30 Hz.

    1. No flagship should have a messy bass due to distortion.

  4. Frequency response curve should be very smooth with any resonances being very minor -- no major dips.

    1. A smooth frequency response suggests few resonances, and few resonances suggest a well-engineered diaphragm and enclosure.

  5. Very small to absolutely no dip at 70-150 Hz.

    1. Those dips are caused by an interaction between the headphone cushion and your face.

    2. Large bumps in the frequency response in that range suggest a poorly engineered headphone pad, or that the maker didn't do their testing with human flesh as a variable.

 

These metrics are all pass/fail checks, even though 4 and 5 seem like they could be subjective.
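The pass/fail character of these checks can be sketched in code. Everything below (the key names, the 2 dB dip cutoff, the measurement values) is a hypothetical illustration, not taken from the slides:

```python
def grade_headphone(m):
    """Count how many of the five pass/fail checks a headphone clears."""
    checks = [
        # 1. Bass linearity within +/-5 dB from 20 Hz to 100 Hz
        m["bass_deviation_db"] <= 5.0,
        # 2. Distortion at 100 dB no more than 0.8% above the sub-bass region
        m["thd_midband_pct"] <= 0.8,
        # 3. Distortion at 100 dB no more than 1% at 30 Hz
        m["thd_30hz_pct"] <= 1.0,
        # 4. Smooth frequency response (treated here as a yes/no judgement)
        m["smooth_response"],
        # 5. No meaningful dip in the 70-150 Hz region (2 dB cutoff is assumed)
        m["dip_70_150hz_db"] <= 2.0,
    ]
    return sum(checks), len(checks)

# Hypothetical measurements, not from any real headphone:
example = {
    "bass_deviation_db": 3.2,
    "thd_midband_pct": 0.4,
    "thd_30hz_pct": 0.9,
    "smooth_response": True,
    "dip_70_150hz_db": 1.5,
}
print(grade_headphone(example))  # (5, 5)
```

Each check collapses a continuous measurement into a single boolean, which is what makes the whole scheme a crude filter rather than a ranking.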

Edited by Nebby

Just flipping through it, taking a very, very broad-brush glance, it seems like his system tracks my preferences, kinda sorta, more or less. Which might suggest that you can successfully quantify your preferences. Or it might be dumb luck.

Mystery of the moment, though, is who is the author, and what is the purpose of the PowerPoint? Inquiring minds want to know....


As to the source, there is a large InnerFidelity logo on slide 17, so the data from their graphs was obviously used to judge. I believe it could possibly be Daniel Nishi (known as SanjiWatsuki on reddit), based on document viewing history and his posts elsewhere. http://www.reddit.com/user/SanjiWatsuki

Edited by manaox2

Looks like it came from "SanjiWatsuki" and appeared on reddit.

 

http://en.reddit.com/r/headphones/comments/1texuh/the_state_of_the_flagships_analysis_of_the/

 

There are some things I don't agree with (for instance, I think closed headphones are sometimes better than open ones at the moment: HP50, Focal Spirit Pro; and in his reddit posts he says square waves are somewhat useless), but overall it's a pretty interesting and insightful look at how to interpret headphone performance from measurements.


Seemingly interesting but actually poor analysis. It is possible for headphone A to barely pass all categories and be rated A+, while headphone B passes 5 out of 9 with flying colors, barely misses on the other 4, and is rated an F. The cutoffs for each category are arbitrary, with no correlation to how sound is perceived, making this an example of why purely "objective" reviews are 1) subjective and 2) generally useless. 
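The cliff effect described above can be made concrete with a toy sketch. All thresholds and values here are invented for illustration:

```python
THRESHOLD_PCT = 0.8  # hypothetical distortion cutoff for each check

def score(measurements):
    """Binary pass/fail tally, in the spirit of the PowerPoint's grading."""
    return sum(v <= THRESHOLD_PCT for v in measurements)

# Headphone A barely passes every check;
# headphone B aces two checks and misses the third by a hair.
a = [0.79, 0.79, 0.79]
b = [0.10, 0.10, 0.81]

print(score(a), score(b))  # 3 2
```

A gets a perfect tally despite measuring worse on average than B, which is exactly why arbitrary binary cutoffs can invert a sensible ranking.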

Edited by dsavitsk

I guess I think it's an example of a nice step forward. Stuff like this can act as a model for people to try to understand the measurements. 

 

Dialectic is a good thing when it's like this. It's done in a way that people can fairly criticize it. 

 

Much better than saying, "OMG these cables made a HUGE difference."

 

Anyway, I enjoyed the attempt:  http://www.innerfidelity.com/content/one-enthusiasts-take-top-line-headphones-state-flagships


It certainly is a very crude filtering process.

Reminds me of a set of shapes where you fit the square peg in the square hole and round peg in the round hole.

The trouble is that if the pegs are substantially smaller than the holes, then any shaped peg would fit any shaped hole.


I agree that it's probably useless, but then again, look at the list of headphones that got A+ 9/9 ratings:

 

HD800, HE60, LCD-3, HE-6, and Stax-009.  

 

Those are certainly among the best, if not simply the best in the batch tested.  The 009 and LCD-3 (in particular) have impressed me, and the others in that list ain't too shabby either.


All I know is that I laughed out loud at the Ed.10s!

 

this.

 

Although its value is questionable, it's a good attempt, much better than other, completely subjective evaluations. It's a broad-brush evaluation, but similar to how I check whether a headphone is worth its price tag in general.


I'll echo the general sentiment of the thread in echoing the general sentiment of the article even if the execution could be better. At least it's an attempt at critical thinking by the community. Maybe success at critical thinking will come next, and who knows, maybe overpriced underengineered shit won't sell half as well in the future.


"..., much better than other completely subjective evaluation..."

That was Doug's point: it is just as subjective as an evaluation using approved audiophile descriptions. The fact that "metrics" were used does not change that, since the metrics are crap. In fact it is really worse, because it gives the impression of objectivity and scientific rigor when there is really none. Whether it was done out of ignorance or intentionally is the real question.


Anything done by enthusiasts is going to be severely flawed relative to a professionally done, statistically valid survey of subjective opinion relative to measured performance.  We'll have to leave that to Sean Olive.

 

But even with its flaws, it's a big step toward a type of dialog that includes both objective measures and subjective impressions.

 

I see some significant, inappropriate mixing of subjective and objective evaluation in the paper, but I think it crudely maps out a territory where some very interesting discussion can occur.

 

Maybe the most important part of it is that the hobby is actually looking at the measurements and trying to understand them. When the hobby is dominated by strong subjectivists, manufacturers can build whatever they want and it escapes objective scrutiny. But if evaluations commonly begin to include measured performance, manufacturers will have a much harder time hiding their ineptness behind statements like "it's for pros, so it should be detailed" when what they've really done is produce a tizzy mess.

 

This thing, accurately or not, represents a trend that puts significant pressure on manufacturers to build technically proficient stuff.

 

The more talk of measurements and how they relate to the listening experience, the more we're going to get better headphones, IMO.

Edited by Tyll Hertsens
