Mere CD quality (16 bit/44.1kHz) is so passé. Apple is now pushing 24/96 (and telling engineers not to compress stuff to shit, God bless ’em), and audiophile download sites are selling 24/192.
It turns out this is audiophool bollocks. Monty is the guy who wrote Ogg Vorbis — he knows his stuff. No human tested in the last century has been shown to reliably hear above 20kHz; ultrasonics encoded at 192kHz just cause intermodulation distortion; 16 bits is provably all the information any known human ear needs, and the main benefit of 24-bit is headroom.
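The intermodulation point is checkable in a few lines: feed two ultrasonic tones through even a mildly nonlinear playback chain and a difference tone lands squarely in the audible band. A toy sketch — the 0.1 second-order coefficient is purely illustrative, not a measurement of any real amplifier:

```python
import math, cmath

fs = 192_000              # sample rate (Hz)
n = 19_200                # 0.1 s of samples
f1, f2 = 30_000, 33_000   # two ultrasonic tones: inaudible on their own

# A mildly nonlinear playback chain: y = x + 0.1*x^2
# (the 0.1 coefficient is an illustrative assumption)
x = [math.sin(2 * math.pi * f1 * t / fs) + math.sin(2 * math.pi * f2 * t / fs)
     for t in range(n)]
y = [v + 0.1 * v * v for v in x]

def tone_level(sig, f):
    """Amplitude of the component at frequency f (single-bin DFT correlation)."""
    s = sum(v * cmath.exp(-2j * math.pi * f * t / fs) for t, v in enumerate(sig))
    return 2 * abs(s) / len(sig)

# The x^2 term turns f1 and f2 into a difference tone at f2 - f1 = 3 kHz,
# well inside the audible band; the clean signal has nothing there.
print(round(tone_level(y, f2 - f1), 3))   # 0.1
```

The clean signal `x` has zero energy at 3kHz; the distortion product appears only after the nonlinearity, which is exactly the "garbage you can hear, made from signal you can't" problem.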
“Modern playback fidelity is incomprehensibly better than the already excellent analog systems available a generation ago.” Monty’s advice for actual fidelity? Spend effort on decent headphones.
(Can you tell a 320kbps MP3 from the FLAC? I know damn well I can’t.)
I find it quite funny that this has been appearing virally wherever you look lately. Which leads me to believe that whoever made the first posting has something to lose by high-resolution downloads becoming the norm… Hmm, let me see, yes, these are the people who brought us Ogg Vorbis, i.e. a LOSSY codec. So what happens if everyone moves away from using lossy codecs? All of a sudden ‘conflict of interest’ seems like a likely explanation.
The bit about 192kHz causing intermodulation distortion is utter trash. So this doesn’t happen at 44.1? 88.2? 96kHz? Of course it does, and an order of magnitude worse. The fact is that the further you push the intermod up the frequency spectrum, the less it affects the bits you can hear.
A couple of questions… If 192 is such a waste of time, why are all the studios recording at this resolution? Why invest in new equipment if there was nothing to be gained? Answer – because it DOES make a difference.
Barry Diament – a VERY well-renowned mastering engineer, and one of the few who point-blank refuse to enter the loudness war – has been quoted as saying ‘Only 192kHz sounds to me like what went into the mics.’ Now who are you going to take notice of: the guy from Xiph (who, if you read the whole article, is not an engineer, but is quoting from many other sources and likely putting 2 and 2 together to make 4,000), or a well-renowned mastering engineer who has more experience with this stuff than the vast majority of us, and nothing to gain or lose by endorsing one format over another?
The proof of the pudding is in the eating, however, and having heard some of the same masters in 16/44, 24/96 and 24/192, I can vouch for the fact that 24/192 is a bigger jump from 24/96 than 24/96 is from 16/44.
Finally, what’s the big deal anyway? If the studios are recording at this resolution, and storage space and bandwidth are so cheap these days, then even if you can’t hear the difference, isn’t it better to have what came out of the recording rather than something that’s been reduced in size for no apparent reason?
Your post is word salad and argument from authority (with added conspiracy theory) that is in sad contradiction of mathematics. The maths came first, and digital audio from that; if you don’t believe in the maths, you don’t believe in digital audio either.
I just believe what I hear. The maths and the digits are the means, but the music is the end, and that’s what matters, right?
Indeed. And if you hear something, there are reasons why, and a mechanism of mathematics, physics, biology and cognitive psychology behind you hearing it. Argument from personal testimony is of interest, since the point of music is subjective experience – but subjective experience is not a magical, irreducible phenomenon; it’s the result of mathematics, biology and the rest.
Saying “I don’t understand it, therefore it could happen this way” is being wrong and stupid. e.g. Machina Dynamica has many testimonies from happy customers, but that doesn’t make what they push anything other than ludicrous bullshit.
And, to bludgeon the point home: Audibility of a CD-Standard A/D/A Loop Inserted Into High-Resolution Audio Playback – 96kHz playback randomly switched through a 44.1kHz conversion loop was consistently undetectable by professional “golden ears”. The difference in commercial 96kHz recordings is that they have usually been mastered better. And, of course, you use 96kHz in the recording path (so you can play silly buggers with speed) – but it’s literally useless for the end listener.
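The Nyquist maths behind this is also a few-line demo: any content above half the sample rate doesn’t survive as ultrasound, it folds straight back into the audible band as an alias, which is why the converter filters it out rather than keeping it. A sketch with frequencies picked for neatness:

```python
import math

fs = 44_100              # CD sample rate
f_ultra = 25_000         # above Nyquist (22.05 kHz)
f_alias = fs - f_ultra   # 19.1 kHz: the audible frequency it folds back onto

# Sample both tones at 44.1 kHz: the sample sequences are identical,
# so the ultrasonic tone is indistinguishable from its in-band alias.
ultra = [math.cos(2 * math.pi * f_ultra * t / fs) for t in range(441)]
alias = [math.cos(2 * math.pi * f_alias * t / fs) for t in range(441)]

print(max(abs(a - b) for a, b in zip(ultra, alias)) < 1e-9)   # True
```

Once sampled, there is literally no information left to tell the two apart, which is why “keep the ultrasonics” is not an option at 44.1kHz and why they’re useless baggage at 192kHz.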
“can you tell mp3 from xxx”
Go and get something catophonic – Australian Art Orchestra, for example – and knock those poor little compression algorithms for six. The MP3s are unlistenable.
Stop listening to girlie pop Dave. :-)
Catophonic?
Sorry, caCophonic (eg Cat-o-phonic or Katzenjammer). Too much cafe bellaroma!
The MP3 algorithms work really well on female pop music (e.g. I have Autumn’s ‘Red’ on vinyl, MP3 and CD – and the MP3 is well up to snuff)
At the other extreme: discordant, arrhythmic music with clanks and clinks is not going to encode happily. AAO’s “ringing the bell backwards” sounded pretty bloody awful in MP3 compared to the CD.
(Which, due to the limits of my stereo, still sounds nothing like the live experience)
Oh yeah, artifacts. Lossy algorithms completely shit themselves on metallic noises. Most people just put up with it, but it’s the sort of thing that gets people resorting to FLACs.
I think David and Rich are arguing about different things.
Rich is saying that 192kHz is (reported to be) much better than even 96kHz.
David is saying that the maths says that 24bits is no better than 16bits.
I agree with both, although once you start subsampling I like the idea of some headroom.
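The 16-versus-24-bit arithmetic is the standard ideal-quantiser formula – roughly 6.02 dB of sine-wave signal-to-noise ratio per bit, plus 1.76 dB:

```python
import math

def dynamic_range_db(bits):
    """Sine-wave SNR of an ideal quantiser: ~6.02 dB per bit + 1.76 dB."""
    return 20 * math.log10(2 ** bits) + 1.76

print(round(dynamic_range_db(16), 1))   # 98.1  dB: past the noise floor of any listening room
print(round(dynamic_range_db(24), 1))   # 146.3 dB: headroom for mixing, not audible on playback
```

Which is the headroom point in numbers: the extra 8 bits buy you ~48 dB of slop for recording and processing, not anything a listener can hear off the finished master.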
I think Rich is actually talking bollocks he literally doesn’t understand; when his answer to mathematics is conspiracy theory, and when called on it he retreats to nothing-is-true subjectivity, I am disinclined to be generous.