Quote:
Originally Posted by rspencer
Though it's a common belief, this is incorrect. Or, at the least, too ambiguous to be useful.
Normalization can & often should be used, as long as it is "peak" normalization...which is basically what you go on to describe. The source is scanned, then amplified so that the highest peak reaches a predetermined level (e.g., -0.1 dB). If the highest peak is -6 dB, the entire wave is amplified by 5.9 dB. This raises the volume while preserving the dynamic range.
The "bad" normalization is "RMS" normalization. It sets the RMS average to a predetermined level, compressing any over peaks. This squashes the dynamic range. While many consider it "bad," it does have its uses and is sometimes the better way to go, but most often not.
While the second pic could illustrate normalization using a limiter, I don't think that's the case. The peaks are not as uniform as one would expect had a limiter been used. It looks to me more like RMS normalization.
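The peak-normalization arithmetic described in the quote (scan for the highest peak, then apply one uniform gain) can be sketched in Python. This is a minimal sketch with made-up sample values; `peak_normalize` is a hypothetical helper, not any editor's actual function:

```python
import math

def peak_normalize(samples, target_db=-0.1):
    """Scale the whole wave so its loudest sample hits target_db (dBFS).
    Every sample gets the same gain, so dynamic range is preserved."""
    peak = max(abs(s) for s in samples)
    peak_db = 20 * math.log10(peak)   # current peak level in dBFS
    gain_db = target_db - peak_db     # e.g. -0.1 - (-6.0) = 5.9 dB of gain
    gain = 10 ** (gain_db / 20)       # convert dB back to a linear factor
    return [s * gain for s in samples]

# a wave whose peak sits at -6 dBFS (linear ~0.501), raised to -0.1 dBFS
wave = [0.1, -0.3, 10 ** (-6 / 20)]
out = peak_normalize(wave)
```

Note that the relative sizes of the samples are unchanged; only the overall level moves, which is exactly why peak normalization doesn't touch dynamics.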
biggest thing to pay attention to is that the 'normalization' is applied to the show as a whole, based on the highest peak of the entire program material. otherwise the volume changes at every cue stop & is really annoying. not sure if any progs still do this, iirc nero used to pull this shit
as for the original question of the thread, it looks like a compressor to me as well.