I know these are due to come out (Denon and Pioneer DV-47 are already out). Any others that are to be expected? Thanks.
Max
Follow Ups:
Um, you know that DVD is 8-bit / 13.5 MSamples/sec, right? Also, consider that higher frequency DACs tend to have lower performance (e.g. more distortion & noise) and run hotter. I'm no expert on DACs, but I think the only reason to use more than 2x oversampling (i.e. 27 MHz for interlaced, and 54 MHz for progressive) is if you want to cut corners on your analog filters.
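(To put rough numbers on the filter argument -- a quick Python sketch; the 6.75 MHz band edge is simply Nyquist for 13.5 MS/s, used here as a stand-in for the actual luma bandwidth, which is somewhat lower:)

```python
# Back-of-the-envelope: how oversampling relaxes the analog
# reconstruction filter. The filter must pass the video band and
# attenuate the first spectral image, which sits at (f_dac - f_band).
f_band = 6.75e6  # Nyquist for 13.5 MS/s, standing in for the luma bandwidth

for f_dac in (13.5e6, 27e6, 54e6, 108e6):
    first_image = f_dac - f_band          # lowest image frequency
    transition = first_image - f_band     # room for the filter roll-off
    print(f"{f_dac/1e6:5.1f} MHz DAC: roll off between "
          f"{f_band/1e6:.2f} and {first_image/1e6:.2f} MHz "
          f"({transition/1e6:.2f} MHz transition band)")
# At 13.5 MHz the transition band is zero (a brick-wall filter);
# each doubling of the DAC rate widens it considerably.
```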
There's simply no reason to use 12-bit DACs with 8-bit data. Really, anything more than an ultra-linear 9-bit DAC shouldn't get you any more performance. Specs like that are just marketing games. The real bottlenecks are elsewhere in the player, and elsewhere in your system.
Also, keep in mind that we're talking about DVD, which compresses a 126 Mbit/sec stream into < 10 Mbits/sec (often, much lower). The kind of performance you're talking about not only exceeds that of the film and studio equipment used to produce and master the source, but IT THEN GETS COMPRESSED.
And, in many cases, the video might actually get digitized by an 8 to 10 bit, 27 or 54 MHz ADC, in the TV, and then re-converted via an 8 to 10 bit, 27 or 54 MHz DAC.
The real solution is a digital interconnect. That'll be much more exciting news, when such players start to hit the streets.
Hello -

Basically, the need for higher resolution DACs comes about because the DACs are not perfect. A 10-bit DAC is required to get even close to realizing the full 8 bits of resolution encoded on the disc.
Please allow me to explain further. As noted, DVD is an 8-bit format. However, the sync pulses are stored as special codes within the data stream. The DAC must convert these coded sync pulses into an analog output signal. This requires approximately 1 additional bit of range on top of the 8 bits required for the video signal itself. The very earliest DVD players all used 9-bit DACs for this reason.
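(As a rough illustration of that arithmetic -- a Python sketch assuming the common 700 mV picture / 300 mV sync split of a 1 V p-p video signal:)

```python
import math

# Rough arithmetic behind the "one extra bit for sync" rule of thumb.
# Standard analog video is about 1.0 V p-p total: 0.7 V of picture
# plus 0.3 V of sync below blanking (the usual 700/300 mV split).
v_picture = 0.7
v_total = 1.0

extra_bits = math.log2(v_total / v_picture)   # ~0.51 bits of extra range
print(f"extra range needed: {extra_bits:.2f} bits "
      f"-> rounds up to {math.ceil(extra_bits)} whole bit")
print(f"so 8 bits of picture data need a DAC with "
      f"{8 + math.ceil(extra_bits)} bits of range")
```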
(As an aside, some DACs have separate internal current sources used to generate the sync pulse. These effectively add an additional bit of true resolution. An example of this is the Analog Devices ADV-7122 used in the i-Scan. This is nominally a 10-bit part, but will match the resolution of an 11-bit part that doesn't have these separate current sources.)
However, the problem is that no DAC is perfect; they all exhibit non-linearities to some degree. A really good 10-bit DAC will be linear to within +/- 1 least-significant bit (LSB), which equals +/- 0.5 LSB at the 9-bit level. Since we need 9 bits just to recreate the video signal plus the sync pulse, this means that a really good 10-bit DAC will have an error of half a bit on the video data itself.
We have now effectively turned a barely adequate 8-bit standard into a 7.5-bit playback system. Most 10-bit DACs used in cheap DVD players will have linearity errors greater than +/- 1 LSB, compounding the problem.
While the non-linearity of a higher resolution DAC will often be greater in terms of LSBs, these LSBs are much smaller than those of a 10-bit DAC. For instance, the 14-bit DACs used in the Ayre D-1 are specified to +/- 3 LSBs for integral non-linearity, and +/- 2 LSBs for differential non-linearity. (Integral non-linearity refers to the maximum error over the entire range of the DAC's output. Differential non-linearity refers to the maximum error from any one output level to an adjacent output level.)
This translates to an error of less than 0.1 LSB at the 8-bit level found on the disc itself. So now we are able to achieve 7.9+ bits of resolution in the total playback system, which is a noticeable improvement over the example noted above using high-quality 10-bit DACs (let alone average quality 10-bit DACs).
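(A small Python sketch reproducing the arithmetic above, as I read it: an N-bit DAC's linearity error, specified in its own LSBs, is referred down to the 9-bit video-plus-sync level and subtracted from the 8 bits on the disc:)

```python
# Reproducing the effective-resolution arithmetic from the post.
# The DAC needs ~9 bits of range (8 bits of video + ~1 bit for sync),
# so an N-bit DAC's INL, specified in its own LSBs, shrinks by a
# factor of 2**(N - 9) when referred to the video-plus-sync level.
def effective_video_bits(dac_bits, inl_lsb, range_bits=9, data_bits=8):
    error_at_data_level = inl_lsb / 2 ** (dac_bits - range_bits)
    return data_bits - error_at_data_level

print(effective_video_bits(10, 1))   # good 10-bit DAC, +/- 1 LSB -> 7.5
print(effective_video_bits(14, 3))   # 14-bit DAC, +/- 3 LSB INL -> ~7.91
```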
When processing straight video, there is an area of diminishing returns when going past 12 bits. We went to 14 bits in the D-1 as an example of what could be achieved with cost no object. (Please note that our ground-breaking design paved the way for the higher resolution parts that are now coming out in the mass-market players of today. The Analog Devices video group has consulted us on what features should be included in their product lineup, and they are now clearly leading the field in video conversion.)
However if there is any video processing going on (brightness, contrast, gamma, et cetera), then additional resolution is required in the DACs to avoid a loss of resolution in the final output signal. In this case, a 14-bit DAC would be required to achieve the resolution of a 12-bit DAC in a system without video processing. That is one reason why we don't offer any such adjustments in the Ayre D-1. Higher performance can be achieved by making these adjustments in the analog domain in the monitor itself.
Hope this helps,
Charles Hansen
Wow, thanks for the info - it's always good to learn something new!!

Anyhow, you've addressed the issue of DAC precision, but how about sampling frequency? Is the performance of 108 MHz DACs not generally much lower than that of 54 MHz DACs?
> However if there is any video processing going on (brightness,
> contrast, gamma, et cetera), then additional resolution is required
> in the DACs to avoid a loss of resolution in the final output
> signal. In this case, a 14-bit DAC would be required to achieve the
> resolution of a 12-bit DAC in a system without video processing.
> That is one reason why we don't offer any such adjustments in the
> Ayre D-1. Higher performance can be achieved by making these
> adjustments in the analog domain in the monitor itself.

I don't quite believe that, however. If all the processing is done after resampling (108 MHz is 4x oversampling), and carries full intermediate precision until it's dithered down to the precision of the DAC, I think a good 12-bit 108 MHz DAC would be hard for analog processing to beat. Certainly the analog performance would be much less stable over time, and with varying environmental conditions.
BTW, couldn't you test each unit, off the line, and program it to digitally compensate for much of its integral (and even some differential) nonlinearity?
Hello -

I'm afraid I can't answer your first question very well. Certainly in the case of A/D converters, the resolution falls off as the speed increases. This is because it takes time to determine the level of the analog input signal with precision. However, D/A converters don't really have the same types of limitations. In a D/A converter, you are just turning current sources on and off. Obviously there must be some upper limit to the operating frequency, but I can't say specifically whether there would be significant degradation going from 54 MHz to 108 MHz. It probably depends on the part itself, and how it is designed.
Analog processing will give better overall results than will digital, at least in this situation. We've already determined that one needs around 12 bits in the DAC to *accurately* retrieve the 8 bits off the disc. If you wanted to do processing in the digital domain without losing accuracy, you would need a 14-bit DAC with a linearity error (specified in LSBs) equal to that of the 12-bit DAC. I don't think that such a DAC exists. The new Analog Devices series has both 12-bit and 14-bit DACs, but I would assume that the 14-bit parts have a greater non-linearity (in terms of the LSB for each DAC). I don't know for sure, because these parts are so new that the full data sheets don't yet exist.
Also, keep in mind that the 12-bit parts are less than $20, while the 14-bit parts are around $100. This is an extraordinary amount of money to pay for this feature. Added to this is the fact that these types of adjustments belong in the display, not in the source device. Putting the adjustments in the source device means that every device will need its own duplicate set of adjustments, adding even further to the cost of the overall system.
The stability of the analog controls is really a non-issue in this application. For instance, you don't worry that the volume level of your stereo changes from one day to the next, even though it probably has an analog volume control.
And unfortunately, you cannot compensate for the non-linearities of the DACs. If the DAC always exhibited the same errors at all times, you could. However, these errors fluctuate with time, temperature, and other factors. This is not really a problem, though. The errors I specified in my previous post are guaranteed over the full operating temperature range of the part, and we are still able to achieve 7.9+ bits of accuracy. There really wouldn't be much point in trying to achieve 7.95+ bits of accuracy. This is like worrying about the difference between 0.002% and 0.001% distortion in an audio preamplifier.
Hope this helps,
Charles Hansen
First, I really appreciate your replies. I hope I don't seem too argumentative, but there are just a couple more things I don't quite understand.

The way you've characterized the noise performance of the DAC in bits is a nice conceptual model, but lacking in specific details. Is this absolute performance, or what percentile of the noise falls within the ranges you're specifying? As I've said, I don't know much about DACs, so perhaps this noise really is fairly additive. But, as it relates to quantization noise from digital processing, I assert that the DAC noise you've stated will *hide* nearly all of it. In terms of absolute ranges - yes, the most-significant bit in which you'll find noise will now add to whatever you previously had, but statistically, I believe you'll start to approach a Gaussian distribution with a sigma that increases slightly (much more slowly than additively) each time you add in a new noise source. This means that noise won't creep into the signal nearly as fast as you suggest.
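(A quick numerical check of that claim -- a Python sketch assuming the noise sources are independent and roughly Gaussian:)

```python
import numpy as np

# Independent noise sources add root-sum-square, not linearly, so the
# total sigma grows much more slowly than the worst-case additive bound.
rng = np.random.default_rng(0)
n = 1_000_000
sigmas = [1.0, 1.0, 1.0, 1.0]          # four equal, independent sources

total = sum(rng.normal(0, s, n) for s in sigmas)
print(f"measured sigma:  {total.std():.3f}")
print(f"rss prediction:  {np.sqrt(sum(s**2 for s in sigmas)):.3f}")  # 2.0
print(f"additive bound:  {sum(sigmas):.1f}")                          # 4.0
```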
Furthermore, I was assuming that the dither was added after oversampling, when I was talking about processing. The analog filtering will turn the dither into slightly more precision, in the signal band.
Another thing to keep in mind is that the signal you're converting is COMPRESSED!! The level of performance you're talking about is really better than the DVD medium is designed to support. Except in some carefully constructed test cases, DVDs just can't supply 8 full bits of precision. Personally, I like my source material to be the limiting factor of my A/V systems, but beyond a certain point, you're just helping the viewer see how bad the compression artifacts of DVDs really are. I actually want to try adding dither in the DCT domain, based on the quantization matrix. I predict that the image will look more noisy, but you won't see any ringing artifacts. Unfortunately, the additional digital noise may not please some cadence-based line doublers (though that could be remedied if they determine cadence from the non-dithered stream, and then line-double the dithered one).
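(For what it's worth, here's a hypothetical sketch of that DCT-domain dither experiment in Python. Everything in it is an assumption for illustration: the quantization matrix is the standard JPEG luma table standing in for an MPEG quantizer, and real decoding would of course happen inside the MPEG decoder, not like this:)

```python
import numpy as np
from scipy.fft import dctn, idctn

# Hypothetical sketch: after dequantizing an 8x8 block, add noise
# scaled to each coefficient's quantization step, so the quantization
# error is decorrelated from the image (trading ringing for noise).
# Standard JPEG luma table, standing in for an MPEG quantizer:
QUANT = np.array([[16, 11, 10, 16, 24, 40, 51, 61],
                  [12, 12, 14, 19, 26, 58, 60, 55],
                  [14, 13, 16, 24, 40, 57, 69, 56],
                  [14, 17, 22, 29, 51, 87, 80, 62],
                  [18, 22, 37, 56, 68, 109, 103, 77],
                  [24, 35, 55, 64, 81, 104, 113, 92],
                  [49, 64, 78, 87, 103, 121, 120, 101],
                  [72, 92, 95, 98, 112, 100, 103, 99]], float)

def dithered_decode(quantized_coeffs, rng):
    """Dequantize an 8x8 block of DCT coefficients with added dither."""
    dequant = quantized_coeffs * QUANT
    # Uniform dither spanning each coefficient's quantization step.
    dither = rng.uniform(-0.5, 0.5, (8, 8)) * QUANT
    return idctn(dequant + dither, norm="ortho")

# Toy round trip on a random block, purely to exercise the idea:
rng = np.random.default_rng(1)
block = np.round(dctn(rng.uniform(0, 255, (8, 8)), norm="ortho") / QUANT)
pixels = dithered_decode(block, rng)
```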
Finally, I agree that it doesn't make a whole lot of sense to put this digital processing in the player, but then it doesn't make a whole lot of sense to put the DAC in the player, either. Ideally, you wouldn't need any of it. However, the nice thing about digital processing is that, unless it's designed by a moron, you can turn it off without incurring any performance penalties for it being there. Also, putting gamma-correction in the player will result in better system-wide noise performance, if the gamma needs to be increased. Same thing with black-level control.
Hello -

Not to be contrary, but you do seem somewhat argumentative:
1) I have never talked about noise performance. My posts were regarding the linearity of the DAC output.
2) I have never talked about dither. To the best of my knowledge, no video DACs employ dither, although this might not be a bad idea.
3) You are right that it makes more sense to use one set of video DACs for all sources. But until you can purchase an SDI-out equipped VCR or HDTV tuner, we will have to live with that redundancy.
4) Even if you turn the (unrequired) digital processing off, you are paying the penalty of a higher cost for the unit.
5) You will still achieve better overall performance, and at a much lower cost, by making picture adjustments in the analog domain, for the reasons noted in my previous postings.
Best regards,
Charles Hansen
> Not to be contrary, but you do seem somewhat argumentative:

Perhaps, or maybe persistent and just a little slow, sometimes. ^_^
> 1) I have never talked about noise performance. My posts were
> regarding the linearity of the DAC output.

Oops, my bad.
> 2) I have never talked about dither. To the best of my knowledge,
> no video DACs employ dither, although this might not be a bad
> idea.

I was talking about dithering the results of any processing down to the precision of the DAC. My assumption is that any digital processing would accumulate up to something like 32 bits of precision per sample, and then you would dither this down to 12 or 14 bits, probably either in software, an ASIC, or an FPGA, just before sending it to the DAC. It's especially good if the first thing you do is oversample the signal. Then dithering has the advantage of moving even more of the quantization noise introduced by processing outside of the signal band, where the analog-domain filters will get rid of it.
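(A minimal Python sketch of that requantization step, assuming TPDF dither of +/- 1 LSB at the target width; floats stand in for the 32-bit intermediates:)

```python
import numpy as np

# Dither a high-precision intermediate result down to DAC width: add
# TPDF dither before truncating, so the requantization error becomes
# signal-independent noise instead of correlated distortion.
def dither_to_dac(samples, dac_bits, rng):
    """samples: float array scaled to [0, 1); returns integer DAC codes."""
    scale = 2 ** dac_bits
    # TPDF = sum of two uniform sources, spanning +/- 1 LSB.
    tpdf = rng.uniform(-0.5, 0.5, samples.shape) + \
           rng.uniform(-0.5, 0.5, samples.shape)
    codes = np.floor(samples * scale + tpdf + 0.5)
    return np.clip(codes, 0, scale - 1).astype(np.int64)

rng = np.random.default_rng(2)
hi_res = rng.uniform(0, 1, 1024)        # stand-in for 32-bit intermediates
dac_codes = dither_to_dac(hi_res, 12, rng)
```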
> 3) You are right that it makes more sense to use one set of video
> DACs for all sources. But until you can purchase an SDI-out
> equipped VCR or HDTV tuner, we will have to live with that
> redundancy.

You mean DVI-out, right? SDI is only a studio thing, and its lack of support for any encryption protocol means it will stay that way. Audio and control are also lacking from SDI, but that alone wouldn't have kept it out of home theater equipment.
BTW, I can personally attest to the inadequacy of 8 bit video. I once looked at some gradients on an SDI monitor, and the quantization artifacts I got, with 8 bits per sample, were as clear as day!
> 4) Even in you turn the (unrequired) digital processing off, you
> are paying the penalty of a higher cost for the unit.

Eh, it could be built into an MPEG decoder virtually for free, or done purely in software. It's orders of magnitude computationally simpler than a good line-doubling algorithm.
However, I was speaking purely on the basis of performance.
> 5) You will still achieve better overall performance, and at a
> much lower cost, by making picture adjustments in the analog
> domain, for the reasons noted in my previous postings.

Okay, beyond the purely theoretical level, I have very little knowledge of analog circuits, so I'll certainly take your word for it.
Anyhow, it sounds like we basically agree on everything, except the things I don't understand. ^_^ Thank you for taking the time to share so much valuable information!
CD is only 16-bit -- we've used 18, 20 and 24-bit DACs to process the data. Based on your info, we don't need anything higher than 17 bits. I don't think anyone uses less than 20-bit DACs today.
Increasing the sample rate is not done to cut corners on the filter. It's to allow the use of a filter that operates farther away from the range of the desired signal. Filters have detrimental effects -- operating at a higher frequency decreases those effects. In the audio world, Pioneer had a DAT player that was only 16-bit, but it sampled as high as 96 kHz. When the 44.1, 48 and 96 kHz recordings were compared, the 96 kHz recording sounded smoother (aka "better"). Same number of bits used to record, but less "filter" effect. The company dCS has published many articles about sample rates and filter interaction. For whatever reason (technical or otherwise -- maybe voodoo), higher sample rates sound better. I should add that we're talking equal quality here -- don't compare "Everything for $1" with "Tiffany's".

Many players also use higher sample rate video DACs because they upsample part of the video signal (can't remember which part right now). The component video is recorded at 4:2:2, but some players upsample that 2 to a 4 -- making the signal 4:4:4 instead. I can't remember the exact reason for this (something about signal delays, I believe), but it's said to make the picture better/smoother/more "film-like".
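(A rough sketch of that chroma upsampling in Python -- linear interpolation is used purely for illustration; real players would use longer interpolation filters:)

```python
import numpy as np

# 4:2:2 video carries one chroma sample per two luma samples; a player
# can interpolate the missing samples to produce 4:4:4 before the DACs.
def chroma_422_to_444(chroma_line):
    """Double the horizontal chroma sample count of one scan line."""
    c = np.asarray(chroma_line, float)
    out = np.empty(2 * len(c))
    out[0::2] = c                              # keep original samples
    out[1:-1:2] = 0.5 * (c[:-1] + c[1:])       # interpolate between pairs
    out[-1] = c[-1]                            # repeat the edge sample
    return out

print(chroma_422_to_444([100, 110, 120, 130]))
# -> [100. 105. 110. 115. 120. 125. 130. 130.]
```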
As far as digital video goes, for $1000 you can get the mod done to your DVD player right now. There are 2 companies that do this mod (grabbing the digital component signal from the DVD before it's converted to analog) which allows a direct digital connection to a display capable of accepting this signal. I believe some plasma displays have this capability. Granted, it's not "on the street", but it's available. And no, the studios most definitely don't want you to have it.
For what it's worth, the new high-end DVD players (Denon, et al.) use 14-bit/108 MHz video DACs. The Ayre DVD player used a 14-bit video DAC in last year's model, but I believe it was only (!) a 54 MHz DAC. I also believe last year's Camelot DVD player used a 14-bit video DAC (but don't quote me on that one).
Sorry if that came across harshly. I was just trying to call into question the hype about these specs.

Video performance is one of those things that should be fairly easy to quantitatively measure. Also, it's the overall performance, not the specs, that we really care about.
We have some digital scopes at work. This summer I'm going to try to put together some test pattern DVDs and capture some samples. I have to use my own test patterns, though, so that I can do computational analysis on the results. The scopes may only be 8-bit, but then so are DVDs. Fortunately, the scopes' bandwidth is several hundred MHz.