Saturday, March 22, 2008

WARNING: Our Scanner May Not Work With Your PACS
...or should that be "Our PACS May Not Work With Your Scanner"?

On the heels of the prior private tag post comes some further information from GE, which provides paranoiacs like me more to worry about.
This is a known issue with the AGFA system in which the AGFA PACS does not support WW/WL > 13 bits. Per the DICOM standard, the WW/WL values can be as large as the image data. Per the HDMR2 DICOM and Annotation SRS (DOC0084074 rev 5), section 2.1.2, Smallest Image Pixel Value / Largest Image Pixel Value, MR image data is stored as 16 bits.

It seems that Agfa only supports 16-bit data for a small number of modalities. The Impax developers are working on a solution to this problem. They are trying to work the fix into the next service pack or perhaps the next service update. I recommend that you contact your Agfa representative regarding this issue...
Ummmm... Guys, this is a real problem, and I think both vendors really should have disclosed it. Unless perhaps this bit problem is more widespread than we know. I would like to know if any PACS out there can deal with 16-bit output from a scanner without modification. If so, please let me know which product can do so. If not, someone please tell me why the scanners' main output is in a form that the majority of the PACS systems cannot handle. And if Centricity (and IntegradWeb for that matter) can't deal with the 16-bit output, then this makes no sense at all.
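
For those who want to see exactly what their scanner is putting on the wire, the relevant header attributes can be dumped with a few lines of script. Here's a rough sketch using the free pydicom toolkit; the filename is made up, and your own favorite DICOM dump utility will show the same fields:

import pydicom  # open-source DICOM toolkit; assumed to be installed

# Hypothetical filename: substitute an image pulled straight off the scanner.
ds = pydicom.dcmread("mr_image_from_scanner.dcm")

print("Bits Allocated:        ", ds.BitsAllocated)        # typically 16 for MR
print("Bits Stored:           ", ds.BitsStored)           # how many bits actually carry data
print("Pixel Representation:  ", ds.PixelRepresentation)  # 0 = unsigned, 1 = signed
print("Smallest Pixel Value:  ", ds.get("SmallestImagePixelValue"))
print("Largest Pixel Value:   ", ds.get("LargestImagePixelValue"))
# Per GE's note above, a Window Width larger than 2**13 = 8192 is the sort of
# value the PACS in question reportedly cannot handle.
print("Window Center / Width: ", ds.get("WindowCenter"), "/", ds.get("WindowWidth"))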

As an aside, every Barco monitor I've ever seen attached to a PACS runs at 8 bits, or at least that's what the Windows Display Properties control panel says on those machines.

Frankly, I'm confused. And very, very concerned about what we are seeing or not seeing. I am disappointed in both vendors for not making this situation clear. While I think my group of radiologists is top notch, I find it difficult to believe that we are the first to have discovered the discrepancy.

So, who is at fault here? Is it Agfa for not being able to receive the full output from the GE MRI, or GE for sending data that the PACS cannot completely digest? Let's be democratic and blame both of them. And both need to work on the solution. Which I'm assured is in progress.

8 comments:

Anonymous said...

For 3D renderings this is a critical issue. However, even though most vendors support above-8-bit precision, none of them performs real HDR rendering.

http://en.wikipedia.org/wiki/High_dynamic_range_rendering

That being said, for 2D image display the lack of bits is not really an issue unless you perform some kind of operation on the data prior to display, because no matter how many bits you start with, the output will be clamped to 8 bits.
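
To put that concretely, here is a rough sketch (plain NumPy, made-up values) of the kind of linear window/level mapping that happens on the way to the screen, and why the number of bits going in doesn't change how many grey levels come out:

import numpy as np

def window_level_to_8bit(pixels, center, width):
    # Linear window/level: map [center - width/2, center + width/2] onto 0..255,
    # clamping anything outside the window to pure black or pure white.
    lo = center - width / 2.0
    scaled = (pixels.astype(np.float64) - lo) / width * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)

# A "16-bit image" containing every stored value from 0 to 65535 exactly once.
ramp = np.arange(65536, dtype=np.uint16)

# Window the whole range down for display: 65536 inputs, at most 256 outputs.
display = window_level_to_8bit(ramp, center=32768, width=65536)
print("distinct grey levels on screen:", len(np.unique(display)))  # never more than 256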

Anonymous said...

You're right about the 8-bit Windows palettes on many workstations. After paying extra for monitors capable of 10-bit LUTs, a lot of software simply uses the standard Windows entry into the device driver - i.e. 8 bits only (or less). The software has to be written specifically against the Barco API to take advantage of the extra 2 bits. How to tell? If you change the palette settings in Display Properties and the image changes, then you're not using the API, and ergo not the full 10-bit LUTs either.

Anonymous said...

Assume everyone knows that the whole point of window/level is to transform pixel intensity data from a higher dynamic range down to something that can be displayed on conventional video gear (and perceived by the human eye).

Two observations, then:

[1] It is a safe bet that the video display of the console on the GE scanner is also displaying only 8 bits of grayscale. So the whole discussion of 8 bits versus 10 bits on the display hardware side is likely an irrelevant sideshow to the discussion about the internal processing of the data. My opinion on this issue: based on my review of the literature, the extra 2 bits don't provide any statistically measurable clinical benefit to radiologists reading real studies, and would be a huge engineering cost to support on an ongoing basis. You are unlikely to ever see a widespread embrace of a 10-bit hardware display model by PACS vendors. 8-bit grayscale is here to stay as the output medium (albeit remapped to 32-bit RGB color displays more and more often).

[2] Assuming the GE summary is correct (which is always risky in cross-vendor statements), I can see why the Agfa engineers might have thought they could get away with not using the full potential 16 bits of acquisition data. In practice, most scanners don't actually produce more than 12 bits, and designate the other 4 bits of the standard 16-bit MONOCHROME1/MONOCHROME2 values as padding. However, MOST is not ALL, and there are devices (particularly outside of MR) that quite legally represent acquisition pixel intensity over the entire allowed 16-bit range (and don't get me started on signed vs. unsigned pixel representations). Sounds like someone was trying to do a clever memory or performance tuning of their image processing pipeline code based on an assumption about standard practices and ended up outsmarting themselves when encountering new kinds of perfectly legal images.

If your biomed folks can get a full 16-bit DICOM stepped-square test image loaded on both the scanner and the PACS, then the GE claim can be tested by measuring the on-screen intensity using a photometer. Or you could just ask Agfa if GE has the situation summarized correctly.

If Agfa is binning the 16-bit data down to 13 bits before sending it through the W/L transformation to 8 bits for display on the screen, then they do indeed need to address it, because you would be losing contrast that would be very perceptible in tight windows (e.g. as the window width nears 256).
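
To make that concrete, a back-of-the-envelope sketch (plain NumPy, made-up numbers, and a guess at how the binning might be done) of how many grey levels a tight window has left to work with once the data has been knocked down to 13 bits:

import numpy as np

def window_to_8bit(pixels, center, width):
    # Linear window/level mapping, same idea as the sketch a few comments up.
    lo = center - width / 2.0
    return np.clip((pixels.astype(np.float64) - lo) / width * 255.0, 0, 255).astype(np.uint8)

# 256 consecutive 16-bit stored values sitting inside a tight, width-256 window.
center, width = 30000, 256
stored = np.arange(center - 128, center + 128, dtype=np.uint16)

# Full-fidelity path: nearly every stored value gets its own grey level.
full = window_to_8bit(stored, center, width)

# Hypothetical 13-bit path: throw away the low 3 bits before windowing.
binned = window_to_8bit((stored >> 3) << 3, center, width)

print("grey levels, 16-bit path:", len(np.unique(full)))    # 255
print("grey levels, 13-bit path:", len(np.unique(binned)))  # 32 -- visible banding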

Anonymous said...

Fiddling with the original data is risky; there are bound to be quantization errors.

It's true that if you *just* display the image data, the result might be just tolerable, although due to different discretization schemes even the final image on screen is likely to vary!

I would like to see any literature stating that the loss of a *few* bits does not pose any risk...

DHS said...

I remember reading something about ten years ago suggesting that the eye can't perceive more than 7-8 bits' worth of grey. I agree with Anon 2:44 that this is the whole point of windowing - to reduce the (for example) 11 bits of CT data into those 7 bits.

I disagree, though, that the loss of contrast would be perceptible - what is the SNR of the scanners in question? Are such very tight windows actually usable? Assuming a -1000 to 1000 HU scanner, truncating to 12 bits means granularity at the 0.5 HU scale - is it useful to see that one voxel is 0.5 HU more than the one next to it?
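
The arithmetic behind that figure, for anyone who wants to check it:

# Quick check of the 0.5 HU figure (assuming the -1000 to 1000 HU range above).
hu_range = 1000 - (-1000)   # 2000 HU of dynamic range
levels = 2 ** 12            # 4096 representable values at 12 bits
print(hu_range / levels)    # 0.48828125 -- roughly half a Hounsfield unit per step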

I can't comment for other scanning modalities, of course - but 12 bits of SNR is a lot.

Anonymous said...

This has been an issue for quite some time. In 2000 I struggled with this very same problem between a Hitachi Airis MRI scanner (Cedara DICOM toolkit) and an Emed PACS (Cedara DICOM toolkit) (NOTE THE IRONY HERE?). After literally hundreds of hours of troubleshooting, finger pointing, and denial by Hitachi engineers, the problem was discovered by Herman O. (Thanks Herman!!) Comparing both DICOM conformance statements, Emed only accepted up to 12-bit images and Hitachi was sending ONLY 16-bit images. Hitachi quickly implemented a software patch that added a global option to send either a 12-bit or a 16-bit image. I am surprised that other vendors haven't figured this out yet. For criminy cripes' sakes, this was 8 years ago.

Anonymous said...

Any display system that offers control of window/level is in fact handling > 8-bit images. Having a scanner generate more bits than can natively be displayed by a monitor is not in itself a problem - that is, as long as the display software offers visualization options allowing the user to explore this dynamic range. It is disgraceful if, in 2008, display software is clipping (or just linearly window/leveling) > 8 bits of image data into 256 display levels. If there is any blame, it has to lie on the display software side.

Having more than 8 bits of image data means increased dynamic range:
- increased ability to represent quantization error/detector noise while still capturing the entire signal range (being able to see the noise lets you better interpret very faint signals and better express your intensity measurement errors)
- much lower risk of image saturation (much reduced need to tweak scanner settings, less need to retake images)
- ability to explore the dynamic range (see high intensity signals, adjust display settings to examine lower intensity signals)
- greater ability to digitally process the image (segmentation, higher precision intensity measurements)
- allows more advanced visualization (e.g. HDR viewing as suggested by a previous poster, which is effectively an optimized way of representing the complete intensity range within a more constrained (256 grey level) display range; see the sketch at the end of this comment).

All this is good; now the display software just has to handle it. End of story.

HDR images look a little surreal, but are a great way of seeing whatever intensity distributions lie within your image data. In the same way that a map view (full image with zoom area displayed) allows you to understand the spatial context of a zoomed image sub-area view, an HDR view would allow you to understand the intensity context of a standard w/l view.
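
As a toy illustration of that last point, here is a sketch (plain NumPy, synthetic data, and ordinary histogram equalization standing in for a real HDR tone-mapping operator) of squeezing the full range of a 16-bit image into 256 display levels while keeping both the faint background and the bright structures visible:

import numpy as np

def equalize_to_8bit(pixels):
    # Histogram-equalize a 16-bit image into 0..255 so the available grey
    # levels are spread over the intensities that actually occur.
    flat = pixels.ravel()
    hist, edges = np.histogram(flat, bins=65536, range=(0, 65536))
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]
    mapped = np.interp(flat, edges[:-1], cdf * 255.0)
    return mapped.reshape(pixels.shape).astype(np.uint8)

# Synthetic 16-bit "image": a faint, noisy background plus one very bright blob.
rng = np.random.default_rng(0)
img = rng.normal(2000, 200, size=(256, 256)).clip(0, 65535).astype(np.uint16)
img[100:110, 100:110] = 60000

view = equalize_to_8bit(img)
# A single linear window wide enough to include the blob would crush the
# background into a handful of grey levels; the equalized view keeps structure
# in both without any manual window/level fiddling.
print(view.min(), view.max())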

David Clunie said...

Hi Sam

As should be clear from GE's explanation, speculating about the cause of a problem in the absence of a thorough analysis probably does more harm than good.

To that end, would you mind sending me one of the images from your new GE scanner that you know looks "bad" on your PACS, preferably obtained directly from the scanner somehow rather than sent through the PACS? I would prefer to evaluate it before adding to the confusion in this thread.

I can then send you some test images that the Agfa should be able to display correctly, if it properly supports the standard, recognizing that DICOM does not specify conformance requirements for image display (except when presentation states are involved).

David