Wednesday, November 30, 2011

Disruptive Technology

I'm sitting at one of the RSNA Bistro venues, having just spent $20 on a mediocre buffet meal that at least included some mildly healthy options. I've got a couple of things to tell you about, some from the meeting, of course, but one gleaned from FoxNews while perusing the net over lunch.

Let's start with the fun stuff. I think I've stumbled across the next revolution in photography, and truly this is a disruptive technology. I'm referring to the new Lytro camera, which uses "light field" imaging instead of regular old, well, light. Here are the three models, the middle version having 16 GB of storage for 750 images, and the others coming in at 8 GB for 350 images.

I'll refer you to Lytro's site for an explanation of what goes on in this little box.

Basically...
Capture living pictures with the press of a single button. By instantly capturing complete light field data, the Lytro gives you capabilities you've never had in a regular camera...

Since you'll capture the color, intensity, and direction of all the light, you can experience the first major light field capability - focusing after the fact. Focus and re-focus, anywhere in the picture. You can refocus your pictures at any time, after the fact.
And focusing after the fact means no auto-focus motor. No auto-focus motor means no shutter delay. So, capture the moment you meant to capture, not the one a shutter-delayed camera captured for you.
And here is what you can create. Click anywhere on the image to refocus, double-click to zoom.


This is the start of something big, I think, although it will probably take quite a while for this to migrate into mainstream photography. Of course, it took quite a while for digital to overtake film. You saw it here on DoctorDalai.com first.
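If you're wondering how focusing after the fact could even work, the classic demonstration is "shift-and-add" synthetic refocusing over the captured sub-aperture views. Here's a minimal sketch of that idea, assuming a light field already stored as a grid of views; it illustrates the general principle only, and is certainly not Lytro's actual processing pipeline.

```python
# Minimal shift-and-add refocusing sketch (illustrative only, not Lytro's method).
# The light field is assumed to be stored as sub-aperture views L[u, v, y, x].
import numpy as np

def refocus(light_field, alpha):
    """Synthetically refocus by shifting each sub-aperture view in proportion
    to its offset from the aperture center, then averaging all the views."""
    n_u, n_v, height, width = light_field.shape
    cu, cv = (n_u - 1) / 2.0, (n_v - 1) / 2.0
    out = np.zeros((height, width))
    for u in range(n_u):
        for v in range(n_v):
            dy = int(round(alpha * (u - cu)))   # vertical shift for this view
            dx = int(round(alpha * (v - cv)))   # horizontal shift for this view
            out += np.roll(light_field[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (n_u * n_v)

# Toy example: a 5x5 grid of 64x64 views, refocused at two different depths.
lf = np.random.default_rng(0).random((5, 5, 64, 64))
near = refocus(lf, alpha=1.0)    # one depth plane snaps into focus...
far = refocus(lf, alpha=-1.0)    # ...and a different alpha picks another
print(near.shape, far.shape)
```

Different values of alpha simply pick out different focal planes from the same exposure, which is the whole trick behind "focus later."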

On to things Radiologic.



I attended a seminar on lifeIMAGE LINCS, the lifeIMAGE Network Cloud Service, presented by CEO Hamid Tabatabaie, former CEO of AMICAS if you didn't know. LINCS is now fully operational and in use at multiple centers; Hamid showed us a live view of user stats, and the system is quite impressively active. For the full explanation, check the lifeIMAGE website. In brief, the system facilitates easy, HIPAA-compliant sharing of studies between institutions, with the idea of empowering physicians themselves to "be the network". Just about every permutation is covered, as long as someone in the equation has a LINCS account. A study can be sent or received with a few clicks among LINCS members, and if a "foreign" study is to be imported into LINCS, the appropriate electronic paperwork is presented. The study can then be nominated for upload to PACS, pending approval by whichever human you designate.
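To make that workflow a bit more concrete, here's a purely hypothetical sketch of the import-nominate-approve sequence. The class and method names are mine, invented for illustration; they are not lifeIMAGE's actual API.

```python
# Hypothetical sketch of the sharing workflow described above (not lifeIMAGE code).
from dataclasses import dataclass

@dataclass
class Study:
    accession: str
    source: str                    # e.g., an outside CD from "St. Elsewhere"
    paperwork_done: bool = False   # the electronic consent/authorization step
    approved_for_pacs: bool = False

class SharingAccount:
    """Toy model of a cloud sharing account -- the physician 'is the network'."""
    def __init__(self):
        self.inbox = []       # studies received or imported
        self.nominated = []   # studies queued for PACS, awaiting a human

    def import_foreign_study(self, study):
        # An outside ("foreign") study needs its electronic paperwork first.
        if not study.paperwork_done:
            raise ValueError("complete the electronic paperwork before import")
        self.inbox.append(study)

    def nominate_for_pacs(self, study):
        self.nominated.append(study)    # still pending human approval

    def approve(self, study):
        study.approved_for_pacs = True  # only now would it be pushed to PACS

account = SharingAccount()
outside_ct = Study("A12345", "St. Elsewhere CD", paperwork_done=True)
account.import_foreign_study(outside_ct)
account.nominate_for_pacs(outside_ct)
account.approve(outside_ct)
print(outside_ct.approved_for_pacs)     # True
```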

Two partnerships offering viewer options and more were announced:

  • lifeIMAGE is demonstrating a technology integration with Vital Images, an advanced visualization and analysis software company, which shows Vital’s FDA-cleared universal viewer launching from LINCS. The two companies are also exploring a collaboration to provide on-demand access through LINCS to advanced visualization tools and comprehensive clinical solutions for cardiovascular, neurovascular and oncology imaging.
  • lifeIMAGE also has partnered with ClearCanvas, a leading provider of innovative diagnostic imaging applications, including Picture Archival and Communication Systems (PACS) and workstations. ClearCanvas offers a free version of its diagnostic workstations in an open-source format, as well as an FDA-approved clinical version that will connect the 15,000 members of the ClearCanvas community to lifeIMAGE.
In my own humble opinion, this places lifeIMAGE on the road to creating a Cloud-based PACS, although when I suggested this to Hamid he just smiled and shook his head. Maybe someday.

lifeIMAGE literally offers us a life-saving (and disruptive) technology, and that is NOT an exaggeration. At our trauma hospital, it is more likely than not that a patient will arrive with a CD from St. Elsewhere that has not been reported, and probably not even reviewed. And sometimes, that CD won't even load. In the best possible circumstance, we the rads spend 10 minutes loading the CD and reviewing it with the house staff. In other cases, the patient is rescanned, the new scan interpreted, and then reviewed with the residents, adding 30-40 minutes to the process (and doubling the radiation dose, if anyone cares about that). Of course, in the worst possible scenario, the patient could well be dead 20 minutes after arrival in the ED if he is the victim of severe trauma. What would we give to have the images in hand and reviewed before the patient hits the door? A few dollars goes a long way, and that's what lifeIMAGE costs when distilled down to the basics.

Not to sound histrionic, but isn't the patient's life worth that? (And no, I don't get a kickback from Hamid.) This is damn good technology, and you should, you MUST look at it.


My second disruptive technology is one you can't buy, at least not directly. Fovia sells its 3D technology not to end-users like me, but rather to PACS and advanced visualization companies, including Merge (where I use a limited thick-client version on my PACS), as well as GE and Vital, among several others. The full version of their engine operates as a thin client with server-side processing, and it works very, very well. Fovia has taken a very logical approach. "Which would you bet on as the best investment," asked Ken, Fovia's CEO, "a system that uses proprietary graphics cards, one that uses off-the-shelf gaming video cards, or one that uses the CPU of your computer and leverages Moore's Law?" Ken's answer, of course, is number three.

Fovia has bucked the system, going against the prevailing paradigm of proprietary graphics cards (viz. TeraRecon) or gaming cards (nVidia, etc.), and does the graphics processing with a server's CPUs. This may seem counterintuitive at first, but stop and peek inside your computer. Even the little MacBook Airs now have a dual-core processor, and what you can buy for $1K on the street (well, don't buy it on the street, but you get the idea) outstrips anything you could have purchased for $10K five years ago. Add multithreading to the mix, and you can see that leveraging your investment on the assumption that CPUs will keep getting more powerful makes considerably more sense than assuming any other factor will accelerate to the same degree. Fovia notes a 30-50 fold increase in the speed of their product over the last 5 years, based in part on the rapid growth of CPU processing power. Fovia's system is highly scalable and flexible: the more CPUs, the faster it runs. Given Intel's recent announcement of a 50-core chip, processing might soon be about as close to instantaneous as it gets.
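The scalability argument is easy to demonstrate even with a toy renderer: split the work across however many cores the server has, and throughput grows with the core count. Here's a minimal sketch of a CPU-parallel maximum-intensity projection over a synthetic volume; it's illustrative only, and obviously far cruder than Fovia's actual HDVR engine.

```python
# Toy CPU-parallel maximum-intensity projection (MIP) -- illustrative only.
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from functools import partial

def mip_band(volume, rows):
    """Project one horizontal band of the volume along the z-axis."""
    start, stop = rows
    return volume[:, start:stop, :].max(axis=0)

def mip_parallel(volume, workers):
    height = volume.shape[1]
    bounds = np.linspace(0, height, workers + 1, dtype=int)
    bands = list(zip(bounds[:-1], bounds[1:]))
    # A real engine would use shared memory; here each worker gets a pickled copy.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        pieces = list(pool.map(partial(mip_band, volume), bands))
    return np.vstack(pieces)

if __name__ == "__main__":
    vol = np.random.default_rng(1).random((256, 512, 512)).astype(np.float32)
    image = mip_parallel(vol, workers=8)   # throughput scales with core count
    print(image.shape)                     # (512, 512)
```

Swap in 4, 8, or 50 workers and the same code simply goes faster, which is exactly the bet Fovia is making on CPU growth.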

You will agree that Fovia's High Definition Volume Rendering (HDVR) can produce some powerful images, as you will see in this gallery page iframed from Fovia (if it doesn't load, go to this LINK):


For the techies among us, Fovia's claim to fame is the use of a frequency-domain-based algorithm. This involves "deep supersampling," rendering each voxel up to 32,768 times. Sounds pretty involved to me.
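For what it's worth, 32,768 happens to be 32 cubed, which suggests something on the order of 32 sub-samples per voxel along each axis; that reading is my own assumption, not Fovia's documentation. Here's a rough sketch of what supersampling a single voxel of a density field might look like.

```python
# Rough supersampling sketch (my assumption about the idea, not Fovia's algorithm):
# instead of sampling a voxel once, average many jittered sub-samples inside it.
import numpy as np

def supersample_voxel(density, corner, samples_per_axis=8):
    """Average samples_per_axis**3 stratified-jittered samples of `density`
    inside the unit voxel whose lower corner is `corner`."""
    n = samples_per_axis
    rng = np.random.default_rng(0)
    # One jittered sample in each of the n*n*n sub-cells of the voxel.
    grid = (np.stack(np.meshgrid(*[np.arange(n)] * 3, indexing="ij"), axis=-1)
            + rng.random((n, n, n, 3))) / n
    points = corner + grid.reshape(-1, 3)
    return density(points).mean()

# Toy density field: a soft blob centered at the origin.
density = lambda p: np.exp(-np.sum(p**2, axis=-1))
value = supersample_voxel(density, corner=np.array([0.5, 0.5, 0.5]),
                          samples_per_axis=32)   # 32**3 = 32,768 samples
print(value)
```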

While you can't buy Fovia directly, you can buy some products which use its technology. As an aside, I discussed with the execs the possibility of Fovia creating its own GUI, its own wrapper for the incredible viewing software. The answer? "Others have suggested that..." I guess we'll have to wait and see. But for the moment, they do a darn good job in the background.

ADDENDUM:

Dr. Robert Taylor, CEO of TeraRecon, sent me this comment on the dedicated-card vs. GPU vs. CPU debate:

I read your blog this AM and noticed the barb from Fovia about proprietary cards. To set the record straight, I just wanted to point out that TeraRecon also has a full SW option, and we only use the VolumePro (VP) because it happens to be dramatically better than using software, and scalable. We can now render over 70,000 slices in real time (the combination of many users working at once) from a single 2U server thanks to this technology. Today, and for the foreseeable future, that's impossible with SW (Fovia, GE, Philips) or GPU (Vital, Siemens).

When the sledgehammer (VP) is not required, we also have the nutcracker (SW), and this is why we have sold hundreds of laptop-based systems that work without a graphics board in sight. We also hope and expect that one day CPU technology will be able to do what is needed, and we're fine with that. It's the application that matters in the long run.
Thanks!
Robert

2 comments:

Unknown said...

HDVR Support: 70,000 slices is just 140 data-sets of 0.25 GB each. Rendering 140 data-sets concurrently/interactively is trivial for an HDVR 1U server.

Unknown said...

More clarification from HDVR support: 70,000 slices of CT data requires just 36 GB of RAM, and a 1U blade may easily have about twice that. Rendering all of these data concurrently/interactively for multiple clients is exactly the job HDVR is designed to do.