
Advanced Visualization: Unbelievably Real 3D Projections are Here

By Michael Mozdy

In December 2017, I had the opportunity to sit down with five faculty members in the Department of Radiology and Imaging Sciences to discuss a number of exciting new research and education topics. This is the second of three transcripts from this enlightening conversation. Also read the first: “Machine Learning in Medicine” and the third: “How Do Humans and Machines Perceive Radiologic Imaging Information?”

Translating radiographs, CT scans, and MRI scans into 3D visualizations and physical 3D models is an exciting development of the past few years. It certainly catches the attention of the non-medical public, probably because it sort of feels like sci-fi technology is finally a reality. It does seem like specific, mostly surgical, improvement projects are driving these advances – what is the state of the art in advanced visualization today and what is it being used for?

Quigley: The big categories for advanced visualization are virtual modeling, physical modeling, and augmented reality. The applications for these are educational simulation, procedural simulation, and direct patient care.

For virtual modeling, we can gather the content that we as radiologists want our clinicians to see and condense it down to a virtual model; they can then use it to show a patient before a procedure, to educate the patient, or to train a resident or fellow. If I could put a pulmonary resident through a virtual bronchoscopy and then test whether they could find the lesions we have peppered throughout a virtual bronchogram, we would have an immediate way of assessing their performance.

I like to think of 3D visualization and 3D printing as two sides of the same coin. The data from a virtual model can be put directly into a printing path for a 3D printer. If we’re creating data sets for a physical model, we can also feed them into a machine learning algorithm. Say we’re looking at a vessel to assess its stenosis: we can feed that data set into a machine learning model and kill two birds with one stone.
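To make that “printing path” concrete, here is a minimal sketch of going from a segmentation volume to a printable STL file, using marching cubes for surface extraction. The spherical mask, voxel spacing, and file name below are illustrative stand-ins, not details from Quigley’s lab.

```python
# Sketch: binary segmentation volume -> triangle mesh -> ASCII STL.
# A slicer program then turns the STL into 3D-printer toolpaths.
import numpy as np
from skimage import measure

# Synthetic stand-in for a segmented structure: a sphere in a 64^3 volume.
grid = np.indices((64, 64, 64))
mask = ((grid - 32) ** 2).sum(axis=0) < 20 ** 2

# Marching cubes converts the volume into vertices and triangular faces;
# `spacing` carries the CT voxel size (mm) so the model prints to scale.
verts, faces, _, _ = measure.marching_cubes(
    mask.astype(np.float32), level=0.5, spacing=(0.8, 0.8, 0.8)
)

# Write an ASCII STL by hand to avoid extra mesh-library dependencies.
with open("model.stl", "w") as f:
    f.write("solid segmentation\n")
    for tri in faces:
        a, b, c = verts[tri[0]], verts[tri[1]], verts[tri[2]]
        n = np.cross(b - a, c - a)
        n = n / (np.linalg.norm(n) + 1e-12)  # unit facet normal
        f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n    outer loop\n")
        for v in (a, b, c):
            f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
        f.write("    endloop\n  endfacet\n")
    f.write("endsolid segmentation\n")
```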

In the case of augmented reality, you can map a patient and overlay that patient’s MRI with a heads-up display before you go in and do a procedure like a hip aspiration. The limitations have been technological: several of the head-mounted displays and AR-projection displays have limited resolution. But these are getting better and better, so we’re getting devices with 4K resolution in each eye – close to a diagnostic monitor – which you can overlay on a patient before doing a procedure. Our surgeons are really interested in that.

 

How does augmented reality work? Who gets to see the overlay in the operating room?

Quigley: There are a couple of projectional techniques that can be used in operating theaters. One is a projected hologram that sits over the patient and puts the data right in front of you, another is a head-mounted display, and a third is retinal projection – shaping the image to the operator’s eye from a glasses-mounted display.

The holy grail of augmented reality overlay is to have the overlay move with the patient’s breathing and heart rate. There’s a huge potential there for minimizing the number of passes on a biopsy or making sure you get to the right lesion.

Mills: Using augmented reality for percutaneous treatments would be valuable. You could realistically use PET data to show the active part of the tumor, saving people from open resection and potentially improving your treatment and margins.
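The “mapping” step Quigley and Mills describe comes down to registration: solving for the transform that carries image coordinates onto the patient, so the rendered anatomy lands where the real anatomy is. Below is a minimal sketch of the standard least-squares rigid fit (the Kabsch algorithm) from matched fiducial points; every coordinate and name in it is made up for illustration.

```python
# Sketch: fit the rigid transform mapping MRI coordinates to patient/room
# coordinates from a few matched fiducial points (e.g., skin markers
# located both in the scan and with a tracked pointer in the room).
import numpy as np

def rigid_fit(image_pts, patient_pts):
    """Least-squares R, t such that patient ~ R @ image + t (Kabsch)."""
    ci, cp = image_pts.mean(axis=0), patient_pts.mean(axis=0)
    H = (image_pts - ci).T @ (patient_pts - cp)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cp - R @ ci

# Three or more non-collinear fiducials suffice; values are illustrative.
img = np.array([[0.0, 0, 0], [100, 0, 0], [0, 80, 0], [0, 0, 60]])
pat = np.array([[10.0, 5, 2], [108, 9, 4], [7, 85, 1], [11, 3, 62]])
R, t = rigid_fit(img, pat)

# Any MRI point can now be placed in room space for the overlay.
overlay_point = R @ np.array([50.0, 40, 30]) + t
```

Tracking respiratory and cardiac motion, as Quigley notes, is the harder problem: the rigid fit above would have to be updated continuously, or replaced with a deformable model.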

 

Is the technology for creating advanced visualization improving?

Quigley: It’s getting a lot easier. We used to use home-built software to get DICOM images into a 3D modeling environment. What I was doing two years ago is almost a turnkey solution now – it’s getting easier and easier. Within a couple of minutes, we can go from a CT scan to a virtual image you can put on a head-mounted display and walk through.
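A bare-bones version of that turnkey CT-to-headset path might look like the sketch below: load a DICOM series, convert to Hounsfield units, threshold, mesh, and export a format a VR engine can display. The directory name, the 300 HU bone cutoff, and the presence of rescale tags in the series are all assumptions made for illustration.

```python
# Sketch: DICOM CT series -> Hounsfield units -> bone surface -> OBJ file.
import glob
import numpy as np
import pydicom
from skimage import measure

# 1. Load and sort the slices of one CT series (assumes >= 2 axial slices).
slices = [pydicom.dcmread(p) for p in glob.glob("ct_series/*.dcm")]
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))

# 2. Stack into a volume and rescale raw values to Hounsfield units
#    (assumes the series carries RescaleSlope/RescaleIntercept).
vol = np.stack([s.pixel_array for s in slices]).astype(np.float32)
vol = vol * float(slices[0].RescaleSlope) + float(slices[0].RescaleIntercept)

# 3. Mesh a bone surface; ~300 HU is a common illustrative cutoff.
dz = abs(float(slices[1].ImagePositionPatient[2])
         - float(slices[0].ImagePositionPatient[2]))
dy, dx = (float(v) for v in slices[0].PixelSpacing)
verts, faces, _, _ = measure.marching_cubes(vol, level=300.0,
                                            spacing=(dz, dy, dx))

# 4. Write a Wavefront OBJ (faces are 1-indexed) for import into a VR engine.
with open("ct_bone.obj", "w") as f:
    for v in verts:
        f.write(f"v {v[0]} {v[1]} {v[2]}\n")
    for a, b, c in faces + 1:
        f.write(f"f {a} {b} {c}\n")
```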


 

It seems like advanced visualizations could impact not just the precision and knowledge of a surgeon but also the entire surgical workflow.

Quigley: For reconstruction surgery, such as a fibular graft that is re-shaped into a mandible, the way we used to do it was to take the graft out on the back table, start cutting it with a miter saw, shape it, fit it in, take it back out, make a few more cuts, re-angle it, and then put the plating array down. Now we can do that pre-surgically: build a virtual fibula, virtually cut it, and design a cutting guide that can be 3D printed for the OR. When you expose the fibula, you have an autoclavable cutting guide that looks like a miter saw guide, and it creates a customized, better-fitting graft. You’ve shaved an hour off of operating room time. That’s what we want to do: look at savings in mortality, morbidity, and operating room time – time a patient is under anesthesia – because that’s a chance to really affect clinical outcomes.

McNally: In neuro, we’re looking to decrease fluoroscopy time and biopsy time because those procedures block the scanner from other uses. Some patients have a long wait simply because of scanner availability. If you could get a scan and then move away from the scanner, using virtual reality to complete the procedure in a different room, that would get everyone home sooner.

Quigley: The reduction in radiation is also huge. If you performed biopsies with augmented reality, you could essentially be doing a virtual fluoroscopy.

Mills: We use a lot of computer-aided templating for orthopedic joint replacement and some of the more commonly performed procedures. They use it as a guide – it’s a tool, not a replacement. We’ve already seen it improve the function of hip arthroplasty – the number of complications has decreased considerably just from preoperative templating, and they’re doing that on radiographs, so imagine what they could do when they add three-dimensional CT or MRI scans.

Quigley: …and you can do stress modeling of the cortical thickness – so you can figure out what the shear forces are when you pack in the prosthesis; you can model where it’s likely to fail and you know if you need to augment or change your operative approach…

Mills: …yes, and to do all of that pre-operatively where you can predict what complicating factors might be before you start the surgery is really important.
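Real pre-operative planning leans on finite-element models for the kind of stress prediction Quigley describes, but a back-of-the-envelope sketch conveys the shape of the calculation: estimate the average shear stress on the cortical wall as a press-fit stem is impacted, and flag likely failure. The geometry, impaction force, and the roughly 65 MPa cortical shear strength below are illustrative, textbook-scale assumptions, not patient-specific values.

```python
# Sketch: average shear stress on a cortical annulus vs. a strength threshold.
import math

def cortical_shear_stress(outer_d_mm, thickness_mm, force_n):
    """Average shear stress (MPa) = force / load-bearing cortical annulus area."""
    r_out = outer_d_mm / 2.0
    r_in = r_out - thickness_mm
    area_mm2 = math.pi * (r_out ** 2 - r_in ** 2)  # annular cross-section
    return force_n / area_mm2                      # N/mm^2 == MPa

SHEAR_STRENGTH_MPA = 65.0  # ballpark for cortical bone; varies per patient

tau = cortical_shear_stress(outer_d_mm=28.0, thickness_mm=3.0, force_n=4000.0)
print(f"estimated shear stress: {tau:.1f} MPa,",
      "likely to fail" if tau > SHEAR_STRENGTH_MPA else "within ballpark strength")
```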

 

We have some projects moving forward for educating residents with the help of advanced visualization. Can you tell me about that?

Mills: What I’m working on is procedural teaching and education. A lot of what we do currently for procedures is on-the-job training. For us that’s not such a dire thing because complications from an injection are pretty minimal, but from a surgical standpoint, there are huge implications. That’s where there’s a big opportunity around modeling, simulation, and augmented reality. It can make a huge difference for people’s performance.

I think for biopsy applications it’s great because you get patient-specific imaging – it would be great to train people on a simulation for this…

Quigley: …give someone a virtual pneumothorax and have the resident manage those complications…

Mills: …right, and that’s what we’re mainly looking at, being in resident education.

Quigley: A good simulation is to create flow models for aneurysms – a model that has a failure mode built into it so that if you overpressurize or put in too big a coil, you get a rupture in the simulator. That’s a lot better than trying to manage that highly stressful moment in a real patient.


Auffermann: This discussion makes me think that these new techniques might make some of the old techniques more viable. I think about stereoscopic imaging, which had promise but the execution wasn’t so great. Newer solutions might make older technologies effective, affordable, and feasible.

 

Be sure to read part one of our panel conversation: “Machine Learning in Medicine” and part three: “How Do Humans and Machines Perceive Radiologic Imaging Information?”