
How Do Humans and Machines Perceive Radiologic Imaging Information?

By Michael Mozdy

In December 2017, I had the opportunity to sit down with five faculty members in the Department of Radiology and Imaging Sciences to discuss a number of exciting new research and education topics. This is the last of three transcripts from this enlightening conversation. Also read the first, “Machine Learning in Medicine,” and the second, “Advanced Visualization: Unbelievably Real 3D Projections are Here.”

At the recent Radiological Society of North America (RSNA) conference in November 2017, an entire medical image perception lab let radiologists experience first-hand some of the research going on in the field. It seems like two of the big questions being asked are “Why do radiologists miss evidence on scans now and then?” and “How do we train residents and fellows so that we minimize perception errors?” Is this an accurate summary of the most important questions in the field of perception research?

Mills: Those are the million-dollar questions for sure. By the way, we had a perception research project at RSNA, which was staffed by a graduate student from the psychology department.

Auffermann: There are many factors involved in perceptual errors. An area receiving a lot more attention is fatigue – even the best expert will become fatigued. This is part of the reason machine learning offers a nice opportunity to augment human beings: machines don’t really get tired. Machine perception is analogous to human perception, so research in these areas is somewhat related.

In our department, we are working on teaching good perceptual habits to observers to help mitigate many of the more common perceptual errors. Some of our research involves eye-tracking studies to compare expert radiologists and trainees in order to see if there are patterns and areas of concentration that can be identified and taught more intentionally.
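To give a flavor of what these eye-tracking comparisons involve, here is a minimal Python sketch – the data format, coordinates, and region of interest are hypothetical illustrations, not the department’s actual pipeline – that computes how much of a reader’s total fixation time falls inside a region containing a known finding:

```python
import numpy as np

def dwell_fraction(fixations, roi):
    """Fraction of total fixation time spent inside a rectangular
    region of interest (ROI). Each fixation is (x, y, duration_ms);
    roi is (x_min, y_min, x_max, y_max). Hypothetical format."""
    fixations = np.asarray(fixations, dtype=float)
    x, y, dur = fixations[:, 0], fixations[:, 1], fixations[:, 2]
    inside = (
        (x >= roi[0]) & (x <= roi[2]) &
        (y >= roi[1]) & (y <= roi[3])
    )
    return dur[inside].sum() / dur.sum()

# Illustrative (made-up) scanpaths: (x, y, duration in ms).
expert_fixations = [(120, 300, 250), (400, 410, 600), (405, 420, 900)]
trainee_fixations = [(120, 300, 250), (800, 150, 700), (410, 400, 300)]

# Hypothetical ROI drawn around a subtle nodule on the image.
nodule_roi = (380, 380, 440, 440)

print("expert dwell fraction: ", dwell_fraction(expert_fixations, nodule_roi))
print("trainee dwell fraction:", dwell_fraction(trainee_fixations, nodule_roi))
```

Comparing dwell fractions like this – along with measures such as time to first fixation on the finding – across many expert and trainee readings is one way search patterns can be made explicit enough to teach.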


Machine learning research is actually closely aligned with perception research. Computers are trying to perceive abnormalities in medical imaging; basically, we’re trying to teach computers to perform tasks currently done by human beings. How do you do that? One way is to teach the computer to perceive things we can already see. The flipside is when computers perceive something we cannot, how does it relay that information back to us? I think this is a very exciting area and that there will be a wave of important research to translate what the computer is identifying into something that humans can perceive.
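One generic way to translate a model’s perception back to humans – a common technique in the field, not necessarily what any particular group uses – is a saliency map. The sketch below computes a simple occlusion-sensitivity map: slide a masking patch across the image, re-score it, and record how much the model’s “abnormal” score drops; large drops mark regions the model relied on. The `model` here is a toy stand-in for any image-to-probability classifier:

```python
import numpy as np

def occlusion_map(image, model, patch=16, stride=8):
    """Occlusion-sensitivity heatmap: how much the model's score drops
    when each region is masked out. `model` is any callable mapping a
    2D float array to a probability-like score."""
    base = model(image)
    h, w = image.shape
    heat = np.zeros((h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = image.mean()  # mask patch
            drop = base - model(occluded)
            heat[y:y + patch, x:x + patch] += drop
            counts[y:y + patch, x:x + patch] += 1
    return heat / np.maximum(counts, 1)

# Toy stand-in "model": scores brightness of the central region,
# mimicking a classifier that keys on a central opacity.
def toy_model(img):
    return float(img[24:40, 24:40].mean())

demo = np.zeros((64, 64))
demo[28:36, 28:36] = 1.0  # a bright blob the "model" depends on
heat = occlusion_map(demo, toy_model)
print("hottest point:", np.unravel_index(heat.argmax(), heat.shape))
```

Overlaying such a heatmap on the original image is one simple way a computer can “relay back” what it perceived, even when the finding is something a human would otherwise miss.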

McNally: Speaking of computerized chat bots, phones ringing and interruptions are a problem in the day of a practicing radiologist – it would be great to have a computer do some of this phone triage work.

Quigley: The science of interruptions backs this up. We’re very bad at adapting to interruptions – they pull us out of the task we intend to focus on, and diving back in is more difficult than we think. There’s a higher probability of missing things when interrupted. Insulating our work environment can go a long way toward helping us do our tasks better.

Auffermann: There are training algorithms for how to deal with interruptions. The best solution would be to avoid the interruption entirely, but there are certain techniques we can use to cope with interruptions…

Quigley: …the anchoring method and things like that…

Auffermann: …right. And that was actually the focus of our project at RSNA: we had radiologists come into our booth and, partway through their review of a study, we turned off the power strip for their monitors so they couldn’t see their images for a period of time. We’re analyzing the results of that project now.


Quigley: A related question is who bills for what’s being seen. This is the economics of health care – some things don’t have billable codes. At RSNA, the 3D special interest group discussed how to approach the government about billable codes for this type of research. If you’re doing something that creates patient benefit, that should be billable.

Auffermann: There are products on the market where you can send them a radiologic image and they’ll annotate it with their machine learning algorithm. Then you can send back a validation or correction of what they’ve annotated. Vendors will charge you to use this service. But we’re also training their algorithm with our feedback, so should they be paying us? It creates a really interesting economic question – who does the training of the algorithm, when is it done, how is it reimbursed? And you could make the argument either way on ownership of the images and the data once you’ve sent them over to the company. It raises a lot of interesting questions regarding intellectual property.
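The round trip Dr. Auffermann describes is, in machine learning terms, human-in-the-loop online learning. Here is a deliberately simplified sketch – the class, names, and learning rule are all illustrative, not any vendor’s actual product – of how each returned correction becomes free training signal for a vendor’s model:

```python
import numpy as np

class VendorModel:
    """Toy online classifier standing in for a vendor's algorithm.
    Each correction the customer returns is a labeled example that
    improves the vendor's product."""

    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.lr = lr

    def annotate(self, features):
        """Vendor's paid service: predict 'abnormal' (1) or 'normal' (0)."""
        return int(features @ self.w > 0)

    def incorporate_correction(self, features, true_label):
        """Logistic-regression-style update from the radiologist's
        validation or correction of the returned annotation."""
        p = 1.0 / (1.0 + np.exp(-(features @ self.w)))
        self.w += self.lr * (true_label - p) * features

# Simulated round trips: the customer sends image features, receives an
# annotation, and returns the ground-truth correction.
rng = np.random.default_rng(0)
model = VendorModel(n_features=5)
true_w = rng.normal(size=5)
for _ in range(500):
    x = rng.normal(size=5)
    label = int(x @ true_w > 0)             # the radiologist's read
    model.annotate(x)                       # vendor's (initially poor) output
    model.incorporate_correction(x, label)  # vendor learns for free

test = rng.normal(size=(200, 5))
acc = np.mean([model.annotate(x) == int(x @ true_w > 0) for x in test])
print(f"vendor model accuracy after customer corrections: {acc:.2f}")
```

The economic question falls out of the code: the expensive part of the system is the stream of expert-labeled corrections, and in this arrangement the customer supplies it while paying for the privilege.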

Quigley: Yes – who owns the data: the patient, the hospital system? Who owns the algorithm: the company that wrote it, the insurance company using it to pay claims?

Schroeder: This conversation brings me back to how important the radiologist is. Validating algorithms and systems is a huge challenge. If you’re interacting with a system to make it better and they’re profiting from it, that’s a dilemma.

Privacy is a big thing. We have to move forward very carefully and ask some important questions: Will the data be de-identified? How do we allow specific people to look at the data? There’s a discipline to how we approach this that will be important.
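For a sense of what header-level de-identification looks like in practice, here is a minimal sketch using the open-source pydicom library. The tag list is an illustrative subset, not a complete or compliant de-identification profile, and as the comments note, scrubbing metadata does nothing about pixel data that can itself re-identify a patient – which is exactly the issue raised next:

```python
import pydicom

# Illustrative subset of identifying header fields; a real
# de-identification profile is far longer than this.
PHI_KEYWORDS = [
    "PatientName", "PatientID", "PatientBirthDate",
    "PatientAddress", "ReferringPhysicianName",
    "InstitutionName", "AccessionNumber",
]

def deidentify(in_path, out_path):
    """Blank common identifying header fields and drop private tags.
    NOTE: this touches only metadata -- pixel data that encodes the
    face (e.g., a head CT) can still re-identify the patient."""
    ds = pydicom.dcmread(in_path)
    for keyword in PHI_KEYWORDS:
        if keyword in ds:
            setattr(ds, keyword, "")  # blank rather than delete, keeping
                                      # the file structurally valid
    ds.remove_private_tags()          # vendor-specific private elements
    ds.save_as(out_path)

# deidentify("head_ct.dcm", "head_ct_deid.dcm")  # hypothetical paths
```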

Auffermann: On an interesting tangent – there are some software packages that are able to reconstruct a person’s face simply from a head CT scan, which raises the question: is it even possible to de-identify certain medical information? If not, how do we deal with this?

Quigley: To Joyce’s point, Utah is in a unique situation because I think we’re ahead of the curve in addressing some of these issues. We have the resources to meet all of these challenges in-house. We have collaborations with the Scientific Computing and Imaging (SCI) Institute, access to huge supercomputers, and state-of-the-art graphics processing units that can do all of these big calculations – but we can keep it in-house and have it be essentially an open-source product, so we’re not beholden to a corporation that may have a copy of your data, even if it’s anonymized.

Schroeder: Protecting the data sets is very important. We have a great set of collaborators here at the U – computing, bioengineering, the Utah Population Database – and it’s a great environment for us to be developing algorithms and know that they can be applied. It’s pretty exciting.

Quigley: It’s translational informatics.

Auffermann: It’s a really interesting time for machine learning, perception research, and making sense of big data. Radiologists are at the intersection of these fields, and we can play an important role as information stewards.


Be sure to read part one of our panel conversation, “Machine Learning in Medicine,” and part two, “Advanced Visualization: Unbelievably Real 3D Projections are Here.”