
Machine Learning in Medicine: A Faculty Conversation

By Michael Mozdy

In December 2017, I had the opportunity to sit down with five faculty members in the Department of Radiology and Imaging Sciences to discuss a number of exciting new research and education topics. This is the first of three transcripts from this enlightening conversation.

The Conversationalists
Joyce Schroeder, MD, Section Chief for Thoracic Imaging
Bill Auffermann, MD, PhD, Thoracic Radiologist
Edward Quigley, MD, PhD, Neuroradiologist
Scott McNally, MD, PhD, Neuroradiologist
Megan Mills, MD, Musculoskeletal Radiologist

Machine learning, deep learning, and artificial intelligence (AI) are hot topics in radiology these days – the November 2017 Radiological Society of North America (RSNA) conference was abuzz with presentations on the topic, ranging from improving diagnostic and predictive accuracy for disease to improving the work life of a radiologist. What do you think are the most important developments in machine learning?

Schroeder: Coming from a computer science and electrical engineering background, what strikes me is the world of data in our digital images that we haven’t used much. We take these incredibly rich data sets in images – hundreds of megabytes of data – and we might dictate a couple of paragraphs to go into the physician’s report. It’s really kind of stunning. I don’t want to discount the art in this – not everything is easily quantifiable and there is a lot of judgment in what we do. At the same time, I’ve been investigating quantitative measures in our images, and machine learning is another step along the pathway.

We’ve started a project with colleagues at the Scientific Computing Institute using 300,000 chest radiographs from our health system and taking advantage of these algorithmic approaches to look at both diagnoses and response to therapies. We’d like to see if we can build a predictive model for how someone may respond to therapies.

Auffermann: Historically, a lot of the research in radiology didn’t go beyond the image. I observed many projects at RSNA that take the imaging, have computer algorithms dig through the medical records, pull out pathology results, genomic results, clinical history, and family history, and integrate all of that into diagnostic and prognostic information. I don’t think I’ve seen this type of work done to this extent before. It’s exciting that a lot of that integration of information is being driven by radiology.

Quigley: Enterprise machine learning is a big category of machine learning research. With a well-characterized database of reports and a completely digital medical record, you can data mine all of those features. That’s the next stage of machine learning. Your imaging database can be a starting point, and then you look back into the EMR, where a machine learning algorithm might detect a vascular risk factor for stroke that you or I, just wading through one patient’s chart, might not be able to find. The algorithm can more easily see a patient’s genetic makeup and risk factors, and combined with, say, an early acceleration of stenosis, this would impact diagnosis greatly.

Auffermann: There’s so much data that it’s not something a human being can feasibly go through, but it’s something a machine algorithm can tackle much more readily. I found a recent study really exciting. When they applied an algorithm to fibrosis and interstitial lung diseases, they found that one of the most positive predictors was a feature human beings never use in diagnosis: tortuosity and morphology. It was one of the stronger predictors. This is an example of something we weren’t cognizant of that machine learning brings to the forefront.

McNally: We’ve been trying to figure out what predicts stroke risk. In the last 30 years, all of our diagnoses have been based on stenosis, but recently in the last 5-10 years we’ve found that it has more to do with the plaque. Our MRI improvements allow us to see things now that we couldn’t see before. Once we fine-tune the accuracy of the diseased/normal prediction, we can go back and look at what the algorithm is using to reach the diagnosis. Is it using the T1 signal or is it using something totally different that we were never expecting? My guess is that it will find a lot of potential risk factors that we never thought were present or we just ignored.

I saw a few talks that used just brain MRI scans to predict a patient’s age and gender with high accuracy. We don’t even know what structures of the brain it’s using to predict that. Not only that, you can use pairs of motion-degraded and non-degraded images to teach a machine learning algorithm to correct for motion. This would help avoid repeat scans because of motion problems. There are a lot of cool machine learning applications where we haven’t even had time to ask the questions yet.

Mills: Another application is logistics or workflow – machine learning used as a triage tool. It might be further down the line once we have developed accurate diagnostic algorithms, but to have a giant worklist and have emergent or concerning conditions rise to the top would make a huge difference in patient care.
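The triage Mills describes is essentially a priority queue over the radiologist's worklist. Below is a minimal, hypothetical sketch of that idea – the urgency scores, accession numbers, and the notion of a model emitting a single urgency probability are all assumptions for illustration, not any deployed system:

```python
# Hypothetical worklist triage: studies are read in order of a model's
# estimated urgency rather than arrival order. Scores and accession
# numbers here are invented for illustration.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Study:
    sort_key: float = field(init=False, repr=False)
    urgency: float   # model-estimated probability of an emergent finding
    accession: str

    def __post_init__(self):
        # Negate urgency so the most urgent study pops first from the min-heap.
        self.sort_key = -self.urgency

worklist = [Study(0.05, "CT-1001"), Study(0.92, "CT-1002"), Study(0.40, "CT-1003")]
heapq.heapify(worklist)

next_study = heapq.heappop(worklist)
print(next_study.accession)  # the highest-urgency study rises to the top
```

In a real PACS integration, the urgency score would come from a validated diagnostic model, and the queue would be re-sorted as new studies arrive.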

Quigley: Some machine learning is already doing this to help improve turnaround time and outcomes. Other projects target osteopenia – the algorithm can look at CT scans that have been acquired over time and predict which patients are more likely to have a hip fracture.

Mills: You could essentially provide a DEXA scan on everyone without having to get a DEXA.

Quigley: What’s more, you could automate that process and have it shoot out a note to the primary care physician that says, you might want to look at this.
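As a toy sketch of that kind of automated flag – the cutoff value, the measurements, and the function name are illustrative assumptions, not clinical guidance or a real product:

```python
# Hypothetical opportunistic screening flag: if the mean vertebral
# attenuation measured on a routine CT falls below a cutoff, queue a
# note for the primary care physician. All numbers are illustrative.
OSTEOPENIA_HU_CUTOFF = 110.0  # illustrative Hounsfield-unit cutoff, not a clinical value

def needs_followup_note(mean_vertebral_hu: float) -> bool:
    """Return True if the study should trigger a note to the referring physician."""
    return mean_vertebral_hu < OSTEOPENIA_HU_CUTOFF

# Hypothetical per-study measurements keyed by accession number.
studies = {"CT-2001": 95.0, "CT-2002": 160.0}
flagged = [acc for acc, hu in studies.items() if needs_followup_note(hu)]
print(flagged)
```

The point of the sketch is that the check is cheap once the measurement exists on a scan that was acquired anyway – which is what makes "a DEXA on everyone without getting a DEXA" plausible.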

Auffermann: You could do that for several disease processes, like coronary calcium screening (coronary disease is the biggest killer in the US). Imagine if you did a CT and every single patient got a calcium score and a lung assessment for COPD – just the big things, like carotid stenosis. You even catch the lower organs, so you could do a prostate and colon polyp evaluation on every patient…

Mills: …an aortic aneurysm – there are so many things that could be automatically detected with scans that we’re already doing.


Auffermann: Many of them probably would be more difficult for the human observer to do readily. You’d have all the diagnostic checks together from one scan. And some companies are having the output of their machine learning algorithms go directly to a dictation, like a preliminary report. One company referred to their product as a “virtual resident.”


Quigley: Neurodegenerative machine learning algorithms are already doing that – automatically detecting areas of degeneration so you can treat patients earlier.

Schroeder: This changes the way we do our work as radiologists – it’s pretty exciting. All of us look forward to being able to impact the “drudgery” part of the work such as saying things over and over again, or speaking into a microphone all day long. I think machine learning algorithms as a way to assist what we’re doing is a good way to look at it. Importantly, this helps us move towards very structured reporting where we’re tightly linked with our clinicians on what they want from an exam – what can we provide, what can we quantify – and how can we put all of that into a structured report that requires the intellectual talent of a radiologist but not the drudgery of speaking into a microphone.

It will be nice to improve systems for how we do our work. I think we’re at the end of what we can do in terms of just speaking faster and faster all day long as the workload keeps exploding and the number of images explodes. I estimated that I looked at 5 million images last year. A chest CT is a thousand images. In a normal work day, maybe you’re looking at 30 thousand images – it’s a little frightening as you realize you’re medically responsible for everything on every image. Having systems that work more efficiently is a really important objective.


The future holds a lot of possibilities. If we can link up radiology scans with genetic data, pathology reports and family history, do you think some form of artificial intelligence could, say, predict how long you’re going to live? If you had to say where you think we’re heading with machine learning and artificial intelligence, what would you venture to guess?

Quigley: You wouldn’t jump out of an airplane without a parachute. Machine learning gives you an additional safety factor for finding and treating patient disease. It gives radiologists a second chance, drawing their eyes to an area they may not perceive initially.

Auffermann: The notion of a virtual resident is probably a good one for the near- to mid-term. As we go down the road, we can expand into areas that the resident normally wouldn’t go, such as digging really deep through the electronic medical record and integrating that information.

McNally: It does raise a lot of ethical questions – if algorithms can predict age and all those other things from scans, they could potentially predict your life expectancy. You’d have to put in a lot of environmental factors, but I think further down the road there could be a company that markets itself by saying, find out what your expected death day would be. Probably after the virtual resident.

Mills: I think it’s virtually unknown. It will be really interesting to see. I think in the near future it will be another tool, like CAD [computer-aided detection] in mammography. It may help efficiency somewhat – incremental improvements in workflow – but long term, who knows?

Quigley: People are saying don’t go into radiology in med school because it will be replaced by computers in a few years. But we will continue to adapt to new tools just like we’ve used new modalities.

Schroeder: I agree with Edward, and I think we’re the people who are going to help build these things. There’s a great deal of interest, but it poses a challenge for us to understand how things are working and to recruit collaborators to radiology from the computational sciences. We need to step up and be the people who are helping to drive this technology. To just think it’s going to be some company in Silicon Valley that makes a system – I’m sure they’ll make great systems and they’ll be incredible to work with – but anyone who has spent a long time in medicine is very humbled by the variability in human beings. We’ll have to be very thoughtful about the process, be very careful with validation, and help to design and compile databases. We have a responsibility to confirm that we’re using validated data sets and that we know what’s happening with the patients, so we can test the algorithms and see how they perform. I think we have a lot of opportunities and responsibilities in academic medicine.

Auffermann: I think that every vocation and job needs to be concerned about deep learning – I read an estimate that deep learning will put nearly a billion people out of work in 30 years. But if anything, I think medicine is probably a safer area in terms of job loss than other sectors. It brings up a really interesting philosophical question: when all of these machines start doing jobs that humans used to do, how is our society going to adapt? It’s an interesting and important question for society as a whole.

Quigley: Many people don’t realize that many mid-level management jobs have been replaced by business analytics – you can contract with a company, and what was managed by a human has already been semi-automated. What AI does in medicine is let the radiologist be at the center of translational and educational research, which is a nice place to be.


Be sure to read part two of our panel conversation, “Advanced Visualization: Unbelievably Real 3D Projections are Here,” and part three, “How Do Humans and Machines Perceive Radiologic Imaging Information?”