Medical workers look over a chest x-ray of a patient suspected of having COVID-19. (Reuters/Marko Djurica)

A few months ago, Daniel L. Rubin, a professor of biomedical data science, of radiology, and of medicine at Stanford, received an unexpected request for collaboration. A group of researchers from China and Thailand was developing a new machine learning algorithm to improve the accuracy of radiology-based COVID-19 diagnosis and needed help to make the model more robust without compromising patient privacy. Rubin, whose research uses AI to extract biomedical information from radiology images to guide physicians, had the right tool for the challenge. It’s called “federated learning.”

Federated learning was first introduced by researchers at Google in early 2016 as a method to train and evaluate machine learning models without centralizing data. Unlike traditional centralized approaches, where the data sits in a single location such as a public repository, federated learning allows a model to be trained on individual data sets without those data sets ever being shared. This approach encourages global participation in a project while sidestepping several challenges of centralized learning, such as concerns over data privacy, security, and access rights.
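To make the idea concrete, here is a minimal sketch of one common style of federated training (federated averaging) on a toy linear model: each simulated “hospital” refines the shared parameters on its own private data, and only those parameters, never the raw data, travel back to be combined. This is a generic illustration, not code from Google or from Rubin’s project; the local_train update, the toy data, and the hyperparameters are all hypothetical.

```python
import numpy as np

def local_train(global_weights, local_data, lr=0.1, epochs=5):
    """Hypothetical local update: each site refines the shared model
    on its own data; the raw data never leaves the site."""
    w = global_weights.copy()
    X, y = local_data
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of the squared-error loss
        w -= lr * grad
    return w

def federated_round(global_weights, sites):
    """Server-side step: combine locally trained weights with a
    data-size-weighted average (federated averaging) and redistribute."""
    updates, sizes = [], []
    for site in sites:
        updates.append(local_train(global_weights, site))
        sizes.append(len(site[1]))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Toy setup: three "hospitals", each holding its own private data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
sites = []
for n in (50, 80, 30):
    X = rng.normal(size=(n, 2))
    sites.append((X, X @ true_w + rng.normal(scale=0.1, size=n)))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, sites)
print(w)  # approaches true_w without pooling any raw data
```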

In a preprint posted in May, Rubin, a Stanford Institute for Human-Centered Artificial Intelligence-affiliated faculty member, and colleagues showed that this new way of learning could help radiologists worldwide make faster and more accurate COVID-19 diagnoses, with the hope of better tracking and mitigating the disease’s spread.

A global AI tool to detect COVID-19 from chest scans

In the fight against the pandemic, the main goal, according to the World Health Organization, is to stop human transmission of SARS-CoV-2, the virus that causes COVID-19, by detecting and isolating potential spreaders. Rapid and accurate diagnosis is crucial for this goal. While polymerase chain reaction–based diagnostic tools, which test whether a fragment of the virus’s genetic material is present in a human sample, are fast and cheap, their accuracy is comparatively low: depending on when a patient is tested, more than 20% of negative results come from people who actually have the disease.

Computed tomography (CT) chest scans, on the other hand, can provide a faster and, arguably, more sensitive alternative for early detection and disease evaluation. The sensitivity, however, is highly variable depending on radiologists’ experience. 

To reduce this variation, Rubin’s collaborators developed an initial centralized deep learning model on CT data collected from three Tongji hospitals in Wuhan, China.

They found that this model could predict COVID-19-linked pneumonia from test CT scans consistently and at near-radiologist level. The model, however, performed worse when it was tested on another data set. This indicated a generalization problem, a well-known challenge in AI applications to health care and a barrier to earning clinicians’ trust. “They’ll give you an answer no matter what you put in,” Rubin explains. “So, if you put in a picture of a dog, it will tell you if it thinks that it has pneumonia or not, even if it’s not a relevant image.”

To overcome this problem, the researchers implemented a publicly available federated learning framework called the Unified CT-COVID AI Diagnostic Initiative (UCADI), which allows any hospital or institution around the world with the right infrastructure and data to join.

To participate, a stakeholder first downloads the code and trains a new model locally, starting from the initial model. Once the new model is trained, the participant shares the updated version with the framework, which encrypts the model parameters to protect patient privacy and transfers them to the central server. The server combines the contributions and shares the updated parameters back with all stakeholders.
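Continuing the toy example above, the sketch below traces one communication round of this workflow: a participant trains locally, only serialized and encrypted parameters travel to the server, and the server combines the contributions and re-encrypts the new global model for redistribution. The serialization helpers and the Fernet cipher are stand-ins chosen for illustration; UCADI’s actual protocol and encryption scheme are not reproduced here, and a production system could aggregate without the server ever seeing plaintext parameters.

```python
import io
import numpy as np
from cryptography.fernet import Fernet  # generic cipher as a stand-in for UCADI's scheme

def serialize_weights(weights):
    """Pack model parameters into bytes for transfer."""
    buf = io.BytesIO()
    np.save(buf, weights)
    return buf.getvalue()

def deserialize_weights(blob):
    """Unpack model parameters received over the wire."""
    return np.load(io.BytesIO(blob))

# --- Participant side -----------------------------------------------------
def participant_update(global_weights, local_data, cipher):
    """Train on local data, then send only encrypted parameters upstream."""
    updated = local_train(global_weights, local_data)  # local_train from the sketch above
    return cipher.encrypt(serialize_weights(updated)), len(local_data[1])

# --- Server side ----------------------------------------------------------
def server_aggregate(encrypted_updates, cipher):
    """Decrypt each contribution, average it, and re-encrypt the new global model."""
    weights = [deserialize_weights(cipher.decrypt(blob)) for blob, _ in encrypted_updates]
    sizes = np.array([n for _, n in encrypted_updates], dtype=float)
    new_global = np.average(weights, axis=0, weights=sizes)
    return cipher.encrypt(serialize_weights(new_global))

# One communication round over the toy sites defined earlier.
cipher = Fernet(Fernet.generate_key())
global_w = np.zeros(2)
updates = [participant_update(global_w, site, cipher) for site in sites]
global_w = deserialize_weights(cipher.decrypt(server_aggregate(updates, cipher)))
```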

This framework not only allows participants to easily test how well the model works on their data, but also permits them to contribute to making the model better by increasing the amount of data it is trained on. 

UCADI also targets data heterogeneity, another known challenge in AI diagnostics. Differences in the number of cases, patient populations, scanners, image resolutions, and so on can all degrade a model’s performance. But “if you know what those factors are, you can compensate for them during the local training process,” Rubin says. “You can’t get completely back to the same level of performance as centralized data, but you can come close.”
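The article does not spell out those compensation steps, but a common way to reduce such differences in CT data is to harmonize each site’s volumes before local training, for example by resampling to a shared voxel spacing and normalizing intensities to a fixed Hounsfield-unit window. The function below is a generic sketch of that idea; the target spacing and HU window are assumed values, not UCADI settings.

```python
import numpy as np
from scipy.ndimage import zoom

def harmonize_ct(volume, spacing, target_spacing=(1.0, 1.0, 1.0),
                 hu_window=(-1000.0, 400.0)):
    """Illustrative per-site preprocessing: resample a CT volume to a shared
    voxel spacing and map intensities to a fixed Hounsfield-unit window so
    that scanner and protocol differences are reduced before local training."""
    # Resample to the common target spacing with linear interpolation.
    factors = [s / t for s, t in zip(spacing, target_spacing)]
    resampled = zoom(volume.astype(np.float32), factors, order=1)
    # Clip to a lung-appropriate HU window and rescale to [0, 1].
    lo, hi = hu_window
    return (np.clip(resampled, lo, hi) - lo) / (hi - lo)

# Example: a scan with 2 mm x 0.8 mm x 0.8 mm voxels resampled to 1 mm isotropic.
fake_scan = np.random.default_rng(0).normal(loc=-500, scale=300, size=(40, 64, 64))
harmonized = harmonize_ct(fake_scan, spacing=(2.0, 0.8, 0.8))
```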

In fact, the federated model on which Rubin collaborated not only performed on par with the centralized model on the test data, but also significantly improved detection sensitivity on the other data sets, suggesting that the model is robust and reliable.

The researchers hope that participants around the world will use UCADI to help health practitioners detect the virus more quickly and accurately, and save more lives. “We are all affected by this pandemic, and we all need to work together to fight it,” Rubin says. “Federated learning approaches may provide a means of overcoming a number of the challenges inhibiting global collaborations in this fight.”

