Commit 4dc72fd5 authored by Mehdi Cherti
Wiki for material and resources, Deep Learning for COVID XRay detection
- [Description of available models, codes and data](
- The code is maintained [here](
- Next steps are available in the issues of the code repository: [Next steps](
- compute budget: dlmpdxi_cov2, **3.2 Mcore-h (i.e., 25 Kgpu-h) for two months**
- JJ: currently, it is limited until 31.10.2020, see JuDOOR. We will apply for a proper full computational-time project, collecting the results obtained until then.
## Next Steps
- JJ: The long-term vision is a system that digests different types of image modalities (not only X-ray), continually improving a generic model of image understanding (with some focus on medical diagnostics and analysis) and allowing fast transfer to a domain of interest: if a new domain X appears, e.g. triggered by an unknown novel pathogen causing a disease that can be diagnosed via medical imaging, the generic model, pretrained on millions of images from distinct domains, can be used to quickly derive an expert model for domain X
- JJ: Directions to go:
* Uncertainty estimation: the current output contains no information about how uncertain the network is about the prediction it makes
- making an uncertainty estimate available would show how confident the network is in a prediction; for example, one could be very careful with outputs that signal too high an uncertainty
- a recent example from medical imaging:
- in general, look at Bayesian Neural Network methods
- good code overview here:
- short review:
- classical books and papers: MacKay, Bishop, Neal
- Monte Carlo DropOut (MCD, Yarin Gal): outdated, but can serve as a baseline
- original paper: ,
- original, somewhat outdated code:
- somewhat more recent code:
- Monte Carlo Dropout (MCD) is an approximate variational inference method based on dropout. The approximating distribution q(w) takes the form of the product between Bernoulli random variables and the corresponding weights. Hence, sampling from q(w) reduces to sampling Bernoulli variables, and is thus very efficient.
- amounts to training with DropOut and using DropOut during inference to obtain uncertainty estimates (by running inference several times)
- related material: ,
- Discussion:
- related technique: MC-DropConnect
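The MCD recipe above can be sketched in a few lines. This is a minimal illustration with NumPy, assuming a toy two-layer network with fixed random weights as a stand-in for a trained model; the point is only that dropout stays active at inference and the prediction is the mean over T stochastic forward passes, with the per-class standard deviation as the uncertainty signal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer network with fixed random weights (stand-in for a trained model).
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 3))

def forward(x, drop_p=0.5):
    """One stochastic forward pass; dropout stays ON at inference for MCD."""
    h = np.maximum(x @ W1, 0.0)                   # ReLU hidden layer
    mask = rng.random(h.shape) > drop_p           # Bernoulli dropout mask ~ q(w)
    h = h * mask / (1.0 - drop_p)                 # inverted-dropout scaling
    logits = h @ W2
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)      # softmax probabilities

def mc_dropout_predict(x, T=100):
    """Run T stochastic passes; mean = prediction, std = uncertainty estimate."""
    samples = np.stack([forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.std(axis=0)

x = rng.normal(size=(1, 4))
mean, std = mc_dropout_predict(x)
```

In a real framework this corresponds to keeping the dropout layers in training mode during inference and looping the forward pass.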
- Hamiltonian Monte Carlo (HMC), scalable versions
- Recent work
- also applied to hyperparameter estimation in ResNets
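For intuition on what HMC itself does, here is a minimal sketch sampling a single "weight" whose posterior is assumed (purely for illustration) to be a standard normal; the scalable versions referenced above replace the exact gradient with stochastic minibatch gradients, which this toy does not attempt:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative target: log-density of a standard normal "posterior" over one weight.
def log_p(w):
    return -0.5 * w ** 2

def grad_log_p(w):
    return -w

def hmc_step(w, eps=0.1, L=20):
    """One HMC transition: sample momentum, leapfrog integrate, Metropolis accept."""
    p = rng.normal()
    w_new, p_new = w, p
    p_new += 0.5 * eps * grad_log_p(w_new)        # half step for momentum
    for _ in range(L):
        w_new += eps * p_new                       # full step for position
        p_new += eps * grad_log_p(w_new)           # full step for momentum
    p_new -= 0.5 * eps * grad_log_p(w_new)         # undo extra half step at the end
    # Metropolis correction on the joint (position, momentum) energy.
    log_accept = (log_p(w_new) - 0.5 * p_new ** 2) - (log_p(w) - 0.5 * p ** 2)
    return w_new if np.log(rng.random()) < log_accept else w

w, samples = 0.0, []
for _ in range(5000):
    w = hmc_step(w)
    samples.append(w)
samples = np.array(samples)
```

The empirical mean and standard deviation of `samples` should approach 0 and 1, the moments of the target.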
- some overview on uncertainty in neural nets here: <>
- another good overview: <>
- "Our model based on DenseNet can process a 640×480 resolution image in 150ms on a NVIDIA Titan X GPU. The aleatoric uncertainty models add negligible compute. However, epistemic models require expensive Monte Carlo dropout sampling. For models such as ResNet, this is possible to achieve economically **because only the last few layers contain dropout**. Other models, like **DenseNet**, require the entire architecture to be sampled. This is difficult to parallelize due to GPU memory constraints, and often results in a **50x slow-down** for 50 Monte Carlo samples"
- Ensemble methods (training an ensemble of networks and using the variance in their outputs; the disadvantage is the effort of multiple trainings, the advantages are scalability and simplicity)
- good baseline: Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
"We propose an alternative to Bayesian NNs that is **simple to implement, readily parallelizable, requires very little hyperparameter tuning**, and yields high quality predictive uncertainty estimates. Through a series of experiments on classification and regression benchmarks, we demonstrate that our method produces well-calibrated uncertainty estimates which are **as good or better than approximate Bayesian NNs**"
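The ensemble idea reduces to averaging the members' predictive distributions and reading disagreement as uncertainty. A minimal sketch, where each "trained network" is stood in for by a random linear classifier (in practice each member is a full network trained from its own random initialization):

```python
import numpy as np

rng = np.random.default_rng(2)
M, n_classes = 5, 3

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Stand-ins for M independently trained networks: random linear classifiers here;
# in practice, M copies of the architecture trained from different random seeds.
members = [rng.normal(size=(4, n_classes)) for _ in range(M)]

def ensemble_predict(x):
    """Average member predictive distributions; per-class std = disagreement."""
    probs = np.stack([softmax(x @ W) for W in members])   # (M, batch, classes)
    return probs.mean(axis=0), probs.std(axis=0)

x = rng.normal(size=(1, 4))
mean, std = ensemble_predict(x)
```

Unlike MC dropout, the M forward passes are over independent models and are trivially parallelizable, which is the scalability argument quoted above.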
* Interpretability via the iNNvestigate or captum packages (heat maps of the image regions relevant for the output):
- a heat map may reveal whether the classifier relies on artificial "turked" information in the image (e.g., repetitive text signatures, etc.)
- example of COVID X-ray deep learning work that uses the packages in a simple way (FH Aachen, nearby folks; we may contact them about talking to Uni Klinik Aachen):
- DeepCOVIDExplainer: Explainable COVID-19 Predictions Based on Chest X-ray Images,
- example of package usage:
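iNNvestigate and captum mostly provide gradient-based attributions, but the simplest heat-map idea is model-agnostic occlusion sensitivity (captum ships it as `Occlusion`): slide a blanking patch over the image and record how much the class score drops. A self-contained sketch, where `class_score` is a hypothetical stand-in for `model(image)` that deliberately depends only on the top-left quadrant:

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.random((16, 16))

# Hypothetical stand-in for model(image) -> score of the predicted class;
# here: mean intensity of the top-left quadrant, so that region is "relevant".
def class_score(x):
    return x[:8, :8].mean()

def occlusion_heatmap(x, patch=4, fill=0.0):
    """Slide an occluding patch; heat = score drop when the patch covers a region."""
    base = class_score(x)
    heat = np.zeros_like(x)
    for i in range(0, x.shape[0], patch):
        for j in range(0, x.shape[1], patch):
            occluded = x.copy()
            occluded[i:i + patch, j:j + patch] = fill
            heat[i:i + patch, j:j + patch] = base - class_score(occluded)
    return heat

heat = occlusion_heatmap(img)
```

On this toy model, the map is positive exactly over the quadrant the score depends on and zero elsewhere, which is the kind of sanity check that would expose a classifier keying on text signatures in a corner of the X-ray.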
# Past Meetings
- [Monday, 04 May 2020](
- Further Relevant Info (to be digested into wiki) : [Notes Meeting 08.05, COVID Project Discussion and First Tests](
# Relevant links
- Radiology assistant from MILA (the same people who provide the dataset): <>, <>
- here, a functionality to take over is out-of-distribution detection, where the system can signal that an incoming image is "too far" from the data the model was trained on
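One simple way to get such an out-of-distribution signal (an illustrative assumption, not necessarily what the MILA system does) is a Mahalanobis distance in feature space: fit mean and covariance of the training-set features, then flag inputs whose feature vector lies beyond a percentile threshold. A sketch with random vectors standing in for penultimate-layer activations:

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in feature vectors of the training set (in practice: penultimate-layer
# activations of the trained network on the training images).
train_feats = rng.normal(size=(500, 8))
mu = train_feats.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train_feats, rowvar=False))

def ood_score(f):
    """Mahalanobis distance of a feature vector to the training distribution."""
    d = f - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Calibrate the threshold on the training data itself, e.g. the 99th percentile.
threshold = np.percentile([ood_score(f) for f in train_feats], 99)

def is_ood(f):
    return ood_score(f) > threshold

in_dist = rng.normal(size=8)            # looks like the training data
far_out = rng.normal(size=8) + 10.0     # shifted far from the training mean
```

Here `far_out` scores far above the threshold while `in_dist` typically stays below it; on real data the same recipe runs on the network's feature extractor.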