- For collaboration and access to computing resources, please contact Jenia Jitsev (j.jitsev@fz-juelich.de), Mehdi Cherti (m.cherti@fz-juelich.de), or Alex Strube (a.strube@fz-juelich.de)

- Collaborating partners will also gain access to the common code and dataset repository
* Project aims are:
  - short-term: provide a **strong baseline** for pre-training and **transfer learning** for COVID X-Ray diagnostics using large-scale datasets of images from different domains (using both generic datasets like ImageNet and medical imaging datasets like COVIDx - see [Description of available models, codes and datasets](https://gitlab.version.fz-juelich.de/MLDL_FZJ/juhaicu/jsc_public/sharedspace/playground/covid_xray_deeplearning/wiki/-/blob/master/Description.md)); a transfer learning sketch is given after this list

  - **indicate** to public users (medical doctors, etc.) how **certain / uncertain** the performed classification is, given the images provided by the users; an uncertainty estimation sketch is given after this list

  - **indicate** to the users on which basis the classification was made, e.g. by highlighting regions of the input X-Ray image with a heat map showing which **image regions** are **essential for the diagnostic decision**; a heat map sketch is given after this list

  - long-term vision: a generic system digesting different types of image modalities (not only X-Ray - e.g. 3D CT, or entirely different modalities like PET, MRI, etc.), continually improving a generic model of image understanding (with a strong focus on medical diagnostics and analysis in this frame), allowing fast transfer to a specified domain of interest. If a new domain X comes up, triggered by an unknown novel pathogen causing a disease that can be diagnosed via medical imaging, the generic model, pre-trained on millions of images from distinct domains, can be used to quickly derive an expert model for domain X. This should enable a quick reaction in the face of novel, yet unknown pathologies, where the availability of diagnostics is initially impaired.

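
To make the short-term aim concrete, here is a minimal transfer learning sketch. It assumes PyTorch/torchvision as the framework and a 3-class COVIDx-style label set; the framework choice, the class count, and the names (`NUM_CLASSES`, `train_step`) are illustrative assumptions, not the project's actual code, which lives in the repository linked above.

```python
# Minimal transfer learning sketch (PyTorch / torchvision assumed).
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # placeholder, e.g. normal / pneumonia / COVID-19 as in COVIDx

# Start from an ImageNet-pretrained backbone ...
model = models.resnet50(pretrained=True)

# ... and replace the classification head for the target X-Ray task.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# One common recipe: freeze the backbone and train only the new head first,
# then unfreeze everything for full fine-tuning with a small learning rate.
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step on a batch of chest X-Ray images and labels."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```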
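
For the certainty / uncertainty indication, one common option is Monte Carlo dropout: keep dropout layers active at inference time, average the softmax output over several stochastic forward passes, and report the predictive entropy as an uncertainty score. The sketch below (names illustrative, PyTorch assumed) shows this one estimator; deep ensembles or other methods may be chosen in the project instead.

```python
# Uncertainty sketch via Monte Carlo dropout (one possible estimator).
import torch
import torch.nn.functional as F

def enable_dropout(model):
    """Keep dropout layers stochastic while the rest of the model stays in eval mode."""
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()

@torch.no_grad()
def predict_with_uncertainty(model, image, n_samples=20):
    """Return mean class probabilities and predictive entropy for a single image."""
    model.eval()
    enable_dropout(model)
    probs = torch.stack(
        [F.softmax(model(image.unsqueeze(0)), dim=1) for _ in range(n_samples)]
    ).mean(dim=0)  # average over stochastic forward passes -> [1, num_classes]
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return probs.squeeze(0), entropy.item()  # higher entropy -> less certain
```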
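
For the heat map explanation, a Grad-CAM-style attribution weights the activations of a late convolutional layer by the gradients of the predicted class and upsamples the result to the input resolution. The minimal hand-rolled sketch below assumes a PyTorch CNN and a user-chosen `target_layer` (e.g. the last convolutional block); both the function name and the choice of Grad-CAM are illustrative, and the project may settle on a different explanation method.

```python
# Heat map sketch in the spirit of Grad-CAM (Selvaraju et al., 2017).
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx=None):
    """Return an [H, W] heat map of input regions driving the class prediction."""
    activations, gradients = [], []
    h_fwd = target_layer.register_forward_hook(lambda m, i, o: activations.append(o))
    h_bwd = target_layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))

    model.eval()
    logits = model(image.unsqueeze(0))           # [1, num_classes]
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()  # explain the predicted class
    model.zero_grad()
    logits[0, class_idx].backward()

    h_fwd.remove()
    h_bwd.remove()

    acts, grads = activations[0], gradients[0]      # each [1, C, h, w]
    weights = grads.mean(dim=(2, 3), keepdim=True)  # per-channel importance
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
    return cam.squeeze().detach()  # [H, W]; overlay on the input X-Ray for display
```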
#### Potential collaboration directions and topics
The following directions are currently envisaged; please feel free to add more:

* Large-Scale Pre-Training and Cross-Domain Transfer (Collaborators: JSC, ...)
* Uncertainty estimation and signaling (Collaborators: JSC, ...)

* Methods for validation of diagnostics and explainable output (Collaborators: JSC, ...)

* Learning from high-resolution images (> 512x512), multi-scale architectures

* Learning from Multi-Modal datasets (e.g., 2D X-Ray or 3D CT scans) (Collaborators: JSC, ...)

* Transfer across different hardware architectures (e.g., mobile devices) (Collaborators: JSC, ...)