Further Contributors: Mehdi Cherti (MC) (Helmholtz AI HLST, JSC)
|
- **indicate** to users how strongly the provided images are **out-of-distribution**, signaling whether the pre-trained model is likely unable to produce useful diagnostics on the fly for the given images, and whether re-calibration / fine-tuning on the new images is needed before attempting diagnostics
|
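As a minimal sketch of such an out-of-distribution signal, the widely used maximum-softmax-probability baseline can flag inputs on which the classifier is unusually unconfident. All names and the threshold below are illustrative, not part of any existing codebase; the threshold would be calibrated on in-distribution validation data:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ood_score(logits):
    """Maximum-softmax-probability OOD score: higher = more out-of-distribution."""
    p = softmax(logits)
    return 1.0 - p.max(axis=-1)

def flag_ood(logits, threshold=0.5):
    # Flag inputs whose score exceeds a threshold calibrated on in-distribution data.
    return ood_score(logits) > threshold
```

A confidently classified input (peaked logits) yields a score near 0, while a flat, uninformative prediction is flagged; stronger detectors (Mahalanobis distance, energy scores) follow the same interface.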
|
- **indicate** to users on which basis the classification was made, e.g. by highlighting regions of the input X-Ray image with a heat map, or by visualizing receptive fields of the responsible activations across layers, showing which **image regions** or **intermediate features** are **essential for the diagnostic decision**
|
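One simple, model-agnostic way to produce such a heat map is occlusion sensitivity: mask image patches one at a time and record how much the model's score drops. A numpy sketch, where `score_fn` is a hypothetical stand-in for any trained classifier's scoring function:

```python
import numpy as np

def occlusion_map(image, score_fn, patch=4):
    """Occlusion sensitivity: importance of a patch = score drop when it is masked."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            masked = image.copy()
            masked[y:y + patch, x:x + patch] = 0.0  # occlude one patch
            heat[y:y + patch, x:x + patch] = base - score_fn(masked)
    return heat
```

Gradient-based methods such as Grad-CAM give sharper maps at lower cost, but require access to the model internals; occlusion only needs forward passes.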
|
- The long-term vision is a **generic** system digesting different types of image modalities (not only X-Ray, but e.g. CT and 3D CT scans, and eventually entirely different modalities like ultrasonography), a **continually improving** generic model of image understanding (with a strong focus on medical diagnostics and the analysis of pathological signatures), allowing **fast transfer** to a specified domain of interest. So, if a new domain X comes up, triggered by an unknown novel pathogen Y causing a disease Z that can be diagnosed via medical imaging, the generic model, **pre-trained on millions of different images from distinct domains**, can be used to quickly derive an expert model for domain X. This should enable quick reaction in the face of novel, yet unknown pathologies, where availability of diagnostics is initially impaired.
|
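The fast-transfer step can be illustrated by a linear probe: keep the generically pre-trained backbone frozen and fit only a new classification head on its features for the new domain X. A toy numpy sketch, assuming features have already been extracted by the frozen backbone (function and parameter names are illustrative):

```python
import numpy as np

def linear_probe(features, labels, lr=0.1, steps=500):
    """Fast transfer: fit only a new softmax classification head on
    features produced by a frozen, generically pre-trained backbone."""
    rng = np.random.default_rng(0)
    n, d = features.shape
    k = labels.max() + 1
    W = rng.normal(scale=0.01, size=(d, k))
    onehot = np.eye(k)[labels]
    for _ in range(steps):
        logits = features @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * features.T @ (p - onehot) / n  # softmax cross-entropy gradient
    return W
```

In practice the head would be trained with a deep-learning framework and the backbone optionally fine-tuned end-to-end, but the division of labor (generic features, small domain-specific head) is the same.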
|
|
|
* For collaborators: [Helmholtz AI COVIDNetX Initiative internal](https://gitlab.version.fz-juelich.de/MLDL_FZJ/juhaicu/jsc_internal/superhaicu/shared_space/playground/covid19/-/wikis/home)
|
|
|
|
|
|
#### Potential collaboration directions and topics
|
|
The following directions are currently envisaged; please feel free to add more:
|
|
* Large-Scale Pre-Training and Cross-Domain Transfer (Collaborators: JSC, ...)
|
|
  - Auxiliary tasks and losses
|
|
  - Unsupervised, self-supervised pre-training for transfer, e.g. via contrastive losses
|
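For illustration, a contrastive objective of the InfoNCE / NT-Xent family can be sketched in a few lines of numpy: embeddings of two augmented views of the same image act as positives, all other pairings as negatives. This is a simplified sketch, not the full SimCLR formulation with 2N-way negatives:

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE / NT-Xent contrastive loss for two batches of view embeddings:
    matching rows (same image, different augmentation) are pulled together,
    all other rows are pushed apart."""
    def norm(z):
        return z / np.linalg.norm(z, axis=1, keepdims=True)
    a, b = norm(z1), norm(z2)
    sim = a @ b.T / temperature                  # cosine similarities between views
    sim -= sim.max(axis=1, keepdims=True)        # stabilize the log-sum-exp
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logp))               # positives sit on the diagonal
```

Perfectly aligned view embeddings drive the loss toward zero; mismatched ones are heavily penalized, which is what pushes the backbone toward augmentation-invariant features usable for transfer.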
|
  - Generative models for unsupervised pre-training
|
|
* Uncertainty estimation and signaling (Collaborators: JSC, ...)
|
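Uncertainty signaling can be prototyped by Monte-Carlo sampling of a stochastic model, e.g. with dropout kept active at inference time; the spread of the sampled class probabilities serves as the uncertainty estimate. A sketch under that assumption (`predict_fn` is a hypothetical stochastic predictor, not an existing API):

```python
import numpy as np

def predictive_uncertainty(predict_fn, image, n_samples=20, seed=0):
    """Monte-Carlo uncertainty estimate: run a stochastic predictor
    (e.g. a network with dropout active) repeatedly on the same input
    and report the mean and standard deviation of its class probabilities."""
    rng = np.random.default_rng(seed)
    samples = np.stack([predict_fn(image, rng) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)
```

A deterministic predictor yields zero spread; large per-class standard deviations would be the signal surfaced to users alongside the diagnosis.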
|
* Methods for validation of diagnostics and explainable output (Collaborators: JSC, ...)
|
|
* Learning from high-resolution images (> 512x512), multi-scale architectures (Collaborators: JSC, ...)
|
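As a toy illustration of feeding high-resolution inputs at multiple scales, an image pyramid built by repeated 2x2 average pooling lets a model combine global context from coarse levels with full-resolution detail from fine ones (a sketch assuming single-channel 2D images; real pipelines would tile or pool per channel):

```python
import numpy as np

def multiscale_pyramid(image, levels=3):
    """Build a simple image pyramid by repeated 2x2 average pooling,
    so coarse levels carry global context and fine levels carry detail."""
    pyramid = [image]
    for _ in range(levels - 1):
        im = pyramid[-1]
        h, w = im.shape[0] // 2 * 2, im.shape[1] // 2 * 2  # crop to even size
        im = im[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(im)
    return pyramid
```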
|
* Learning from Multi-Modal datasets (e.g., 2D X-Ray, Ultrasound Images, 3D CT scans) (Collaborators: JSC, ...)
|
|
* Neural Architecture Search for obtaining highly optimized architecture backbones (Collaborators: JSC, ...)
|
|
  - Transfer across different hardware architectures (e.g., ultra-low power mobile end devices)
|
|
* Data collection, preparation, maintenance (Collaborators: JSC (potential link to Juelich Datasets Initiative), DKRZ, HZDR)