Best of Atomistic Machine Learning (28.8.23)
A ranked list of awesome atomistic machine learning (AML) projects on GitHub: https://github.com/JuDFTteam/best-of-atomistic-machine-learning
(PGI-1 / IAS-1, contact: Johannes Wasmer, j.wasmer@fz-juelich.de)
The list is hosted under the JuDFTteam organization. JuDFT is a collection of codes developed and maintained by the department Quantum Theory of Materials of the Peter Grünberg Institut and the Institute for Advanced Simulation.
How to deal with different sized input images in CNNs (18.3.2021)
Here is a historical overview of classical papers relevant to the global pooling layer and to network architectures that handle varying image input sizes, where the technique of avoiding the flatten + fully connected (FC) layer was used from early on. Most modern ResNet implementations also use it. The approach: drop the flatten operation and use a global pooling layer instead (GlobalAveragePooling2D in Keras/TF, adaptive pooling in PyTorch); a minimal PyTorch sketch is given at the end of this section.
- Network In Network, 2013
  Min Lin, Qiang Chen, Shuicheng Yan
  https://arxiv.org/abs/1312.4400
- Striving for Simplicity: The All Convolutional Net, 2014
  Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, Martin Riedmiller
  https://arxiv.org/pdf/1412.6806
- Going Deeper with Convolutions, 2014
  Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich
  https://arxiv.org/abs/1409.4842
- Excellent recent paper on applying such networks to train robust models transferable across image scales, with clear background on the global pooling operation:
  Mix & Match: training convnets with mixed image sizes for improved accuracy, speed and scale resiliency
  Elad Hoffer, Berry Weinstein, Itay Hubara, Tal Ben-Nun, Torsten Hoefler, Daniel Soudry
  https://arxiv.org/abs/1908.08986 (see Section 2.1, "Using Multiple Image Sizes")
- Simple explanation for beginners:
  https://datascience.stackexchange.com/questions/28120/globalaveragepooling2d-in-inception-v3-example
- Elaboration on the same topic by the fast.ai developers:
  https://www.fast.ai/2018/08/10/fastai-diu-imagenet/
"A lot of people mistakenly believe that convolutional neural networks (CNNs) can only work with one fixed image size, and that that must be rectangular. However, most libraries support “adaptive” or “global” pooling layers, which entirely avoid this limitation. It doesn’t help that some libraries (such as Pytorch) distribute models that do not use this feature – it means that unless users of these libraries replace those layers, they are stuck with just one image size and shape (generally 224x224 pixels). The fastai library automatically converts fixed-size models to dynamically sized models."
(Meanwhile, this is a standard architectural feature in most state-of-the-art network implementations and is of course not confined to the fastai library.)
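Below is a minimal PyTorch sketch of the idea; the layer sizes and class count are arbitrary placeholders, not taken from any of the papers above. The head uses nn.AdaptiveAvgPool2d instead of flattening a fixed-size feature map into an FC layer, so the same model accepts differently sized (and non-square) inputs.

```python
import torch
import torch.nn as nn

class GlobalPoolCNN(nn.Module):
    """Small CNN whose classifier depends only on the channel count, not on spatial size."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Global average pooling: output is always (N, 64, 1, 1), whatever the input H x W.
        self.global_pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = self.global_pool(x)      # (N, 64, 1, 1)
        x = torch.flatten(x, 1)      # (N, 64)
        return self.classifier(x)

model = GlobalPoolCNN()
# The same model handles different, even non-square, input sizes:
for h, w in [(224, 224), (160, 320), (97, 113)]:
    out = model(torch.randn(2, 3, h, w))
    print(h, w, "->", tuple(out.shape))   # always (2, 10)
```

The only shape the linear head depends on is the channel count of the last convolution, which is why the spatial input size can vary freely (within the limits imposed by the pooling strides in the feature extractor).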
How to implement multi-node computing using Horovod (19.3.2021)
- @ebert1: https://github.com/horovod/horovod#supported-frameworks
- @jitsev1: Have a look at our workshop tutorials, especially Day 2 Tutorial 2, which gives an intro to "Horovodifying" single-node code for multi-node execution
- In general, our Intro to Scalable Deep Learning course is a good way to start (the Horovod material comes from Day 2 on). There are also code examples there showing how to run training with Horovod on multiple nodes on our HPC machines.
- A short, concise tutorial on converting single-GPU training code for distributed execution on multi-node supercomputers, by @cherti1: Horovod data parallel training tutorial (a minimal sketch of the pattern follows below)
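As a rough illustration of what "Horovodifying" a PyTorch training loop involves, here is a generic sketch following the standard Horovod pattern; it is not the code from the course or the tutorials above, and the model and dataset are placeholders.

```python
import torch
import torch.nn as nn
import torch.utils.data as data
import horovod.torch as hvd

hvd.init()                                        # 1. initialize Horovod (one process per GPU)
if torch.cuda.is_available():
    torch.cuda.set_device(hvd.local_rank())       # 2. pin each process to its local GPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder model and synthetic dataset, just for illustration.
model = nn.Linear(100, 10).to(device)
dataset = data.TensorDataset(torch.randn(1024, 100), torch.randint(0, 10, (1024,)))

# 3. shard the data across workers with a DistributedSampler
sampler = data.distributed.DistributedSampler(
    dataset, num_replicas=hvd.size(), rank=hvd.rank())
loader = data.DataLoader(dataset, batch_size=32, sampler=sampler)

# 4. scale the learning rate by the number of workers and wrap the optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())
optimizer = hvd.DistributedOptimizer(
    optimizer, named_parameters=model.named_parameters())

# 5. broadcast the initial state from rank 0 so all workers start identically
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)

loss_fn = nn.CrossEntropyLoss()
for epoch in range(2):
    sampler.set_epoch(epoch)                      # reshuffle the shards each epoch
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()                           # gradients are allreduced across workers here
        optimizer.step()
    if hvd.rank() == 0:
        print(f"epoch {epoch} done, last loss {loss.item():.4f}")
```

Such a script is typically launched with horovodrun -np <num_processes> python train.py, or on HPC systems via the scheduler (e.g. srun) with one process per GPU; the exact launch command depends on the machine setup, so check the course materials for the specifics on our systems.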